Feb 20 01:37:35 localhost kernel: Linux version 5.14.0-284.11.1.el9_2.x86_64 (mockbuild@x86-vm-09.build.eng.bos.redhat.com) (gcc (GCC) 11.3.1 20221121 (Red Hat 11.3.1-4), GNU ld version 2.35.2-37.el9) #1 SMP PREEMPT_DYNAMIC Wed Apr 12 10:45:03 EDT 2023
Feb 20 01:37:35 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Feb 20 01:37:35 localhost kernel: Command line: BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64 root=UUID=a3dd82de-ffc6-4652-88b9-80e003b8f20a console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
Feb 20 01:37:35 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 20 01:37:35 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 20 01:37:35 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 20 01:37:35 localhost kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 20 01:37:35 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 20 01:37:35 localhost kernel: signal: max sigframe size: 1776
Feb 20 01:37:35 localhost kernel: BIOS-provided physical RAM map:
Feb 20 01:37:35 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 20 01:37:35 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 20 01:37:35 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 20 01:37:35 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Feb 20 01:37:35 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Feb 20 01:37:35 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 20 01:37:35 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 20 01:37:35 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000043fffffff] usable
Feb 20 01:37:35 localhost kernel: NX (Execute Disable) protection: active
Feb 20 01:37:35 localhost kernel: SMBIOS 2.8 present.
Feb 20 01:37:35 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Feb 20 01:37:35 localhost kernel: Hypervisor detected: KVM
Feb 20 01:37:35 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 20 01:37:35 localhost kernel: kvm-clock: using sched offset of 2728798014 cycles
Feb 20 01:37:35 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 20 01:37:35 localhost kernel: tsc: Detected 2799.998 MHz processor
Feb 20 01:37:35 localhost kernel: last_pfn = 0x440000 max_arch_pfn = 0x400000000
Feb 20 01:37:35 localhost kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 20 01:37:35 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Feb 20 01:37:35 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Feb 20 01:37:35 localhost kernel: Using GB pages for direct mapping
Feb 20 01:37:35 localhost kernel: RAMDISK: [mem 0x2eef4000-0x33771fff]
Feb 20 01:37:35 localhost kernel: ACPI: Early table checksum verification disabled
Feb 20 01:37:35 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Feb 20 01:37:35 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 20 01:37:35 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 20 01:37:35 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 20 01:37:35 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Feb 20 01:37:35 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 20 01:37:35 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 20 01:37:35 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Feb 20 01:37:35 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Feb 20 01:37:35 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Feb 20 01:37:35 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Feb 20 01:37:35 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Feb 20 01:37:35 localhost kernel: No NUMA configuration found
Feb 20 01:37:35 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000043fffffff]
Feb 20 01:37:35 localhost kernel: NODE_DATA(0) allocated [mem 0x43ffd5000-0x43fffffff]
Feb 20 01:37:35 localhost kernel: Reserving 256MB of memory at 2800MB for crashkernel (System RAM: 16383MB)
Feb 20 01:37:35 localhost kernel: Zone ranges:
Feb 20 01:37:35 localhost kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 20 01:37:35 localhost kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 20 01:37:35 localhost kernel: Normal [mem 0x0000000100000000-0x000000043fffffff]
Feb 20 01:37:35 localhost kernel: Device empty
Feb 20 01:37:35 localhost kernel: Movable zone start for each node
Feb 20 01:37:35 localhost kernel: Early memory node ranges
Feb 20 01:37:35 localhost kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 20 01:37:35 localhost kernel: node 0: [mem 0x0000000000100000-0x00000000bffdafff]
Feb 20 01:37:35 localhost kernel: node 0: [mem 0x0000000100000000-0x000000043fffffff]
Feb 20 01:37:35 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000043fffffff]
Feb 20 01:37:35 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 20 01:37:35 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 20 01:37:35 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Feb 20 01:37:35 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Feb 20 01:37:35 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 20 01:37:35 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 20 01:37:35 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 20 01:37:35 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 20 01:37:35 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 20 01:37:35 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 20 01:37:35 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 20 01:37:35 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 20 01:37:35 localhost kernel: TSC deadline timer available
Feb 20 01:37:35 localhost kernel: smpboot: Allowing 8 CPUs, 0 hotplug CPUs
Feb 20 01:37:35 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Feb 20 01:37:35 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Feb 20 01:37:35 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Feb 20 01:37:35 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Feb 20 01:37:35 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Feb 20 01:37:35 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Feb 20 01:37:35 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Feb 20 01:37:35 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Feb 20 01:37:35 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Feb 20 01:37:35 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Feb 20 01:37:35 localhost kernel: Booting paravirtualized kernel on KVM
Feb 20 01:37:35 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 20 01:37:35 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Feb 20 01:37:35 localhost kernel: percpu: Embedded 55 pages/cpu s188416 r8192 d28672 u262144
Feb 20 01:37:35 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Feb 20 01:37:35 localhost kernel: Fallback order for Node 0: 0
Feb 20 01:37:35 localhost kernel: Built 1 zonelists, mobility grouping on. Total pages: 4128475
Feb 20 01:37:35 localhost kernel: Policy zone: Normal
Feb 20 01:37:35 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64 root=UUID=a3dd82de-ffc6-4652-88b9-80e003b8f20a console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
Feb 20 01:37:35 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64", will be passed to user space.
Feb 20 01:37:35 localhost kernel: Dentry cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Feb 20 01:37:35 localhost kernel: Inode-cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb 20 01:37:35 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 20 01:37:35 localhost kernel: software IO TLB: area num 8.
Feb 20 01:37:35 localhost kernel: Memory: 2873456K/16776676K available (14342K kernel code, 5536K rwdata, 10180K rodata, 2792K init, 7524K bss, 741260K reserved, 0K cma-reserved)
Feb 20 01:37:35 localhost kernel: random: get_random_u64 called from kmem_cache_open+0x1e/0x210 with crng_init=0
Feb 20 01:37:35 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Feb 20 01:37:35 localhost kernel: ftrace: allocating 44803 entries in 176 pages
Feb 20 01:37:35 localhost kernel: ftrace: allocated 176 pages with 3 groups
Feb 20 01:37:35 localhost kernel: Dynamic Preempt: voluntary
Feb 20 01:37:35 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 20 01:37:35 localhost kernel: rcu: #011RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Feb 20 01:37:35 localhost kernel: #011Trampoline variant of Tasks RCU enabled.
Feb 20 01:37:35 localhost kernel: #011Rude variant of Tasks RCU enabled.
Feb 20 01:37:35 localhost kernel: #011Tracing variant of Tasks RCU enabled.
Feb 20 01:37:35 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 20 01:37:35 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Feb 20 01:37:35 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Feb 20 01:37:35 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 20 01:37:35 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Feb 20 01:37:35 localhost kernel: random: crng init done (trusting CPU's manufacturer)
Feb 20 01:37:35 localhost kernel: Console: colour VGA+ 80x25
Feb 20 01:37:35 localhost kernel: printk: console [tty0] enabled
Feb 20 01:37:35 localhost kernel: printk: console [ttyS0] enabled
Feb 20 01:37:35 localhost kernel: ACPI: Core revision 20211217
Feb 20 01:37:35 localhost kernel: APIC: Switch to symmetric I/O mode setup
Feb 20 01:37:35 localhost kernel: x2apic enabled
Feb 20 01:37:35 localhost kernel: Switched APIC routing to physical x2apic.
Feb 20 01:37:35 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 20 01:37:35 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Feb 20 01:37:35 localhost kernel: pid_max: default: 32768 minimum: 301
Feb 20 01:37:35 localhost kernel: LSM: Security Framework initializing
Feb 20 01:37:35 localhost kernel: Yama: becoming mindful.
Feb 20 01:37:35 localhost kernel: SELinux: Initializing.
Feb 20 01:37:35 localhost kernel: LSM support for eBPF active
Feb 20 01:37:35 localhost kernel: Mount-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 20 01:37:35 localhost kernel: Mountpoint-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 20 01:37:35 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 20 01:37:35 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 20 01:37:35 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 20 01:37:35 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 20 01:37:35 localhost kernel: Spectre V2 : Mitigation: Retpolines
Feb 20 01:37:35 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 20 01:37:35 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 20 01:37:35 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 20 01:37:35 localhost kernel: RETBleed: Mitigation: untrained return thunk
Feb 20 01:37:35 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 20 01:37:35 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 20 01:37:35 localhost kernel: Freeing SMP alternatives memory: 36K
Feb 20 01:37:35 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 20 01:37:35 localhost kernel: cblist_init_generic: Setting adjustable number of callback queues.
Feb 20 01:37:35 localhost kernel: cblist_init_generic: Setting shift to 3 and lim to 1.
Feb 20 01:37:35 localhost kernel: cblist_init_generic: Setting shift to 3 and lim to 1.
Feb 20 01:37:35 localhost kernel: cblist_init_generic: Setting shift to 3 and lim to 1.
Feb 20 01:37:35 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 20 01:37:35 localhost kernel: ... version: 0
Feb 20 01:37:35 localhost kernel: ... bit width: 48
Feb 20 01:37:35 localhost kernel: ... generic registers: 6
Feb 20 01:37:35 localhost kernel: ... value mask: 0000ffffffffffff
Feb 20 01:37:35 localhost kernel: ... max period: 00007fffffffffff
Feb 20 01:37:35 localhost kernel: ... fixed-purpose events: 0
Feb 20 01:37:35 localhost kernel: ... event mask: 000000000000003f
Feb 20 01:37:35 localhost kernel: rcu: Hierarchical SRCU implementation.
Feb 20 01:37:35 localhost kernel: rcu: #011Max phase no-delay instances is 400.
Feb 20 01:37:35 localhost kernel: smp: Bringing up secondary CPUs ...
Feb 20 01:37:35 localhost kernel: x86: Booting SMP configuration:
Feb 20 01:37:35 localhost kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7
Feb 20 01:37:35 localhost kernel: smp: Brought up 1 node, 8 CPUs
Feb 20 01:37:35 localhost kernel: smpboot: Max logical packages: 8
Feb 20 01:37:35 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Feb 20 01:37:35 localhost kernel: node 0 deferred pages initialised in 24ms
Feb 20 01:37:35 localhost kernel: devtmpfs: initialized
Feb 20 01:37:35 localhost kernel: x86/mm: Memory block size: 128MB
Feb 20 01:37:35 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 20 01:37:35 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Feb 20 01:37:35 localhost kernel: pinctrl core: initialized pinctrl subsystem
Feb 20 01:37:35 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 20 01:37:35 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL pool for atomic allocations
Feb 20 01:37:35 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 20 01:37:35 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 20 01:37:35 localhost kernel: audit: initializing netlink subsys (disabled)
Feb 20 01:37:35 localhost kernel: audit: type=2000 audit(1771569453.947:1): state=initialized audit_enabled=0 res=1
Feb 20 01:37:35 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Feb 20 01:37:35 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 20 01:37:35 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 20 01:37:35 localhost kernel: cpuidle: using governor menu
Feb 20 01:37:35 localhost kernel: HugeTLB: can optimize 4095 vmemmap pages for hugepages-1048576kB
Feb 20 01:37:35 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 20 01:37:35 localhost kernel: PCI: Using configuration type 1 for base access
Feb 20 01:37:35 localhost kernel: PCI: Using configuration type 1 for extended access
Feb 20 01:37:35 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 20 01:37:35 localhost kernel: HugeTLB: can optimize 7 vmemmap pages for hugepages-2048kB
Feb 20 01:37:35 localhost kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 20 01:37:35 localhost kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 20 01:37:35 localhost kernel: cryptd: max_cpu_qlen set to 1000
Feb 20 01:37:35 localhost kernel: ACPI: Added _OSI(Module Device)
Feb 20 01:37:35 localhost kernel: ACPI: Added _OSI(Processor Device)
Feb 20 01:37:35 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 20 01:37:35 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 20 01:37:35 localhost kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 20 01:37:35 localhost kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 20 01:37:35 localhost kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 20 01:37:35 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 20 01:37:35 localhost kernel: ACPI: Interpreter enabled
Feb 20 01:37:35 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Feb 20 01:37:35 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Feb 20 01:37:35 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 20 01:37:35 localhost kernel: PCI: Using E820 reservations for host bridge windows
Feb 20 01:37:35 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 20 01:37:35 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 20 01:37:35 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Feb 20 01:37:35 localhost kernel: acpiphp: Slot [3] registered
Feb 20 01:37:35 localhost kernel: acpiphp: Slot [4] registered
Feb 20 01:37:35 localhost kernel: acpiphp: Slot [5] registered
Feb 20 01:37:35 localhost kernel: acpiphp: Slot [6] registered
Feb 20 01:37:35 localhost kernel: acpiphp: Slot [7] registered
Feb 20 01:37:35 localhost kernel: acpiphp: Slot [8] registered
Feb 20 01:37:35 localhost kernel: acpiphp: Slot [9] registered
Feb 20 01:37:35 localhost kernel: acpiphp: Slot [10] registered
Feb 20 01:37:35 localhost kernel: acpiphp: Slot [11] registered
Feb 20 01:37:35 localhost kernel: acpiphp: Slot [12] registered
Feb 20 01:37:35 localhost kernel: acpiphp: Slot [13] registered
Feb 20 01:37:35 localhost kernel: acpiphp: Slot [14] registered
Feb 20 01:37:35 localhost kernel: acpiphp: Slot [15] registered
Feb 20 01:37:35 localhost kernel: acpiphp: Slot [16] registered
Feb 20 01:37:35 localhost kernel: acpiphp: Slot [17] registered
Feb 20 01:37:35 localhost kernel: acpiphp: Slot [18] registered
Feb 20 01:37:35 localhost kernel: acpiphp: Slot [19] registered
Feb 20 01:37:35 localhost kernel: acpiphp: Slot [20] registered
Feb 20 01:37:35 localhost kernel: acpiphp: Slot [21] registered
Feb 20 01:37:35 localhost kernel: acpiphp: Slot [22] registered
Feb 20 01:37:35 localhost kernel: acpiphp: Slot [23] registered
Feb 20 01:37:35 localhost kernel: acpiphp: Slot [24] registered
Feb 20 01:37:35 localhost kernel: acpiphp: Slot [25] registered
Feb 20 01:37:35 localhost kernel: acpiphp: Slot [26] registered
Feb 20 01:37:35 localhost kernel: acpiphp: Slot [27] registered
Feb 20 01:37:35 localhost kernel: acpiphp: Slot [28] registered
Feb 20 01:37:35 localhost kernel: acpiphp: Slot [29] registered
Feb 20 01:37:35 localhost kernel: acpiphp: Slot [30] registered
Feb 20 01:37:35 localhost kernel: acpiphp: Slot [31] registered
Feb 20 01:37:35 localhost kernel: PCI host bridge to bus 0000:00
Feb 20 01:37:35 localhost kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 20 01:37:35 localhost kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 20 01:37:35 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 20 01:37:35 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Feb 20 01:37:35 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x440000000-0x4bfffffff window]
Feb 20 01:37:35 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 20 01:37:35 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 20 01:37:35 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 20 01:37:35 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Feb 20 01:37:35 localhost kernel: pci 0000:00:01.1: reg 0x20: [io 0xc140-0xc14f]
Feb 20 01:37:35 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Feb 20 01:37:35 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Feb 20 01:37:35 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Feb 20 01:37:35 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Feb 20 01:37:35 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Feb 20 01:37:35 localhost kernel: pci 0000:00:01.2: reg 0x20: [io 0xc100-0xc11f]
Feb 20 01:37:35 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 20 01:37:35 localhost kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Feb 20 01:37:35 localhost kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Feb 20 01:37:35 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Feb 20 01:37:35 localhost kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Feb 20 01:37:35 localhost kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Feb 20 01:37:35 localhost kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Feb 20 01:37:35 localhost kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Feb 20 01:37:35 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 20 01:37:35 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Feb 20 01:37:35 localhost kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Feb 20 01:37:35 localhost kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Feb 20 01:37:35 localhost kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Feb 20 01:37:35 localhost kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Feb 20 01:37:35 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Feb 20 01:37:35 localhost kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Feb 20 01:37:35 localhost kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Feb 20 01:37:35 localhost kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Feb 20 01:37:35 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Feb 20 01:37:35 localhost kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Feb 20 01:37:35 localhost kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Feb 20 01:37:35 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Feb 20 01:37:35 localhost kernel: pci 0000:00:06.0: reg 0x10: [io 0xc120-0xc13f]
Feb 20 01:37:35 localhost kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Feb 20 01:37:35 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 20 01:37:35 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 20 01:37:35 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 20 01:37:35 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 20 01:37:35 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 20 01:37:35 localhost kernel: iommu: Default domain type: Translated
Feb 20 01:37:35 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 20 01:37:35 localhost kernel: SCSI subsystem initialized
Feb 20 01:37:35 localhost kernel: ACPI: bus type USB registered
Feb 20 01:37:35 localhost kernel: usbcore: registered new interface driver usbfs
Feb 20 01:37:35 localhost kernel: usbcore: registered new interface driver hub
Feb 20 01:37:35 localhost kernel: usbcore: registered new device driver usb
Feb 20 01:37:35 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 20 01:37:35 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 20 01:37:35 localhost kernel: PTP clock support registered
Feb 20 01:37:35 localhost kernel: EDAC MC: Ver: 3.0.0
Feb 20 01:37:35 localhost kernel: NetLabel: Initializing
Feb 20 01:37:35 localhost kernel: NetLabel: domain hash size = 128
Feb 20 01:37:35 localhost kernel: NetLabel: protocols = UNLABELED CIPSOv4 CALIPSO
Feb 20 01:37:35 localhost kernel: NetLabel: unlabeled traffic allowed by default
Feb 20 01:37:35 localhost kernel: PCI: Using ACPI for IRQ routing
Feb 20 01:37:35 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb 20 01:37:35 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb 20 01:37:35 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 20 01:37:35 localhost kernel: vgaarb: loaded
Feb 20 01:37:35 localhost kernel: clocksource: Switched to clocksource kvm-clock
Feb 20 01:37:35 localhost kernel: VFS: Disk quotas dquot_6.6.0
Feb 20 01:37:35 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 20 01:37:35 localhost kernel: pnp: PnP ACPI init
Feb 20 01:37:35 localhost kernel: pnp: PnP ACPI: found 5 devices
Feb 20 01:37:35 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 20 01:37:35 localhost kernel: NET: Registered PF_INET protocol family
Feb 20 01:37:35 localhost kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 20 01:37:35 localhost kernel: tcp_listen_portaddr_hash hash table entries: 8192 (order: 5, 131072 bytes, linear)
Feb 20 01:37:35 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 20 01:37:35 localhost kernel: TCP established hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 20 01:37:35 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Feb 20 01:37:35 localhost kernel: TCP: Hash tables configured (established 131072 bind 65536)
Feb 20 01:37:35 localhost kernel: MPTCP token hash table entries: 16384 (order: 6, 393216 bytes, linear)
Feb 20 01:37:35 localhost kernel: UDP hash table entries: 8192 (order: 6, 262144 bytes, linear)
Feb 20 01:37:35 localhost kernel: UDP-Lite hash table entries: 8192 (order: 6, 262144 bytes, linear)
Feb 20 01:37:35 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 20 01:37:35 localhost kernel: NET: Registered PF_XDP protocol family
Feb 20 01:37:35 localhost kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 20 01:37:35 localhost kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 20 01:37:35 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 20 01:37:35 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Feb 20 01:37:35 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x440000000-0x4bfffffff window]
Feb 20 01:37:35 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb 20 01:37:35 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 20 01:37:35 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 20 01:37:35 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x140 took 38317 usecs
Feb 20 01:37:35 localhost kernel: PCI: CLS 0 bytes, default 64
Feb 20 01:37:35 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 20 01:37:35 localhost kernel: Trying to unpack rootfs image as initramfs...
Feb 20 01:37:35 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Feb 20 01:37:35 localhost kernel: ACPI: bus type thunderbolt registered
Feb 20 01:37:35 localhost kernel: Initialise system trusted keyrings
Feb 20 01:37:35 localhost kernel: Key type blacklist registered
Feb 20 01:37:35 localhost kernel: workingset: timestamp_bits=36 max_order=22 bucket_order=0
Feb 20 01:37:35 localhost kernel: zbud: loaded
Feb 20 01:37:35 localhost kernel: integrity: Platform Keyring initialized
Feb 20 01:37:35 localhost kernel: NET: Registered PF_ALG protocol family
Feb 20 01:37:35 localhost kernel: xor: automatically using best checksumming function avx
Feb 20 01:37:35 localhost kernel: Key type asymmetric registered
Feb 20 01:37:35 localhost kernel: Asymmetric key parser 'x509' registered
Feb 20 01:37:35 localhost kernel: Running certificate verification selftests
Feb 20 01:37:35 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Feb 20 01:37:35 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Feb 20 01:37:35 localhost kernel: io scheduler mq-deadline registered
Feb 20 01:37:35 localhost kernel: io scheduler kyber registered
Feb 20 01:37:35 localhost kernel: io scheduler bfq registered
Feb 20 01:37:35 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Feb 20 01:37:35 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Feb 20 01:37:35 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Feb 20 01:37:35 localhost kernel: ACPI: button: Power Button [PWRF]
Feb 20 01:37:35 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Feb 20 01:37:35 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 20 01:37:35 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 20 01:37:35 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 20 01:37:35 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 20 01:37:35 localhost kernel: Non-volatile memory driver v1.3
Feb 20 01:37:35 localhost kernel: rdac: device handler registered
Feb 20 01:37:35 localhost kernel: hp_sw: device handler registered
Feb 20 01:37:35 localhost kernel: emc: device handler registered
Feb 20 01:37:35 localhost kernel: alua: device handler registered
Feb 20 01:37:35 localhost kernel: libphy: Fixed MDIO Bus: probed
Feb 20 01:37:35 localhost kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
Feb 20 01:37:35 localhost kernel: ehci-pci: EHCI PCI platform driver
Feb 20 01:37:35 localhost kernel: ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
Feb 20 01:37:35 localhost kernel: ohci-pci: OHCI PCI platform driver
Feb 20 01:37:35 localhost kernel: uhci_hcd: USB Universal Host Controller Interface driver
Feb 20 01:37:35 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Feb 20 01:37:35 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Feb 20 01:37:35 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Feb 20 01:37:35 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Feb 20 01:37:35 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Feb 20 01:37:35 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Feb 20 01:37:35 localhost kernel: usb usb1: Product: UHCI Host Controller
Feb 20 01:37:35 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-284.11.1.el9_2.x86_64 uhci_hcd
Feb 20 01:37:35 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Feb 20 01:37:35 localhost kernel: hub 1-0:1.0: USB hub found
Feb 20 01:37:35 localhost kernel: hub 1-0:1.0: 2 ports detected
Feb 20 01:37:35 localhost kernel: usbcore: registered new interface driver usbserial_generic
Feb 20 01:37:35 localhost kernel: usbserial: USB Serial support registered for generic
Feb 20 01:37:35 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 20 01:37:35 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 20 01:37:35 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 20 01:37:35 localhost kernel: mousedev: PS/2 mouse device common for all mice
Feb 20 01:37:35 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Feb 20 01:37:35 localhost kernel: rtc_cmos 00:04: registered as rtc0
Feb 20 01:37:35 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Feb 20 01:37:35 localhost kernel: rtc_cmos 00:04: setting system clock to 2026-02-20T06:37:34 UTC (1771569454)
Feb 20 01:37:35 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Feb 20 01:37:35 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Feb 20 01:37:35 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 20 01:37:35 localhost kernel: usbcore: registered new interface driver usbhid
Feb 20 01:37:35 localhost kernel: usbhid: USB HID core driver
Feb 20 01:37:35 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Feb 20 01:37:35 localhost kernel: drop_monitor: Initializing network drop monitor service
Feb 20 01:37:35 localhost kernel: Initializing XFRM netlink socket
Feb 20 01:37:35 localhost kernel: NET: Registered PF_INET6 protocol family
Feb 20 01:37:35 localhost kernel: Segment Routing with IPv6
Feb 20 01:37:35 localhost kernel: NET: Registered PF_PACKET protocol family
Feb 20 01:37:35 localhost kernel: mpls_gso: MPLS GSO support
Feb 20 01:37:35 localhost kernel: IPI shorthand broadcast: enabled
Feb 20 01:37:35 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Feb 20 01:37:35 localhost kernel: AES CTR mode by8 optimization enabled
Feb 20 01:37:35 localhost kernel: sched_clock: Marking stable (794853258, 186931867)->(1107076276, -125291151)
Feb 20 01:37:35 localhost kernel: registered taskstats version 1
Feb 20 01:37:35 localhost kernel: Loading compiled-in X.509 certificates
Feb 20 01:37:35 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kernel signing key: aaec4b640ef162b54684864066c7d4ffd428cd72'
Feb 20 01:37:35 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Feb 20 01:37:35 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Feb 20 01:37:35 localhost kernel: zswap: loaded using pool lzo/zbud
Feb 20 01:37:35 localhost kernel: page_owner is disabled
Feb 20 01:37:35 localhost kernel: Key type big_key registered
Feb 20 01:37:35 localhost kernel: Freeing initrd memory: 74232K
Feb 20 01:37:35 localhost kernel: Key type encrypted registered
Feb 20 01:37:35 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 20 01:37:35 localhost kernel: Loading compiled-in module X.509 certificates
Feb 20 01:37:35 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kernel signing key: aaec4b640ef162b54684864066c7d4ffd428cd72'
Feb 20 01:37:35 localhost kernel: ima: Allocated hash algorithm: sha256
Feb 20 01:37:35 localhost kernel: ima: No architecture policies found
Feb 20 01:37:35 localhost kernel: evm: Initialising EVM extended attributes:
Feb 20 01:37:35 localhost kernel: evm: security.selinux
Feb 20 01:37:35 localhost kernel: evm: security.SMACK64 (disabled)
Feb 20 01:37:35 localhost kernel: evm: security.SMACK64EXEC (disabled)
Feb 20 01:37:35 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Feb 20 01:37:35 localhost kernel: evm: security.SMACK64MMAP (disabled)
Feb 20 01:37:35 localhost kernel: evm: security.apparmor (disabled)
Feb 20 01:37:35 localhost kernel: evm: security.ima
Feb 20 01:37:35 localhost kernel: evm: security.capability
Feb 20 01:37:35 localhost kernel: evm: HMAC attrs: 0x1
Feb 20 01:37:35 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Feb 20 01:37:35 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Feb 20 01:37:35 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Feb 20 01:37:35 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Feb 20 01:37:35 localhost kernel: usb 1-1: Manufacturer: QEMU
Feb 20 01:37:35 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Feb 20 01:37:35 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Feb 20 01:37:35 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Feb 20 01:37:35 localhost kernel: Freeing unused decrypted memory: 2036K
Feb 20 01:37:35 localhost kernel: Freeing unused kernel image (initmem) memory: 2792K
Feb 20 01:37:35 localhost kernel: Write protecting the kernel read-only data: 26624k
Feb 20 01:37:35 localhost kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 20 01:37:35 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 60K
Feb 20 01:37:35 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Feb 20 01:37:35 localhost kernel: Run /init as init process
Feb 20 01:37:35 localhost systemd[1]: systemd 252-13.el9_2 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 20 01:37:35 localhost systemd[1]: Detected virtualization kvm.
Feb 20 01:37:35 localhost systemd[1]: Detected architecture x86-64.
Feb 20 01:37:35 localhost systemd[1]: Running in initrd.
Feb 20 01:37:35 localhost systemd[1]: No hostname configured, using default hostname.
Feb 20 01:37:35 localhost systemd[1]: Hostname set to .
Feb 20 01:37:35 localhost systemd[1]: Initializing machine ID from VM UUID.
Feb 20 01:37:35 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Feb 20 01:37:35 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Feb 20 01:37:35 localhost systemd[1]: Reached target Local Encrypted Volumes.
Feb 20 01:37:35 localhost systemd[1]: Reached target Initrd /usr File System.
Feb 20 01:37:35 localhost systemd[1]: Reached target Local File Systems.
Feb 20 01:37:35 localhost systemd[1]: Reached target Path Units.
Feb 20 01:37:35 localhost systemd[1]: Reached target Slice Units.
Feb 20 01:37:35 localhost systemd[1]: Reached target Swaps.
Feb 20 01:37:35 localhost systemd[1]: Reached target Timer Units.
Feb 20 01:37:35 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Feb 20 01:37:35 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Feb 20 01:37:35 localhost systemd[1]: Listening on Journal Socket.
Feb 20 01:37:35 localhost systemd[1]: Listening on udev Control Socket.
Feb 20 01:37:35 localhost systemd[1]: Listening on udev Kernel Socket.
Feb 20 01:37:35 localhost systemd[1]: Reached target Socket Units.
Feb 20 01:37:35 localhost systemd[1]: Starting Create List of Static Device Nodes...
Feb 20 01:37:35 localhost systemd[1]: Starting Journal Service...
Feb 20 01:37:35 localhost systemd[1]: Starting Load Kernel Modules...
Feb 20 01:37:35 localhost systemd[1]: Starting Create System Users...
Feb 20 01:37:35 localhost systemd[1]: Starting Setup Virtual Console...
Feb 20 01:37:35 localhost systemd[1]: Finished Create List of Static Device Nodes.
Feb 20 01:37:35 localhost systemd[1]: Finished Load Kernel Modules.
Feb 20 01:37:35 localhost systemd-journald[284]: Journal started
Feb 20 01:37:35 localhost systemd-journald[284]: Runtime Journal (/run/log/journal/61530aa3629540fa9f19edfd227b2bca) is 8.0M, max 314.7M, 306.7M free.
Feb 20 01:37:35 localhost systemd-modules-load[285]: Module 'msr' is built in
Feb 20 01:37:35 localhost systemd[1]: Started Journal Service.
Feb 20 01:37:35 localhost systemd[1]: Finished Setup Virtual Console.
Feb 20 01:37:35 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Feb 20 01:37:35 localhost systemd[1]: Starting dracut cmdline hook...
Feb 20 01:37:35 localhost systemd[1]: Starting Apply Kernel Variables...
Feb 20 01:37:35 localhost systemd-sysusers[286]: Creating group 'sgx' with GID 997.
Feb 20 01:37:35 localhost systemd-sysusers[286]: Creating group 'users' with GID 100.
Feb 20 01:37:35 localhost systemd-sysusers[286]: Creating group 'dbus' with GID 81.
Feb 20 01:37:35 localhost systemd-sysusers[286]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Feb 20 01:37:35 localhost systemd[1]: Finished Apply Kernel Variables.
Feb 20 01:37:35 localhost systemd[1]: Finished Create System Users.
Feb 20 01:37:35 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Feb 20 01:37:35 localhost systemd[1]: Starting Create Volatile Files and Directories...
Feb 20 01:37:35 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Feb 20 01:37:35 localhost dracut-cmdline[289]: dracut-9.2 (Plow) dracut-057-21.git20230214.el9
Feb 20 01:37:35 localhost systemd[1]: Finished Create Volatile Files and Directories.
Feb 20 01:37:35 localhost dracut-cmdline[289]: Using kernel command line parameters: BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64 root=UUID=a3dd82de-ffc6-4652-88b9-80e003b8f20a console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
Feb 20 01:37:35 localhost systemd[1]: Finished dracut cmdline hook.
Feb 20 01:37:35 localhost systemd[1]: Starting dracut pre-udev hook...
Feb 20 01:37:35 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 20 01:37:35 localhost kernel: device-mapper: uevent: version 1.0.3
Feb 20 01:37:35 localhost kernel: device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com
Feb 20 01:37:35 localhost kernel: RPC: Registered named UNIX socket transport module.
Feb 20 01:37:35 localhost kernel: RPC: Registered udp transport module.
Feb 20 01:37:35 localhost kernel: RPC: Registered tcp transport module.
Feb 20 01:37:35 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb 20 01:37:35 localhost rpc.statd[407]: Version 2.5.4 starting
Feb 20 01:37:35 localhost rpc.statd[407]: Initializing NSM state
Feb 20 01:37:35 localhost rpc.idmapd[412]: Setting log level to 0
Feb 20 01:37:35 localhost systemd[1]: Finished dracut pre-udev hook.
Feb 20 01:37:35 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Feb 20 01:37:35 localhost systemd-udevd[425]: Using default interface naming scheme 'rhel-9.0'.
Feb 20 01:37:35 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Feb 20 01:37:35 localhost systemd[1]: Starting dracut pre-trigger hook...
Feb 20 01:37:35 localhost systemd[1]: Finished dracut pre-trigger hook.
Feb 20 01:37:35 localhost systemd[1]: Starting Coldplug All udev Devices...
Feb 20 01:37:35 localhost systemd[1]: Finished Coldplug All udev Devices.
Feb 20 01:37:35 localhost systemd[1]: Reached target System Initialization.
Feb 20 01:37:35 localhost systemd[1]: Reached target Basic System.
Feb 20 01:37:35 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Feb 20 01:37:35 localhost systemd[1]: Reached target Network.
Feb 20 01:37:35 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Feb 20 01:37:35 localhost systemd[1]: Starting dracut initqueue hook...
Feb 20 01:37:35 localhost kernel: virtio_blk virtio2: [vda] 838860800 512-byte logical blocks (429 GB/400 GiB)
Feb 20 01:37:36 localhost kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 20 01:37:36 localhost kernel: GPT:20971519 != 838860799
Feb 20 01:37:36 localhost kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 20 01:37:36 localhost kernel: GPT:20971519 != 838860799
Feb 20 01:37:36 localhost kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 20 01:37:36 localhost kernel: vda: vda1 vda2 vda3 vda4
Feb 20 01:37:36 localhost kernel: scsi host0: ata_piix
Feb 20 01:37:36 localhost systemd-udevd[463]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 01:37:36 localhost kernel: scsi host1: ata_piix
Feb 20 01:37:36 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14
Feb 20 01:37:36 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15
Feb 20 01:37:36 localhost systemd[1]: Found device /dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a.
Feb 20 01:37:36 localhost systemd[1]: Reached target Initrd Root Device.
Feb 20 01:37:36 localhost kernel: ata1: found unknown device (class 0)
Feb 20 01:37:36 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Feb 20 01:37:36 localhost kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Feb 20 01:37:36 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Feb 20 01:37:36 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Feb 20 01:37:36 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 20 01:37:36 localhost systemd[1]: Finished dracut initqueue hook.
Feb 20 01:37:36 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Feb 20 01:37:36 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Feb 20 01:37:36 localhost systemd[1]: Reached target Remote File Systems.
Feb 20 01:37:36 localhost systemd[1]: Starting dracut pre-mount hook...
Feb 20 01:37:36 localhost systemd[1]: Finished dracut pre-mount hook.
Feb 20 01:37:36 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a...
Feb 20 01:37:36 localhost systemd-fsck[512]: /usr/sbin/fsck.xfs: XFS file system.
Feb 20 01:37:36 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a.
Feb 20 01:37:36 localhost systemd[1]: Mounting /sysroot...
Feb 20 01:37:36 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Feb 20 01:37:36 localhost kernel: XFS (vda4): Mounting V5 Filesystem
Feb 20 01:37:36 localhost kernel: XFS (vda4): Ending clean mount
Feb 20 01:37:36 localhost systemd[1]: Mounted /sysroot.
Feb 20 01:37:36 localhost systemd[1]: Reached target Initrd Root File System.
Feb 20 01:37:36 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Feb 20 01:37:36 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 20 01:37:36 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Feb 20 01:37:36 localhost systemd[1]: Reached target Initrd File Systems.
Feb 20 01:37:36 localhost systemd[1]: Reached target Initrd Default Target.
Feb 20 01:37:36 localhost systemd[1]: Starting dracut mount hook...
Feb 20 01:37:36 localhost systemd[1]: Finished dracut mount hook.
Feb 20 01:37:36 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Feb 20 01:37:36 localhost rpc.idmapd[412]: exiting on signal 15
Feb 20 01:37:36 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Feb 20 01:37:36 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Feb 20 01:37:36 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Feb 20 01:37:36 localhost systemd[1]: Stopped target Network.
Feb 20 01:37:36 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Feb 20 01:37:36 localhost systemd[1]: Stopped target Timer Units.
Feb 20 01:37:36 localhost systemd[1]: dbus.socket: Deactivated successfully.
Feb 20 01:37:36 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Feb 20 01:37:36 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 20 01:37:36 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Feb 20 01:37:36 localhost systemd[1]: Stopped target Initrd Default Target.
Feb 20 01:37:36 localhost systemd[1]: Stopped target Basic System.
Feb 20 01:37:36 localhost systemd[1]: Stopped target Initrd Root Device.
Feb 20 01:37:36 localhost systemd[1]: Stopped target Initrd /usr File System.
Feb 20 01:37:36 localhost systemd[1]: Stopped target Path Units.
Feb 20 01:37:36 localhost systemd[1]: Stopped target Remote File Systems.
Feb 20 01:37:36 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Feb 20 01:37:36 localhost systemd[1]: Stopped target Slice Units.
Feb 20 01:37:36 localhost systemd[1]: Stopped target Socket Units.
Feb 20 01:37:36 localhost systemd[1]: Stopped target System Initialization.
Feb 20 01:37:36 localhost systemd[1]: Stopped target Local File Systems.
Feb 20 01:37:36 localhost systemd[1]: Stopped target Swaps.
Feb 20 01:37:36 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Feb 20 01:37:36 localhost systemd[1]: Stopped dracut mount hook.
Feb 20 01:37:36 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 20 01:37:36 localhost systemd[1]: Stopped dracut pre-mount hook.
Feb 20 01:37:36 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Feb 20 01:37:36 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 20 01:37:36 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Feb 20 01:37:36 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 20 01:37:36 localhost systemd[1]: Stopped dracut initqueue hook.
Feb 20 01:37:37 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 20 01:37:37 localhost systemd[1]: Stopped Apply Kernel Variables.
Feb 20 01:37:37 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 20 01:37:37 localhost systemd[1]: Stopped Load Kernel Modules.
Feb 20 01:37:37 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 20 01:37:37 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Feb 20 01:37:37 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 20 01:37:37 localhost systemd[1]: Stopped Coldplug All udev Devices.
Feb 20 01:37:37 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 20 01:37:37 localhost systemd[1]: Stopped dracut pre-trigger hook.
Feb 20 01:37:37 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Feb 20 01:37:37 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 20 01:37:37 localhost systemd[1]: Stopped Setup Virtual Console.
Feb 20 01:37:37 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Feb 20 01:37:37 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 20 01:37:37 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 20 01:37:37 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Feb 20 01:37:37 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 20 01:37:37 localhost systemd[1]: Closed udev Control Socket.
Feb 20 01:37:37 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 20 01:37:37 localhost systemd[1]: Closed udev Kernel Socket.
Feb 20 01:37:37 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 20 01:37:37 localhost systemd[1]: Stopped dracut pre-udev hook.
Feb 20 01:37:37 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 20 01:37:37 localhost systemd[1]: Stopped dracut cmdline hook.
Feb 20 01:37:37 localhost systemd[1]: Starting Cleanup udev Database...
Feb 20 01:37:37 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 20 01:37:37 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Feb 20 01:37:37 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 20 01:37:37 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Feb 20 01:37:37 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Feb 20 01:37:37 localhost systemd[1]: Stopped Create System Users.
Feb 20 01:37:37 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 20 01:37:37 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Feb 20 01:37:37 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 20 01:37:37 localhost systemd[1]: Finished Cleanup udev Database.
Feb 20 01:37:37 localhost systemd[1]: Reached target Switch Root.
Feb 20 01:37:37 localhost systemd[1]: Starting Switch Root...
Feb 20 01:37:37 localhost systemd[1]: Switching root.
Feb 20 01:37:37 localhost systemd-journald[284]: Journal stopped
Feb 20 01:37:38 localhost systemd-journald[284]: Received SIGTERM from PID 1 (systemd).
Feb 20 01:37:38 localhost kernel: audit: type=1404 audit(1771569457.223:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Feb 20 01:37:38 localhost kernel: SELinux: policy capability network_peer_controls=1
Feb 20 01:37:38 localhost kernel: SELinux: policy capability open_perms=1
Feb 20 01:37:38 localhost kernel: SELinux: policy capability extended_socket_class=1
Feb 20 01:37:38 localhost kernel: SELinux: policy capability always_check_network=0
Feb 20 01:37:38 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Feb 20 01:37:38 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 20 01:37:38 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Feb 20 01:37:38 localhost kernel: audit: type=1403 audit(1771569457.339:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 20 01:37:38 localhost systemd[1]: Successfully loaded SELinux policy in 119.308ms.
Feb 20 01:37:38 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 33.835ms.
Feb 20 01:37:38 localhost systemd[1]: systemd 252-13.el9_2 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 20 01:37:38 localhost systemd[1]: Detected virtualization kvm.
Feb 20 01:37:38 localhost systemd[1]: Detected architecture x86-64.
Feb 20 01:37:38 localhost systemd-rc-local-generator[582]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 01:37:38 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 01:37:38 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 20 01:37:38 localhost systemd[1]: Stopped Switch Root.
Feb 20 01:37:38 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 20 01:37:38 localhost systemd[1]: Created slice Slice /system/getty.
Feb 20 01:37:38 localhost systemd[1]: Created slice Slice /system/modprobe.
Feb 20 01:37:38 localhost systemd[1]: Created slice Slice /system/serial-getty.
Feb 20 01:37:38 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Feb 20 01:37:38 localhost systemd[1]: Created slice Slice /system/systemd-fsck.
Feb 20 01:37:38 localhost systemd[1]: Created slice User and Session Slice.
Feb 20 01:37:38 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Feb 20 01:37:38 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Feb 20 01:37:38 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Feb 20 01:37:38 localhost systemd[1]: Reached target Local Encrypted Volumes.
Feb 20 01:37:38 localhost systemd[1]: Stopped target Switch Root.
Feb 20 01:37:38 localhost systemd[1]: Stopped target Initrd File Systems.
Feb 20 01:37:38 localhost systemd[1]: Stopped target Initrd Root File System.
Feb 20 01:37:38 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Feb 20 01:37:38 localhost systemd[1]: Reached target Path Units.
Feb 20 01:37:38 localhost systemd[1]: Reached target rpc_pipefs.target.
Feb 20 01:37:38 localhost systemd[1]: Reached target Slice Units.
Feb 20 01:37:38 localhost systemd[1]: Reached target Swaps.
Feb 20 01:37:38 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Feb 20 01:37:38 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Feb 20 01:37:38 localhost systemd[1]: Reached target RPC Port Mapper.
Feb 20 01:37:38 localhost systemd[1]: Listening on Process Core Dump Socket.
Feb 20 01:37:38 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Feb 20 01:37:38 localhost systemd[1]: Listening on udev Control Socket.
Feb 20 01:37:38 localhost systemd[1]: Listening on udev Kernel Socket.
Feb 20 01:37:38 localhost systemd[1]: Mounting Huge Pages File System...
Feb 20 01:37:38 localhost systemd[1]: Mounting POSIX Message Queue File System...
Feb 20 01:37:38 localhost systemd[1]: Mounting Kernel Debug File System...
Feb 20 01:37:38 localhost systemd[1]: Mounting Kernel Trace File System...
Feb 20 01:37:38 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Feb 20 01:37:38 localhost systemd[1]: Starting Create List of Static Device Nodes...
Feb 20 01:37:38 localhost systemd[1]: Starting Load Kernel Module configfs...
Feb 20 01:37:38 localhost systemd[1]: Starting Load Kernel Module drm...
Feb 20 01:37:38 localhost systemd[1]: Starting Load Kernel Module fuse...
Feb 20 01:37:38 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Feb 20 01:37:38 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 20 01:37:38 localhost systemd[1]: Stopped File System Check on Root Device.
Feb 20 01:37:38 localhost systemd[1]: Stopped Journal Service.
Feb 20 01:37:38 localhost kernel: fuse: init (API version 7.36)
Feb 20 01:37:38 localhost systemd[1]: Starting Journal Service...
Feb 20 01:37:38 localhost systemd[1]: Starting Load Kernel Modules...
Feb 20 01:37:38 localhost systemd[1]: Starting Generate network units from Kernel command line...
Feb 20 01:37:38 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Feb 20 01:37:38 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 20 01:37:38 localhost systemd[1]: Starting Coldplug All udev Devices...
Feb 20 01:37:38 localhost systemd-journald[618]: Journal started
Feb 20 01:37:38 localhost systemd-journald[618]: Runtime Journal (/run/log/journal/01f46965e72fd8a157841feaa66c8d52) is 8.0M, max 314.7M, 306.7M free.
Feb 20 01:37:38 localhost systemd[1]: Queued start job for default target Multi-User System.
Feb 20 01:37:38 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 20 01:37:38 localhost systemd-modules-load[619]: Module 'msr' is built in
Feb 20 01:37:38 localhost kernel: ACPI: bus type drm_connector registered
Feb 20 01:37:38 localhost systemd[1]: Started Journal Service.
Feb 20 01:37:38 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Feb 20 01:37:38 localhost systemd[1]: Mounted Huge Pages File System.
Feb 20 01:37:38 localhost systemd[1]: Mounted POSIX Message Queue File System.
Feb 20 01:37:38 localhost systemd[1]: Mounted Kernel Debug File System.
Feb 20 01:37:38 localhost systemd[1]: Mounted Kernel Trace File System.
Feb 20 01:37:38 localhost systemd[1]: Finished Create List of Static Device Nodes.
Feb 20 01:37:38 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 20 01:37:38 localhost systemd[1]: Finished Load Kernel Module configfs.
Feb 20 01:37:38 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 20 01:37:38 localhost systemd[1]: Finished Load Kernel Module drm.
Feb 20 01:37:38 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 20 01:37:38 localhost systemd[1]: Finished Load Kernel Module fuse.
Feb 20 01:37:38 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Feb 20 01:37:38 localhost systemd[1]: Finished Load Kernel Modules.
Feb 20 01:37:38 localhost systemd[1]: Finished Generate network units from Kernel command line.
Feb 20 01:37:38 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Feb 20 01:37:38 localhost systemd[1]: Mounting FUSE Control File System...
Feb 20 01:37:38 localhost systemd[1]: Mounting Kernel Configuration File System...
Feb 20 01:37:38 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Feb 20 01:37:38 localhost systemd[1]: Starting Rebuild Hardware Database...
Feb 20 01:37:38 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Feb 20 01:37:38 localhost systemd[1]: Starting Load/Save Random Seed...
Feb 20 01:37:38 localhost systemd[1]: Starting Apply Kernel Variables...
Feb 20 01:37:38 localhost systemd[1]: Starting Create System Users...
Feb 20 01:37:38 localhost systemd-journald[618]: Runtime Journal (/run/log/journal/01f46965e72fd8a157841feaa66c8d52) is 8.0M, max 314.7M, 306.7M free.
Feb 20 01:37:38 localhost systemd-journald[618]: Received client request to flush runtime journal.
Feb 20 01:37:38 localhost systemd[1]: Mounted FUSE Control File System.
Feb 20 01:37:38 localhost systemd[1]: Mounted Kernel Configuration File System.
Feb 20 01:37:38 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Feb 20 01:37:38 localhost systemd[1]: Finished Apply Kernel Variables.
Feb 20 01:37:38 localhost systemd[1]: Finished Load/Save Random Seed.
Feb 20 01:37:38 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Feb 20 01:37:38 localhost systemd-sysusers[631]: Creating group 'sgx' with GID 989.
Feb 20 01:37:38 localhost systemd-sysusers[631]: Creating group 'systemd-oom' with GID 988.
Feb 20 01:37:38 localhost systemd-sysusers[631]: Creating user 'systemd-oom' (systemd Userspace OOM Killer) with UID 988 and GID 988.
Feb 20 01:37:38 localhost systemd[1]: Finished Coldplug All udev Devices.
Feb 20 01:37:38 localhost systemd[1]: Finished Create System Users.
Feb 20 01:37:38 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Feb 20 01:37:38 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Feb 20 01:37:38 localhost systemd[1]: Reached target Preparation for Local File Systems.
Feb 20 01:37:38 localhost systemd[1]: Set up automount EFI System Partition Automount.
Feb 20 01:37:38 localhost systemd[1]: Finished Rebuild Hardware Database.
Feb 20 01:37:38 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Feb 20 01:37:38 localhost systemd-udevd[635]: Using default interface naming scheme 'rhel-9.0'.
Feb 20 01:37:38 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Feb 20 01:37:38 localhost systemd[1]: Starting Load Kernel Module configfs...
Feb 20 01:37:38 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 20 01:37:38 localhost systemd[1]: Finished Load Kernel Module configfs.
Feb 20 01:37:38 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Feb 20 01:37:38 localhost systemd-udevd[636]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 01:37:38 localhost systemd[1]: Condition check resulted in /dev/disk/by-uuid/b141154b-6a70-437a-a97f-d160c9ba37eb being skipped.
Feb 20 01:37:38 localhost systemd[1]: Condition check resulted in /dev/disk/by-uuid/7B77-95E7 being skipped.
Feb 20 01:37:38 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/7B77-95E7...
Feb 20 01:37:38 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Feb 20 01:37:38 localhost systemd-fsck[678]: fsck.fat 4.2 (2021-01-31)
Feb 20 01:37:38 localhost systemd-fsck[678]: /dev/vda2: 12 files, 1782/51145 clusters
Feb 20 01:37:38 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Feb 20 01:37:38 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/7B77-95E7.
Feb 20 01:37:38 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Feb 20 01:37:38 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Feb 20 01:37:38 localhost kernel: SVM: TSC scaling supported
Feb 20 01:37:38 localhost kernel: kvm: Nested Virtualization enabled
Feb 20 01:37:38 localhost kernel: SVM: kvm: Nested Paging enabled
Feb 20 01:37:38 localhost kernel: SVM: LBR virtualization supported
Feb 20 01:37:38 localhost kernel: Console: switching to colour dummy device 80x25
Feb 20 01:37:38 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Feb 20 01:37:38 localhost kernel: [drm] features: -context_init
Feb 20 01:37:38 localhost kernel: [drm] number of scanouts: 1
Feb 20 01:37:38 localhost kernel: [drm] number of cap sets: 0
Feb 20 01:37:38 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 0 for virtio0 on minor 0
Feb 20 01:37:38 localhost kernel: virtio_gpu virtio0: [drm] drm_plane_enable_fb_damage_clips() not called
Feb 20 01:37:38 localhost kernel: Console: switching to colour frame buffer device 128x48
Feb 20 01:37:38 localhost kernel: virtio_gpu virtio0: [drm] fb0: virtio_gpudrmfb frame buffer device
Feb 20 01:37:39 localhost systemd[1]: Mounting /boot...
Feb 20 01:37:39 localhost kernel: XFS (vda3): Mounting V5 Filesystem
Feb 20 01:37:39 localhost kernel: XFS (vda3): Ending clean mount
Feb 20 01:37:39 localhost kernel: xfs filesystem being mounted at /boot supports timestamps until 2038 (0x7fffffff)
Feb 20 01:37:39 localhost systemd[1]: Mounted /boot.
Feb 20 01:37:39 localhost systemd[1]: Mounting /boot/efi...
Feb 20 01:37:39 localhost systemd[1]: Mounted /boot/efi.
Feb 20 01:37:39 localhost systemd[1]: Reached target Local File Systems.
Feb 20 01:37:39 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Feb 20 01:37:39 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Feb 20 01:37:39 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 20 01:37:39 localhost systemd[1]: Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 20 01:37:39 localhost systemd[1]: Starting Automatic Boot Loader Update...
Feb 20 01:37:39 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Feb 20 01:37:39 localhost systemd[1]: Starting Create Volatile Files and Directories...
Feb 20 01:37:39 localhost systemd[1]: efi.automount: Got automount request for /efi, triggered by 717 (bootctl)
Feb 20 01:37:39 localhost systemd[1]: Starting File System Check on /dev/vda2...
Feb 20 01:37:39 localhost systemd[1]: Finished File System Check on /dev/vda2.
Feb 20 01:37:39 localhost systemd[1]: Mounting EFI System Partition Automount...
Feb 20 01:37:39 localhost systemd[1]: Mounted EFI System Partition Automount.
Feb 20 01:37:39 localhost systemd[1]: Finished Automatic Boot Loader Update.
Feb 20 01:37:39 localhost systemd[1]: Finished Create Volatile Files and Directories.
Feb 20 01:37:39 localhost systemd[1]: Starting Security Auditing Service...
Feb 20 01:37:39 localhost systemd[1]: Starting RPC Bind...
Feb 20 01:37:39 localhost systemd[1]: Starting Rebuild Journal Catalog...
Feb 20 01:37:39 localhost auditd[726]: audit dispatcher initialized with q_depth=1200 and 1 active plugins
Feb 20 01:37:39 localhost auditd[726]: Init complete, auditd 3.0.7 listening for events (startup state enable)
Feb 20 01:37:39 localhost systemd[1]: Finished Rebuild Journal Catalog.
Feb 20 01:37:39 localhost systemd[1]: Started RPC Bind.
Feb 20 01:37:39 localhost augenrules[731]: /sbin/augenrules: No change
Feb 20 01:37:39 localhost augenrules[741]: No rules
Feb 20 01:37:39 localhost augenrules[741]: enabled 1
Feb 20 01:37:39 localhost augenrules[741]: failure 1
Feb 20 01:37:39 localhost augenrules[741]: pid 726
Feb 20 01:37:39 localhost augenrules[741]: rate_limit 0
Feb 20 01:37:39 localhost augenrules[741]: backlog_limit 8192
Feb 20 01:37:39 localhost augenrules[741]: lost 0
Feb 20 01:37:39 localhost augenrules[741]: backlog 4
Feb 20 01:37:39 localhost augenrules[741]: backlog_wait_time 60000
Feb 20 01:37:39 localhost augenrules[741]: backlog_wait_time_actual 0
Feb 20 01:37:39 localhost augenrules[741]: enabled 1
Feb 20 01:37:39 localhost augenrules[741]: failure 1
Feb 20 01:37:39 localhost augenrules[741]: pid 726
Feb 20 01:37:39 localhost augenrules[741]: rate_limit 0
Feb 20 01:37:39 localhost augenrules[741]: backlog_limit 8192
Feb 20 01:37:39 localhost augenrules[741]: lost 0
Feb 20 01:37:39 localhost augenrules[741]: backlog 4
Feb 20 01:37:39 localhost augenrules[741]: backlog_wait_time 60000
Feb 20 01:37:39 localhost augenrules[741]: backlog_wait_time_actual 0
Feb 20 01:37:39 localhost augenrules[741]: enabled 1
Feb 20 01:37:39 localhost augenrules[741]: failure 1
Feb 20 01:37:39 localhost augenrules[741]: pid 726
Feb 20 01:37:39 localhost augenrules[741]: rate_limit 0
Feb 20 01:37:39 localhost augenrules[741]: backlog_limit 8192
Feb 20 01:37:39 localhost augenrules[741]: lost 0
Feb 20 01:37:39 localhost augenrules[741]: backlog 4
Feb 20 01:37:39 localhost augenrules[741]: backlog_wait_time 60000
Feb 20 01:37:39 localhost augenrules[741]: backlog_wait_time_actual 0
Feb 20 01:37:39 localhost systemd[1]: Started Security Auditing Service.
Feb 20 01:37:39 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Feb 20 01:37:39 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Feb 20 01:37:39 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Feb 20 01:37:39 localhost systemd[1]: Starting Update is Completed...
Feb 20 01:37:39 localhost systemd[1]: Finished Update is Completed.
Feb 20 01:37:39 localhost systemd[1]: Reached target System Initialization.
Feb 20 01:37:39 localhost systemd[1]: Started dnf makecache --timer.
Feb 20 01:37:39 localhost systemd[1]: Started Daily rotation of log files.
Feb 20 01:37:39 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Feb 20 01:37:39 localhost systemd[1]: Reached target Timer Units.
Feb 20 01:37:39 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Feb 20 01:37:39 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Feb 20 01:37:39 localhost systemd[1]: Reached target Socket Units.
Feb 20 01:37:39 localhost systemd[1]: Starting Initial cloud-init job (pre-networking)...
Feb 20 01:37:39 localhost systemd[1]: Starting D-Bus System Message Bus...
Feb 20 01:37:39 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 20 01:37:39 localhost systemd[1]: Started D-Bus System Message Bus.
Feb 20 01:37:39 localhost systemd[1]: Reached target Basic System.
Feb 20 01:37:39 localhost journal[751]: Ready
Feb 20 01:37:39 localhost systemd[1]: Starting NTP client/server...
Feb 20 01:37:39 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Feb 20 01:37:39 localhost systemd[1]: Started irqbalance daemon.
Feb 20 01:37:39 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Feb 20 01:37:39 localhost systemd[1]: Starting System Logging Service...
Feb 20 01:37:39 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 20 01:37:39 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 20 01:37:39 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 20 01:37:39 localhost systemd[1]: Reached target sshd-keygen.target.
Feb 20 01:37:39 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Feb 20 01:37:39 localhost systemd[1]: Reached target User and Group Name Lookups.
Feb 20 01:37:39 localhost systemd[1]: Starting User Login Management...
Feb 20 01:37:39 localhost rsyslogd[759]: [origin software="rsyslogd" swVersion="8.2102.0-111.el9" x-pid="759" x-info="https://www.rsyslog.com"] start
Feb 20 01:37:39 localhost rsyslogd[759]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2102.0-111.el9 try https://www.rsyslog.com/e/2040 ]
Feb 20 01:37:39 localhost systemd[1]: Started System Logging Service.
Feb 20 01:37:39 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Feb 20 01:37:39 localhost chronyd[766]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Feb 20 01:37:39 localhost chronyd[766]: Using right/UTC timezone to obtain leap second data
Feb 20 01:37:39 localhost chronyd[766]: Loaded seccomp filter (level 2)
Feb 20 01:37:39 localhost systemd[1]: Started NTP client/server.
Feb 20 01:37:39 localhost systemd-logind[760]: New seat seat0.
Feb 20 01:37:39 localhost systemd-logind[760]: Watching system buttons on /dev/input/event0 (Power Button)
Feb 20 01:37:39 localhost systemd-logind[760]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Feb 20 01:37:39 localhost systemd[1]: Started User Login Management.
Feb 20 01:37:39 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Feb 20 01:37:40 localhost cloud-init[770]: Cloud-init v. 22.1-9.el9 running 'init-local' at Fri, 20 Feb 2026 06:37:40 +0000. Up 6.43 seconds.
Feb 20 01:37:40 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpk3n2dqg9.mount: Deactivated successfully.
Feb 20 01:37:40 localhost systemd[1]: Starting Hostname Service...
Feb 20 01:37:40 localhost systemd[1]: Started Hostname Service.
Feb 20 01:37:40 localhost systemd-hostnamed[784]: Hostname set to (static)
Feb 20 01:37:40 localhost systemd[1]: Finished Initial cloud-init job (pre-networking).
Feb 20 01:37:40 localhost systemd[1]: Reached target Preparation for Network.
Feb 20 01:37:40 localhost systemd[1]: Starting Network Manager...
Feb 20 01:37:40 localhost NetworkManager[789]: [1771569460.7803] NetworkManager (version 1.42.2-1.el9) is starting... (boot:7afa3bbe-1ce0-42be-9364-f30e30933a2e)
Feb 20 01:37:40 localhost NetworkManager[789]: [1771569460.7811] Read config: /etc/NetworkManager/NetworkManager.conf (run: 15-carrier-timeout.conf)
Feb 20 01:37:40 localhost systemd[1]: Started Network Manager.
Feb 20 01:37:40 localhost NetworkManager[789]: [1771569460.7860] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Feb 20 01:37:40 localhost systemd[1]: Reached target Network.
Feb 20 01:37:40 localhost systemd[1]: Starting Network Manager Wait Online...
Feb 20 01:37:40 localhost NetworkManager[789]: [1771569460.7949] manager[0x557e3200d020]: monitoring kernel firmware directory '/lib/firmware'.
Feb 20 01:37:40 localhost systemd[1]: Starting GSSAPI Proxy Daemon...
Feb 20 01:37:40 localhost NetworkManager[789]: [1771569460.7998] hostname: hostname: using hostnamed
Feb 20 01:37:40 localhost NetworkManager[789]: [1771569460.7999] hostname: static hostname changed from (none) to "np0005625202.novalocal"
Feb 20 01:37:40 localhost NetworkManager[789]: [1771569460.8014] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Feb 20 01:37:40 localhost systemd[1]: Starting Enable periodic update of entitlement certificates....
Feb 20 01:37:40 localhost systemd[1]: Starting Dynamic System Tuning Daemon...
Feb 20 01:37:40 localhost NetworkManager[789]: [1771569460.8161] manager[0x557e3200d020]: rfkill: Wi-Fi hardware radio set enabled
Feb 20 01:37:40 localhost NetworkManager[789]: [1771569460.8162] manager[0x557e3200d020]: rfkill: WWAN hardware radio set enabled
Feb 20 01:37:40 localhost systemd[1]: Started Enable periodic update of entitlement certificates..
Feb 20 01:37:40 localhost NetworkManager[789]: [1771569460.8251] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-device-plugin-team.so)
Feb 20 01:37:40 localhost NetworkManager[789]: [1771569460.8252] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Feb 20 01:37:40 localhost NetworkManager[789]: [1771569460.8266] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Feb 20 01:37:40 localhost NetworkManager[789]: [1771569460.8268] manager: Networking is enabled by state file
Feb 20 01:37:40 localhost systemd[1]: Started GSSAPI Proxy Daemon.
Feb 20 01:37:40 localhost NetworkManager[789]: [1771569460.8321] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Feb 20 01:37:40 localhost NetworkManager[789]: [1771569460.8321] settings: Loaded settings plugin: keyfile (internal)
Feb 20 01:37:40 localhost systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Feb 20 01:37:40 localhost NetworkManager[789]: [1771569460.8373] dhcp: init: Using DHCP client 'internal'
Feb 20 01:37:40 localhost NetworkManager[789]: [1771569460.8377] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Feb 20 01:37:40 localhost NetworkManager[789]: [1771569460.8399] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external')
Feb 20 01:37:40 localhost NetworkManager[789]: [1771569460.8408] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external')
Feb 20 01:37:40 localhost systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 20 01:37:40 localhost NetworkManager[789]: [1771569460.8423] device (lo): Activation: starting connection 'lo' (e7a1daed-a466-41d4-a4a1-4e0a3e9eeee2)
Feb 20 01:37:40 localhost NetworkManager[789]: [1771569460.8437] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Feb 20 01:37:40 localhost NetworkManager[789]: [1771569460.8443] device (eth0): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
Feb 20 01:37:40 localhost systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Feb 20 01:37:40 localhost systemd[1]: Reached target NFS client services.
Feb 20 01:37:40 localhost NetworkManager[789]: [1771569460.8497] device (lo): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external')
Feb 20 01:37:40 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Feb 20 01:37:40 localhost NetworkManager[789]: [1771569460.8501] device (lo): state change: prepare -> config (reason 'none', sys-iface-state: 'external')
Feb 20 01:37:40 localhost NetworkManager[789]: [1771569460.8505] device (lo): state change: config -> ip-config (reason 'none', sys-iface-state: 'external')
Feb 20 01:37:40 localhost NetworkManager[789]: [1771569460.8508] device (eth0): carrier: link connected
Feb 20 01:37:40 localhost NetworkManager[789]: [1771569460.8512] device (lo): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external')
Feb 20 01:37:40 localhost NetworkManager[789]: [1771569460.8520] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', sys-iface-state: 'managed')
Feb 20 01:37:40 localhost systemd[1]: Reached target Remote File Systems.
Feb 20 01:37:40 localhost NetworkManager[789]: [1771569460.8537] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Feb 20 01:37:40 localhost NetworkManager[789]: [1771569460.8545] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Feb 20 01:37:40 localhost NetworkManager[789]: [1771569460.8546] device (eth0): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Feb 20 01:37:40 localhost NetworkManager[789]: [1771569460.8550] manager: NetworkManager state is now CONNECTING
Feb 20 01:37:40 localhost NetworkManager[789]: [1771569460.8553] device (eth0): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Feb 20 01:37:40 localhost systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 20 01:37:40 localhost systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 20 01:37:40 localhost NetworkManager[789]: [1771569460.8640] device (eth0): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Feb 20 01:37:40 localhost NetworkManager[789]: [1771569460.8644] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb 20 01:37:40 localhost NetworkManager[789]: [1771569460.8652] device (lo): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external')
Feb 20 01:37:40 localhost NetworkManager[789]: [1771569460.8663] device (lo): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external')
Feb 20 01:37:40 localhost NetworkManager[789]: [1771569460.8669] device (lo): Activation: successful, device activated.
Feb 20 01:37:40 localhost NetworkManager[789]: [1771569460.8740] dhcp4 (eth0): state changed new lease, address=38.102.83.159
Feb 20 01:37:40 localhost NetworkManager[789]: [1771569460.8744] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Feb 20 01:37:40 localhost NetworkManager[789]: [1771569460.8769] device (eth0): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed')
Feb 20 01:37:40 localhost NetworkManager[789]: [1771569460.8790] device (eth0): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed')
Feb 20 01:37:40 localhost NetworkManager[789]: [1771569460.8792] device (eth0): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed')
Feb 20 01:37:40 localhost NetworkManager[789]: [1771569460.8796] manager: NetworkManager state is now CONNECTED_SITE
Feb 20 01:37:40 localhost NetworkManager[789]: [1771569460.8799] device (eth0): Activation: successful, device activated.
Feb 20 01:37:40 localhost NetworkManager[789]: [1771569460.8804] manager: NetworkManager state is now CONNECTED_GLOBAL
Feb 20 01:37:40 localhost NetworkManager[789]: [1771569460.8809] manager: startup complete
Feb 20 01:37:40 localhost systemd[1]: Finished Network Manager Wait Online.
Feb 20 01:37:40 localhost systemd[1]: Starting Initial cloud-init job (metadata service crawler)...
Feb 20 01:37:41 localhost cloud-init[919]: Cloud-init v. 22.1-9.el9 running 'init' at Fri, 20 Feb 2026 06:37:41 +0000. Up 7.37 seconds.
Feb 20 01:37:41 localhost cloud-init[919]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Feb 20 01:37:41 localhost cloud-init[919]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Feb 20 01:37:41 localhost cloud-init[919]: ci-info: | Device | Up | Address | Mask | Scope | Hw-Address |
Feb 20 01:37:41 localhost cloud-init[919]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Feb 20 01:37:41 localhost cloud-init[919]: ci-info: | eth0 | True | 38.102.83.159 | 255.255.255.0 | global | fa:16:3e:3a:86:86 |
Feb 20 01:37:41 localhost cloud-init[919]: ci-info: | eth0 | True | fe80::f816:3eff:fe3a:8686/64 | . | link | fa:16:3e:3a:86:86 |
Feb 20 01:37:41 localhost cloud-init[919]: ci-info: | lo | True | 127.0.0.1 | 255.0.0.0 | host | . |
Feb 20 01:37:41 localhost cloud-init[919]: ci-info: | lo | True | ::1/128 | . | host | . |
Feb 20 01:37:41 localhost cloud-init[919]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Feb 20 01:37:41 localhost cloud-init[919]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Feb 20 01:37:41 localhost cloud-init[919]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Feb 20 01:37:41 localhost cloud-init[919]: ci-info: | Route | Destination | Gateway | Genmask | Interface | Flags |
Feb 20 01:37:41 localhost cloud-init[919]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Feb 20 01:37:41 localhost cloud-init[919]: ci-info: | 0 | 0.0.0.0 | 38.102.83.1 | 0.0.0.0 | eth0 | UG |
Feb 20 01:37:41 localhost cloud-init[919]: ci-info: | 1 | 38.102.83.0 | 0.0.0.0 | 255.255.255.0 | eth0 | U |
Feb 20 01:37:41 localhost cloud-init[919]: ci-info: | 2 | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 | eth0 | UGH |
Feb 20 01:37:41 localhost cloud-init[919]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Feb 20 01:37:41 localhost cloud-init[919]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Feb 20 01:37:41 localhost cloud-init[919]: ci-info: +-------+-------------+---------+-----------+-------+
Feb 20 01:37:41 localhost cloud-init[919]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Feb 20 01:37:41 localhost cloud-init[919]: ci-info: +-------+-------------+---------+-----------+-------+
Feb 20 01:37:41 localhost cloud-init[919]: ci-info: | 1 | fe80::/64 | :: | eth0 | U |
Feb 20 01:37:41 localhost cloud-init[919]: ci-info: | 3 | multicast | :: | eth0 | U |
Feb 20 01:37:41 localhost cloud-init[919]: ci-info: +-------+-------------+---------+-----------+-------+
Feb 20 01:37:41 localhost systemd[1]: Starting Authorization Manager...
Feb 20 01:37:41 localhost systemd[1]: Started Dynamic System Tuning Daemon.
Feb 20 01:37:41 localhost polkitd[1036]: Started polkitd version 0.117
Feb 20 01:37:41 localhost systemd[1]: Started Authorization Manager.
Feb 20 01:37:43 localhost cloud-init[919]: Generating public/private rsa key pair.
Feb 20 01:37:43 localhost cloud-init[919]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Feb 20 01:37:43 localhost cloud-init[919]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Feb 20 01:37:43 localhost cloud-init[919]: The key fingerprint is:
Feb 20 01:37:43 localhost cloud-init[919]: SHA256:jw7UyDVGKjR4dgBBoBScFFykqZyViosfRoY/l1ZYJ7I root@np0005625202.novalocal
Feb 20 01:37:43 localhost cloud-init[919]: The key's randomart image is:
Feb 20 01:37:43 localhost cloud-init[919]: +---[RSA 3072]----+
Feb 20 01:37:43 localhost cloud-init[919]: |=BX*+. . |
Feb 20 01:37:43 localhost cloud-init[919]: |o++.+..o |
Feb 20 01:37:43 localhost cloud-init[919]: |.o =o.+ = |
Feb 20 01:37:43 localhost cloud-init[919]: |+.+ B B . |
Feb 20 01:37:43 localhost cloud-init[919]: |++o E = S |
Feb 20 01:37:43 localhost cloud-init[919]: |.= + o |
Feb 20 01:37:43 localhost cloud-init[919]: |o = + . . . |
Feb 20 01:37:43 localhost cloud-init[919]: | o = o |
Feb 20 01:37:43 localhost cloud-init[919]: | . . |
Feb 20 01:37:43 localhost cloud-init[919]: +----[SHA256]-----+
Feb 20 01:37:43 localhost cloud-init[919]: Generating public/private ecdsa key pair.
Feb 20 01:37:43 localhost cloud-init[919]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Feb 20 01:37:43 localhost cloud-init[919]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Feb 20 01:37:43 localhost cloud-init[919]: The key fingerprint is:
Feb 20 01:37:43 localhost cloud-init[919]: SHA256:hQfBV5QTWOOdXEiZrcyHVhDzGVDG+pzFQNABJ3/lIg8 root@np0005625202.novalocal
Feb 20 01:37:43 localhost cloud-init[919]: The key's randomart image is:
Feb 20 01:37:43 localhost cloud-init[919]: +---[ECDSA 256]---+
Feb 20 01:37:43 localhost cloud-init[919]: | .o. =XX%%o|
Feb 20 01:37:43 localhost cloud-init[919]: | .oo.oBBB*|
Feb 20 01:37:43 localhost cloud-init[919]: | ..o E+BB+|
Feb 20 01:37:43 localhost cloud-init[919]: | o =*o+|
Feb 20 01:37:43 localhost cloud-init[919]: | S .+.o|
Feb 20 01:37:43 localhost cloud-init[919]: | + |
Feb 20 01:37:43 localhost cloud-init[919]: | |
Feb 20 01:37:43 localhost cloud-init[919]: | |
Feb 20 01:37:43 localhost cloud-init[919]: | |
Feb 20 01:37:43 localhost cloud-init[919]: +----[SHA256]-----+
Feb 20 01:37:43 localhost cloud-init[919]: Generating public/private ed25519 key pair.
Feb 20 01:37:43 localhost cloud-init[919]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Feb 20 01:37:43 localhost cloud-init[919]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Feb 20 01:37:43 localhost cloud-init[919]: The key fingerprint is:
Feb 20 01:37:43 localhost cloud-init[919]: SHA256:WBdKYUKYSN24YYkcfPK1jOCo2HMDhEg8X2IrmFWES8Y root@np0005625202.novalocal
Feb 20 01:37:43 localhost cloud-init[919]: The key's randomart image is:
Feb 20 01:37:43 localhost cloud-init[919]: +--[ED25519 256]--+
Feb 20 01:37:43 localhost cloud-init[919]: |+BoOoBo +.. |
Feb 20 01:37:43 localhost cloud-init[919]: |ooEoX.o+ . . |
Feb 20 01:37:43 localhost cloud-init[919]: |.O+B+* .o . |
Feb 20 01:37:43 localhost cloud-init[919]: |+.+o+ oo . |
Feb 20 01:37:43 localhost cloud-init[919]: |o... . S |
Feb 20 01:37:43 localhost cloud-init[919]: |o o o |
Feb 20 01:37:43 localhost cloud-init[919]: | o . |
Feb 20 01:37:43 localhost cloud-init[919]: | |
Feb 20 01:37:43 localhost cloud-init[919]: | |
Feb 20 01:37:43 localhost cloud-init[919]: +----[SHA256]-----+
Feb 20 01:37:44 localhost systemd[1]: Finished Initial cloud-init job (metadata service crawler).
Feb 20 01:37:44 localhost systemd[1]: Reached target Cloud-config availability.
Feb 20 01:37:44 localhost systemd[1]: Reached target Network is Online.
Feb 20 01:37:44 localhost systemd[1]: Starting Apply the settings specified in cloud-config...
Feb 20 01:37:44 localhost systemd[1]: Run Insights Client at boot was skipped because of an unmet condition check (ConditionPathExists=/etc/insights-client/.run_insights_client_next_boot).
Feb 20 01:37:44 localhost systemd[1]: Starting Crash recovery kernel arming...
Feb 20 01:37:44 localhost systemd[1]: Starting Notify NFS peers of a restart...
Feb 20 01:37:44 localhost systemd[1]: Starting OpenSSH server daemon...
Feb 20 01:37:44 localhost systemd[1]: Starting Permit User Sessions...
Feb 20 01:37:44 localhost sm-notify[1132]: Version 2.5.4 starting
Feb 20 01:37:44 localhost systemd[1]: Started Notify NFS peers of a restart.
Feb 20 01:37:44 localhost systemd[1]: Finished Permit User Sessions.
Feb 20 01:37:44 localhost systemd[1]: Started Command Scheduler.
Feb 20 01:37:44 localhost systemd[1]: Started Getty on tty1.
Feb 20 01:37:44 localhost systemd[1]: Started Serial Getty on ttyS0.
Feb 20 01:37:44 localhost sshd[1133]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 01:37:44 localhost systemd[1]: Reached target Login Prompts.
Feb 20 01:37:44 localhost systemd[1]: Started OpenSSH server daemon.
Feb 20 01:37:44 localhost systemd[1]: Reached target Multi-User System.
Feb 20 01:37:44 localhost systemd[1]: Starting Record Runlevel Change in UTMP...
Feb 20 01:37:44 localhost systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb 20 01:37:44 localhost systemd[1]: Finished Record Runlevel Change in UTMP.
Feb 20 01:37:44 localhost sshd[1144]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 01:37:44 localhost sshd[1160]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 01:37:44 localhost sshd[1172]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 01:37:44 localhost sshd[1182]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 01:37:44 localhost kdumpctl[1137]: kdump: No kdump initial ramdisk found.
Feb 20 01:37:44 localhost kdumpctl[1137]: kdump: Rebuilding /boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img
Feb 20 01:37:44 localhost sshd[1195]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 01:37:44 localhost sshd[1210]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 01:37:44 localhost sshd[1232]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 01:37:44 localhost sshd[1258]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 01:37:44 localhost cloud-init[1267]: Cloud-init v. 22.1-9.el9 running 'modules:config' at Fri, 20 Feb 2026 06:37:44 +0000. Up 10.55 seconds.
Feb 20 01:37:44 localhost sshd[1274]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 01:37:44 localhost systemd[1]: Finished Apply the settings specified in cloud-config.
Feb 20 01:37:44 localhost systemd[1]: Starting Execute cloud user/final scripts...
Feb 20 01:37:44 localhost cloud-init[1436]: Cloud-init v. 22.1-9.el9 running 'modules:final' at Fri, 20 Feb 2026 06:37:44 +0000. Up 10.89 seconds.
Feb 20 01:37:44 localhost dracut[1438]: dracut-057-21.git20230214.el9
Feb 20 01:37:44 localhost cloud-init[1455]: #############################################################
Feb 20 01:37:44 localhost cloud-init[1456]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Feb 20 01:37:44 localhost cloud-init[1458]: 256 SHA256:hQfBV5QTWOOdXEiZrcyHVhDzGVDG+pzFQNABJ3/lIg8 root@np0005625202.novalocal (ECDSA)
Feb 20 01:37:44 localhost cloud-init[1460]: 256 SHA256:WBdKYUKYSN24YYkcfPK1jOCo2HMDhEg8X2IrmFWES8Y root@np0005625202.novalocal (ED25519)
Feb 20 01:37:44 localhost cloud-init[1462]: 3072 SHA256:jw7UyDVGKjR4dgBBoBScFFykqZyViosfRoY/l1ZYJ7I root@np0005625202.novalocal (RSA)
Feb 20 01:37:44 localhost cloud-init[1463]: -----END SSH HOST KEY FINGERPRINTS-----
Feb 20 01:37:44 localhost cloud-init[1464]: #############################################################
Feb 20 01:37:44 localhost dracut[1440]: Executing: /usr/bin/dracut --add kdumpbase --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics -o "plymouth resume ifcfg earlykdump" --mount "/dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device -f /boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img 5.14.0-284.11.1.el9_2.x86_64
Feb 20 01:37:44 localhost cloud-init[1436]: Cloud-init v. 22.1-9.el9 finished at Fri, 20 Feb 2026 06:37:44 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0]. Up 11.13 seconds
Feb 20 01:37:44 localhost systemd[1]: Reloading Network Manager...
Feb 20 01:37:45 localhost NetworkManager[789]: [1771569465.0011] audit: op="reload" arg="0" pid=1547 uid=0 result="success"
Feb 20 01:37:45 localhost NetworkManager[789]: [1771569465.0023] config: signal: SIGHUP (no changes from disk)
Feb 20 01:37:45 localhost systemd[1]: Reloaded Network Manager.
Feb 20 01:37:45 localhost systemd[1]: Finished Execute cloud user/final scripts.
Feb 20 01:37:45 localhost systemd[1]: Reached target Cloud-init target.
Feb 20 01:37:45 localhost dracut[1440]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Feb 20 01:37:45 localhost dracut[1440]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Feb 20 01:37:45 localhost dracut[1440]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Feb 20 01:37:45 localhost dracut[1440]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Feb 20 01:37:45 localhost dracut[1440]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Feb 20 01:37:45 localhost dracut[1440]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Feb 20 01:37:45 localhost dracut[1440]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Feb 20 01:37:45 localhost dracut[1440]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Feb 20 01:37:45 localhost dracut[1440]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Feb 20 01:37:45 localhost dracut[1440]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Feb 20 01:37:45 localhost dracut[1440]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Feb 20 01:37:45 localhost dracut[1440]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Feb 20 01:37:45 localhost dracut[1440]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Feb 20 01:37:45 localhost dracut[1440]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Feb 20 01:37:45 localhost dracut[1440]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Feb 20 01:37:45 localhost dracut[1440]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Feb 20 01:37:45 localhost dracut[1440]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Feb 20 01:37:45 localhost dracut[1440]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Feb 20 01:37:45 localhost dracut[1440]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Feb 20 01:37:45 localhost dracut[1440]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Feb 20 01:37:45 localhost dracut[1440]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Feb 20 01:37:45 localhost dracut[1440]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Feb 20 01:37:45 localhost dracut[1440]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Feb 20 01:37:45 localhost dracut[1440]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Feb 20 01:37:45 localhost dracut[1440]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Feb 20 01:37:45 localhost dracut[1440]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Feb 20 01:37:45 localhost dracut[1440]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Feb 20 01:37:45 localhost dracut[1440]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Feb 20 01:37:45 localhost dracut[1440]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Feb 20 01:37:45 localhost dracut[1440]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Feb 20 01:37:45 localhost dracut[1440]: memstrack is not available
Feb 20 01:37:45 localhost dracut[1440]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Feb 20 01:37:45 localhost dracut[1440]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Feb 20 01:37:45 localhost dracut[1440]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Feb 20 01:37:45 localhost dracut[1440]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Feb 20 01:37:45 localhost dracut[1440]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Feb 20 01:37:45 localhost dracut[1440]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Feb 20 01:37:45 localhost dracut[1440]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Feb 20 01:37:45 localhost dracut[1440]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Feb 20 01:37:45 localhost dracut[1440]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Feb 20 01:37:45 localhost dracut[1440]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Feb 20 01:37:45 localhost dracut[1440]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Feb 20 01:37:45 localhost dracut[1440]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Feb 20 01:37:45 localhost dracut[1440]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Feb 20 01:37:45 localhost dracut[1440]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Feb 20 01:37:45 localhost dracut[1440]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Feb 20 01:37:45 localhost dracut[1440]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Feb 20 01:37:45 localhost dracut[1440]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Feb 20 01:37:45 localhost dracut[1440]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Feb 20 01:37:45 localhost dracut[1440]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Feb 20 01:37:45 localhost dracut[1440]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Feb 20 01:37:45 localhost dracut[1440]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Feb 20 01:37:45 localhost dracut[1440]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Feb 20 01:37:45 localhost chronyd[766]: Selected source 149.56.19.163 (2.rhel.pool.ntp.org)
Feb 20 01:37:45 localhost chronyd[766]: System clock TAI offset set to 37 seconds
Feb 20 01:37:45 localhost dracut[1440]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Feb 20 01:37:45 localhost dracut[1440]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Feb 20 01:37:45 localhost dracut[1440]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Feb 20 01:37:45 localhost dracut[1440]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Feb 20 01:37:45 localhost dracut[1440]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Feb 20 01:37:45 localhost dracut[1440]: memstrack is not available
Feb 20 01:37:45 localhost dracut[1440]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Feb 20 01:37:46 localhost dracut[1440]: *** Including module: systemd ***
Feb 20 01:37:46 localhost dracut[1440]: *** Including module: systemd-initrd ***
Feb 20 01:37:46 localhost dracut[1440]: *** Including module: i18n ***
Feb 20 01:37:46 localhost dracut[1440]: No KEYMAP configured.
Feb 20 01:37:46 localhost dracut[1440]: *** Including module: drm ***
Feb 20 01:37:47 localhost dracut[1440]: *** Including module: prefixdevname ***
Feb 20 01:37:47 localhost dracut[1440]: *** Including module: kernel-modules ***
Feb 20 01:37:47 localhost dracut[1440]: *** Including module: kernel-modules-extra ***
Feb 20 01:37:47 localhost dracut[1440]: *** Including module: qemu ***
Feb 20 01:37:47 localhost dracut[1440]: *** Including module: fstab-sys ***
Feb 20 01:37:47 localhost dracut[1440]: *** Including module: rootfs-block ***
Feb 20 01:37:47 localhost dracut[1440]: *** Including module: terminfo ***
Feb 20 01:37:47 localhost dracut[1440]: *** Including module: udev-rules ***
Feb 20 01:37:48 localhost dracut[1440]: Skipping udev rule: 91-permissions.rules
Feb 20 01:37:48 localhost dracut[1440]: Skipping udev rule: 80-drivers-modprobe.rules
Feb 20 01:37:48 localhost dracut[1440]: *** Including module: virtiofs ***
Feb 20 01:37:48 localhost dracut[1440]: *** Including module: dracut-systemd ***
Feb 20 01:37:48 localhost dracut[1440]: *** Including module: usrmount ***
Feb 20 01:37:48 localhost dracut[1440]: *** Including module: base ***
Feb 20 01:37:48 localhost dracut[1440]: *** Including module: fs-lib ***
Feb 20 01:37:48 localhost dracut[1440]: *** Including module: kdumpbase ***
Feb 20 01:37:49 localhost dracut[1440]: *** Including module: microcode_ctl-fw_dir_override ***
Feb 20 01:37:49 localhost dracut[1440]: microcode_ctl module: mangling fw_dir
Feb 20 01:37:49 localhost dracut[1440]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Feb 20 01:37:49 localhost dracut[1440]: microcode_ctl: configuration "intel" is ignored
Feb 20 01:37:49 localhost dracut[1440]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Feb 20 01:37:49 localhost dracut[1440]: microcode_ctl: configuration "intel-06-2d-07" is ignored
Feb 20 01:37:49 localhost dracut[1440]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Feb 20 01:37:49 localhost dracut[1440]: microcode_ctl: configuration "intel-06-4e-03" is ignored
Feb 20 01:37:49 localhost dracut[1440]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Feb 20 01:37:49 localhost dracut[1440]: microcode_ctl: configuration "intel-06-4f-01" is ignored
Feb 20 01:37:49 localhost dracut[1440]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Feb 20 01:37:49 localhost dracut[1440]: microcode_ctl: configuration "intel-06-55-04" is ignored
Feb 20 01:37:49 localhost dracut[1440]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Feb 20 01:37:49 localhost dracut[1440]: microcode_ctl: configuration "intel-06-5e-03" is ignored
Feb 20 01:37:49 localhost dracut[1440]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Feb 20 01:37:49 localhost dracut[1440]: microcode_ctl: configuration "intel-06-8c-01" is ignored
Feb 20 01:37:49 localhost dracut[1440]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Feb 20 01:37:49 localhost dracut[1440]: microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Feb 20 01:37:49 localhost dracut[1440]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Feb 20 01:37:49 localhost dracut[1440]: microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Feb 20 01:37:49 localhost dracut[1440]: microcode_ctl: final fw_dir: "/lib/firmware/updates/5.14.0-284.11.1.el9_2.x86_64 /lib/firmware/updates /lib/firmware/5.14.0-284.11.1.el9_2.x86_64 /lib/firmware"
Feb 20 01:37:49 localhost dracut[1440]: *** Including module: shutdown ***
Feb 20 01:37:49 localhost dracut[1440]: *** Including module: squash ***
Feb 20 01:37:49 localhost dracut[1440]: *** Including modules done ***
Feb 20 01:37:49 localhost dracut[1440]: *** Installing kernel module dependencies ***
Feb 20 01:37:50 localhost dracut[1440]: *** Installing kernel module dependencies done ***
Feb 20 01:37:50 localhost dracut[1440]: *** Resolving executable dependencies ***
Feb 20 01:37:51 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 20 01:37:51 localhost dracut[1440]: *** Resolving executable dependencies done ***
Feb 20 01:37:51 localhost dracut[1440]: *** Hardlinking files ***
Feb 20 01:37:51 localhost dracut[1440]: Mode: real
Feb 20 01:37:51 localhost dracut[1440]: Files: 1099
Feb 20 01:37:51 localhost dracut[1440]: Linked: 3 files
Feb 20 01:37:51 localhost dracut[1440]: Compared: 0 xattrs
Feb 20 01:37:51 localhost dracut[1440]: Compared: 373 files
Feb 20 01:37:51 localhost dracut[1440]: Saved: 61.04 KiB
Feb 20 01:37:51 localhost dracut[1440]: Duration: 0.044052 seconds
Feb 20 01:37:51 localhost dracut[1440]: *** Hardlinking files done ***
Feb 20 01:37:51 localhost dracut[1440]: Could not find 'strip'. Not stripping the initramfs.
Feb 20 01:37:51 localhost dracut[1440]: *** Generating early-microcode cpio image ***
Feb 20 01:37:51 localhost dracut[1440]: *** Constructing AuthenticAMD.bin ***
Feb 20 01:37:51 localhost dracut[1440]: *** Store current command line parameters ***
Feb 20 01:37:51 localhost dracut[1440]: Stored kernel commandline:
Feb 20 01:37:51 localhost dracut[1440]: No dracut internal kernel commandline stored in the initramfs
Feb 20 01:37:51 localhost dracut[1440]: *** Install squash loader ***
Feb 20 01:37:52 localhost dracut[1440]: *** Squashing the files inside the initramfs ***
Feb 20 01:37:53 localhost dracut[1440]: *** Squashing the files inside the initramfs done ***
Feb 20 01:37:53 localhost dracut[1440]: *** Creating image file '/boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img' ***
Feb 20 01:37:53 localhost dracut[1440]: *** Creating initramfs image file '/boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img' done ***
Feb 20 01:37:54 localhost kdumpctl[1137]: kdump: kexec: loaded kdump kernel
Feb 20 01:37:54 localhost kdumpctl[1137]: kdump: Starting kdump: [OK]
Feb 20 01:37:54 localhost systemd[1]: Finished Crash recovery kernel arming.
Feb 20 01:37:54 localhost systemd[1]: Startup finished in 1.313s (kernel) + 2.155s (initrd) + 16.948s (userspace) = 20.416s.
Feb 20 01:38:10 localhost systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 20 01:38:17 localhost sshd[4176]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 01:38:17 localhost systemd[1]: Created slice User Slice of UID 1000.
Feb 20 01:38:17 localhost systemd[1]: Starting User Runtime Directory /run/user/1000...
Feb 20 01:38:17 localhost systemd-logind[760]: New session 1 of user zuul.
Feb 20 01:38:17 localhost systemd[1]: Finished User Runtime Directory /run/user/1000.
Feb 20 01:38:17 localhost systemd[1]: Starting User Manager for UID 1000...
Feb 20 01:38:17 localhost systemd[4180]: Queued start job for default target Main User Target.
Feb 20 01:38:17 localhost systemd[4180]: Created slice User Application Slice.
Feb 20 01:38:17 localhost systemd[4180]: Started Mark boot as successful after the user session has run 2 minutes.
Feb 20 01:38:17 localhost systemd[4180]: Started Daily Cleanup of User's Temporary Directories.
Feb 20 01:38:17 localhost systemd[4180]: Reached target Paths.
Feb 20 01:38:17 localhost systemd[4180]: Reached target Timers.
Feb 20 01:38:17 localhost systemd[4180]: Starting D-Bus User Message Bus Socket...
Feb 20 01:38:17 localhost systemd[4180]: Starting Create User's Volatile Files and Directories...
Feb 20 01:38:17 localhost systemd[4180]: Finished Create User's Volatile Files and Directories.
Feb 20 01:38:17 localhost systemd[4180]: Listening on D-Bus User Message Bus Socket.
Feb 20 01:38:17 localhost systemd[4180]: Reached target Sockets.
Feb 20 01:38:17 localhost systemd[4180]: Reached target Basic System.
Feb 20 01:38:17 localhost systemd[4180]: Reached target Main User Target.
Feb 20 01:38:17 localhost systemd[4180]: Startup finished in 102ms.
Feb 20 01:38:17 localhost systemd[1]: Started User Manager for UID 1000.
Feb 20 01:38:17 localhost systemd[1]: Started Session 1 of User zuul.
Feb 20 01:38:18 localhost python3[4232]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 01:38:25 localhost python3[4251]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 01:38:33 localhost python3[4304]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 01:38:34 localhost python3[4334]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Feb 20 01:38:37 localhost python3[4350]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDMsWl2iDqidUqzeXIedhPUnMaIwstb6f7CXhkhrGMXj2w21P81bPYesLJBAmIee3BaCMGYsrUpWu7u92b6J2nM7sdG3Lo3YZqie0IUZG/NpfJ+OEJsLIQ5zFT7jGNI77bmiCBX4jibhRJuDVtuombA5bly/q1x7yqRZ5UuwmTahAz49MPmun69R00RpbzOjt8OgQQ6ZdjDclItely6T7y+tDL3UemjBYcKGNckzXB1nWJEjviTyIXiLLqJdcxWarPXzxewMUoyRiTPQU6uRQ1VhuBRFyiofrACRWtFHfLEuWMVqQAL1OtJD5P+KSuVKHkT/MAFSdo0OptGxMmFN5ZvbruOyab6MovPK/9sTaFrhzT6tE78w+fEIFe4kPvM2bw80sKzO9yrkvqs1LL9ToVf9r2TufxRZVCibTFSa9Cw0U9yILFdCrlCZ2GMFNTCFuM7vQDAKwrKndvATAxKrm+D8Xme2+SVCv3oKJtx0m4D6gc9/IMzFhC7Ibh6bJaSj2s= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 01:38:38 localhost python3[4364]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 01:38:39 localhost python3[4423]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 01:38:39 localhost python3[4464]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771569519.3272943-387-256739232891293/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=2f497d714b8e492188c98067e59a9994_id_rsa follow=False checksum=1ede725f5cdca64ff103c7e62f7bb7b42f0b9244 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 01:38:41 localhost python3[4537]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 01:38:41 localhost python3[4578]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771569521.1661925-490-193150113043321/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=2f497d714b8e492188c98067e59a9994_id_rsa.pub follow=False checksum=d5896bb6dcd221ffe99ce3acccb68a5152af8369 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 01:38:43 localhost python3[4606]: ansible-ping Invoked with data=pong
Feb 20 01:38:45 localhost python3[4620]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 01:38:49 localhost python3[4673]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Feb 20 01:38:51 localhost python3[4695]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 01:38:51 localhost python3[4709]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 01:38:52 localhost python3[4723]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 01:38:53 localhost python3[4737]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 01:38:53 localhost python3[4751]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 01:38:53 localhost python3[4765]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 01:38:56 localhost python3[4781]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 01:38:58 localhost python3[4829]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 01:38:58 localhost python3[4872]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1771569537.8296628-96-48665014965796/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 01:39:05 localhost python3[4901]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 01:39:06 localhost python3[4915]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 01:39:06 localhost python3[4929]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 01:39:06 localhost python3[4943]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 01:39:06 localhost python3[4957]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 01:39:07 localhost python3[4971]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 01:39:07 localhost python3[4985]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 01:39:07 localhost python3[4999]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 01:39:08 localhost python3[5013]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 01:39:08 localhost python3[5027]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 01:39:08 localhost python3[5041]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 01:39:08 localhost python3[5055]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 01:39:09 localhost python3[5069]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICWBreHW95Wz2Toz5YwCGQwFcUG8oFYkienDh9tntmDc ralfieri@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 01:39:09 localhost python3[5083]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 01:39:09 localhost python3[5097]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 01:39:10 localhost python3[5111]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 01:39:10 localhost python3[5125]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 01:39:10 localhost python3[5139]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 01:39:10 localhost python3[5153]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 01:39:11 localhost python3[5167]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 01:39:11 localhost python3[5181]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 01:39:11 localhost python3[5195]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 01:39:12 localhost python3[5209]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 01:39:12 localhost python3[5223]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1
vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Feb 20 01:39:12 localhost python3[5237]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Feb 20 01:39:12 localhost python3[5251]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Feb 20 01:39:13 localhost python3[5267]: ansible-community.general.timezone Invoked with name=UTC hwclock=None Feb 20 01:39:13 localhost systemd[1]: Starting Time & Date Service... Feb 20 01:39:13 localhost systemd[1]: Started Time & Date Service. Feb 20 01:39:14 localhost systemd-timedated[5269]: Changed time zone to 'UTC' (UTC). 
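Annotation: each `ansible-authorized_key` call above (state=present, exclusive=False) amounts to an idempotent append of one public key to the user's `~/.ssh/authorized_keys`. A minimal shell sketch of that effect, run against a temp directory; the key here is a placeholder, not one of the logged keys, and this is not the module's actual implementation:

```shell
# Demo dir standing in for the zuul user's home (placeholder, not from the log).
home=$(mktemp -d)
key='ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPLACEHOLDERPLACEHOLDER ci@example'
auth="$home/.ssh/authorized_keys"

# manage_dir=True: ensure ~/.ssh exists with safe permissions.
mkdir -p "$home/.ssh" && chmod 700 "$home/.ssh"
touch "$auth" && chmod 600 "$auth"

# state=present: append only if the exact line is not already there.
# -x matches the whole line, -F treats the key as a literal string.
grep -qxF "$key" "$auth" || printf '%s\n' "$key" >> "$auth"
grep -qxF "$key" "$auth" || printf '%s\n' "$key" >> "$auth"  # second run is a no-op

echo "entries: $(grep -cxF "$key" "$auth")"  # → entries: 1
```

With exclusive=False (as logged), existing keys in the file are left untouched, which is why fourteen successive calls accumulate fourteen keys.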
Feb 20 01:39:14 localhost python3[5288]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 01:39:15 localhost python3[5334]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 01:39:16 localhost python3[5375]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1771569555.5993445-492-160742100495518/source _original_basename=tmpy9lolyuz follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 01:39:17 localhost python3[5435]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 01:39:17 localhost python3[5476]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1771569557.1435063-585-265017622973473/source _original_basename=tmpzw2b91zr follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 01:39:19 localhost python3[5538]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 01:39:19 localhost python3[5581]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1771569559.147328-726-138853474712003/source _original_basename=tmp9a2_5zzy follow=False checksum=3282425cccee5824f32308a16b5801aeb1bf034e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 01:39:20 localhost python3[5609]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 01:39:21 localhost python3[5625]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 01:39:22 localhost python3[5675]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 01:39:22 localhost python3[5718]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1771569562.07106-852-130163937180615/source _original_basename=tmp39vikdou follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 01:39:23 localhost python3[5749]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ef9-e89a-ff2a-a63c-000000000023-1-overcloudnovacompute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 01:39:24 localhost python3[5767]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-ff2a-a63c-000000000024-1-overcloudnovacompute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Feb 20 01:39:26 localhost python3[5785]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 01:39:44 localhost systemd[1]: systemd-timedated.service: Deactivated successfully.
Feb 20 01:39:45 localhost python3[5804]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 01:40:22 localhost systemd[4180]: Starting Mark boot as successful...
Feb 20 01:40:22 localhost systemd[4180]: Finished Mark boot as successful.
Feb 20 01:40:45 localhost systemd-logind[760]: Session 1 logged out. Waiting for processes to exit.
Feb 20 01:41:40 localhost systemd[1]: Unmounting EFI System Partition Automount...
Feb 20 01:41:40 localhost systemd[1]: efi.mount: Deactivated successfully.
Feb 20 01:41:40 localhost systemd[1]: Unmounted EFI System Partition Automount.
Feb 20 01:42:54 localhost kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000
Feb 20 01:42:54 localhost kernel: pci 0000:00:07.0: reg 0x10: [io 0x0000-0x003f]
Feb 20 01:42:54 localhost kernel: pci 0000:00:07.0: reg 0x14: [mem 0x00000000-0x00000fff]
Feb 20 01:42:54 localhost kernel: pci 0000:00:07.0: reg 0x20: [mem 0x00000000-0x00003fff 64bit pref]
Feb 20 01:42:54 localhost kernel: pci 0000:00:07.0: reg 0x30: [mem 0x00000000-0x0007ffff pref]
Feb 20 01:42:54 localhost kernel: pci 0000:00:07.0: BAR 6: assigned [mem 0xc0000000-0xc007ffff pref]
Feb 20 01:42:54 localhost kernel: pci 0000:00:07.0: BAR 4: assigned [mem 0x440000000-0x440003fff 64bit pref]
Feb 20 01:42:54 localhost kernel: pci 0000:00:07.0: BAR 1: assigned [mem 0xc0080000-0xc0080fff]
Feb 20 01:42:54 localhost kernel: pci 0000:00:07.0: BAR 0: assigned [io 0x1000-0x103f]
Feb 20 01:42:54 localhost kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Feb 20 01:42:54 localhost NetworkManager[789]: [1771569774.6167] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Feb 20 01:42:54 localhost systemd-udevd[5810]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 01:42:54 localhost NetworkManager[789]: [1771569774.6309] device (eth1): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
Feb 20 01:42:54 localhost NetworkManager[789]: [1771569774.6331] settings: (eth1): created default wired connection 'Wired connection 1'
Feb 20 01:42:54 localhost NetworkManager[789]: [1771569774.6335] device (eth1): carrier: link connected
Feb 20 01:42:54 localhost NetworkManager[789]: [1771569774.6336] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', sys-iface-state: 'managed')
Feb 20 01:42:54 localhost NetworkManager[789]: [1771569774.6340] policy: auto-activating connection 'Wired connection 1' (aa69ca8b-3c40-3c71-baf0-b8ebd671158c)
Feb 20 01:42:54 localhost NetworkManager[789]: [1771569774.6344] device (eth1): Activation: starting connection 'Wired connection 1' (aa69ca8b-3c40-3c71-baf0-b8ebd671158c)
Feb 20 01:42:54 localhost NetworkManager[789]: [1771569774.6345] device (eth1): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Feb 20 01:42:54 localhost NetworkManager[789]: [1771569774.6347] device (eth1): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Feb 20 01:42:54 localhost NetworkManager[789]: [1771569774.6351] device (eth1): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Feb 20 01:42:54 localhost NetworkManager[789]: [1771569774.6354] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Feb 20 01:42:55 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
Feb 20 01:42:55 localhost sshd[5813]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 01:42:55 localhost systemd-logind[760]: New session 3 of user zuul.
Feb 20 01:42:55 localhost systemd[1]: Started Session 3 of User zuul.
Feb 20 01:42:56 localhost python3[5830]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ef9-e89a-fb18-e746-00000000039b-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 01:43:09 localhost python3[5882]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 01:43:09 localhost python3[5925]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771569788.910285-435-157516460130144/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=cbddd1bd87533b161112349e7aef6d0837f1d5e1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 01:43:10 localhost python3[5955]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 20 01:43:10 localhost systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Feb 20 01:43:10 localhost systemd[1]: Stopped Network Manager Wait Online.
Feb 20 01:43:10 localhost systemd[1]: Stopping Network Manager Wait Online...
Feb 20 01:43:10 localhost systemd[1]: Stopping Network Manager...
Feb 20 01:43:10 localhost NetworkManager[789]: [1771569790.1372] caught SIGTERM, shutting down normally.
Feb 20 01:43:10 localhost NetworkManager[789]: [1771569790.1485] dhcp4 (eth0): canceled DHCP transaction
Feb 20 01:43:10 localhost NetworkManager[789]: [1771569790.1485] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb 20 01:43:10 localhost NetworkManager[789]: [1771569790.1486] dhcp4 (eth0): state changed no lease
Feb 20 01:43:10 localhost NetworkManager[789]: [1771569790.1489] manager: NetworkManager state is now CONNECTING
Feb 20 01:43:10 localhost NetworkManager[789]: [1771569790.1545] dhcp4 (eth1): canceled DHCP transaction
Feb 20 01:43:10 localhost NetworkManager[789]: [1771569790.1546] dhcp4 (eth1): state changed no lease
Feb 20 01:43:10 localhost systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 20 01:43:10 localhost NetworkManager[789]: [1771569790.1638] exiting (success)
Feb 20 01:43:10 localhost systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 20 01:43:10 localhost systemd[1]: NetworkManager.service: Deactivated successfully.
Feb 20 01:43:10 localhost systemd[1]: Stopped Network Manager.
Feb 20 01:43:10 localhost systemd[1]: NetworkManager.service: Consumed 1.749s CPU time.
Feb 20 01:43:10 localhost systemd[1]: Starting Network Manager...
Feb 20 01:43:10 localhost NetworkManager[5967]: [1771569790.2092] NetworkManager (version 1.42.2-1.el9) is starting... (after a restart, boot:7afa3bbe-1ce0-42be-9364-f30e30933a2e)
Feb 20 01:43:10 localhost NetworkManager[5967]: [1771569790.2096] Read config: /etc/NetworkManager/NetworkManager.conf (run: 15-carrier-timeout.conf)
Feb 20 01:43:10 localhost systemd[1]: Started Network Manager.
Feb 20 01:43:10 localhost NetworkManager[5967]: [1771569790.2114] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Feb 20 01:43:10 localhost NetworkManager[5967]: [1771569790.2152] manager[0x56036ebc7090]: monitoring kernel firmware directory '/lib/firmware'.
Feb 20 01:43:10 localhost systemd[1]: Starting Network Manager Wait Online...
Feb 20 01:43:10 localhost systemd[1]: Starting Hostname Service...
Feb 20 01:43:10 localhost systemd[1]: Started Hostname Service.
Feb 20 01:43:10 localhost NetworkManager[5967]: [1771569790.2804] hostname: hostname: using hostnamed
Feb 20 01:43:10 localhost NetworkManager[5967]: [1771569790.2805] hostname: static hostname changed from (none) to "np0005625202.novalocal"
Feb 20 01:43:10 localhost NetworkManager[5967]: [1771569790.2812] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Feb 20 01:43:10 localhost NetworkManager[5967]: [1771569790.2819] manager[0x56036ebc7090]: rfkill: Wi-Fi hardware radio set enabled
Feb 20 01:43:10 localhost NetworkManager[5967]: [1771569790.2819] manager[0x56036ebc7090]: rfkill: WWAN hardware radio set enabled
Feb 20 01:43:10 localhost NetworkManager[5967]: [1771569790.2862] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-device-plugin-team.so)
Feb 20 01:43:10 localhost NetworkManager[5967]: [1771569790.2863] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Feb 20 01:43:10 localhost NetworkManager[5967]: [1771569790.2864] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Feb 20 01:43:10 localhost NetworkManager[5967]: [1771569790.2865] manager: Networking is enabled by state file
Feb 20 01:43:10 localhost NetworkManager[5967]: [1771569790.2873] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Feb 20 01:43:10 localhost NetworkManager[5967]: [1771569790.2874] settings: Loaded settings plugin: keyfile (internal)
Feb 20 01:43:10 localhost NetworkManager[5967]: [1771569790.2923] dhcp: init: Using DHCP client 'internal'
Feb 20 01:43:10 localhost NetworkManager[5967]: [1771569790.2927] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Feb 20 01:43:10 localhost NetworkManager[5967]: [1771569790.2935] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external')
Feb 20 01:43:10 localhost NetworkManager[5967]: [1771569790.2941] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external')
Feb 20 01:43:10 localhost NetworkManager[5967]: [1771569790.2953] device (lo): Activation: starting connection 'lo' (e7a1daed-a466-41d4-a4a1-4e0a3e9eeee2)
Feb 20 01:43:10 localhost NetworkManager[5967]: [1771569790.2961] device (eth0): carrier: link connected
Feb 20 01:43:10 localhost NetworkManager[5967]: [1771569790.2968] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Feb 20 01:43:10 localhost NetworkManager[5967]: [1771569790.2974] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Feb 20 01:43:10 localhost NetworkManager[5967]: [1771569790.2975] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'assume')
Feb 20 01:43:10 localhost NetworkManager[5967]: [1771569790.2982] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'assume')
Feb 20 01:43:10 localhost NetworkManager[5967]: [1771569790.2992] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Feb 20 01:43:10 localhost NetworkManager[5967]: [1771569790.2999] device (eth1): carrier: link connected
Feb 20 01:43:10 localhost NetworkManager[5967]: [1771569790.3005] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Feb 20 01:43:10 localhost NetworkManager[5967]: [1771569790.3012] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (aa69ca8b-3c40-3c71-baf0-b8ebd671158c) (indicated)
Feb 20 01:43:10 localhost NetworkManager[5967]: [1771569790.3012] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'assume')
Feb 20 01:43:10 localhost NetworkManager[5967]: [1771569790.3019] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'assume')
Feb 20 01:43:10 localhost NetworkManager[5967]: [1771569790.3028] device (eth1): Activation: starting connection 'Wired connection 1' (aa69ca8b-3c40-3c71-baf0-b8ebd671158c)
Feb 20 01:43:10 localhost NetworkManager[5967]: [1771569790.3054] device (lo): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external')
Feb 20 01:43:10 localhost NetworkManager[5967]: [1771569790.3060] device (lo): state change: prepare -> config (reason 'none', sys-iface-state: 'external')
Feb 20 01:43:10 localhost NetworkManager[5967]: [1771569790.3062] device (lo): state change: config -> ip-config (reason 'none', sys-iface-state: 'external')
Feb 20 01:43:10 localhost NetworkManager[5967]: [1771569790.3066] device (eth0): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'assume')
Feb 20 01:43:10 localhost NetworkManager[5967]: [1771569790.3070] device (eth0): state change: prepare -> config (reason 'none', sys-iface-state: 'assume')
Feb 20 01:43:10 localhost NetworkManager[5967]: [1771569790.3073] device (eth1): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'assume')
Feb 20 01:43:10 localhost NetworkManager[5967]: [1771569790.3077] device (eth1): state change: prepare -> config (reason 'none', sys-iface-state: 'assume')
Feb 20 01:43:10 localhost NetworkManager[5967]: [1771569790.3081] device (lo): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external')
Feb 20 01:43:10 localhost NetworkManager[5967]: [1771569790.3089] device (eth0): state change: config -> ip-config (reason 'none', sys-iface-state: 'assume')
Feb 20 01:43:10 localhost NetworkManager[5967]: [1771569790.3094] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb 20 01:43:10 localhost NetworkManager[5967]: [1771569790.3105] device (eth1): state change: config -> ip-config (reason 'none', sys-iface-state: 'assume')
Feb 20 01:43:10 localhost NetworkManager[5967]: [1771569790.3109] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Feb 20 01:43:10 localhost NetworkManager[5967]: [1771569790.3169] dhcp4 (eth0): state changed new lease, address=38.102.83.159
Feb 20 01:43:10 localhost NetworkManager[5967]: [1771569790.3185] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Feb 20 01:43:10 localhost NetworkManager[5967]: [1771569790.3285] device (lo): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external')
Feb 20 01:43:10 localhost NetworkManager[5967]: [1771569790.3292] device (eth0): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'assume')
Feb 20 01:43:10 localhost NetworkManager[5967]: [1771569790.3300] device (lo): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external')
Feb 20 01:43:10 localhost NetworkManager[5967]: [1771569790.3309] device (lo): Activation: successful, device activated.
Feb 20 01:43:10 localhost NetworkManager[5967]: [1771569790.3355] device (eth0): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'assume')
Feb 20 01:43:10 localhost NetworkManager[5967]: [1771569790.3358] device (eth0): state change: secondaries -> activated (reason 'none', sys-iface-state: 'assume')
Feb 20 01:43:10 localhost NetworkManager[5967]: [1771569790.3365] manager: NetworkManager state is now CONNECTED_SITE
Feb 20 01:43:10 localhost NetworkManager[5967]: [1771569790.3371] device (eth0): Activation: successful, device activated.
Feb 20 01:43:10 localhost NetworkManager[5967]: [1771569790.3377] manager: NetworkManager state is now CONNECTED_GLOBAL
Feb 20 01:43:10 localhost python3[6030]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ef9-e89a-fb18-e746-000000000120-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 01:43:20 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 20 01:43:22 localhost systemd[4180]: Created slice User Background Tasks Slice.
Feb 20 01:43:22 localhost systemd[4180]: Starting Cleanup of User's Temporary Files and Directories...
Feb 20 01:43:22 localhost systemd[4180]: Finished Cleanup of User's Temporary Files and Directories.
Feb 20 01:43:40 localhost systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 20 01:43:55 localhost NetworkManager[5967]: [1771569835.7434] device (eth1): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'assume')
Feb 20 01:43:55 localhost systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 20 01:43:55 localhost systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 20 01:43:55 localhost NetworkManager[5967]: [1771569835.7656] device (eth1): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'assume')
Feb 20 01:43:55 localhost NetworkManager[5967]: [1771569835.7659] device (eth1): state change: secondaries -> activated (reason 'none', sys-iface-state: 'assume')
Feb 20 01:43:55 localhost NetworkManager[5967]: [1771569835.7674] device (eth1): Activation: successful, device activated.
Feb 20 01:43:55 localhost NetworkManager[5967]: [1771569835.7685] manager: startup complete
Feb 20 01:43:55 localhost systemd[1]: Finished Network Manager Wait Online.
Feb 20 01:44:05 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 20 01:44:10 localhost systemd[1]: session-3.scope: Deactivated successfully.
Feb 20 01:44:10 localhost systemd[1]: session-3.scope: Consumed 1.442s CPU time.
Feb 20 01:44:10 localhost systemd-logind[760]: Session 3 logged out. Waiting for processes to exit.
Feb 20 01:44:10 localhost systemd-logind[760]: Removed session 3.
Feb 20 01:45:01 localhost sshd[6058]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 01:45:22 localhost sshd[6060]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 01:45:22 localhost systemd-logind[760]: New session 4 of user zuul.
Feb 20 01:45:22 localhost systemd[1]: Started Session 4 of User zuul.
Feb 20 01:45:23 localhost python3[6111]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 01:45:23 localhost python3[6154]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771569923.0683374-628-244159041307993/source _original_basename=tmpqzqorwjj follow=False checksum=1adafc0c3cabf5458281c7d741082eddefa40194 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 01:45:28 localhost systemd[1]: session-4.scope: Deactivated successfully.
Feb 20 01:45:28 localhost systemd-logind[760]: Session 4 logged out. Waiting for processes to exit.
Feb 20 01:45:28 localhost systemd-logind[760]: Removed session 4.
Feb 20 01:45:32 localhost sshd[6169]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 01:46:04 localhost sshd[6171]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 01:46:41 localhost sshd[6173]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 01:47:17 localhost sshd[6175]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 01:47:51 localhost sshd[6177]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 01:48:23 localhost sshd[6179]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 01:48:57 localhost sshd[6182]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 01:49:29 localhost sshd[6184]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 01:50:02 localhost sshd[6187]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 01:50:35 localhost sshd[6189]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 01:51:06 localhost sshd[6191]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 01:51:38 localhost sshd[6193]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 01:52:10 localhost sshd[6195]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 01:52:17 localhost sshd[6199]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 01:52:17 localhost systemd-logind[760]: New session 5 of user zuul.
Feb 20 01:52:17 localhost systemd[1]: Started Session 5 of User zuul.
Feb 20 01:52:17 localhost python3[6218]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-12ca-cc53-00000000219f-1-overcloudnovacompute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 01:52:29 localhost python3[6238]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 01:52:29 localhost python3[6254]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 01:52:29 localhost python3[6270]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 01:52:29 localhost python3[6286]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 01:52:30 localhost python3[6302]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 01:52:32 localhost python3[6350]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 01:52:32 localhost python3[6393]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771570351.752595-661-203058380058311/source _original_basename=tmpjzxr0vo8 follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 01:52:33 localhost python3[6423]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 20 01:52:33 localhost systemd[1]: Starting Cleanup of Temporary Directories...
Feb 20 01:52:33 localhost systemd[1]: Reloading.
Feb 20 01:52:34 localhost systemd-rc-local-generator[6444]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 01:52:34 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 01:52:34 localhost systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Feb 20 01:52:34 localhost systemd[1]: Finished Cleanup of Temporary Directories.
Feb 20 01:52:34 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Feb 20 01:52:35 localhost python3[6472]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Feb 20 01:52:36 localhost python3[6488]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 01:52:37 localhost python3[6506]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 01:52:37 localhost python3[6524]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 01:52:37 localhost python3[6542]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 01:52:38 localhost python3[6559]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init"; cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system"; cat /sys/fs/cgroup/system.slice/io.max; echo "user"; cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-12ca-cc53-0000000021a6-1-overcloudnovacompute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 01:52:42 localhost sshd[6566]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 01:52:49 localhost python3[6581]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 20 01:52:52 localhost systemd[1]: session-5.scope: Deactivated successfully.
Feb 20 01:52:52 localhost systemd[1]: session-5.scope: Consumed 3.951s CPU time.
Feb 20 01:52:52 localhost systemd-logind[760]: Session 5 logged out. Waiting for processes to exit.
Feb 20 01:52:52 localhost systemd-logind[760]: Removed session 5.
Feb 20 01:53:14 localhost sshd[6585]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 01:53:46 localhost sshd[6588]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 01:53:47 localhost sshd[6590]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 01:53:47 localhost systemd-logind[760]: New session 6 of user zuul.
Feb 20 01:53:47 localhost systemd[1]: Started Session 6 of User zuul.
Feb 20 01:53:48 localhost systemd[1]: Starting RHSM dbus service...
Feb 20 01:53:49 localhost systemd[1]: Started RHSM dbus service.
Feb 20 01:53:49 localhost rhsm-service[6614]: INFO [subscription_manager.i18n:169] Could not import locale for C: [Errno 2] No translation file found for domain: 'rhsm'
Feb 20 01:53:49 localhost rhsm-service[6614]: INFO [subscription_manager.i18n:139] Could not import locale either for C_C: [Errno 2] No translation file found for domain: 'rhsm'
Feb 20 01:53:49 localhost rhsm-service[6614]: INFO [subscription_manager.i18n:169] Could not import locale for C: [Errno 2] No translation file found for domain: 'rhsm'
Feb 20 01:53:49 localhost rhsm-service[6614]: INFO [subscription_manager.i18n:139] Could not import locale either for C_C: [Errno 2] No translation file found for domain: 'rhsm'
Feb 20 01:53:51 localhost rhsm-service[6614]: INFO [subscription_manager.managerlib:90] Consumer created: np0005625202.novalocal (0ba74f8d-2cd1-4b95-b1d2-af85836f157b)
Feb 20 01:53:51 localhost subscription-manager[6614]: Registered system with identity: 0ba74f8d-2cd1-4b95-b1d2-af85836f157b
Feb 20 01:53:51 localhost rhsm-service[6614]: INFO [subscription_manager.entcertlib:131] certs updated:
Feb 20 01:53:51 localhost rhsm-service[6614]: Total updates: 1
Feb 20 01:53:51 localhost rhsm-service[6614]: Found (local) serial# []
Feb 20 01:53:51 localhost rhsm-service[6614]: Expected (UEP) serial# [1866927373557470825]
Feb 20 01:53:51 localhost rhsm-service[6614]: Added (new)
Feb 20 01:53:51 localhost rhsm-service[6614]: [sn:1866927373557470825 ( Content Access,) @ /etc/pki/entitlement/1866927373557470825.pem]
Feb 20 01:53:51 localhost rhsm-service[6614]: Deleted (rogue):
Feb 20 01:53:51 localhost rhsm-service[6614]:
Feb 20 01:53:51 localhost subscription-manager[6614]: Added subscription for 'Content Access' contract 'None'
Feb 20 01:53:51 localhost subscription-manager[6614]: Added subscription for product ' Content Access'
Feb 20 01:53:52 localhost rhsm-service[6614]: INFO [subscription_manager.i18n:169] Could not import locale for C: [Errno 2] No translation file found for domain: 'rhsm'
Feb 20 01:53:52 localhost rhsm-service[6614]: INFO [subscription_manager.i18n:139] Could not import locale either for C_C: [Errno 2] No translation file found for domain: 'rhsm'
Feb 20 01:53:52 localhost rhsm-service[6614]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 20 01:53:52 localhost rhsm-service[6614]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 20 01:53:52 localhost rhsm-service[6614]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 20 01:53:53 localhost rhsm-service[6614]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 20 01:53:53 localhost rhsm-service[6614]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 20 01:54:00 localhost python3[6705]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/redhat-release zuul_log_id=fa163ef9-e89a-d2eb-5884-00000000000d-1-overcloudnovacompute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 01:54:11 localhost python3[6725]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 20 01:54:17 localhost sshd[6732]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 01:54:42 localhost setsebool[6802]: The virt_use_nfs policy boolean was changed to 1 by root
Feb 20 01:54:42 localhost setsebool[6802]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Feb 20 01:54:48 localhost sshd[6812]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 01:54:50 localhost kernel: SELinux: Converting 406 SID table entries...
Feb 20 01:54:50 localhost kernel: SELinux: policy capability network_peer_controls=1
Feb 20 01:54:50 localhost kernel: SELinux: policy capability open_perms=1
Feb 20 01:54:50 localhost kernel: SELinux: policy capability extended_socket_class=1
Feb 20 01:54:50 localhost kernel: SELinux: policy capability always_check_network=0
Feb 20 01:54:50 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Feb 20 01:54:50 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 20 01:54:50 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Feb 20 01:55:05 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=3 res=1
Feb 20 01:55:05 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 20 01:55:05 localhost systemd[1]: Starting man-db-cache-update.service...
Feb 20 01:55:05 localhost systemd[1]: Reloading.
Feb 20 01:55:05 localhost systemd-rc-local-generator[7640]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 01:55:05 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 01:55:05 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Feb 20 01:55:06 localhost rhsm-service[6614]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 20 01:55:06 localhost rhsm-service[6614]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 20 01:55:13 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 20 01:55:13 localhost systemd[1]: Finished man-db-cache-update.service.
Feb 20 01:55:13 localhost systemd[1]: man-db-cache-update.service: Consumed 9.584s CPU time.
Feb 20 01:55:13 localhost systemd[1]: run-rfcf6b2bb65df4f9a932c094195b9d292.service: Deactivated successfully.
Feb 20 01:55:20 localhost sshd[18383]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 01:55:51 localhost sshd[18385]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 01:55:58 localhost podman[18403]: 2026-02-20 06:55:58.581059252 +0000 UTC m=+0.095710992 system refresh
Feb 20 01:55:59 localhost systemd[4180]: Starting D-Bus User Message Bus...
Feb 20 01:55:59 localhost dbus-broker-launch[18460]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Feb 20 01:55:59 localhost dbus-broker-launch[18460]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Feb 20 01:55:59 localhost systemd[4180]: Started D-Bus User Message Bus.
Feb 20 01:55:59 localhost journal[18460]: Ready
Feb 20 01:55:59 localhost systemd[4180]: selinux: avc: op=load_policy lsm=selinux seqno=3 res=1
Feb 20 01:55:59 localhost systemd[4180]: Created slice Slice /user.
Feb 20 01:55:59 localhost systemd[4180]: podman-18444.scope: unit configures an IP firewall, but not running as root.
Feb 20 01:55:59 localhost systemd[4180]: (This warning is only shown for the first unit using IP firewalling.)
Feb 20 01:55:59 localhost systemd[4180]: Started podman-18444.scope.
Feb 20 01:55:59 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 20 01:55:59 localhost systemd[4180]: Started podman-pause-938369c7.scope.
Feb 20 01:56:03 localhost systemd[1]: session-6.scope: Deactivated successfully.
Feb 20 01:56:03 localhost systemd[1]: session-6.scope: Consumed 51.767s CPU time.
Feb 20 01:56:03 localhost systemd-logind[760]: Session 6 logged out. Waiting for processes to exit.
Feb 20 01:56:03 localhost systemd-logind[760]: Removed session 6.
Feb 20 01:56:18 localhost sshd[18467]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 01:56:18 localhost sshd[18466]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 01:56:18 localhost sshd[18468]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 01:56:18 localhost sshd[18469]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 01:56:18 localhost sshd[18465]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 01:56:23 localhost sshd[18475]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 01:56:23 localhost sshd[18477]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 01:56:23 localhost systemd-logind[760]: New session 7 of user zuul.
Feb 20 01:56:23 localhost systemd[1]: Started Session 7 of User zuul.
Feb 20 01:56:23 localhost python3[18494]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHF6ws6TTGIgpcynk+zfDmAiKAngdz4qTSYI5OZYL/Nj9dQsVH9D0sSlKxQpeRN7puQyuA81owKWTQGJzf43DRQ= zuul@np0005625196.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 01:56:24 localhost python3[18510]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHF6ws6TTGIgpcynk+zfDmAiKAngdz4qTSYI5OZYL/Nj9dQsVH9D0sSlKxQpeRN7puQyuA81owKWTQGJzf43DRQ= zuul@np0005625196.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 01:56:26 localhost systemd[1]: session-7.scope: Deactivated successfully.
Feb 20 01:56:26 localhost systemd-logind[760]: Session 7 logged out. Waiting for processes to exit.
Feb 20 01:56:26 localhost systemd-logind[760]: Removed session 7.
Feb 20 01:56:54 localhost sshd[18511]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 01:57:26 localhost sshd[18513]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 01:57:45 localhost sshd[18516]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 01:57:45 localhost systemd-logind[760]: New session 8 of user zuul.
Feb 20 01:57:45 localhost systemd[1]: Started Session 8 of User zuul.
Feb 20 01:57:45 localhost python3[18535]: ansible-authorized_key Invoked with user=root manage_dir=True key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDMsWl2iDqidUqzeXIedhPUnMaIwstb6f7CXhkhrGMXj2w21P81bPYesLJBAmIee3BaCMGYsrUpWu7u92b6J2nM7sdG3Lo3YZqie0IUZG/NpfJ+OEJsLIQ5zFT7jGNI77bmiCBX4jibhRJuDVtuombA5bly/q1x7yqRZ5UuwmTahAz49MPmun69R00RpbzOjt8OgQQ6ZdjDclItely6T7y+tDL3UemjBYcKGNckzXB1nWJEjviTyIXiLLqJdcxWarPXzxewMUoyRiTPQU6uRQ1VhuBRFyiofrACRWtFHfLEuWMVqQAL1OtJD5P+KSuVKHkT/MAFSdo0OptGxMmFN5ZvbruOyab6MovPK/9sTaFrhzT6tE78w+fEIFe4kPvM2bw80sKzO9yrkvqs1LL9ToVf9r2TufxRZVCibTFSa9Cw0U9yILFdCrlCZ2GMFNTCFuM7vQDAKwrKndvATAxKrm+D8Xme2+SVCv3oKJtx0m4D6gc9/IMzFhC7Ibh6bJaSj2s= zuul-build-sshkey state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 01:57:46 localhost python3[18551]: ansible-user Invoked with name=root state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005625202.novalocal update_password=always uid=None group=None groups=None comment=None home=None shell=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Feb 20 01:57:48 localhost python3[18601]: ansible-ansible.legacy.stat Invoked with path=/root/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 01:57:48 localhost python3[18644]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771570667.7682095-133-190342736195482/source dest=/root/.ssh/id_rsa mode=384 owner=root force=False _original_basename=2f497d714b8e492188c98067e59a9994_id_rsa follow=False checksum=1ede725f5cdca64ff103c7e62f7bb7b42f0b9244 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 01:57:49 localhost python3[18706]: ansible-ansible.legacy.stat Invoked with path=/root/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 01:57:50 localhost python3[18749]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771570669.4113889-221-25401637264336/source dest=/root/.ssh/id_rsa.pub mode=420 owner=root force=False _original_basename=2f497d714b8e492188c98067e59a9994_id_rsa.pub follow=False checksum=d5896bb6dcd221ffe99ce3acccb68a5152af8369 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 01:57:52 localhost python3[18779]: ansible-ansible.builtin.file Invoked with path=/etc/nodepool state=directory mode=0777 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 01:57:53 localhost python3[18825]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 01:57:53 localhost python3[18841]: ansible-ansible.legacy.file Invoked with dest=/etc/nodepool/sub_nodes _original_basename=tmpe7xhzbtu recurse=False state=file path=/etc/nodepool/sub_nodes force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 01:57:54 localhost python3[18901]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 01:57:54 localhost python3[18917]: ansible-ansible.legacy.file Invoked with dest=/etc/nodepool/sub_nodes_private _original_basename=tmpsdkjlo8i recurse=False state=file path=/etc/nodepool/sub_nodes_private force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 01:57:56 localhost python3[18977]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 01:57:56 localhost python3[18993]: ansible-ansible.legacy.file Invoked with dest=/etc/nodepool/node_private _original_basename=tmpcwj423aq recurse=False state=file path=/etc/nodepool/node_private force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 01:57:57 localhost systemd[1]: session-8.scope: Deactivated successfully.
Feb 20 01:57:57 localhost systemd[1]: session-8.scope: Consumed 3.406s CPU time.
Feb 20 01:57:57 localhost systemd-logind[760]: Session 8 logged out. Waiting for processes to exit.
Feb 20 01:57:57 localhost systemd-logind[760]: Removed session 8.
Feb 20 01:58:01 localhost sshd[19009]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 01:58:44 localhost sshd[19011]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 01:59:25 localhost sshd[19013]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:00:01 localhost sshd[19016]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:00:05 localhost sshd[19018]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:00:05 localhost systemd-logind[760]: New session 9 of user zuul.
Feb 20 02:00:05 localhost systemd[1]: Started Session 9 of User zuul.
Feb 20 02:00:06 localhost python3[19064]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 02:00:36 localhost sshd[19066]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:01:11 localhost sshd[19083]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:01:48 localhost sshd[19086]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:02:26 localhost sshd[19088]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:03:04 localhost sshd[19090]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:03:07 localhost systemd[1]: Starting dnf makecache...
Feb 20 02:03:07 localhost dnf[19092]: Updating Subscription Management repositories.
Feb 20 02:03:08 localhost dnf[19092]: Failed determining last makecache time.
Feb 20 02:03:09 localhost dnf[19092]: Red Hat Enterprise Linux 9 for x86_64 - BaseOS 30 kB/s | 4.1 kB 00:00
Feb 20 02:03:09 localhost dnf[19092]: Red Hat Enterprise Linux 9 for x86_64 - BaseOS 46 kB/s | 4.1 kB 00:00
Feb 20 02:03:09 localhost dnf[19092]: Red Hat Enterprise Linux 9 for x86_64 - AppStre 50 kB/s | 4.5 kB 00:00
Feb 20 02:03:09 localhost dnf[19092]: Red Hat Enterprise Linux 9 for x86_64 - AppStre 55 kB/s | 4.5 kB 00:00
Feb 20 02:03:09 localhost dnf[19092]: Metadata cache created.
Feb 20 02:03:10 localhost systemd[1]: dnf-makecache.service: Deactivated successfully.
Feb 20 02:03:10 localhost systemd[1]: Finished dnf makecache.
Feb 20 02:03:10 localhost systemd[1]: dnf-makecache.service: Consumed 2.676s CPU time.
Feb 20 02:03:42 localhost sshd[19097]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:04:38 localhost sshd[19099]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:05:05 localhost systemd[1]: session-9.scope: Deactivated successfully.
Feb 20 02:05:05 localhost systemd-logind[760]: Session 9 logged out. Waiting for processes to exit.
Feb 20 02:05:05 localhost systemd-logind[760]: Removed session 9.
Feb 20 02:05:39 localhost sshd[19104]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:05:43 localhost sshd[19106]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:06:21 localhost sshd[19108]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:06:59 localhost sshd[19110]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:07:37 localhost sshd[19113]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:08:14 localhost sshd[19115]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:08:51 localhost sshd[19117]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:09:29 localhost sshd[19119]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:09:44 localhost sshd[19121]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:10:07 localhost sshd[19122]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:10:35 localhost sshd[19125]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:10:44 localhost sshd[19127]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:11:16 localhost sshd[19129]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:11:23 localhost sshd[19133]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:11:23 localhost systemd-logind[760]: New session 10 of user zuul.
Feb 20 02:11:23 localhost systemd[1]: Started Session 10 of User zuul.
Feb 20 02:11:23 localhost python3[19150]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/redhat-release zuul_log_id=fa163ef9-e89a-064b-165c-00000000000c-1-overcloudnovacompute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 02:11:25 localhost python3[19170]: ansible-ansible.legacy.command Invoked with _raw_params=yum clean all zuul_log_id=fa163ef9-e89a-064b-165c-00000000000d-1-overcloudnovacompute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 02:11:30 localhost python3[19189]: ansible-community.general.rhsm_repository Invoked with name=['rhel-9-for-x86_64-baseos-eus-rpms'] state=enabled purge=False
Feb 20 02:11:33 localhost rhsm-service[6614]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 20 02:11:33 localhost rhsm-service[6614]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 20 02:11:48 localhost sshd[19327]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:12:20 localhost sshd[19333]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:12:25 localhost python3[19351]: ansible-community.general.rhsm_repository Invoked with name=['rhel-9-for-x86_64-appstream-eus-rpms'] state=enabled purge=False
Feb 20 02:12:28 localhost rhsm-service[6614]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 20 02:12:36 localhost python3[19550]: ansible-community.general.rhsm_repository Invoked with name=['rhel-9-for-x86_64-highavailability-eus-rpms'] state=enabled purge=False
Feb 20 02:12:39 localhost rhsm-service[6614]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 20 02:12:39 localhost rhsm-service[6614]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 20 02:12:43 localhost rhsm-service[6614]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 20 02:12:51 localhost sshd[19869]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:12:52 localhost sshd[19871]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:13:04 localhost python3[19888]: ansible-community.general.rhsm_repository Invoked with name=['fast-datapath-for-rhel-9-x86_64-rpms'] state=enabled purge=False
Feb 20 02:13:07 localhost rhsm-service[6614]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 20 02:13:10 localhost sshd[20074]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:13:11 localhost rhsm-service[6614]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 20 02:13:12 localhost rhsm-service[6614]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 20 02:13:24 localhost sshd[20210]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:13:32 localhost python3[20227]: ansible-community.general.rhsm_repository Invoked with name=['openstack-17.1-for-rhel-9-x86_64-rpms'] state=enabled purge=False
Feb 20 02:13:35 localhost rhsm-service[6614]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 20 02:13:35 localhost rhsm-service[6614]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 20 02:13:40 localhost rhsm-service[6614]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 20 02:13:53 localhost rhsm-service[6614]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 20 02:13:56 localhost sshd[20726]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:14:02 localhost python3[20744]: ansible-ansible.legacy.command Invoked with _raw_params=yum repolist --enabled#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-064b-165c-000000000013-1-overcloudnovacompute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 02:14:04 localhost sshd[20747]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:14:07 localhost python3[20765]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch', 'os-net-config', 'ansible-core'] state=present update_cache=True allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 20 02:14:26 localhost kernel: SELinux: Converting 488 SID table entries...
Feb 20 02:14:26 localhost kernel: SELinux: policy capability network_peer_controls=1
Feb 20 02:14:26 localhost kernel: SELinux: policy capability open_perms=1
Feb 20 02:14:26 localhost kernel: SELinux: policy capability extended_socket_class=1
Feb 20 02:14:26 localhost kernel: SELinux: policy capability always_check_network=0
Feb 20 02:14:26 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Feb 20 02:14:26 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 20 02:14:26 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Feb 20 02:14:26 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=4 res=1
Feb 20 02:14:26 localhost systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Feb 20 02:14:30 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 20 02:14:30 localhost systemd[1]: Starting man-db-cache-update.service...
Feb 20 02:14:30 localhost systemd[1]: Reloading.
Feb 20 02:14:30 localhost systemd-rc-local-generator[21412]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 02:14:30 localhost systemd-sysv-generator[21415]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 02:14:30 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 02:14:30 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Feb 20 02:14:31 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 20 02:14:31 localhost systemd[1]: Finished man-db-cache-update.service.
Feb 20 02:14:31 localhost systemd[1]: run-r4f4a5550ecbd47eb8f04757e6db25b75.service: Deactivated successfully.
Feb 20 02:14:32 localhost rhsm-service[6614]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 20 02:14:32 localhost rhsm-service[6614]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 20 02:14:42 localhost sshd[22054]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:14:58 localhost python3[22072]: ansible-ansible.legacy.command Invoked with _raw_params=ansible-galaxy collection install ansible.posix#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-064b-165c-000000000015-1-overcloudnovacompute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 02:14:58 localhost sshd[22075]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:15:34 localhost sshd[22079]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:15:34 localhost python3[22095]: ansible-ansible.builtin.file Invoked with path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 02:15:35 localhost python3[22144]: ansible-ansible.legacy.stat Invoked with path=/etc/os-net-config/tripleo_config.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 02:15:35 localhost python3[22187]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771571734.988644-291-276299298814341/source dest=/etc/os-net-config/tripleo_config.yaml mode=None follow=False _original_basename=overcloud_net_config.j2 checksum=3358dfc6c6ce646155135d0cad900026cb34ba08 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 02:15:37 localhost python3[22217]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Feb 20 02:15:37 localhost systemd-journald[618]: Field hash table of /run/log/journal/01f46965e72fd8a157841feaa66c8d52/system.journal has a fill level at 89.2 (297 of 333 items), suggesting rotation.
Feb 20 02:15:37 localhost systemd-journald[618]: /run/log/journal/01f46965e72fd8a157841feaa66c8d52/system.journal: Journal header limits reached or header out-of-date, rotating.
Feb 20 02:15:37 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Feb 20 02:15:37 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Feb 20 02:15:37 localhost python3[22238]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-20 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Feb 20 02:15:37 localhost python3[22258]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-21 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Feb 20 02:15:38 localhost python3[22278]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-22 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Feb 20 02:15:38 localhost python3[22298]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-23 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Feb 20 02:15:40 localhost python3[22318]: ansible-ansible.builtin.systemd Invoked with name=network state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 20 02:15:41 localhost systemd[1]: Starting LSB: Bring up/down networking...
Feb 20 02:15:41 localhost network[22321]: WARN : [network] You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 20 02:15:41 localhost network[22332]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 20 02:15:41 localhost network[22321]: WARN : [network] 'network-scripts' will be removed from distribution in near future.
Feb 20 02:15:41 localhost network[22333]: 'network-scripts' will be removed from distribution in near future.
Feb 20 02:15:41 localhost network[22321]: WARN : [network] It is advised to switch to 'NetworkManager' instead for network management.
Feb 20 02:15:41 localhost network[22334]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 20 02:15:41 localhost NetworkManager[5967]: [1771571741.8046] audit: op="connections-reload" pid=22362 uid=0 result="success"
Feb 20 02:15:41 localhost network[22321]: Bringing up loopback interface: [ OK ]
Feb 20 02:15:42 localhost NetworkManager[5967]: [1771571742.0107] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth0" pid=22450 uid=0 result="success"
Feb 20 02:15:42 localhost network[22321]: Bringing up interface eth0: [ OK ]
Feb 20 02:15:42 localhost systemd[1]: Started LSB: Bring up/down networking.
Feb 20 02:15:42 localhost python3[22491]: ansible-ansible.builtin.systemd Invoked with name=openvswitch state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 20 02:15:42 localhost systemd[1]: Starting Open vSwitch Database Unit...
Feb 20 02:15:42 localhost chown[22495]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Feb 20 02:15:42 localhost ovs-ctl[22500]: /etc/openvswitch/conf.db does not exist ... (warning).
Feb 20 02:15:42 localhost ovs-ctl[22500]: Creating empty database /etc/openvswitch/conf.db [ OK ]
Feb 20 02:15:42 localhost ovs-ctl[22500]: Starting ovsdb-server [ OK ]
Feb 20 02:15:42 localhost ovs-vsctl[22550]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Feb 20 02:15:42 localhost ovs-vsctl[22570]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.6-141.el9fdp "external-ids:system-id=\"0a83b6be-9fe2-42ef-8768-88847d97b165\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"rhel\"" "system-version=\"9.2\""
Feb 20 02:15:42 localhost ovs-ctl[22500]: Configuring Open vSwitch system IDs [ OK ]
Feb 20 02:15:42 localhost ovs-vsctl[22576]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=np0005625202.novalocal
Feb 20 02:15:42 localhost ovs-ctl[22500]: Enabling remote OVSDB managers [ OK ]
Feb 20 02:15:42 localhost systemd[1]: Started Open vSwitch Database Unit.
Feb 20 02:15:42 localhost systemd[1]: Starting Open vSwitch Delete Transient Ports...
Feb 20 02:15:42 localhost systemd[1]: Finished Open vSwitch Delete Transient Ports.
Feb 20 02:15:42 localhost systemd[1]: Starting Open vSwitch Forwarding Unit...
Feb 20 02:15:43 localhost kernel: openvswitch: Open vSwitch switching datapath
Feb 20 02:15:43 localhost ovs-ctl[22620]: Inserting openvswitch module [ OK ]
Feb 20 02:15:43 localhost ovs-ctl[22589]: Starting ovs-vswitchd [ OK ]
Feb 20 02:15:43 localhost ovs-vsctl[22638]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=np0005625202.novalocal
Feb 20 02:15:43 localhost ovs-ctl[22589]: Enabling remote OVSDB managers [ OK ]
Feb 20 02:15:43 localhost systemd[1]: Started Open vSwitch Forwarding Unit.
Feb 20 02:15:43 localhost systemd[1]: Starting Open vSwitch...
Feb 20 02:15:43 localhost systemd[1]: Finished Open vSwitch.
Feb 20 02:15:45 localhost python3[22656]: ansible-ansible.legacy.command Invoked with _raw_params=os-net-config -c /etc/os-net-config/tripleo_config.yaml#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-064b-165c-00000000001a-1-overcloudnovacompute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 02:15:46 localhost NetworkManager[5967]: [1771571746.7266] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22852 uid=0 result="success"
Feb 20 02:15:46 localhost ifup[22853]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Feb 20 02:15:46 localhost ifup[22854]: 'network-scripts' will be removed from distribution in near future.
Feb 20 02:15:46 localhost ifup[22855]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Feb 20 02:15:46 localhost NetworkManager[5967]: [1771571746.7596] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22861 uid=0 result="success"
Feb 20 02:15:46 localhost ovs-vsctl[22863]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --may-exist add-br br-ex -- set bridge br-ex other-config:mac-table-size=50000 -- set bridge br-ex other-config:hwaddr=fa:16:3e:f4:f9:b0 -- set bridge br-ex fail_mode=standalone -- del-controller br-ex
Feb 20 02:15:46 localhost kernel: device ovs-system entered promiscuous mode
Feb 20 02:15:46 localhost NetworkManager[5967]: [1771571746.7866] manager: (ovs-system): new Generic device (/org/freedesktop/NetworkManager/Devices/4)
Feb 20 02:15:46 localhost kernel: Timeout policy base is empty
Feb 20 02:15:46 localhost kernel: Failed to associated timeout policy `ovs_test_tp'
Feb 20 02:15:46 localhost systemd-udevd[22865]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 02:15:46 localhost systemd-udevd[22880]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 02:15:46 localhost kernel: device br-ex entered promiscuous mode
Feb 20 02:15:46 localhost NetworkManager[5967]: [1771571746.8287] manager: (br-ex): new Generic device (/org/freedesktop/NetworkManager/Devices/5)
Feb 20 02:15:46 localhost NetworkManager[5967]: [1771571746.8553] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22890 uid=0 result="success"
Feb 20 02:15:46 localhost NetworkManager[5967]: [1771571746.8783] device (br-ex): carrier: link connected
Feb 20 02:15:49 localhost NetworkManager[5967]: [1771571749.9354] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22919 uid=0 result="success"
Feb 20 02:15:49 localhost NetworkManager[5967]: [1771571749.9825] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22934 uid=0 result="success"
Feb 20 02:15:50 localhost NET[22959]: /etc/sysconfig/network-scripts/ifup-post : updated /etc/resolv.conf
Feb 20 02:15:50 localhost NetworkManager[5967]: [1771571750.0744] device (eth1): state change: activated -> unmanaged (reason 'unmanaged', sys-iface-state: 'managed')
Feb 20 02:15:50 localhost NetworkManager[5967]: [1771571750.0835] dhcp4 (eth1): canceled DHCP transaction
Feb 20 02:15:50 localhost NetworkManager[5967]: [1771571750.0835] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Feb 20 02:15:50 localhost NetworkManager[5967]: [1771571750.0835] dhcp4 (eth1): state changed no lease
Feb 20 02:15:50 localhost NetworkManager[5967]: [1771571750.0872] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22968 uid=0 result="success"
Feb 20 02:15:50 localhost ifup[22969]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Feb 20 02:15:50 localhost systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 20 02:15:50 localhost ifup[22970]: 'network-scripts' will be removed from distribution in near future.
Feb 20 02:15:50 localhost ifup[22972]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Feb 20 02:15:50 localhost systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 20 02:15:50 localhost NetworkManager[5967]: [1771571750.1240] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22985 uid=0 result="success"
Feb 20 02:15:50 localhost NetworkManager[5967]: [1771571750.2177] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22996 uid=0 result="success"
Feb 20 02:15:50 localhost NetworkManager[5967]: [1771571750.2249] device (eth1): carrier: link connected
Feb 20 02:15:50 localhost NetworkManager[5967]: [1771571750.2472] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=23005 uid=0 result="success"
Feb 20 02:15:50 localhost ipv6_wait_tentative[23017]: Waiting for interface eth1 IPv6 address(es) to leave the 'tentative' state
Feb 20 02:15:51 localhost ipv6_wait_tentative[23022]: Waiting for interface eth1 IPv6 address(es) to leave the 'tentative' state
Feb 20 02:15:52 localhost NetworkManager[5967]: [1771571752.3286] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=23031 uid=0 result="success"
Feb 20 02:15:52 localhost ovs-vsctl[23046]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex eth1 -- add-port br-ex eth1
Feb 20 02:15:52 localhost kernel: device eth1 entered promiscuous mode
Feb 20 02:15:52 localhost NetworkManager[5967]: [1771571752.4053] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=23054 uid=0 result="success"
Feb 20 02:15:52 localhost ifup[23055]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Feb 20 02:15:52 localhost ifup[23056]: 'network-scripts' will be removed from distribution in near future.
Feb 20 02:15:52 localhost ifup[23057]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Feb 20 02:15:52 localhost NetworkManager[5967]: [1771571752.4390] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=23063 uid=0 result="success"
Feb 20 02:15:52 localhost NetworkManager[5967]: [1771571752.4849] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23073 uid=0 result="success"
Feb 20 02:15:52 localhost ifup[23074]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Feb 20 02:15:52 localhost ifup[23075]: 'network-scripts' will be removed from distribution in near future.
Feb 20 02:15:52 localhost ifup[23076]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Feb 20 02:15:52 localhost NetworkManager[5967]: [1771571752.5166] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23082 uid=0 result="success"
Feb 20 02:15:52 localhost ovs-vsctl[23085]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan44 -- add-port br-ex vlan44 tag=44 -- set Interface vlan44 type=internal
Feb 20 02:15:52 localhost kernel: device vlan44 entered promiscuous mode
Feb 20 02:15:52 localhost NetworkManager[5967]: [1771571752.5577] manager: (vlan44): new Generic device (/org/freedesktop/NetworkManager/Devices/6)
Feb 20 02:15:52 localhost systemd-udevd[23087]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 02:15:52 localhost NetworkManager[5967]: [1771571752.5831] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23096 uid=0 result="success"
Feb 20 02:15:52 localhost NetworkManager[5967]: [1771571752.6048] device (vlan44): carrier: link connected
Feb 20 02:15:55 localhost NetworkManager[5967]: [1771571755.6586] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23126 uid=0 result="success"
Feb 20 02:15:55 localhost NetworkManager[5967]: [1771571755.7049] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23141 uid=0 result="success"
Feb 20 02:15:55 localhost NetworkManager[5967]: [1771571755.7630] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23162 uid=0 result="success"
Feb 20 02:15:55 localhost ifup[23163]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Feb 20 02:15:55 localhost ifup[23164]: 'network-scripts' will be removed from distribution in near future.
Feb 20 02:15:55 localhost ifup[23165]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Feb 20 02:15:55 localhost NetworkManager[5967]: [1771571755.7952] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23171 uid=0 result="success"
Feb 20 02:15:55 localhost ovs-vsctl[23174]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan22 -- add-port br-ex vlan22 tag=22 -- set Interface vlan22 type=internal
Feb 20 02:15:55 localhost kernel: device vlan22 entered promiscuous mode
Feb 20 02:15:55 localhost NetworkManager[5967]: [1771571755.8362] manager: (vlan22): new Generic device (/org/freedesktop/NetworkManager/Devices/7)
Feb 20 02:15:55 localhost systemd-udevd[23176]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 02:15:55 localhost NetworkManager[5967]: [1771571755.8625] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23186 uid=0 result="success"
Feb 20 02:15:55 localhost NetworkManager[5967]: [1771571755.8832] device (vlan22): carrier: link connected
Feb 20 02:15:58 localhost NetworkManager[5967]: [1771571758.9344] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23216 uid=0 result="success"
Feb 20 02:15:58 localhost NetworkManager[5967]: [1771571758.9819] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23231 uid=0 result="success"
Feb 20 02:15:59 localhost NetworkManager[5967]: [1771571759.0458] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23252 uid=0 result="success"
Feb 20 02:15:59 localhost ifup[23253]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Feb 20 02:15:59 localhost ifup[23254]: 'network-scripts' will be removed from distribution in near future.
Feb 20 02:15:59 localhost ifup[23255]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Feb 20 02:15:59 localhost NetworkManager[5967]: [1771571759.0796] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23261 uid=0 result="success"
Feb 20 02:15:59 localhost ovs-vsctl[23264]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan21 -- add-port br-ex vlan21 tag=21 -- set Interface vlan21 type=internal
Feb 20 02:15:59 localhost kernel: device vlan21 entered promiscuous mode
Feb 20 02:15:59 localhost systemd-udevd[23266]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 02:15:59 localhost NetworkManager[5967]: [1771571759.1193] manager: (vlan21): new Generic device (/org/freedesktop/NetworkManager/Devices/8)
Feb 20 02:15:59 localhost NetworkManager[5967]: [1771571759.1468] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23276 uid=0 result="success"
Feb 20 02:15:59 localhost NetworkManager[5967]: [1771571759.1686] device (vlan21): carrier: link connected
Feb 20 02:16:00 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 20 02:16:02 localhost NetworkManager[5967]: [1771571762.2235] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23306 uid=0 result="success"
Feb 20 02:16:02 localhost NetworkManager[5967]: [1771571762.2738] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23321 uid=0 result="success"
Feb 20 02:16:02 localhost NetworkManager[5967]: [1771571762.3349] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23342 uid=0 result="success"
Feb 20 02:16:02 localhost ifup[23343]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Feb 20 02:16:02 localhost ifup[23344]: 'network-scripts' will be removed from distribution in near future.
Feb 20 02:16:02 localhost ifup[23345]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Feb 20 02:16:02 localhost NetworkManager[5967]: [1771571762.3696] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23351 uid=0 result="success"
Feb 20 02:16:02 localhost ovs-vsctl[23354]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan20 -- add-port br-ex vlan20 tag=20 -- set Interface vlan20 type=internal
Feb 20 02:16:02 localhost kernel: device vlan20 entered promiscuous mode
Feb 20 02:16:02 localhost systemd-udevd[23356]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 02:16:02 localhost NetworkManager[5967]: [1771571762.4095] manager: (vlan20): new Generic device (/org/freedesktop/NetworkManager/Devices/9)
Feb 20 02:16:02 localhost NetworkManager[5967]: [1771571762.4345] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23366 uid=0 result="success"
Feb 20 02:16:02 localhost NetworkManager[5967]: [1771571762.4558] device (vlan20): carrier: link connected
Feb 20 02:16:05 localhost NetworkManager[5967]: [1771571765.5051] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23396 uid=0 result="success"
Feb 20 02:16:05 localhost NetworkManager[5967]: [1771571765.5517] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23411 uid=0 result="success"
Feb 20 02:16:05 localhost NetworkManager[5967]: [1771571765.6021] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23432 uid=0 result="success"
Feb 20 02:16:05 localhost ifup[23433]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Feb 20 02:16:05 localhost ifup[23434]: 'network-scripts' will be removed from distribution in near future.
Feb 20 02:16:05 localhost ifup[23435]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Feb 20 02:16:05 localhost NetworkManager[5967]: [1771571765.6327] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23441 uid=0 result="success"
Feb 20 02:16:05 localhost ovs-vsctl[23444]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan23 -- add-port br-ex vlan23 tag=23 -- set Interface vlan23 type=internal
Feb 20 02:16:05 localhost systemd-udevd[23446]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 02:16:05 localhost kernel: device vlan23 entered promiscuous mode
Feb 20 02:16:05 localhost NetworkManager[5967]: [1771571765.6685] manager: (vlan23): new Generic device (/org/freedesktop/NetworkManager/Devices/10)
Feb 20 02:16:05 localhost NetworkManager[5967]: [1771571765.6952] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23456 uid=0 result="success"
Feb 20 02:16:05 localhost NetworkManager[5967]: [1771571765.7147] device (vlan23): carrier: link connected
Feb 20 02:16:08 localhost NetworkManager[5967]: [1771571768.7652] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23486 uid=0 result="success"
Feb 20 02:16:08 localhost NetworkManager[5967]: [1771571768.8138] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23501 uid=0 result="success"
Feb 20 02:16:08 localhost NetworkManager[5967]: [1771571768.8733] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23522 uid=0 result="success"
Feb 20 02:16:08 localhost ifup[23523]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Feb 20 02:16:08 localhost ifup[23524]: 'network-scripts' will be removed from distribution in near future.
Feb 20 02:16:08 localhost ifup[23525]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Feb 20 02:16:08 localhost NetworkManager[5967]: [1771571768.9022] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23531 uid=0 result="success"
Feb 20 02:16:08 localhost ovs-vsctl[23534]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan44 -- add-port br-ex vlan44 tag=44 -- set Interface vlan44 type=internal
Feb 20 02:16:08 localhost NetworkManager[5967]: [1771571768.9642] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23541 uid=0 result="success"
Feb 20 02:16:10 localhost NetworkManager[5967]: [1771571770.0230] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23568 uid=0 result="success"
Feb 20 02:16:10 localhost NetworkManager[5967]: [1771571770.0718] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23583 uid=0 result="success"
Feb 20 02:16:10 localhost NetworkManager[5967]: [1771571770.1331] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23604 uid=0 result="success"
Feb 20 02:16:10 localhost ifup[23605]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Feb 20 02:16:10 localhost ifup[23606]: 'network-scripts' will be removed from distribution in near future.
Feb 20 02:16:10 localhost ifup[23607]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Feb 20 02:16:10 localhost NetworkManager[5967]: [1771571770.1662] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23613 uid=0 result="success"
Feb 20 02:16:10 localhost ovs-vsctl[23616]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan20 -- add-port br-ex vlan20 tag=20 -- set Interface vlan20 type=internal
Feb 20 02:16:10 localhost NetworkManager[5967]: [1771571770.2231] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23623 uid=0 result="success"
Feb 20 02:16:11 localhost NetworkManager[5967]: [1771571771.2808] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23651 uid=0 result="success"
Feb 20 02:16:11 localhost NetworkManager[5967]: [1771571771.3272] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23666 uid=0 result="success"
Feb 20 02:16:11 localhost NetworkManager[5967]: [1771571771.3874] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23687 uid=0 result="success"
Feb 20 02:16:11 localhost ifup[23688]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Feb 20 02:16:11 localhost ifup[23689]: 'network-scripts' will be removed from distribution in near future.
Feb 20 02:16:11 localhost ifup[23690]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Feb 20 02:16:11 localhost NetworkManager[5967]: [1771571771.4187] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23696 uid=0 result="success"
Feb 20 02:16:11 localhost ovs-vsctl[23699]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan21 -- add-port br-ex vlan21 tag=21 -- set Interface vlan21 type=internal
Feb 20 02:16:11 localhost NetworkManager[5967]: [1771571771.4775] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23706 uid=0 result="success"
Feb 20 02:16:12 localhost NetworkManager[5967]: [1771571772.5291] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23734 uid=0 result="success"
Feb 20 02:16:12 localhost NetworkManager[5967]: [1771571772.5674] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23749 uid=0 result="success"
Feb 20 02:16:12 localhost NetworkManager[5967]: [1771571772.6197] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23770 uid=0 result="success"
Feb 20 02:16:12 localhost ifup[23771]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Feb 20 02:16:12 localhost ifup[23772]: 'network-scripts' will be removed from distribution in near future.
Feb 20 02:16:12 localhost ifup[23773]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Feb 20 02:16:12 localhost NetworkManager[5967]: [1771571772.6469] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23779 uid=0 result="success" Feb 20 02:16:12 localhost ovs-vsctl[23782]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan23 -- add-port br-ex vlan23 tag=23 -- set Interface vlan23 type=internal Feb 20 02:16:12 localhost NetworkManager[5967]: [1771571772.7032] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23789 uid=0 result="success" Feb 20 02:16:13 localhost NetworkManager[5967]: [1771571773.7589] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23817 uid=0 result="success" Feb 20 02:16:13 localhost NetworkManager[5967]: [1771571773.8022] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23832 uid=0 result="success" Feb 20 02:16:13 localhost NetworkManager[5967]: [1771571773.8568] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23853 uid=0 result="success" Feb 20 02:16:13 localhost ifup[23854]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Feb 20 02:16:13 localhost ifup[23855]: 'network-scripts' will be removed from distribution in near future. Feb 20 02:16:13 localhost ifup[23856]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. 
Feb 20 02:16:13 localhost NetworkManager[5967]: [1771571773.8858] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23862 uid=0 result="success" Feb 20 02:16:13 localhost ovs-vsctl[23865]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan22 -- add-port br-ex vlan22 tag=22 -- set Interface vlan22 type=internal Feb 20 02:16:13 localhost NetworkManager[5967]: [1771571773.9408] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23872 uid=0 result="success" Feb 20 02:16:14 localhost NetworkManager[5967]: [1771571774.9960] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23900 uid=0 result="success" Feb 20 02:16:15 localhost NetworkManager[5967]: [1771571775.0364] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23915 uid=0 result="success" Feb 20 02:16:18 localhost sshd[23933]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:16:32 localhost sshd[23935]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:16:41 localhost sshd[23937]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:16:45 localhost sshd[23939]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:16:57 localhost sshd[23941]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:17:08 localhost python3[23957]: ansible-ansible.legacy.command Invoked with _raw_params=ip a#012ping -c 2 -W 2 192.168.122.10#012ping -c 2 -W 2 192.168.122.11#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-064b-165c-00000000001b-1-overcloudnovacompute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 02:17:14 localhost python3[23976]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDMsWl2iDqidUqzeXIedhPUnMaIwstb6f7CXhkhrGMXj2w21P81bPYesLJBAmIee3BaCMGYsrUpWu7u92b6J2nM7sdG3Lo3YZqie0IUZG/NpfJ+OEJsLIQ5zFT7jGNI77bmiCBX4jibhRJuDVtuombA5bly/q1x7yqRZ5UuwmTahAz49MPmun69R00RpbzOjt8OgQQ6ZdjDclItely6T7y+tDL3UemjBYcKGNckzXB1nWJEjviTyIXiLLqJdcxWarPXzxewMUoyRiTPQU6uRQ1VhuBRFyiofrACRWtFHfLEuWMVqQAL1OtJD5P+KSuVKHkT/MAFSdo0OptGxMmFN5ZvbruOyab6MovPK/9sTaFrhzT6tE78w+fEIFe4kPvM2bw80sKzO9yrkvqs1LL9ToVf9r2TufxRZVCibTFSa9Cw0U9yILFdCrlCZ2GMFNTCFuM7vQDAKwrKndvATAxKrm+D8Xme2+SVCv3oKJtx0m4D6gc9/IMzFhC7Ibh6bJaSj2s= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 02:17:14 localhost python3[23992]: ansible-ansible.posix.authorized_key Invoked with user=root key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDMsWl2iDqidUqzeXIedhPUnMaIwstb6f7CXhkhrGMXj2w21P81bPYesLJBAmIee3BaCMGYsrUpWu7u92b6J2nM7sdG3Lo3YZqie0IUZG/NpfJ+OEJsLIQ5zFT7jGNI77bmiCBX4jibhRJuDVtuombA5bly/q1x7yqRZ5UuwmTahAz49MPmun69R00RpbzOjt8OgQQ6ZdjDclItely6T7y+tDL3UemjBYcKGNckzXB1nWJEjviTyIXiLLqJdcxWarPXzxewMUoyRiTPQU6uRQ1VhuBRFyiofrACRWtFHfLEuWMVqQAL1OtJD5P+KSuVKHkT/MAFSdo0OptGxMmFN5ZvbruOyab6MovPK/9sTaFrhzT6tE78w+fEIFe4kPvM2bw80sKzO9yrkvqs1LL9ToVf9r2TufxRZVCibTFSa9Cw0U9yILFdCrlCZ2GMFNTCFuM7vQDAKwrKndvATAxKrm+D8Xme2+SVCv3oKJtx0m4D6gc9/IMzFhC7Ibh6bJaSj2s= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Feb 20 02:17:16 localhost python3[24006]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDMsWl2iDqidUqzeXIedhPUnMaIwstb6f7CXhkhrGMXj2w21P81bPYesLJBAmIee3BaCMGYsrUpWu7u92b6J2nM7sdG3Lo3YZqie0IUZG/NpfJ+OEJsLIQ5zFT7jGNI77bmiCBX4jibhRJuDVtuombA5bly/q1x7yqRZ5UuwmTahAz49MPmun69R00RpbzOjt8OgQQ6ZdjDclItely6T7y+tDL3UemjBYcKGNckzXB1nWJEjviTyIXiLLqJdcxWarPXzxewMUoyRiTPQU6uRQ1VhuBRFyiofrACRWtFHfLEuWMVqQAL1OtJD5P+KSuVKHkT/MAFSdo0OptGxMmFN5ZvbruOyab6MovPK/9sTaFrhzT6tE78w+fEIFe4kPvM2bw80sKzO9yrkvqs1LL9ToVf9r2TufxRZVCibTFSa9Cw0U9yILFdCrlCZ2GMFNTCFuM7vQDAKwrKndvATAxKrm+D8Xme2+SVCv3oKJtx0m4D6gc9/IMzFhC7Ibh6bJaSj2s= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 02:17:16 localhost python3[24022]: ansible-ansible.posix.authorized_key Invoked with user=root key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDMsWl2iDqidUqzeXIedhPUnMaIwstb6f7CXhkhrGMXj2w21P81bPYesLJBAmIee3BaCMGYsrUpWu7u92b6J2nM7sdG3Lo3YZqie0IUZG/NpfJ+OEJsLIQ5zFT7jGNI77bmiCBX4jibhRJuDVtuombA5bly/q1x7yqRZ5UuwmTahAz49MPmun69R00RpbzOjt8OgQQ6ZdjDclItely6T7y+tDL3UemjBYcKGNckzXB1nWJEjviTyIXiLLqJdcxWarPXzxewMUoyRiTPQU6uRQ1VhuBRFyiofrACRWtFHfLEuWMVqQAL1OtJD5P+KSuVKHkT/MAFSdo0OptGxMmFN5ZvbruOyab6MovPK/9sTaFrhzT6tE78w+fEIFe4kPvM2bw80sKzO9yrkvqs1LL9ToVf9r2TufxRZVCibTFSa9Cw0U9yILFdCrlCZ2GMFNTCFuM7vQDAKwrKndvATAxKrm+D8Xme2+SVCv3oKJtx0m4D6gc9/IMzFhC7Ibh6bJaSj2s= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Feb 20 02:17:17 localhost python3[24036]: ansible-ansible.builtin.slurp Invoked with path=/etc/hostname src=/etc/hostname Feb 20 02:17:18 localhost python3[24051]: ansible-ansible.legacy.command Invoked with _raw_params=hostname="np0005625202.novalocal"#012hostname_str_array=(${hostname//./ })#012echo ${hostname_str_array[0]} > /home/zuul/ansible_hostname#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-064b-165c-000000000022-1-overcloudnovacompute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 02:17:19 localhost python3[24071]: ansible-ansible.legacy.command Invoked with _raw_params=hostname=$(cat /home/zuul/ansible_hostname)#012hostnamectl hostname "$hostname.localdomain"#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-064b-165c-000000000023-1-overcloudnovacompute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 02:17:19 localhost systemd[1]: Starting Hostname Service... Feb 20 02:17:19 localhost systemd[1]: Started Hostname Service. Feb 20 02:17:19 localhost systemd-hostnamed[24075]: Hostname set to (static) Feb 20 02:17:19 localhost NetworkManager[5967]: [1771571839.3595] hostname: static hostname changed from "np0005625202.novalocal" to "np0005625202.localdomain" Feb 20 02:17:19 localhost systemd[1]: Starting Network Manager Script Dispatcher Service... Feb 20 02:17:19 localhost systemd[1]: Started Network Manager Script Dispatcher Service. Feb 20 02:17:20 localhost systemd[1]: session-10.scope: Deactivated successfully. Feb 20 02:17:20 localhost systemd[1]: session-10.scope: Consumed 1min 43.500s CPU time. Feb 20 02:17:20 localhost systemd-logind[760]: Session 10 logged out. Waiting for processes to exit. Feb 20 02:17:20 localhost systemd-logind[760]: Removed session 10. Feb 20 02:17:23 localhost sshd[24086]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:17:23 localhost systemd-logind[760]: New session 11 of user zuul. Feb 20 02:17:23 localhost systemd[1]: Started Session 11 of User zuul. Feb 20 02:17:23 localhost python3[24103]: ansible-ansible.builtin.slurp Invoked with path=/home/zuul/ansible_hostname src=/home/zuul/ansible_hostname Feb 20 02:17:25 localhost systemd[1]: session-11.scope: Deactivated successfully. Feb 20 02:17:25 localhost systemd-logind[760]: Session 11 logged out. Waiting for processes to exit.
Feb 20 02:17:25 localhost systemd-logind[760]: Removed session 11. Feb 20 02:17:29 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully. Feb 20 02:17:29 localhost sshd[24104]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:17:49 localhost systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 20 02:17:55 localhost sshd[24109]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:18:01 localhost sshd[24111]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:18:02 localhost sshd[24113]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:18:02 localhost systemd-logind[760]: New session 12 of user zuul. Feb 20 02:18:02 localhost systemd[1]: Started Session 12 of User zuul. Feb 20 02:18:03 localhost python3[24132]: ansible-ansible.legacy.dnf Invoked with name=['lvm2', 'jq'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None Feb 20 02:18:06 localhost systemd[1]: Reloading. Feb 20 02:18:07 localhost systemd-rc-local-generator[24172]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 02:18:07 localhost systemd-sysv-generator[24177]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 02:18:07 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 20 02:18:07 localhost systemd[1]: Listening on Device-mapper event daemon FIFOs. Feb 20 02:18:07 localhost systemd[1]: Reloading. Feb 20 02:18:07 localhost systemd-rc-local-generator[24210]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 02:18:07 localhost systemd-sysv-generator[24214]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 02:18:07 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 02:18:07 localhost systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling... Feb 20 02:18:07 localhost systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling. Feb 20 02:18:07 localhost systemd[1]: Reloading. Feb 20 02:18:07 localhost systemd-rc-local-generator[24252]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 02:18:07 localhost systemd-sysv-generator[24258]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 02:18:07 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 02:18:07 localhost systemd[1]: Listening on LVM2 poll daemon socket. Feb 20 02:18:08 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Feb 20 02:18:08 localhost systemd[1]: Starting man-db-cache-update.service... Feb 20 02:18:08 localhost systemd[1]: Reloading. 
Feb 20 02:18:08 localhost systemd-rc-local-generator[24310]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 02:18:08 localhost systemd-sysv-generator[24313]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 02:18:08 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 02:18:08 localhost systemd[1]: Queuing reload/restart jobs for marked units… Feb 20 02:18:08 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Feb 20 02:18:08 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Feb 20 02:18:08 localhost systemd[1]: Finished man-db-cache-update.service. Feb 20 02:18:08 localhost systemd[1]: run-r7b34d5936c5e4300b34a75ba32b65759.service: Deactivated successfully. Feb 20 02:18:08 localhost systemd[1]: run-r5473366e7fd1447f8a8c4591821d7000.service: Deactivated successfully. Feb 20 02:18:19 localhost sshd[24904]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:18:33 localhost sshd[24906]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:19:05 localhost sshd[24908]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:19:09 localhost systemd-logind[760]: Session 12 logged out. Waiting for processes to exit. Feb 20 02:19:09 localhost systemd[1]: session-12.scope: Deactivated successfully. Feb 20 02:19:09 localhost systemd[1]: session-12.scope: Consumed 4.624s CPU time. Feb 20 02:19:09 localhost systemd-logind[760]: Removed session 12. 
Feb 20 02:19:37 localhost sshd[24911]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:19:51 localhost sshd[24913]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:19:59 localhost sshd[24915]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:20:06 localhost sshd[24917]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:20:17 localhost sshd[24919]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:20:39 localhost sshd[24921]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:20:59 localhost sshd[24923]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:21:14 localhost sshd[24925]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:21:31 localhost sshd[24927]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:21:37 localhost sshd[24929]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:22:09 localhost sshd[24931]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:22:32 localhost sshd[24933]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:22:40 localhost sshd[24935]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:23:00 localhost sshd[24938]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:23:12 localhost sshd[24941]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:23:39 localhost sshd[24943]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:23:44 localhost sshd[24945]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:23:50 localhost sshd[24947]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:24:16 localhost sshd[24949]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:24:27 localhost sshd[24951]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:24:47 localhost sshd[24953]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:25:03 localhost sshd[24955]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:25:04 localhost sshd[24957]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:25:19 localhost sshd[24959]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:25:21 localhost sshd[24961]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:25:44 localhost sshd[24963]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:26:00 localhost sshd[24964]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:26:18 localhost sshd[24966]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:26:51 localhost sshd[24968]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:26:53 localhost sshd[24970]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:27:32 localhost sshd[24972]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:27:38 localhost sshd[24974]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:28:06 localhost sshd[24977]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:28:25 localhost sshd[24979]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:28:28 localhost sshd[24981]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:28:34 localhost sshd[24983]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:28:46 localhost sshd[24985]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:29:25 localhost sshd[24987]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:29:41 localhost sshd[24989]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:29:42 localhost sshd[24990]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:30:21 localhost sshd[24991]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:30:22 localhost sshd[24993]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:30:24 localhost sshd[24995]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:31:13 localhost sshd[24997]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:31:38 localhost sshd[24999]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:31:53 localhost sshd[25001]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:31:56 localhost sshd[25003]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:32:07 localhost sshd[25005]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:32:10 localhost sshd[25007]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:32:42 localhost sshd[25009]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:33:02 localhost sshd[25011]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:33:49 localhost sshd[25014]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:33:52 localhost sshd[25016]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:33:58 localhost sshd[25018]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:34:48 localhost sshd[25021]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:34:54 localhost sshd[25023]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:35:02 localhost sshd[25025]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:35:03 localhost systemd-logind[760]: New session 13 of user zuul. Feb 20 02:35:03 localhost systemd[1]: Started Session 13 of User zuul. Feb 20 02:35:03 localhost python3[25073]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 20 02:35:05 localhost python3[25160]: ansible-ansible.builtin.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Feb 20 02:35:06 localhost sshd[25162]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:35:08 localhost python3[25179]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 20 02:35:09 localhost python3[25195]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 02:35:09 localhost kernel: loop: module loaded Feb 20 02:35:09 localhost kernel: loop3: detected capacity change from 0 to 14680064 Feb 20 02:35:09 localhost python3[25220]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 02:35:09 localhost lvm[25223]: PV /dev/loop3 not used. Feb 20 02:35:09 localhost lvm[25225]: PV /dev/loop3 online, VG ceph_vg0 is complete. Feb 20 02:35:10 localhost systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0. Feb 20 02:35:10 localhost lvm[25232]: 1 logical volume(s) in volume group "ceph_vg0" now active Feb 20 02:35:10 localhost lvm[25235]: PV /dev/loop3 online, VG ceph_vg0 is complete. Feb 20 02:35:10 localhost lvm[25235]: VG ceph_vg0 finished Feb 20 02:35:10 localhost systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Feb 20 02:35:10 localhost python3[25283]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 02:35:11 localhost python3[25326]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771572910.3531182-54714-249384020128843/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:35:12 localhost python3[25356]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 02:35:12 localhost systemd[1]: Reloading. Feb 20 02:35:12 localhost systemd-rc-local-generator[25381]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 02:35:12 localhost systemd-sysv-generator[25386]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 02:35:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 02:35:12 localhost systemd[1]: Starting Ceph OSD losetup... Feb 20 02:35:12 localhost bash[25398]: /dev/loop3: [64516]:8401550 (/var/lib/ceph-osd-0.img) Feb 20 02:35:12 localhost systemd[1]: Finished Ceph OSD losetup. 
Feb 20 02:35:12 localhost lvm[25399]: PV /dev/loop3 online, VG ceph_vg0 is complete. Feb 20 02:35:12 localhost lvm[25399]: VG ceph_vg0 finished Feb 20 02:35:12 localhost python3[25416]: ansible-ansible.builtin.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Feb 20 02:35:15 localhost python3[25433]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 20 02:35:16 localhost python3[25449]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=7G#012losetup /dev/loop4 /var/lib/ceph-osd-1.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 02:35:16 localhost kernel: loop4: detected capacity change from 0 to 14680064 Feb 20 02:35:16 localhost python3[25471]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4#012vgcreate ceph_vg1 /dev/loop4#012lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 02:35:17 localhost lvm[25474]: PV /dev/loop4 not used. Feb 20 02:35:17 localhost lvm[25484]: PV /dev/loop4 online, VG ceph_vg1 is complete. Feb 20 02:35:17 localhost systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1. 
Feb 20 02:35:17 localhost lvm[25486]: 1 logical volume(s) in volume group "ceph_vg1" now active Feb 20 02:35:17 localhost systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully. Feb 20 02:35:17 localhost python3[25534]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 02:35:18 localhost python3[25577]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771572917.5293367-54973-104920060637077/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:35:18 localhost python3[25607]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 02:35:18 localhost systemd[1]: Reloading. Feb 20 02:35:18 localhost systemd-rc-local-generator[25631]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 02:35:18 localhost systemd-sysv-generator[25635]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 02:35:19 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 02:35:19 localhost systemd[1]: Starting Ceph OSD losetup... 
Feb 20 02:35:19 localhost bash[25648]: /dev/loop4: [64516]:8400144 (/var/lib/ceph-osd-1.img) Feb 20 02:35:19 localhost systemd[1]: Finished Ceph OSD losetup. Feb 20 02:35:19 localhost lvm[25649]: PV /dev/loop4 online, VG ceph_vg1 is complete. Feb 20 02:35:19 localhost lvm[25649]: VG ceph_vg1 finished Feb 20 02:35:28 localhost python3[25694]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all', 'min'] gather_timeout=45 filter=[] fact_path=/etc/ansible/facts.d Feb 20 02:35:29 localhost python3[25714]: ansible-hostname Invoked with name=np0005625202.localdomain use=None Feb 20 02:35:29 localhost systemd[1]: Starting Hostname Service... Feb 20 02:35:29 localhost systemd[1]: Started Hostname Service. Feb 20 02:35:29 localhost sshd[25722]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:35:31 localhost python3[25739]: ansible-tempfile Invoked with state=file suffix=tmphosts prefix=ansible. path=None Feb 20 02:35:32 localhost python3[25787]: ansible-ansible.legacy.copy Invoked with remote_src=True src=/etc/hosts dest=/tmp/ansible.kzwt5z3rtmphosts mode=preserve backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:35:32 localhost python3[25818]: ansible-blockinfile Invoked with state=absent path=/tmp/ansible.kzwt5z3rtmphosts block= marker=# {mark} marker_begin=HEAT_HOSTS_START - Do not edit manually within this section! marker_end=HEAT_HOSTS_END create=False backup=False unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 02:35:33 localhost python3[25834]: ansible-blockinfile Invoked with create=True path=/tmp/ansible.kzwt5z3rtmphosts insertbefore=BOF block=192.168.122.106 np0005625202.localdomain np0005625202#012192.168.122.106 np0005625202.ctlplane.localdomain np0005625202.ctlplane#012192.168.122.107 np0005625203.localdomain np0005625203#012192.168.122.107 np0005625203.ctlplane.localdomain np0005625203.ctlplane#012192.168.122.108 np0005625204.localdomain np0005625204#012192.168.122.108 np0005625204.ctlplane.localdomain np0005625204.ctlplane#012192.168.122.103 np0005625199.localdomain np0005625199#012192.168.122.103 np0005625199.ctlplane.localdomain np0005625199.ctlplane#012192.168.122.104 np0005625200.localdomain np0005625200#012192.168.122.104 np0005625200.ctlplane.localdomain np0005625200.ctlplane#012192.168.122.105 np0005625201.localdomain np0005625201#012192.168.122.105 np0005625201.ctlplane.localdomain np0005625201.ctlplane#012#012192.168.122.100 undercloud.ctlplane.localdomain undercloud.ctlplane#012 marker=# {mark} marker_begin=START_HOST_ENTRIES_FOR_STACK: overcloud marker_end=END_HOST_ENTRIES_FOR_STACK: overcloud state=present backup=False unsafe_writes=False insertafter=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:35:33 localhost python3[25850]: ansible-ansible.legacy.command Invoked with _raw_params=cp "/tmp/ansible.kzwt5z3rtmphosts" "/etc/hosts" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 02:35:34 localhost python3[25867]: ansible-file Invoked with path=/tmp/ansible.kzwt5z3rtmphosts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 02:35:36 localhost python3[25883]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ntpd.service || systemctl is-enabled ntpd.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 02:35:37 localhost python3[25901]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Feb 20 02:35:41 localhost python3[25950]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 02:35:41 localhost sshd[25966]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:35:41 localhost python3[25997]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771572940.6167862-55769-139547502149091/source dest=/etc/chrony.conf owner=root group=root mode=420 follow=False _original_basename=chrony.conf.j2 checksum=4fd4fbbb2de00c70a54478b7feb8ef8adf6a3362 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:35:42 localhost python3[26027]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 02:35:43 localhost python3[26045]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 20 02:35:43 localhost chronyd[766]: chronyd exiting Feb 20 02:35:43 localhost systemd[1]: Stopping NTP client/server... Feb 20 02:35:43 localhost systemd[1]: chronyd.service: Deactivated successfully. Feb 20 02:35:43 localhost systemd[1]: Stopped NTP client/server. Feb 20 02:35:43 localhost systemd[1]: chronyd.service: Consumed 119ms CPU time, read 1.9M from disk, written 0B to disk. Feb 20 02:35:43 localhost systemd[1]: Starting NTP client/server... Feb 20 02:35:43 localhost chronyd[26052]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG) Feb 20 02:35:43 localhost chronyd[26052]: Frequency -26.188 +/- 0.446 ppm read from /var/lib/chrony/drift Feb 20 02:35:43 localhost chronyd[26052]: Loaded seccomp filter (level 2) Feb 20 02:35:43 localhost systemd[1]: Started NTP client/server.
Feb 20 02:35:44 localhost python3[26101]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/chrony-online.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 02:35:44 localhost python3[26144]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771572943.961494-56001-257794471765382/source dest=/etc/systemd/system/chrony-online.service _original_basename=chrony-online.service follow=False checksum=d4d85e046d61f558ac7ec8178c6d529d893e81e1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:35:45 localhost python3[26174]: ansible-systemd Invoked with state=started name=chrony-online.service enabled=True daemon-reload=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 02:35:45 localhost systemd[1]: Reloading. Feb 20 02:35:45 localhost systemd-rc-local-generator[26197]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 02:35:45 localhost systemd-sysv-generator[26202]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 02:35:45 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 02:35:45 localhost systemd[1]: Reloading. Feb 20 02:35:45 localhost systemd-rc-local-generator[26240]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 02:35:45 localhost systemd-sysv-generator[26244]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. 
Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 02:35:45 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 02:35:45 localhost systemd[1]: Starting chronyd online sources service... Feb 20 02:35:45 localhost chronyc[26249]: 200 OK Feb 20 02:35:45 localhost systemd[1]: chrony-online.service: Deactivated successfully. Feb 20 02:35:45 localhost systemd[1]: Finished chronyd online sources service. Feb 20 02:35:46 localhost python3[26265]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc makestep _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 02:35:46 localhost chronyd[26052]: System clock was stepped by 0.000000 seconds Feb 20 02:35:46 localhost python3[26282]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc waitsync 30 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 02:35:48 localhost chronyd[26052]: Selected source 209.227.173.244 (pool.ntp.org) Feb 20 02:35:57 localhost python3[26299]: ansible-timezone Invoked with name=UTC hwclock=None Feb 20 02:35:57 localhost systemd[1]: Starting Time & Date Service... Feb 20 02:35:57 localhost systemd[1]: Started Time & Date Service. Feb 20 02:35:58 localhost python3[26319]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 20 02:35:58 localhost systemd[1]: Stopping NTP client/server... Feb 20 02:35:58 localhost chronyd[26052]: chronyd exiting Feb 20 02:35:58 localhost systemd[1]: chronyd.service: Deactivated successfully. 
Feb 20 02:35:58 localhost systemd[1]: Stopped NTP client/server. Feb 20 02:35:58 localhost systemd[1]: Starting NTP client/server... Feb 20 02:35:58 localhost chronyd[26327]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG) Feb 20 02:35:58 localhost chronyd[26327]: Frequency -26.188 +/- 0.446 ppm read from /var/lib/chrony/drift Feb 20 02:35:58 localhost chronyd[26327]: Loaded seccomp filter (level 2) Feb 20 02:35:58 localhost systemd[1]: Started NTP client/server. Feb 20 02:35:59 localhost systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 20 02:36:02 localhost chronyd[26327]: Selected source 23.133.168.245 (pool.ntp.org) Feb 20 02:36:09 localhost sshd[26332]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:36:20 localhost sshd[26527]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:36:27 localhost systemd[1]: systemd-timedated.service: Deactivated successfully. Feb 20 02:36:35 localhost sshd[26531]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:36:40 localhost sshd[26533]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:36:43 localhost sshd[26535]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:37:07 localhost sshd[26537]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:37:25 localhost sshd[26539]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:37:30 localhost sshd[26541]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:37:56 localhost sshd[26543]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:37:56 localhost systemd[1]: Created slice User Slice of UID 1002. Feb 20 02:37:56 localhost systemd[1]: Starting User Runtime Directory /run/user/1002... Feb 20 02:37:56 localhost systemd-logind[760]: New session 14 of user ceph-admin. Feb 20 02:37:56 localhost systemd[1]: Finished User Runtime Directory /run/user/1002. Feb 20 02:37:56 localhost systemd[1]: Starting User Manager for UID 1002... 
Feb 20 02:37:56 localhost sshd[26560]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:37:56 localhost systemd[26547]: Queued start job for default target Main User Target. Feb 20 02:37:56 localhost systemd[26547]: Created slice User Application Slice. Feb 20 02:37:56 localhost systemd[26547]: Started Mark boot as successful after the user session has run 2 minutes. Feb 20 02:37:56 localhost systemd[26547]: Started Daily Cleanup of User's Temporary Directories. Feb 20 02:37:56 localhost systemd[26547]: Reached target Paths. Feb 20 02:37:56 localhost systemd[26547]: Reached target Timers. Feb 20 02:37:56 localhost systemd[26547]: Starting D-Bus User Message Bus Socket... Feb 20 02:37:56 localhost systemd[26547]: Starting Create User's Volatile Files and Directories... Feb 20 02:37:56 localhost systemd[26547]: Listening on D-Bus User Message Bus Socket. Feb 20 02:37:56 localhost systemd[26547]: Finished Create User's Volatile Files and Directories. Feb 20 02:37:56 localhost systemd[26547]: Reached target Sockets. Feb 20 02:37:56 localhost systemd[26547]: Reached target Basic System. Feb 20 02:37:56 localhost systemd[26547]: Reached target Main User Target. Feb 20 02:37:56 localhost systemd[26547]: Startup finished in 118ms. Feb 20 02:37:56 localhost systemd[1]: Started User Manager for UID 1002. Feb 20 02:37:56 localhost systemd[1]: Started Session 14 of User ceph-admin. Feb 20 02:37:56 localhost systemd-logind[760]: New session 16 of user ceph-admin. Feb 20 02:37:56 localhost systemd[1]: Started Session 16 of User ceph-admin. Feb 20 02:37:56 localhost sshd[26582]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:37:56 localhost systemd-logind[760]: New session 17 of user ceph-admin. Feb 20 02:37:56 localhost systemd[1]: Started Session 17 of User ceph-admin. Feb 20 02:37:57 localhost sshd[26601]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:37:57 localhost systemd-logind[760]: New session 18 of user ceph-admin. 
Feb 20 02:37:57 localhost systemd[1]: Started Session 18 of User ceph-admin. Feb 20 02:37:57 localhost sshd[26620]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:37:57 localhost systemd-logind[760]: New session 19 of user ceph-admin. Feb 20 02:37:57 localhost systemd[1]: Started Session 19 of User ceph-admin. Feb 20 02:37:57 localhost sshd[26639]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:37:58 localhost systemd-logind[760]: New session 20 of user ceph-admin. Feb 20 02:37:58 localhost systemd[1]: Started Session 20 of User ceph-admin. Feb 20 02:37:58 localhost sshd[26658]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:37:58 localhost systemd-logind[760]: New session 21 of user ceph-admin. Feb 20 02:37:58 localhost systemd[1]: Started Session 21 of User ceph-admin. Feb 20 02:37:58 localhost sshd[26677]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:37:58 localhost systemd-logind[760]: New session 22 of user ceph-admin. Feb 20 02:37:58 localhost systemd[1]: Started Session 22 of User ceph-admin. Feb 20 02:37:59 localhost sshd[26696]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:37:59 localhost systemd-logind[760]: New session 23 of user ceph-admin. Feb 20 02:37:59 localhost systemd[1]: Started Session 23 of User ceph-admin. Feb 20 02:37:59 localhost sshd[26715]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:37:59 localhost systemd-logind[760]: New session 24 of user ceph-admin. Feb 20 02:37:59 localhost systemd[1]: Started Session 24 of User ceph-admin. Feb 20 02:38:00 localhost sshd[26732]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:38:00 localhost systemd-logind[760]: New session 25 of user ceph-admin. Feb 20 02:38:00 localhost systemd[1]: Started Session 25 of User ceph-admin. Feb 20 02:38:00 localhost sshd[26751]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:38:00 localhost systemd-logind[760]: New session 26 of user ceph-admin. Feb 20 02:38:00 localhost systemd[1]: Started Session 26 of User ceph-admin. 
Feb 20 02:38:01 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Feb 20 02:38:02 localhost sshd[26790]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:38:18 localhost sshd[26792]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:38:25 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Feb 20 02:38:25 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Feb 20 02:38:26 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Feb 20 02:38:26 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Feb 20 02:38:26 localhost systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 26970 (sysctl) Feb 20 02:38:26 localhost systemd[1]: Mounting Arbitrary Executable File Formats File System... Feb 20 02:38:26 localhost systemd[1]: Mounted Arbitrary Executable File Formats File System. Feb 20 02:38:27 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Feb 20 02:38:27 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Feb 20 02:38:31 localhost kernel: VFS: idmapped mount is not enabled. 
Feb 20 02:38:34 localhost sshd[27220]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:38:40 localhost sshd[27223]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:38:50 localhost podman[27108]: Feb 20 02:38:50 localhost podman[27108]: 2026-02-20 07:38:50.836694334 +0000 UTC m=+23.004598836 container create 01c704199530353d03540ec77c352bd9b831ab77ff5ac1917e44e52216332f8f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_pasteur, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-02-09T10:25:24Z, ceph=True, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, distribution-scope=public, vcs-type=git, io.openshift.tags=rhceph ceph, release=1770267347, version=7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers) Feb 20 02:38:50 localhost systemd[1]: var-lib-containers-storage-overlay-volatile\x2dcheck3615766158-merged.mount: Deactivated successfully. Feb 20 02:38:50 localhost podman[27108]: 2026-02-20 07:38:27.874900042 +0000 UTC m=+0.042804564 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 02:38:50 localhost systemd[1]: Created slice Slice /machine. 
Feb 20 02:38:50 localhost systemd[1]: Started libpod-conmon-01c704199530353d03540ec77c352bd9b831ab77ff5ac1917e44e52216332f8f.scope. Feb 20 02:38:50 localhost systemd[1]: Started libcrun container. Feb 20 02:38:50 localhost podman[27108]: 2026-02-20 07:38:50.983641122 +0000 UTC m=+23.151545644 container init 01c704199530353d03540ec77c352bd9b831ab77ff5ac1917e44e52216332f8f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_pasteur, CEPH_POINT_RELEASE=, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_CLEAN=True, architecture=x86_64, io.openshift.tags=rhceph ceph, release=1770267347, vcs-type=git, description=Red Hat Ceph Storage 7, name=rhceph, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, version=7, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14) Feb 20 02:38:50 localhost podman[27108]: 2026-02-20 07:38:50.995645848 +0000 UTC m=+23.163550370 container start 01c704199530353d03540ec77c352bd9b831ab77ff5ac1917e44e52216332f8f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_pasteur, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1770267347, distribution-scope=public, io.k8s.description=Red Hat 
Ceph Storage 7, GIT_CLEAN=True, io.buildah.version=1.42.2, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, vcs-type=git, version=7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhceph ceph, name=rhceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True) Feb 20 02:38:50 localhost podman[27108]: 2026-02-20 07:38:50.995821092 +0000 UTC m=+23.163725614 container attach 01c704199530353d03540ec77c352bd9b831ab77ff5ac1917e44e52216332f8f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_pasteur, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2026-02-09T10:25:24Z, io.openshift.expose-services=, GIT_BRANCH=main, version=7, vcs-type=git, description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_CLEAN=True, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, 
io.buildah.version=1.42.2, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main) Feb 20 02:38:50 localhost friendly_pasteur[27238]: 167 167 Feb 20 02:38:51 localhost systemd[1]: libpod-01c704199530353d03540ec77c352bd9b831ab77ff5ac1917e44e52216332f8f.scope: Deactivated successfully. Feb 20 02:38:51 localhost podman[27108]: 2026-02-20 07:38:51.001638461 +0000 UTC m=+23.169542993 container died 01c704199530353d03540ec77c352bd9b831ab77ff5ac1917e44e52216332f8f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_pasteur, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, description=Red Hat Ceph Storage 7, release=1770267347, CEPH_POINT_RELEASE=, RELEASE=main, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z) Feb 
20 02:38:51 localhost podman[27243]: 2026-02-20 07:38:51.088120645 +0000 UTC m=+0.078249248 container remove 01c704199530353d03540ec77c352bd9b831ab77ff5ac1917e44e52216332f8f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_pasteur, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, vcs-type=git, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, version=7, io.buildah.version=1.42.2, maintainer=Guillaume Abrioux , release=1770267347, distribution-scope=public, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_CLEAN=True, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, name=rhceph, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.openshift.expose-services=) Feb 20 02:38:51 localhost systemd[1]: libpod-conmon-01c704199530353d03540ec77c352bd9b831ab77ff5ac1917e44e52216332f8f.scope: Deactivated successfully. 
Feb 20 02:38:51 localhost podman[27264]: Feb 20 02:38:51 localhost podman[27264]: 2026-02-20 07:38:51.320190834 +0000 UTC m=+0.075391850 container create 3b2951e9f686bf77f7f5472e575a59d25d140f061ee83d03287da9e4bffa7c72 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_lamport, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., name=rhceph, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, architecture=x86_64, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, GIT_BRANCH=main) Feb 20 02:38:51 localhost systemd[1]: Started libpod-conmon-3b2951e9f686bf77f7f5472e575a59d25d140f061ee83d03287da9e4bffa7c72.scope. Feb 20 02:38:51 localhost systemd[1]: Started libcrun container. 
Feb 20 02:38:51 localhost podman[27264]: 2026-02-20 07:38:51.288716243 +0000 UTC m=+0.043917319 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 02:38:51 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6ed9db3c353feefc0c69fb600beb9bdbda3f6230d976b5065b2dacc28232586/merged/rootfs supports timestamps until 2038 (0x7fffffff) Feb 20 02:38:51 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6ed9db3c353feefc0c69fb600beb9bdbda3f6230d976b5065b2dacc28232586/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Feb 20 02:38:51 localhost podman[27264]: 2026-02-20 07:38:51.407753334 +0000 UTC m=+0.162954350 container init 3b2951e9f686bf77f7f5472e575a59d25d140f061ee83d03287da9e4bffa7c72 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_lamport, GIT_CLEAN=True, io.openshift.expose-services=, version=7, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., distribution-scope=public, maintainer=Guillaume Abrioux , vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, io.buildah.version=1.42.2, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, build-date=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 02:38:51 localhost podman[27264]: 2026-02-20 07:38:51.417313012 +0000 UTC m=+0.172514018 container start 3b2951e9f686bf77f7f5472e575a59d25d140f061ee83d03287da9e4bffa7c72 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_lamport, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., distribution-scope=public, RELEASE=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, architecture=x86_64, io.buildah.version=1.42.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, release=1770267347, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, ceph=True) Feb 20 02:38:51 localhost podman[27264]: 2026-02-20 07:38:51.41762079 +0000 UTC m=+0.172821796 container attach 3b2951e9f686bf77f7f5472e575a59d25d140f061ee83d03287da9e4bffa7c72 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_lamport, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, GIT_BRANCH=main, GIT_CLEAN=True, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., summary=Provides the latest Red Hat 
Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, ceph=True, release=1770267347, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, architecture=x86_64, io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Feb 20 02:38:51 localhost systemd[1]: var-lib-containers-storage-overlay-5bcfc406a5ea68b98b28e80ab3553c35be195cfcea6021938bb8e4a4f7254ca0-merged.mount: Deactivated successfully. 
Feb 20 02:38:52 localhost kind_lamport[27279]: [ Feb 20 02:38:52 localhost kind_lamport[27279]: { Feb 20 02:38:52 localhost kind_lamport[27279]: "available": false, Feb 20 02:38:52 localhost kind_lamport[27279]: "ceph_device": false, Feb 20 02:38:52 localhost kind_lamport[27279]: "device_id": "QEMU_DVD-ROM_QM00001", Feb 20 02:38:52 localhost kind_lamport[27279]: "lsm_data": {}, Feb 20 02:38:52 localhost kind_lamport[27279]: "lvs": [], Feb 20 02:38:52 localhost kind_lamport[27279]: "path": "/dev/sr0", Feb 20 02:38:52 localhost kind_lamport[27279]: "rejected_reasons": [ Feb 20 02:38:52 localhost kind_lamport[27279]: "Insufficient space (<5GB)", Feb 20 02:38:52 localhost kind_lamport[27279]: "Has a FileSystem" Feb 20 02:38:52 localhost kind_lamport[27279]: ], Feb 20 02:38:52 localhost kind_lamport[27279]: "sys_api": { Feb 20 02:38:52 localhost kind_lamport[27279]: "actuators": null, Feb 20 02:38:52 localhost kind_lamport[27279]: "device_nodes": "sr0", Feb 20 02:38:52 localhost kind_lamport[27279]: "human_readable_size": "482.00 KB", Feb 20 02:38:52 localhost kind_lamport[27279]: "id_bus": "ata", Feb 20 02:38:52 localhost kind_lamport[27279]: "model": "QEMU DVD-ROM", Feb 20 02:38:52 localhost kind_lamport[27279]: "nr_requests": "2", Feb 20 02:38:52 localhost kind_lamport[27279]: "partitions": {}, Feb 20 02:38:52 localhost kind_lamport[27279]: "path": "/dev/sr0", Feb 20 02:38:52 localhost kind_lamport[27279]: "removable": "1", Feb 20 02:38:52 localhost kind_lamport[27279]: "rev": "2.5+", Feb 20 02:38:52 localhost kind_lamport[27279]: "ro": "0", Feb 20 02:38:52 localhost kind_lamport[27279]: "rotational": "1", Feb 20 02:38:52 localhost kind_lamport[27279]: "sas_address": "", Feb 20 02:38:52 localhost kind_lamport[27279]: "sas_device_handle": "", Feb 20 02:38:52 localhost kind_lamport[27279]: "scheduler_mode": "mq-deadline", Feb 20 02:38:52 localhost kind_lamport[27279]: "sectors": 0, Feb 20 02:38:52 localhost kind_lamport[27279]: "sectorsize": "2048", Feb 20 02:38:52 
localhost kind_lamport[27279]: "size": 493568.0, Feb 20 02:38:52 localhost kind_lamport[27279]: "support_discard": "0", Feb 20 02:38:52 localhost kind_lamport[27279]: "type": "disk", Feb 20 02:38:52 localhost kind_lamport[27279]: "vendor": "QEMU" Feb 20 02:38:52 localhost kind_lamport[27279]: } Feb 20 02:38:52 localhost kind_lamport[27279]: } Feb 20 02:38:52 localhost kind_lamport[27279]: ] Feb 20 02:38:52 localhost systemd[1]: libpod-3b2951e9f686bf77f7f5472e575a59d25d140f061ee83d03287da9e4bffa7c72.scope: Deactivated successfully. Feb 20 02:38:52 localhost podman[27264]: 2026-02-20 07:38:52.251404028 +0000 UTC m=+1.006605064 container died 3b2951e9f686bf77f7f5472e575a59d25d140f061ee83d03287da9e4bffa7c72 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_lamport, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.buildah.version=1.42.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, maintainer=Guillaume Abrioux , build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., RELEASE=main, vcs-type=git, architecture=x86_64, description=Red Hat Ceph Storage 7, version=7, io.openshift.tags=rhceph ceph, name=rhceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Feb 20 02:38:52 localhost systemd[1]: tmp-crun.wNen6v.mount: Deactivated successfully. Feb 20 02:38:52 localhost systemd[1]: var-lib-containers-storage-overlay-f6ed9db3c353feefc0c69fb600beb9bdbda3f6230d976b5065b2dacc28232586-merged.mount: Deactivated successfully. Feb 20 02:38:52 localhost podman[28504]: 2026-02-20 07:38:52.327853244 +0000 UTC m=+0.065887204 container remove 3b2951e9f686bf77f7f5472e575a59d25d140f061ee83d03287da9e4bffa7c72 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_lamport, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_CLEAN=True, ceph=True, io.buildah.version=1.42.2, io.openshift.expose-services=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, GIT_BRANCH=main, distribution-scope=public, release=1770267347, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., RELEASE=main, io.openshift.tags=rhceph ceph, name=rhceph) Feb 20 02:38:52 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Feb 20 02:38:52 localhost systemd[1]: libpod-conmon-3b2951e9f686bf77f7f5472e575a59d25d140f061ee83d03287da9e4bffa7c72.scope: Deactivated successfully. 
Feb 20 02:38:52 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Feb 20 02:38:52 localhost systemd[1]: systemd-coredump.socket: Deactivated successfully. Feb 20 02:38:52 localhost systemd[1]: Closed Process Core Dump Socket. Feb 20 02:38:52 localhost systemd[1]: Stopping Process Core Dump Socket... Feb 20 02:38:52 localhost systemd[1]: Listening on Process Core Dump Socket. Feb 20 02:38:52 localhost systemd[1]: Reloading. Feb 20 02:38:52 localhost systemd-rc-local-generator[28586]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 02:38:52 localhost systemd-sysv-generator[28592]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 02:38:53 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 02:38:53 localhost systemd[1]: Reloading. Feb 20 02:38:53 localhost systemd-sysv-generator[28628]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 02:38:53 localhost systemd-rc-local-generator[28624]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 02:38:53 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 02:39:06 localhost sshd[28637]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:39:17 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. 
Feb 20 02:39:17 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Feb 20 02:39:17 localhost podman[28737]: Feb 20 02:39:17 localhost podman[28737]: 2026-02-20 07:39:17.573947017 +0000 UTC m=+0.038261155 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 02:39:20 localhost podman[28737]: 2026-02-20 07:39:20.920232977 +0000 UTC m=+3.384547145 container create 9fe60deacd51c106cadf6d4b466c3a00cc0d0c79f37c18681a1658961d2fb809 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_germain, maintainer=Guillaume Abrioux , name=rhceph, build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, GIT_BRANCH=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, description=Red Hat Ceph Storage 7) Feb 20 02:39:21 localhost systemd[1]: Started libpod-conmon-9fe60deacd51c106cadf6d4b466c3a00cc0d0c79f37c18681a1658961d2fb809.scope. Feb 20 02:39:21 localhost systemd[1]: Started libcrun container. 
Feb 20 02:39:21 localhost podman[28737]: 2026-02-20 07:39:21.204440942 +0000 UTC m=+3.668755110 container init 9fe60deacd51c106cadf6d4b466c3a00cc0d0c79f37c18681a1658961d2fb809 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_germain, name=rhceph, build-date=2026-02-09T10:25:24Z, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhceph ceph, version=7, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_BRANCH=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, RELEASE=main, CEPH_POINT_RELEASE=) Feb 20 02:39:21 localhost podman[28737]: 2026-02-20 07:39:21.215809623 +0000 UTC m=+3.680123801 container start 9fe60deacd51c106cadf6d4b466c3a00cc0d0c79f37c18681a1658961d2fb809 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_germain, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.42.2, org.opencontainers.image.created=2026-02-09T10:25:24Z, 
vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, version=7, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , release=1770267347, io.openshift.expose-services=, vcs-type=git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, name=rhceph, ceph=True, GIT_BRANCH=main, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git) Feb 20 02:39:21 localhost podman[28737]: 2026-02-20 07:39:21.216076526 +0000 UTC m=+3.680390734 container attach 9fe60deacd51c106cadf6d4b466c3a00cc0d0c79f37c18681a1658961d2fb809 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_germain, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, io.buildah.version=1.42.2, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2026-02-09T10:25:24Z, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., RELEASE=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, maintainer=Guillaume Abrioux , name=rhceph, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, 
org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 20 02:39:21 localhost adoring_germain[28980]: 167 167 Feb 20 02:39:21 localhost systemd[1]: libpod-9fe60deacd51c106cadf6d4b466c3a00cc0d0c79f37c18681a1658961d2fb809.scope: Deactivated successfully. Feb 20 02:39:21 localhost podman[28737]: 2026-02-20 07:39:21.218987104 +0000 UTC m=+3.683301302 container died 9fe60deacd51c106cadf6d4b466c3a00cc0d0c79f37c18681a1658961d2fb809 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_germain, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, version=7, com.redhat.component=rhceph-container, ceph=True, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_CLEAN=True, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., RELEASE=main, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.buildah.version=1.42.2, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7) Feb 20 02:39:21 localhost systemd[1]: var-lib-containers-storage-overlay-14769a5795c1789093bad56b177f517e5ccb8f9af32b01624155edce8422d6de-merged.mount: Deactivated successfully. 
Feb 20 02:39:21 localhost podman[28985]: 2026-02-20 07:39:21.305401837 +0000 UTC m=+0.077698949 container remove 9fe60deacd51c106cadf6d4b466c3a00cc0d0c79f37c18681a1658961d2fb809 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_germain, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, ceph=True, RELEASE=main, build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, release=1770267347, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, GIT_CLEAN=True, maintainer=Guillaume Abrioux , GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 20 02:39:21 localhost systemd[1]: libpod-conmon-9fe60deacd51c106cadf6d4b466c3a00cc0d0c79f37c18681a1658961d2fb809.scope: Deactivated successfully. Feb 20 02:39:21 localhost systemd[1]: Reloading. Feb 20 02:39:21 localhost systemd-rc-local-generator[29025]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 02:39:21 localhost systemd-sysv-generator[29028]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Feb 20 02:39:21 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 02:39:21 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Feb 20 02:39:21 localhost systemd[1]: Reloading. Feb 20 02:39:21 localhost systemd-sysv-generator[29068]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 02:39:21 localhost systemd-rc-local-generator[29060]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 02:39:21 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 02:39:21 localhost systemd[1]: Reached target All Ceph clusters and services. Feb 20 02:39:21 localhost systemd[1]: Reloading. Feb 20 02:39:21 localhost systemd-rc-local-generator[29103]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 02:39:21 localhost systemd-sysv-generator[29107]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 02:39:21 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 02:39:22 localhost systemd[1]: Reached target Ceph cluster a8557ee9-b55d-5519-942c-cf8f6172f1d8. Feb 20 02:39:22 localhost systemd[1]: Reloading. Feb 20 02:39:22 localhost systemd-rc-local-generator[29138]: /etc/rc.d/rc.local is not marked executable, skipping. 
Feb 20 02:39:22 localhost systemd-sysv-generator[29145]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 02:39:22 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 02:39:22 localhost systemd[1]: Reloading. Feb 20 02:39:22 localhost systemd-sysv-generator[29187]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 02:39:22 localhost systemd-rc-local-generator[29183]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 02:39:22 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 02:39:22 localhost systemd[1]: Created slice Slice /system/ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8. Feb 20 02:39:22 localhost systemd[1]: Reached target System Time Set. Feb 20 02:39:22 localhost systemd[1]: Reached target System Time Synchronized. Feb 20 02:39:22 localhost systemd[1]: Starting Ceph crash.np0005625202 for a8557ee9-b55d-5519-942c-cf8f6172f1d8... Feb 20 02:39:22 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Feb 20 02:39:22 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. 
Feb 20 02:39:22 localhost podman[29246]: Feb 20 02:39:22 localhost podman[29246]: 2026-02-20 07:39:22.905806688 +0000 UTC m=+0.083574209 container create 17bcd3d9a6436ea04160ed4c4e04d839e51c8678402dc4e38301f79a98b3dc30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202, name=rhceph, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1770267347, architecture=x86_64, org.opencontainers.image.created=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7) Feb 20 02:39:22 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/026ed7af2d8ac00c1bce49b1b9c1aee73b91062baffbb013f1379e60c5befe86/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Feb 20 02:39:22 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/026ed7af2d8ac00c1bce49b1b9c1aee73b91062baffbb013f1379e60c5befe86/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Feb 20 02:39:22 localhost podman[29246]: 
2026-02-20 07:39:22.86657084 +0000 UTC m=+0.044338421 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 02:39:22 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/026ed7af2d8ac00c1bce49b1b9c1aee73b91062baffbb013f1379e60c5befe86/merged/etc/ceph/ceph.client.crash.np0005625202.keyring supports timestamps until 2038 (0x7fffffff) Feb 20 02:39:22 localhost podman[29246]: 2026-02-20 07:39:22.985542692 +0000 UTC m=+0.163310253 container init 17bcd3d9a6436ea04160ed4c4e04d839e51c8678402dc4e38301f79a98b3dc30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, GIT_BRANCH=main, architecture=x86_64, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z, RELEASE=main, version=7, maintainer=Guillaume Abrioux , io.openshift.expose-services=, vendor=Red Hat, Inc., release=1770267347, CEPH_POINT_RELEASE=, io.buildah.version=1.42.2) Feb 20 02:39:22 localhost podman[29246]: 2026-02-20 07:39:22.994260407 +0000 UTC m=+0.172027968 container start 17bcd3d9a6436ea04160ed4c4e04d839e51c8678402dc4e38301f79a98b3dc30 
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202, GIT_BRANCH=main, vcs-type=git, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, version=7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, ceph=True, architecture=x86_64, com.redhat.component=rhceph-container, build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, io.buildah.version=1.42.2, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Feb 20 02:39:22 localhost bash[29246]: 17bcd3d9a6436ea04160ed4c4e04d839e51c8678402dc4e38301f79a98b3dc30 Feb 20 02:39:23 localhost systemd[1]: Started Ceph crash.np0005625202 for a8557ee9-b55d-5519-942c-cf8f6172f1d8. Feb 20 02:39:23 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202[29260]: INFO:ceph-crash:pinging cluster to exercise our key, trying key client.crash.np0005625202. 
Feb 20 02:39:23 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202[29260]: cluster: Feb 20 02:39:23 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202[29260]: id: a8557ee9-b55d-5519-942c-cf8f6172f1d8 Feb 20 02:39:23 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202[29260]: health: HEALTH_WARN Feb 20 02:39:23 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202[29260]: OSD count 0 < osd_pool_default_size 3 Feb 20 02:39:23 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202[29260]: Feb 20 02:39:23 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202[29260]: services: Feb 20 02:39:23 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202[29260]: mon: 3 daemons, quorum np0005625199,np0005625201,np0005625200 (age 8s) Feb 20 02:39:23 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202[29260]: mgr: np0005625199.ileebh(active, since 2m), standbys: np0005625201.mtnyvu, np0005625200.ypbkax Feb 20 02:39:23 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202[29260]: osd: 0 osds: 0 up, 0 in Feb 20 02:39:23 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202[29260]: Feb 20 02:39:23 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202[29260]: data: Feb 20 02:39:23 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202[29260]: pools: 0 pools, 0 pgs Feb 20 02:39:23 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202[29260]: objects: 0 objects, 0 B Feb 20 02:39:23 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202[29260]: usage: 0 B used, 0 B / 0 B avail Feb 20 02:39:23 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202[29260]: pgs: Feb 20 02:39:23 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202[29260]: Feb 20 02:39:23 localhost 
ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202[29260]: progress: Feb 20 02:39:23 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202[29260]: Updating crash deployment (+4 -> 6) (0s) Feb 20 02:39:23 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202[29260]: [............................] Feb 20 02:39:23 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202[29260]: Feb 20 02:39:23 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202[29260]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s Feb 20 02:39:24 localhost sshd[29288]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:39:31 localhost podman[29358]: Feb 20 02:39:31 localhost podman[29358]: 2026-02-20 07:39:31.499667327 +0000 UTC m=+0.071968277 container create d794745af2c0e43b4ccc9d68e4c1d791619f695263f19afea9bff30623686164 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_archimedes, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_CLEAN=True, release=1770267347, version=7, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, vcs-type=git, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.42.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, 
maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main) Feb 20 02:39:31 localhost systemd[1]: Started libpod-conmon-d794745af2c0e43b4ccc9d68e4c1d791619f695263f19afea9bff30623686164.scope. Feb 20 02:39:31 localhost systemd[1]: Started libcrun container. Feb 20 02:39:31 localhost podman[29358]: 2026-02-20 07:39:31.470234626 +0000 UTC m=+0.042535576 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 02:39:31 localhost podman[29358]: 2026-02-20 07:39:31.570935229 +0000 UTC m=+0.143236189 container init d794745af2c0e43b4ccc9d68e4c1d791619f695263f19afea9bff30623686164 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_archimedes, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, version=7, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.buildah.version=1.42.2, release=1770267347, distribution-scope=public, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, GIT_BRANCH=main, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_CLEAN=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git) Feb 20 02:39:31 localhost systemd[1]: tmp-crun.t3YbEh.mount: Deactivated successfully. 
Feb 20 02:39:31 localhost podman[29358]: 2026-02-20 07:39:31.582163973 +0000 UTC m=+0.154464923 container start d794745af2c0e43b4ccc9d68e4c1d791619f695263f19afea9bff30623686164 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_archimedes, CEPH_POINT_RELEASE=, io.buildah.version=1.42.2, description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_CLEAN=True, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, GIT_BRANCH=main, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1770267347, ceph=True, vendor=Red Hat, Inc.) 
Feb 20 02:39:31 localhost podman[29358]: 2026-02-20 07:39:31.582507489 +0000 UTC m=+0.154808499 container attach d794745af2c0e43b4ccc9d68e4c1d791619f695263f19afea9bff30623686164 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_archimedes, GIT_CLEAN=True, CEPH_POINT_RELEASE=, build-date=2026-02-09T10:25:24Z, name=rhceph, vendor=Red Hat, Inc., RELEASE=main, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, release=1770267347, io.buildah.version=1.42.2, version=7, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=Red Hat Ceph Storage 7, ceph=True, io.openshift.expose-services=) Feb 20 02:39:31 localhost stoic_archimedes[29373]: 167 167 Feb 20 02:39:31 localhost systemd[1]: libpod-d794745af2c0e43b4ccc9d68e4c1d791619f695263f19afea9bff30623686164.scope: Deactivated successfully. 
Feb 20 02:39:31 localhost podman[29358]: 2026-02-20 07:39:31.585731102 +0000 UTC m=+0.158032072 container died d794745af2c0e43b4ccc9d68e4c1d791619f695263f19afea9bff30623686164 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_archimedes, ceph=True, vcs-type=git, io.buildah.version=1.42.2, architecture=x86_64, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, GIT_BRANCH=main, build-date=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, release=1770267347, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, GIT_CLEAN=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Feb 20 02:39:31 localhost podman[29378]: 2026-02-20 07:39:31.670737939 +0000 UTC m=+0.076200939 container remove d794745af2c0e43b4ccc9d68e4c1d791619f695263f19afea9bff30623686164 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_archimedes, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, 
org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, release=1770267347, distribution-scope=public, build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, maintainer=Guillaume Abrioux , vcs-type=git, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.42.2) Feb 20 02:39:31 localhost systemd[1]: libpod-conmon-d794745af2c0e43b4ccc9d68e4c1d791619f695263f19afea9bff30623686164.scope: Deactivated successfully. Feb 20 02:39:31 localhost podman[29398]: Feb 20 02:39:31 localhost podman[29398]: 2026-02-20 07:39:31.877295509 +0000 UTC m=+0.070810691 container create 83ed1dad39f4723448c5ed742ffd945d55dc71521c7a29f81b31c974158b17ff (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_satoshi, release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, io.openshift.expose-services=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, 
CEPH_POINT_RELEASE=, distribution-scope=public, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , architecture=x86_64) Feb 20 02:39:31 localhost systemd[1]: Started libpod-conmon-83ed1dad39f4723448c5ed742ffd945d55dc71521c7a29f81b31c974158b17ff.scope. Feb 20 02:39:31 localhost systemd[1]: Started libcrun container. Feb 20 02:39:31 localhost podman[29398]: 2026-02-20 07:39:31.84851785 +0000 UTC m=+0.042033092 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 02:39:31 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fcde0b61d5af6b69470cb5c3717f5aa78f9384b57a8d305f81f5f53a0d5d934/merged/rootfs supports timestamps until 2038 (0x7fffffff) Feb 20 02:39:31 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fcde0b61d5af6b69470cb5c3717f5aa78f9384b57a8d305f81f5f53a0d5d934/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Feb 20 02:39:31 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fcde0b61d5af6b69470cb5c3717f5aa78f9384b57a8d305f81f5f53a0d5d934/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Feb 20 02:39:31 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fcde0b61d5af6b69470cb5c3717f5aa78f9384b57a8d305f81f5f53a0d5d934/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Feb 20 02:39:31 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fcde0b61d5af6b69470cb5c3717f5aa78f9384b57a8d305f81f5f53a0d5d934/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff) Feb 20 02:39:32 localhost 
podman[29398]: 2026-02-20 07:39:32.001130454 +0000 UTC m=+0.194645666 container init 83ed1dad39f4723448c5ed742ffd945d55dc71521c7a29f81b31c974158b17ff (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_satoshi, architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., RELEASE=main, maintainer=Guillaume Abrioux , build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, version=7, io.buildah.version=1.42.2, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1770267347, GIT_BRANCH=main, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, name=rhceph, vcs-type=git, GIT_CLEAN=True) Feb 20 02:39:32 localhost podman[29398]: 2026-02-20 07:39:32.011369931 +0000 UTC m=+0.204885123 container start 83ed1dad39f4723448c5ed742ffd945d55dc71521c7a29f81b31c974158b17ff (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_satoshi, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., 
url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, version=7, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, GIT_BRANCH=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, ceph=True, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , architecture=x86_64) Feb 20 02:39:32 localhost podman[29398]: 2026-02-20 07:39:32.011586341 +0000 UTC m=+0.205101523 container attach 83ed1dad39f4723448c5ed742ffd945d55dc71521c7a29f81b31c974158b17ff (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_satoshi, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1770267347, name=rhceph, io.buildah.version=1.42.2, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, GIT_CLEAN=True, ceph=True, RELEASE=main, io.k8s.description=Red Hat 
Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 02:39:32 localhost awesome_satoshi[29412]: --> passed data devices: 0 physical, 2 LVM Feb 20 02:39:32 localhost awesome_satoshi[29412]: --> relative data size: 1.0 Feb 20 02:39:32 localhost systemd[1]: var-lib-containers-storage-overlay-5038245cb509a8551f72407b484c6d0a24a1cff9589d4873bfeeeeb747bd1638-merged.mount: Deactivated successfully. Feb 20 02:39:32 localhost awesome_satoshi[29412]: Running command: /usr/bin/ceph-authtool --gen-print-key Feb 20 02:39:32 localhost awesome_satoshi[29412]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new be635a35-706a-471d-ae03-188a9acf1be1 Feb 20 02:39:33 localhost awesome_satoshi[29412]: Running command: /usr/bin/ceph-authtool --gen-print-key Feb 20 02:39:33 localhost lvm[29466]: PV /dev/loop3 online, VG ceph_vg0 is complete. 
Feb 20 02:39:33 localhost lvm[29466]: VG ceph_vg0 finished Feb 20 02:39:33 localhost awesome_satoshi[29412]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2 Feb 20 02:39:33 localhost awesome_satoshi[29412]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0 Feb 20 02:39:33 localhost awesome_satoshi[29412]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0 Feb 20 02:39:33 localhost awesome_satoshi[29412]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-2/block Feb 20 02:39:33 localhost awesome_satoshi[29412]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap Feb 20 02:39:33 localhost awesome_satoshi[29412]: stderr: got monmap epoch 3 Feb 20 02:39:33 localhost awesome_satoshi[29412]: --> Creating keyring file for osd.2 Feb 20 02:39:33 localhost awesome_satoshi[29412]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring Feb 20 02:39:33 localhost awesome_satoshi[29412]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/ Feb 20 02:39:33 localhost awesome_satoshi[29412]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid be635a35-706a-471d-ae03-188a9acf1be1 --setuser ceph --setgroup ceph Feb 20 02:39:36 localhost awesome_satoshi[29412]: stderr: 2026-02-20T07:39:33.682+0000 7fbbbf684a80 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3] Feb 20 02:39:36 localhost awesome_satoshi[29412]: stderr: 2026-02-20T07:39:33.682+0000 7fbbbf684a80 -1 
bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid Feb 20 02:39:36 localhost awesome_satoshi[29412]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0 Feb 20 02:39:36 localhost awesome_satoshi[29412]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2 Feb 20 02:39:36 localhost awesome_satoshi[29412]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-2 --no-mon-config Feb 20 02:39:36 localhost awesome_satoshi[29412]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-2/block Feb 20 02:39:36 localhost awesome_satoshi[29412]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block Feb 20 02:39:36 localhost awesome_satoshi[29412]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0 Feb 20 02:39:36 localhost awesome_satoshi[29412]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2 Feb 20 02:39:36 localhost awesome_satoshi[29412]: --> ceph-volume lvm activate successful for osd ID: 2 Feb 20 02:39:36 localhost awesome_satoshi[29412]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0 Feb 20 02:39:36 localhost awesome_satoshi[29412]: Running command: /usr/bin/ceph-authtool --gen-print-key Feb 20 02:39:36 localhost awesome_satoshi[29412]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 8337af89-0ad5-4d35-920a-9f8a5f21920c Feb 20 02:39:36 localhost lvm[30399]: PV /dev/loop4 online, VG ceph_vg1 is complete. 
Feb 20 02:39:36 localhost lvm[30399]: VG ceph_vg1 finished Feb 20 02:39:36 localhost awesome_satoshi[29412]: Running command: /usr/bin/ceph-authtool --gen-print-key Feb 20 02:39:36 localhost awesome_satoshi[29412]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-5 Feb 20 02:39:36 localhost awesome_satoshi[29412]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1 Feb 20 02:39:36 localhost awesome_satoshi[29412]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1 Feb 20 02:39:36 localhost awesome_satoshi[29412]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-5/block Feb 20 02:39:36 localhost awesome_satoshi[29412]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-5/activate.monmap Feb 20 02:39:37 localhost awesome_satoshi[29412]: stderr: got monmap epoch 3 Feb 20 02:39:37 localhost awesome_satoshi[29412]: --> Creating keyring file for osd.5 Feb 20 02:39:37 localhost awesome_satoshi[29412]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5/keyring Feb 20 02:39:37 localhost awesome_satoshi[29412]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5/ Feb 20 02:39:37 localhost awesome_satoshi[29412]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 5 --monmap /var/lib/ceph/osd/ceph-5/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-5/ --osd-uuid 8337af89-0ad5-4d35-920a-9f8a5f21920c --setuser ceph --setgroup ceph Feb 20 02:39:39 localhost sshd[31214]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:39:39 localhost awesome_satoshi[29412]: stderr: 2026-02-20T07:39:37.456+0000 7fc35ba3fa80 -1 bluestore(/var/lib/ceph/osd/ceph-5//block) _read_bdev_label unable to decode label at offset 102: void 
bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3] Feb 20 02:39:39 localhost awesome_satoshi[29412]: stderr: 2026-02-20T07:39:37.456+0000 7fc35ba3fa80 -1 bluestore(/var/lib/ceph/osd/ceph-5/) _read_fsid unparsable uuid Feb 20 02:39:39 localhost awesome_satoshi[29412]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1 Feb 20 02:39:39 localhost awesome_satoshi[29412]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5 Feb 20 02:39:39 localhost awesome_satoshi[29412]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-5 --no-mon-config Feb 20 02:39:40 localhost awesome_satoshi[29412]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-5/block Feb 20 02:39:40 localhost awesome_satoshi[29412]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-5/block Feb 20 02:39:40 localhost awesome_satoshi[29412]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1 Feb 20 02:39:40 localhost awesome_satoshi[29412]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5 Feb 20 02:39:40 localhost awesome_satoshi[29412]: --> ceph-volume lvm activate successful for osd ID: 5 Feb 20 02:39:40 localhost awesome_satoshi[29412]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1 Feb 20 02:39:40 localhost systemd[1]: libpod-83ed1dad39f4723448c5ed742ffd945d55dc71521c7a29f81b31c974158b17ff.scope: Deactivated successfully. Feb 20 02:39:40 localhost systemd[1]: libpod-83ed1dad39f4723448c5ed742ffd945d55dc71521c7a29f81b31c974158b17ff.scope: Consumed 3.695s CPU time. 
Feb 20 02:39:40 localhost podman[29398]: 2026-02-20 07:39:40.061907404 +0000 UTC m=+8.255422566 container died 83ed1dad39f4723448c5ed742ffd945d55dc71521c7a29f81b31c974158b17ff (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_satoshi, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, description=Red Hat Ceph Storage 7, version=7, io.openshift.expose-services=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, io.buildah.version=1.42.2, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, build-date=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_CLEAN=True, release=1770267347, GIT_BRANCH=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Feb 20 02:39:40 localhost systemd[1]: var-lib-containers-storage-overlay-8fcde0b61d5af6b69470cb5c3717f5aa78f9384b57a8d305f81f5f53a0d5d934-merged.mount: Deactivated successfully. 
Feb 20 02:39:40 localhost podman[31299]: 2026-02-20 07:39:40.161138576 +0000 UTC m=+0.086313578 container remove 83ed1dad39f4723448c5ed742ffd945d55dc71521c7a29f81b31c974158b17ff (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_satoshi, GIT_BRANCH=main, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, distribution-scope=public, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph) Feb 20 02:39:40 localhost systemd[1]: libpod-conmon-83ed1dad39f4723448c5ed742ffd945d55dc71521c7a29f81b31c974158b17ff.scope: Deactivated successfully. 
Feb 20 02:39:40 localhost podman[31385]: Feb 20 02:39:40 localhost podman[31385]: 2026-02-20 07:39:40.91318384 +0000 UTC m=+0.072210108 container create cd5508a88bec8e80dabe5c64cefd62b8b3c15a2f17e3c84f47bbea4cd90762ce (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_agnesi, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, architecture=x86_64, ceph=True, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, name=rhceph, vendor=Red Hat, Inc., io.buildah.version=1.42.2, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 20 02:39:40 localhost systemd[1]: Started libpod-conmon-cd5508a88bec8e80dabe5c64cefd62b8b3c15a2f17e3c84f47bbea4cd90762ce.scope. Feb 20 02:39:40 localhost systemd[1]: Started libcrun container. 
Feb 20 02:39:40 localhost podman[31385]: 2026-02-20 07:39:40.883421934 +0000 UTC m=+0.042448202 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 02:39:40 localhost podman[31385]: 2026-02-20 07:39:40.98378444 +0000 UTC m=+0.142810748 container init cd5508a88bec8e80dabe5c64cefd62b8b3c15a2f17e3c84f47bbea4cd90762ce (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_agnesi, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, RELEASE=main, GIT_BRANCH=main, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, build-date=2026-02-09T10:25:24Z, release=1770267347, GIT_CLEAN=True, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, io.buildah.version=1.42.2, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., name=rhceph, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Feb 20 02:39:40 localhost podman[31385]: 2026-02-20 07:39:40.993972646 +0000 UTC m=+0.152998914 container start cd5508a88bec8e80dabe5c64cefd62b8b3c15a2f17e3c84f47bbea4cd90762ce (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_agnesi, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.buildah.version=1.42.2, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1770267347, description=Red Hat Ceph Storage 7, vcs-type=git, vendor=Red Hat, Inc., GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, RELEASE=main, GIT_CLEAN=True, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph) Feb 20 02:39:40 localhost podman[31385]: 2026-02-20 07:39:40.994257029 +0000 UTC m=+0.153283337 container attach cd5508a88bec8e80dabe5c64cefd62b8b3c15a2f17e3c84f47bbea4cd90762ce (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_agnesi, build-date=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, 
org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , GIT_BRANCH=main, io.openshift.expose-services=, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1770267347, com.redhat.component=rhceph-container, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, ceph=True, io.buildah.version=1.42.2, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Feb 20 02:39:40 localhost sad_agnesi[31400]: 167 167 Feb 20 02:39:40 localhost systemd[1]: libpod-cd5508a88bec8e80dabe5c64cefd62b8b3c15a2f17e3c84f47bbea4cd90762ce.scope: Deactivated successfully. 
Feb 20 02:39:40 localhost podman[31385]: 2026-02-20 07:39:40.996926846 +0000 UTC m=+0.155953144 container died cd5508a88bec8e80dabe5c64cefd62b8b3c15a2f17e3c84f47bbea4cd90762ce (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_agnesi, release=1770267347, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, vcs-type=git, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., name=rhceph, architecture=x86_64, RELEASE=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, com.redhat.component=rhceph-container) Feb 20 02:39:41 localhost podman[31405]: 2026-02-20 07:39:41.064702922 +0000 UTC m=+0.062025833 container remove cd5508a88bec8e80dabe5c64cefd62b8b3c15a2f17e3c84f47bbea4cd90762ce (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_agnesi, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, CEPH_POINT_RELEASE=, distribution-scope=public, description=Red Hat Ceph Storage 7, name=rhceph, io.openshift.tags=rhceph ceph, version=7, org.opencontainers.image.created=2026-02-09T10:25:24Z, 
io.openshift.expose-services=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, GIT_BRANCH=main, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, release=1770267347, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-09T10:25:24Z, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Feb 20 02:39:41 localhost systemd[1]: libpod-conmon-cd5508a88bec8e80dabe5c64cefd62b8b3c15a2f17e3c84f47bbea4cd90762ce.scope: Deactivated successfully. Feb 20 02:39:41 localhost systemd[1]: var-lib-containers-storage-overlay-c59fb4979e1bc1f4ae8a492f34ceade7ae41b07fe442c16e11fa0689bad26d21-merged.mount: Deactivated successfully. 
Feb 20 02:39:41 localhost podman[31426]: Feb 20 02:39:41 localhost podman[31426]: 2026-02-20 07:39:41.276358345 +0000 UTC m=+0.077328271 container create 682b0edf45a1726c10f8f2d148b0e94bd2a499403668f19d1ee408a1dbb7e7f8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_ride, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, maintainer=Guillaume Abrioux , GIT_BRANCH=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.42.2, name=rhceph, CEPH_POINT_RELEASE=, version=7, build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, architecture=x86_64, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 20 02:39:41 localhost podman[31426]: 2026-02-20 07:39:41.245660684 +0000 UTC m=+0.046630650 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 02:39:41 localhost systemd[1]: Started libpod-conmon-682b0edf45a1726c10f8f2d148b0e94bd2a499403668f19d1ee408a1dbb7e7f8.scope. Feb 20 02:39:41 localhost systemd[1]: Started libcrun container. 
Feb 20 02:39:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fca765841b86c4f9f2a750ee357e955ff223fa636421ce710336477e8c673ede/merged/rootfs supports timestamps until 2038 (0x7fffffff) Feb 20 02:39:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fca765841b86c4f9f2a750ee357e955ff223fa636421ce710336477e8c673ede/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Feb 20 02:39:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fca765841b86c4f9f2a750ee357e955ff223fa636421ce710336477e8c673ede/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Feb 20 02:39:41 localhost podman[31426]: 2026-02-20 07:39:41.416119067 +0000 UTC m=+0.217088993 container init 682b0edf45a1726c10f8f2d148b0e94bd2a499403668f19d1ee408a1dbb7e7f8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_ride, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, ceph=True, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, name=rhceph, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux 
, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.42.2) Feb 20 02:39:41 localhost podman[31426]: 2026-02-20 07:39:41.428863474 +0000 UTC m=+0.229833390 container start 682b0edf45a1726c10f8f2d148b0e94bd2a499403668f19d1ee408a1dbb7e7f8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_ride, ceph=True, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, RELEASE=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, architecture=x86_64, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, vcs-type=git, release=1770267347, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., name=rhceph) Feb 20 02:39:41 localhost podman[31426]: 2026-02-20 07:39:41.429239892 +0000 UTC m=+0.230209858 container attach 682b0edf45a1726c10f8f2d148b0e94bd2a499403668f19d1ee408a1dbb7e7f8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_ride, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, GIT_CLEAN=True, RELEASE=main, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, architecture=x86_64, org.opencontainers.image.created=2026-02-09T10:25:24Z, name=rhceph, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, io.buildah.version=1.42.2, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1770267347) Feb 20 02:39:41 localhost elastic_ride[31441]: { Feb 20 02:39:41 localhost elastic_ride[31441]: "2": [ Feb 20 02:39:41 localhost elastic_ride[31441]: { Feb 20 02:39:41 localhost elastic_ride[31441]: "devices": [ Feb 20 02:39:41 localhost elastic_ride[31441]: "/dev/loop3" Feb 20 02:39:41 localhost elastic_ride[31441]: ], Feb 20 02:39:41 localhost elastic_ride[31441]: "lv_name": "ceph_lv0", Feb 20 02:39:41 localhost elastic_ride[31441]: "lv_path": "/dev/ceph_vg0/ceph_lv0", Feb 20 02:39:41 localhost elastic_ride[31441]: "lv_size": "7511998464", Feb 20 02:39:41 localhost elastic_ride[31441]: "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sNZSAD-hQzp-fmHd-DmQZ-OV7W-jyNQ-g1Zfpy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a8557ee9-b55d-5519-942c-cf8f6172f1d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=be635a35-706a-471d-ae03-188a9acf1be1,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0", Feb 20 02:39:41 localhost elastic_ride[31441]: "lv_uuid": "sNZSAD-hQzp-fmHd-DmQZ-OV7W-jyNQ-g1Zfpy", Feb 20 02:39:41 localhost 
elastic_ride[31441]: "name": "ceph_lv0", Feb 20 02:39:41 localhost elastic_ride[31441]: "path": "/dev/ceph_vg0/ceph_lv0", Feb 20 02:39:41 localhost elastic_ride[31441]: "tags": { Feb 20 02:39:41 localhost elastic_ride[31441]: "ceph.block_device": "/dev/ceph_vg0/ceph_lv0", Feb 20 02:39:41 localhost elastic_ride[31441]: "ceph.block_uuid": "sNZSAD-hQzp-fmHd-DmQZ-OV7W-jyNQ-g1Zfpy", Feb 20 02:39:41 localhost elastic_ride[31441]: "ceph.cephx_lockbox_secret": "", Feb 20 02:39:41 localhost elastic_ride[31441]: "ceph.cluster_fsid": "a8557ee9-b55d-5519-942c-cf8f6172f1d8", Feb 20 02:39:41 localhost elastic_ride[31441]: "ceph.cluster_name": "ceph", Feb 20 02:39:41 localhost elastic_ride[31441]: "ceph.crush_device_class": "", Feb 20 02:39:41 localhost elastic_ride[31441]: "ceph.encrypted": "0", Feb 20 02:39:41 localhost elastic_ride[31441]: "ceph.osd_fsid": "be635a35-706a-471d-ae03-188a9acf1be1", Feb 20 02:39:41 localhost elastic_ride[31441]: "ceph.osd_id": "2", Feb 20 02:39:41 localhost elastic_ride[31441]: "ceph.osdspec_affinity": "default_drive_group", Feb 20 02:39:41 localhost elastic_ride[31441]: "ceph.type": "block", Feb 20 02:39:41 localhost elastic_ride[31441]: "ceph.vdo": "0" Feb 20 02:39:41 localhost elastic_ride[31441]: }, Feb 20 02:39:41 localhost elastic_ride[31441]: "type": "block", Feb 20 02:39:41 localhost elastic_ride[31441]: "vg_name": "ceph_vg0" Feb 20 02:39:41 localhost elastic_ride[31441]: } Feb 20 02:39:41 localhost elastic_ride[31441]: ], Feb 20 02:39:41 localhost elastic_ride[31441]: "5": [ Feb 20 02:39:41 localhost elastic_ride[31441]: { Feb 20 02:39:41 localhost elastic_ride[31441]: "devices": [ Feb 20 02:39:41 localhost elastic_ride[31441]: "/dev/loop4" Feb 20 02:39:41 localhost elastic_ride[31441]: ], Feb 20 02:39:41 localhost elastic_ride[31441]: "lv_name": "ceph_lv1", Feb 20 02:39:41 localhost elastic_ride[31441]: "lv_path": "/dev/ceph_vg1/ceph_lv1", Feb 20 02:39:41 localhost elastic_ride[31441]: "lv_size": "7511998464", Feb 20 02:39:41 localhost 
elastic_ride[31441]: "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LVDIZh-QGc3-TLtU-BGcm-vQY6-HDje-3A7wNw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a8557ee9-b55d-5519-942c-cf8f6172f1d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8337af89-0ad5-4d35-920a-9f8a5f21920c,ceph.osd_id=5,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0", Feb 20 02:39:41 localhost elastic_ride[31441]: "lv_uuid": "LVDIZh-QGc3-TLtU-BGcm-vQY6-HDje-3A7wNw", Feb 20 02:39:41 localhost elastic_ride[31441]: "name": "ceph_lv1", Feb 20 02:39:41 localhost elastic_ride[31441]: "path": "/dev/ceph_vg1/ceph_lv1", Feb 20 02:39:41 localhost elastic_ride[31441]: "tags": { Feb 20 02:39:41 localhost elastic_ride[31441]: "ceph.block_device": "/dev/ceph_vg1/ceph_lv1", Feb 20 02:39:41 localhost elastic_ride[31441]: "ceph.block_uuid": "LVDIZh-QGc3-TLtU-BGcm-vQY6-HDje-3A7wNw", Feb 20 02:39:41 localhost elastic_ride[31441]: "ceph.cephx_lockbox_secret": "", Feb 20 02:39:41 localhost elastic_ride[31441]: "ceph.cluster_fsid": "a8557ee9-b55d-5519-942c-cf8f6172f1d8", Feb 20 02:39:41 localhost elastic_ride[31441]: "ceph.cluster_name": "ceph", Feb 20 02:39:41 localhost elastic_ride[31441]: "ceph.crush_device_class": "", Feb 20 02:39:41 localhost elastic_ride[31441]: "ceph.encrypted": "0", Feb 20 02:39:41 localhost elastic_ride[31441]: "ceph.osd_fsid": "8337af89-0ad5-4d35-920a-9f8a5f21920c", Feb 20 02:39:41 localhost elastic_ride[31441]: "ceph.osd_id": "5", Feb 20 02:39:41 localhost elastic_ride[31441]: "ceph.osdspec_affinity": "default_drive_group", Feb 20 02:39:41 localhost elastic_ride[31441]: "ceph.type": "block", Feb 20 02:39:41 localhost elastic_ride[31441]: "ceph.vdo": "0" Feb 20 02:39:41 localhost elastic_ride[31441]: }, Feb 20 02:39:41 localhost elastic_ride[31441]: "type": "block", Feb 20 02:39:41 localhost elastic_ride[31441]: "vg_name": "ceph_vg1" Feb 20 02:39:41 localhost elastic_ride[31441]: } Feb 20 02:39:41 localhost 
elastic_ride[31441]: ] Feb 20 02:39:41 localhost elastic_ride[31441]: } Feb 20 02:39:41 localhost systemd[1]: libpod-682b0edf45a1726c10f8f2d148b0e94bd2a499403668f19d1ee408a1dbb7e7f8.scope: Deactivated successfully. Feb 20 02:39:41 localhost podman[31426]: 2026-02-20 07:39:41.780967713 +0000 UTC m=+0.581937679 container died 682b0edf45a1726c10f8f2d148b0e94bd2a499403668f19d1ee408a1dbb7e7f8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_ride, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, vendor=Red Hat, Inc., release=1770267347, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, architecture=x86_64, build-date=2026-02-09T10:25:24Z, GIT_CLEAN=True, CEPH_POINT_RELEASE=, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True) Feb 20 02:39:41 localhost podman[31450]: 2026-02-20 07:39:41.873040334 +0000 UTC m=+0.081891058 container remove 682b0edf45a1726c10f8f2d148b0e94bd2a499403668f19d1ee408a1dbb7e7f8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_ride, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, description=Red Hat Ceph Storage 7, 
io.buildah.version=1.42.2, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, build-date=2026-02-09T10:25:24Z, GIT_CLEAN=True, RELEASE=main, GIT_BRANCH=main, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux ) Feb 20 02:39:41 localhost systemd[1]: libpod-conmon-682b0edf45a1726c10f8f2d148b0e94bd2a499403668f19d1ee408a1dbb7e7f8.scope: Deactivated successfully. Feb 20 02:39:42 localhost systemd[1]: var-lib-containers-storage-overlay-fca765841b86c4f9f2a750ee357e955ff223fa636421ce710336477e8c673ede-merged.mount: Deactivated successfully. 
Feb 20 02:39:42 localhost podman[31536]: Feb 20 02:39:42 localhost podman[31536]: 2026-02-20 07:39:42.667648963 +0000 UTC m=+0.068679219 container create 4e04f88fd00d1fcbbf89e93382d973dd50d970143238e6d22014c48fef8ff13a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_lederberg, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, version=7, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, architecture=x86_64, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, name=rhceph, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, io.buildah.version=1.42.2, vcs-type=git, com.redhat.component=rhceph-container, vendor=Red Hat, Inc.) Feb 20 02:39:42 localhost systemd[1]: Started libpod-conmon-4e04f88fd00d1fcbbf89e93382d973dd50d970143238e6d22014c48fef8ff13a.scope. Feb 20 02:39:42 localhost systemd[1]: tmp-crun.eZhTe6.mount: Deactivated successfully. Feb 20 02:39:42 localhost systemd[1]: Started libcrun container. 
Feb 20 02:39:42 localhost podman[31536]: 2026-02-20 07:39:42.731219869 +0000 UTC m=+0.132250125 container init 4e04f88fd00d1fcbbf89e93382d973dd50d970143238e6d22014c48fef8ff13a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_lederberg, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, release=1770267347, ceph=True, io.buildah.version=1.42.2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.openshift.expose-services=, version=7) Feb 20 02:39:42 localhost podman[31536]: 2026-02-20 07:39:42.640142644 +0000 UTC m=+0.041172920 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 02:39:42 localhost podman[31536]: 2026-02-20 07:39:42.739871811 +0000 UTC m=+0.140902077 container start 4e04f88fd00d1fcbbf89e93382d973dd50d970143238e6d22014c48fef8ff13a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_lederberg, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., version=7, 
distribution-scope=public, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, ceph=True, GIT_BRANCH=main, release=1770267347, name=rhceph, build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7) Feb 20 02:39:42 localhost podman[31536]: 2026-02-20 07:39:42.742121788 +0000 UTC m=+0.143152044 container attach 4e04f88fd00d1fcbbf89e93382d973dd50d970143238e6d22014c48fef8ff13a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_lederberg, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, org.opencontainers.image.created=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., version=7, GIT_BRANCH=main, RELEASE=main, io.buildah.version=1.42.2, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, ceph=True, distribution-scope=public, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, release=1770267347, build-date=2026-02-09T10:25:24Z) Feb 20 02:39:42 localhost exciting_lederberg[31551]: 167 167 Feb 20 02:39:42 localhost systemd[1]: libpod-4e04f88fd00d1fcbbf89e93382d973dd50d970143238e6d22014c48fef8ff13a.scope: Deactivated successfully. Feb 20 02:39:42 localhost podman[31536]: 2026-02-20 07:39:42.744944023 +0000 UTC m=+0.145974319 container died 4e04f88fd00d1fcbbf89e93382d973dd50d970143238e6d22014c48fef8ff13a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_lederberg, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, vcs-type=git, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1770267347, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, architecture=x86_64) Feb 20 02:39:42 
localhost podman[31556]: 2026-02-20 07:39:42.834583839 +0000 UTC m=+0.076716723 container remove 4e04f88fd00d1fcbbf89e93382d973dd50d970143238e6d22014c48fef8ff13a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_lederberg, GIT_BRANCH=main, release=1770267347, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, name=rhceph, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , RELEASE=main, io.buildah.version=1.42.2, ceph=True, distribution-scope=public, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z, architecture=x86_64, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7) Feb 20 02:39:42 localhost systemd[1]: libpod-conmon-4e04f88fd00d1fcbbf89e93382d973dd50d970143238e6d22014c48fef8ff13a.scope: Deactivated successfully. Feb 20 02:39:43 localhost systemd[1]: var-lib-containers-storage-overlay-e700292afc7e32961a92f7d0f4082d4b094847e107048a3b8bad97e1a24f5823-merged.mount: Deactivated successfully. 
Feb 20 02:39:43 localhost podman[31584]: Feb 20 02:39:43 localhost podman[31584]: 2026-02-20 07:39:43.157146971 +0000 UTC m=+0.073335871 container create 29baa557fbc7c08c688900aef4d55f3c4eb32863650059f73fd42ef0214f1288 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-2-activate-test, io.openshift.expose-services=, maintainer=Guillaume Abrioux , GIT_BRANCH=main, name=rhceph, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, RELEASE=main, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, version=7, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.component=rhceph-container, release=1770267347, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Feb 20 02:39:43 localhost systemd[1]: Started libpod-conmon-29baa557fbc7c08c688900aef4d55f3c4eb32863650059f73fd42ef0214f1288.scope. Feb 20 02:39:43 localhost systemd[1]: Started libcrun container. 
Feb 20 02:39:43 localhost podman[31584]: 2026-02-20 07:39:43.126773775 +0000 UTC m=+0.042962695 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 02:39:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15b3917cb5b5dd5fc2750c828fb0927feece4ca75a6246d557248333c033a4e9/merged/rootfs supports timestamps until 2038 (0x7fffffff) Feb 20 02:39:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15b3917cb5b5dd5fc2750c828fb0927feece4ca75a6246d557248333c033a4e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Feb 20 02:39:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15b3917cb5b5dd5fc2750c828fb0927feece4ca75a6246d557248333c033a4e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Feb 20 02:39:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15b3917cb5b5dd5fc2750c828fb0927feece4ca75a6246d557248333c033a4e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Feb 20 02:39:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15b3917cb5b5dd5fc2750c828fb0927feece4ca75a6246d557248333c033a4e9/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff) Feb 20 02:39:43 localhost podman[31584]: 2026-02-20 07:39:43.282359461 +0000 UTC m=+0.198548361 container init 29baa557fbc7c08c688900aef4d55f3c4eb32863650059f73fd42ef0214f1288 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-2-activate-test, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, 
GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, ceph=True, maintainer=Guillaume Abrioux , RELEASE=main, io.buildah.version=1.42.2, release=1770267347, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, version=7, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Feb 20 02:39:43 localhost podman[31584]: 2026-02-20 07:39:43.293351973 +0000 UTC m=+0.209540873 container start 29baa557fbc7c08c688900aef4d55f3c4eb32863650059f73fd42ef0214f1288 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-2-activate-test, RELEASE=main, maintainer=Guillaume Abrioux , version=7, build-date=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, ceph=True, vcs-type=git, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, release=1770267347, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, name=rhceph, CEPH_POINT_RELEASE=, io.buildah.version=1.42.2, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Feb 20 02:39:43 localhost podman[31584]: 2026-02-20 07:39:43.293601346 +0000 UTC m=+0.209790296 container attach 29baa557fbc7c08c688900aef4d55f3c4eb32863650059f73fd42ef0214f1288 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-2-activate-test, build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_BRANCH=main, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, description=Red Hat Ceph Storage 7, distribution-scope=public, release=1770267347, name=rhceph, GIT_CLEAN=True, maintainer=Guillaume Abrioux , vcs-type=git, com.redhat.component=rhceph-container) Feb 20 02:39:43 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-2-activate-test[31599]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID] Feb 20 02:39:43 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-2-activate-test[31599]: 
[--no-systemd] [--no-tmpfs] Feb 20 02:39:43 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-2-activate-test[31599]: ceph-volume activate: error: unrecognized arguments: --bad-option Feb 20 02:39:43 localhost systemd[1]: libpod-29baa557fbc7c08c688900aef4d55f3c4eb32863650059f73fd42ef0214f1288.scope: Deactivated successfully. Feb 20 02:39:43 localhost podman[31584]: 2026-02-20 07:39:43.510570332 +0000 UTC m=+0.426759282 container died 29baa557fbc7c08c688900aef4d55f3c4eb32863650059f73fd42ef0214f1288 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-2-activate-test, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, version=7, org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, name=rhceph, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux ) Feb 20 02:39:43 localhost podman[31604]: 2026-02-20 07:39:43.600282502 +0000 UTC m=+0.079988569 container remove 29baa557fbc7c08c688900aef4d55f3c4eb32863650059f73fd42ef0214f1288 
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-2-activate-test, RELEASE=main, build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=rhceph-container, GIT_CLEAN=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, architecture=x86_64, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, distribution-scope=public, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, release=1770267347, io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Feb 20 02:39:43 localhost systemd-journald[618]: Field hash table of /run/log/journal/01f46965e72fd8a157841feaa66c8d52/system.journal has a fill level at 75.1 (250 of 333 items), suggesting rotation. Feb 20 02:39:43 localhost systemd-journald[618]: /run/log/journal/01f46965e72fd8a157841feaa66c8d52/system.journal: Journal header limits reached or header out-of-date, rotating. Feb 20 02:39:43 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Feb 20 02:39:43 localhost systemd[1]: libpod-conmon-29baa557fbc7c08c688900aef4d55f3c4eb32863650059f73fd42ef0214f1288.scope: Deactivated successfully. 
Feb 20 02:39:43 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Feb 20 02:39:43 localhost systemd[1]: Reloading.
Feb 20 02:39:43 localhost systemd-sysv-generator[31667]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 02:39:43 localhost systemd-rc-local-generator[31664]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 02:39:43 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 02:39:44 localhost systemd[1]: var-lib-containers-storage-overlay-15b3917cb5b5dd5fc2750c828fb0927feece4ca75a6246d557248333c033a4e9-merged.mount: Deactivated successfully.
Feb 20 02:39:44 localhost systemd[1]: Reloading.
Feb 20 02:39:44 localhost systemd-rc-local-generator[31704]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 02:39:44 localhost systemd-sysv-generator[31707]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 02:39:44 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 02:39:44 localhost systemd[1]: Starting Ceph osd.2 for a8557ee9-b55d-5519-942c-cf8f6172f1d8...
Feb 20 02:39:44 localhost podman[31765]:
Feb 20 02:39:44 localhost podman[31765]: 2026-02-20 07:39:44.795992131 +0000 UTC m=+0.075959436 container create bc28dad509457e8a7d5615691857fd7761b25eac745817557402c62a75e0af36 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-2-activate, build-date=2026-02-09T10:25:24Z, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, release=1770267347, GIT_CLEAN=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, version=7, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, name=rhceph, io.buildah.version=1.42.2, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64)
Feb 20 02:39:44 localhost systemd[1]: Started libcrun container.
Feb 20 02:39:44 localhost podman[31765]: 2026-02-20 07:39:44.766208363 +0000 UTC m=+0.046175688 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 02:39:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daf213561a2ff5628a94d552c3b198eb957b840876216484eab3e87bb4b76222/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 20 02:39:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daf213561a2ff5628a94d552c3b198eb957b840876216484eab3e87bb4b76222/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 20 02:39:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daf213561a2ff5628a94d552c3b198eb957b840876216484eab3e87bb4b76222/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 20 02:39:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daf213561a2ff5628a94d552c3b198eb957b840876216484eab3e87bb4b76222/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 20 02:39:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daf213561a2ff5628a94d552c3b198eb957b840876216484eab3e87bb4b76222/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Feb 20 02:39:44 localhost podman[31765]: 2026-02-20 07:39:44.923208346 +0000 UTC m=+0.203175661 container init bc28dad509457e8a7d5615691857fd7761b25eac745817557402c62a75e0af36 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-2-activate, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, name=rhceph, release=1770267347, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.42.2, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , version=7, distribution-scope=public, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 02:39:44 localhost podman[31765]: 2026-02-20 07:39:44.934403689 +0000 UTC m=+0.214370994 container start bc28dad509457e8a7d5615691857fd7761b25eac745817557402c62a75e0af36 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-2-activate, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., version=7, io.openshift.tags=rhceph ceph, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, vcs-type=git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, architecture=x86_64, release=1770267347, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, RELEASE=main)
Feb 20 02:39:44 localhost podman[31765]: 2026-02-20 07:39:44.934743525 +0000 UTC m=+0.214710840 container attach bc28dad509457e8a7d5615691857fd7761b25eac745817557402c62a75e0af36 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-2-activate, vendor=Red Hat, Inc., ceph=True, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, distribution-scope=public, RELEASE=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, build-date=2026-02-09T10:25:24Z, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, maintainer=Guillaume Abrioux , vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1770267347, vcs-type=git, org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Feb 20 02:39:45 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-2-activate[31779]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Feb 20 02:39:45 localhost bash[31765]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Feb 20 02:39:45 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-2-activate[31779]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Feb 20 02:39:45 localhost bash[31765]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Feb 20 02:39:45 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-2-activate[31779]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Feb 20 02:39:45 localhost bash[31765]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Feb 20 02:39:45 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-2-activate[31779]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Feb 20 02:39:45 localhost bash[31765]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Feb 20 02:39:45 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-2-activate[31779]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-2/block
Feb 20 02:39:45 localhost bash[31765]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-2/block
Feb 20 02:39:45 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-2-activate[31779]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Feb 20 02:39:45 localhost bash[31765]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Feb 20 02:39:45 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-2-activate[31779]: --> ceph-volume raw activate successful for osd ID: 2
Feb 20 02:39:45 localhost bash[31765]: --> ceph-volume raw activate successful for osd ID: 2
Feb 20 02:39:45 localhost systemd[1]: libpod-bc28dad509457e8a7d5615691857fd7761b25eac745817557402c62a75e0af36.scope: Deactivated successfully.
Feb 20 02:39:45 localhost podman[31765]: 2026-02-20 07:39:45.617569994 +0000 UTC m=+0.897537309 container died bc28dad509457e8a7d5615691857fd7761b25eac745817557402c62a75e0af36 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-2-activate, maintainer=Guillaume Abrioux , vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, vcs-type=git, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, RELEASE=main, io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., version=7, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, name=rhceph, build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container)
Feb 20 02:39:45 localhost systemd[1]: var-lib-containers-storage-overlay-daf213561a2ff5628a94d552c3b198eb957b840876216484eab3e87bb4b76222-merged.mount: Deactivated successfully.
Feb 20 02:39:45 localhost podman[31904]: 2026-02-20 07:39:45.689636514 +0000 UTC m=+0.064114413 container remove bc28dad509457e8a7d5615691857fd7761b25eac745817557402c62a75e0af36 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-2-activate, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, architecture=x86_64, io.openshift.expose-services=, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2026-02-09T10:25:24Z, RELEASE=main, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, release=1770267347, vendor=Red Hat, Inc., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, GIT_CLEAN=True)
Feb 20 02:39:46 localhost podman[31963]:
Feb 20 02:39:46 localhost podman[31963]: 2026-02-20 07:39:46.016929282 +0000 UTC m=+0.071484474 container create 806e289854b2c674fa917f0b04bed9fa9061cab7e312e754ce2f4b22c3a90c73 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-2, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_BRANCH=main, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, architecture=x86_64, name=rhceph, vcs-type=git, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True)
Feb 20 02:39:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5cdf31a6b664aae428837775ee1a00ca584df6500a1102426258f0596051ab3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 20 02:39:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5cdf31a6b664aae428837775ee1a00ca584df6500a1102426258f0596051ab3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 20 02:39:46 localhost podman[31963]: 2026-02-20 07:39:45.989253324 +0000 UTC m=+0.043808546 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 02:39:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5cdf31a6b664aae428837775ee1a00ca584df6500a1102426258f0596051ab3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 20 02:39:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5cdf31a6b664aae428837775ee1a00ca584df6500a1102426258f0596051ab3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 20 02:39:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5cdf31a6b664aae428837775ee1a00ca584df6500a1102426258f0596051ab3/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Feb 20 02:39:46 localhost podman[31963]: 2026-02-20 07:39:46.130542638 +0000 UTC m=+0.185097830 container init 806e289854b2c674fa917f0b04bed9fa9061cab7e312e754ce2f4b22c3a90c73 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-2, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, distribution-scope=public, io.openshift.expose-services=, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_BRANCH=main, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, build-date=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Feb 20 02:39:46 localhost podman[31963]: 2026-02-20 07:39:46.140755845 +0000 UTC m=+0.195311047 container start 806e289854b2c674fa917f0b04bed9fa9061cab7e312e754ce2f4b22c3a90c73 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-2, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.openshift.tags=rhceph ceph, vcs-type=git, RELEASE=main, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, release=1770267347, GIT_BRANCH=main, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 02:39:46 localhost bash[31963]: 806e289854b2c674fa917f0b04bed9fa9061cab7e312e754ce2f4b22c3a90c73
Feb 20 02:39:46 localhost systemd[1]: Started Ceph osd.2 for a8557ee9-b55d-5519-942c-cf8f6172f1d8.
Feb 20 02:39:46 localhost ceph-osd[31981]: set uid:gid to 167:167 (ceph:ceph)
Feb 20 02:39:46 localhost ceph-osd[31981]: ceph version 18.2.1-381.el9cp (984f410e2a30899deb131725765b62212b1621db) reef (stable), process ceph-osd, pid 2
Feb 20 02:39:46 localhost ceph-osd[31981]: pidfile_write: ignore empty --pid-file
Feb 20 02:39:46 localhost ceph-osd[31981]: bdev(0x55d37d67ae00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 20 02:39:46 localhost ceph-osd[31981]: bdev(0x55d37d67ae00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 20 02:39:46 localhost ceph-osd[31981]: bdev(0x55d37d67ae00 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 20 02:39:46 localhost ceph-osd[31981]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Feb 20 02:39:46 localhost ceph-osd[31981]: bdev(0x55d37d67b180 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 20 02:39:46 localhost ceph-osd[31981]: bdev(0x55d37d67b180 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 20 02:39:46 localhost ceph-osd[31981]: bdev(0x55d37d67b180 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 20 02:39:46 localhost ceph-osd[31981]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 7.0 GiB
Feb 20 02:39:46 localhost ceph-osd[31981]: bdev(0x55d37d67b180 /var/lib/ceph/osd/ceph-2/block) close
Feb 20 02:39:46 localhost ceph-osd[31981]: bdev(0x55d37d67ae00 /var/lib/ceph/osd/ceph-2/block) close
Feb 20 02:39:46 localhost ceph-osd[31981]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Feb 20 02:39:46 localhost ceph-osd[31981]: load: jerasure load: lrc
Feb 20 02:39:46 localhost ceph-osd[31981]: bdev(0x55d37d67ae00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 20 02:39:46 localhost ceph-osd[31981]: bdev(0x55d37d67ae00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 20 02:39:46 localhost ceph-osd[31981]: bdev(0x55d37d67ae00 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 20 02:39:46 localhost ceph-osd[31981]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Feb 20 02:39:46 localhost ceph-osd[31981]: bdev(0x55d37d67ae00 /var/lib/ceph/osd/ceph-2/block) close
Feb 20 02:39:46 localhost ceph-osd[31981]: bdev(0x55d37d67ae00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 20 02:39:46 localhost ceph-osd[31981]: bdev(0x55d37d67ae00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 20 02:39:46 localhost ceph-osd[31981]: bdev(0x55d37d67ae00 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 20 02:39:46 localhost ceph-osd[31981]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Feb 20 02:39:46 localhost ceph-osd[31981]: bdev(0x55d37d67ae00 /var/lib/ceph/osd/ceph-2/block) close
Feb 20 02:39:46 localhost podman[32073]:
Feb 20 02:39:47 localhost podman[32073]: 2026-02-20 07:39:47.008057893 +0000 UTC m=+0.083143697 container create 0fcf0c2f548ec783645e043396485d3500c393ba29a50a54d337e71cf5f6595d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=frosty_poitras, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, com.redhat.component=rhceph-container, release=1770267347, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, ceph=True, name=rhceph, build-date=2026-02-09T10:25:24Z, distribution-scope=public, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, description=Red Hat Ceph Storage 7)
Feb 20 02:39:47 localhost systemd[1]: Started libpod-conmon-0fcf0c2f548ec783645e043396485d3500c393ba29a50a54d337e71cf5f6595d.scope.
Feb 20 02:39:47 localhost systemd[1]: Started libcrun container.
Feb 20 02:39:47 localhost podman[32073]: 2026-02-20 07:39:46.977159143 +0000 UTC m=+0.052244977 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 02:39:47 localhost podman[32073]: 2026-02-20 07:39:47.084836218 +0000 UTC m=+0.159922022 container init 0fcf0c2f548ec783645e043396485d3500c393ba29a50a54d337e71cf5f6595d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=frosty_poitras, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_CLEAN=True, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., RELEASE=main, CEPH_POINT_RELEASE=, io.buildah.version=1.42.2, release=1770267347, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, architecture=x86_64, build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, ceph=True)
Feb 20 02:39:47 localhost podman[32073]: 2026-02-20 07:39:47.096544525 +0000 UTC m=+0.171630329 container start 0fcf0c2f548ec783645e043396485d3500c393ba29a50a54d337e71cf5f6595d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=frosty_poitras, name=rhceph, architecture=x86_64, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, version=7, build-date=2026-02-09T10:25:24Z, release=1770267347, description=Red Hat Ceph Storage 7, ceph=True, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, RELEASE=main, vcs-type=git, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git)
Feb 20 02:39:47 localhost podman[32073]: 2026-02-20 07:39:47.096803507 +0000 UTC m=+0.171889361 container attach 0fcf0c2f548ec783645e043396485d3500c393ba29a50a54d337e71cf5f6595d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=frosty_poitras, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, build-date=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, io.buildah.version=1.42.2, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, distribution-scope=public, vendor=Red Hat, Inc.)
Feb 20 02:39:47 localhost frosty_poitras[32093]: 167 167
Feb 20 02:39:47 localhost systemd[1]: libpod-0fcf0c2f548ec783645e043396485d3500c393ba29a50a54d337e71cf5f6595d.scope: Deactivated successfully.
Feb 20 02:39:47 localhost podman[32073]: 2026-02-20 07:39:47.102187004 +0000 UTC m=+0.177272878 container died 0fcf0c2f548ec783645e043396485d3500c393ba29a50a54d337e71cf5f6595d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=frosty_poitras, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, release=1770267347, io.buildah.version=1.42.2, org.opencontainers.image.created=2026-02-09T10:25:24Z, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, ceph=True, version=7, GIT_CLEAN=True, io.openshift.expose-services=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., RELEASE=main, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git)
Feb 20 02:39:47 localhost podman[32098]: 2026-02-20 07:39:47.200415039 +0000 UTC m=+0.084083903 container remove 0fcf0c2f548ec783645e043396485d3500c393ba29a50a54d337e71cf5f6595d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=frosty_poitras, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , version=7, RELEASE=main, architecture=x86_64, vcs-type=git, release=1770267347, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.42.2, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 02:39:47 localhost systemd[1]: libpod-conmon-0fcf0c2f548ec783645e043396485d3500c393ba29a50a54d337e71cf5f6595d.scope: Deactivated successfully.
Feb 20 02:39:47 localhost ceph-osd[31981]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Feb 20 02:39:47 localhost ceph-osd[31981]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Feb 20 02:39:47 localhost ceph-osd[31981]: bdev(0x55d37d67ae00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 20 02:39:47 localhost ceph-osd[31981]: bdev(0x55d37d67ae00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 20 02:39:47 localhost ceph-osd[31981]: bdev(0x55d37d67ae00 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 20 02:39:47 localhost ceph-osd[31981]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Feb 20 02:39:47 localhost ceph-osd[31981]: bdev(0x55d37d67b180 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 20 02:39:47 localhost ceph-osd[31981]: bdev(0x55d37d67b180 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 20 02:39:47 localhost ceph-osd[31981]: bdev(0x55d37d67b180 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 20 02:39:47 localhost ceph-osd[31981]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 7.0 GiB
Feb 20 02:39:47 localhost ceph-osd[31981]: bluefs mount
Feb 20 02:39:47 localhost ceph-osd[31981]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Feb 20 02:39:47 localhost ceph-osd[31981]: bluefs mount shared_bdev_used = 0
Feb 20 02:39:47 localhost ceph-osd[31981]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: RocksDB version: 7.9.2
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Git sha 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Compile date 2026-02-06 00:00:00
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: DB SUMMARY
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: DB Session ID: 37A4U4SPYC3WBAF4P4TU
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: CURRENT file: CURRENT
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: IDENTITY file: IDENTITY
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: MANIFEST file: MANIFEST-000032 size: 1007 Bytes
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: SST files in db.slow dir, Total Num: 0, files:
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ;
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.error_if_exists: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.create_if_missing: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.paranoid_checks: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.flush_verify_memtable_count: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.track_and_verify_wals_in_manifest: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.env: 0x55d37e49fe30
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.fs: LegacyFileSystem
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.info_log: 0x55d37e61cd00
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_file_opening_threads: 16
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.statistics: (nil)
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.use_fsync: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_log_file_size: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_manifest_file_size: 1073741824
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.log_file_time_to_roll: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.keep_log_file_num: 1000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.recycle_log_file_num: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.allow_fallocate: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.allow_mmap_reads: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.allow_mmap_writes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.use_direct_reads: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.create_missing_column_families: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.db_log_dir:
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.wal_dir: db.wal
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.table_cache_numshardbits: 6
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.WAL_ttl_seconds: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.WAL_size_limit_MB: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.manifest_preallocation_size: 4194304
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.is_fd_close_on_exec: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.advise_random_on_open: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.db_write_buffer_size: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.write_buffer_manager: 0x55d37d6654a0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.access_hint_on_compaction_start: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.random_access_max_buffer_size: 1048576
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.use_adaptive_mutex: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.rate_limiter: (nil)
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.wal_recovery_mode: 2
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.enable_thread_tracking: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.enable_pipelined_write: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.unordered_write: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.allow_concurrent_memtable_write: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.enable_write_thread_adaptive_yield: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.write_thread_max_yield_usec: 100
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.write_thread_slow_yield_usec: 3
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.row_cache: None
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.wal_filter: None
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.avoid_flush_during_recovery: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.allow_ingest_behind: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.two_write_queues: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.manual_wal_flush: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.wal_compression: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.atomic_flush: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.avoid_unnecessary_blocking_io: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.persist_stats_to_disk: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.write_dbid_to_manifest: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.log_readahead_size: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.file_checksum_gen_factory: Unknown
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.best_efforts_recovery: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bgerror_resume_count: 2147483647
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bgerror_resume_retry_interval: 1000000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.allow_data_in_errors: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.db_host_id: __hostname__
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.enforce_single_del_contracts: true
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_background_jobs: 4
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_background_compactions: -1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_subcompactions: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.avoid_flush_during_shutdown: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.writable_file_max_buffer_size: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.delayed_write_rate : 16777216
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_total_wal_size: 1073741824
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.stats_dump_period_sec: 600
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.stats_persist_period_sec: 600
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.stats_history_buffer_size: 1048576
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_open_files: -1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bytes_per_sync: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.wal_bytes_per_sync: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.strict_bytes_per_sync: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_readahead_size: 2097152
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_background_flushes: -1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Compression algorithms supported:
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: #011kZSTD supported: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: #011kXpressCompression supported: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: #011kBZip2Compression supported: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: #011kLZ4Compression supported: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: #011kZlibCompression supported: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: #011kLZ4HCCompression supported: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: #011kSnappyCompression supported: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Fast CRC32 supported: Supported on x86
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: DMutex implementation: pthread_mutex_t
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default)
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.merge_operator: .T:int64_array.b:bitwise_xor
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_filter: None
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_filter_factory: None
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.sst_partitioner_factory: None
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_factory: SkipListFactory
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.table_factory: BlockBasedTable
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d37e61cec0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55d37d652850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.write_buffer_size: 16777216
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_number: 64
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression: LZ4
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression: Disabled
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.prefix_extractor: nullptr
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.num_levels: 7
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.level: 32767
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.enabled: false
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.window_bits: -14
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.level: 32767
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.strategy: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.parallel_threads: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.enabled: false
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_stop_writes_trigger: 36
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.target_file_size_base: 67108864
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.target_file_size_multiplier: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_compaction_bytes: 1677721600
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.arena_block_size: 1048576
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.disable_auto_compactions: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.table_properties_collectors:
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.inplace_update_support: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.inplace_update_num_locks: 10000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_whole_key_filtering: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_huge_page_size: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bloom_locality: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_successive_merges: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.optimize_filters_for_hits: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.paranoid_file_checks: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.force_consistency_checks: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.report_bg_io_stats: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.ttl: 2592000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.periodic_compaction_seconds: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.preclude_last_level_data_seconds: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.preserve_internal_time_seconds: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.enable_blob_files: false
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.min_blob_size: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_file_size: 268435456
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_compression_type: NoCompression
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.enable_blob_garbage_collection: false
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_compaction_readahead_size: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_file_starting_level: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0)
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.merge_operator: None
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_filter: None
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_filter_factory: None
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.sst_partitioner_factory: None
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_factory: SkipListFactory
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.table_factory: BlockBasedTable
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d37e61cec0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55d37d652850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.write_buffer_size: 16777216
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_number: 64
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression: LZ4
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression: Disabled
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.prefix_extractor: nullptr
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.num_levels: 7
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.level: 32767
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.enabled: false
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.window_bits: -14
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.level: 32767
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.strategy: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.parallel_threads: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.enabled: false
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_stop_writes_trigger: 36
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.target_file_size_base: 67108864
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.target_file_size_multiplier: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_compaction_bytes: 1677721600
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.arena_block_size: 1048576
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.disable_auto_compactions: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.inplace_update_support: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.inplace_update_num_locks: 10000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_whole_key_filtering: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_huge_page_size: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bloom_locality: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_successive_merges: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.optimize_filters_for_hits: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.paranoid_file_checks: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.force_consistency_checks: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.report_bg_io_stats: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.ttl: 2592000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.periodic_compaction_seconds: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.preclude_last_level_data_seconds: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.preserve_internal_time_seconds: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.enable_blob_files: false
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.min_blob_size: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_file_size: 268435456
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_compression_type: NoCompression
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.enable_blob_garbage_collection: false
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_compaction_readahead_size: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_file_starting_level: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1)
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.merge_operator: None
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_filter: None
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_filter_factory: None
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.sst_partitioner_factory: None
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_factory: SkipListFactory
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.table_factory: BlockBasedTable
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d37e61cec0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55d37d652850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.write_buffer_size: 16777216
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_number: 64
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression: LZ4
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression: Disabled
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.prefix_extractor: nullptr
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.num_levels: 7
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.window_bits: -14 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.level: 32767 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.strategy: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.enabled: false Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: 
Options.level0_slowdown_writes_trigger: 20 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.target_file_size_base: 67108864 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.target_file_size_multiplier: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.arena_block_size: 1048576 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: 
Options.disable_auto_compactions: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.inplace_update_support: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_huge_page_size: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bloom_locality: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_successive_merges: 0 Feb 20 02:39:47 localhost 
ceph-osd[31981]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.paranoid_file_checks: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.force_consistency_checks: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.report_bg_io_stats: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.ttl: 2592000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.enable_blob_files: false Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.min_blob_size: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_file_size: 268435456 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_compression_type: NoCompression Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.enable_blob_garbage_collection: false Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_file_starting_level: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2) Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]: Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 20 02:39:47 
localhost ceph-osd[31981]: rocksdb: Options.merge_operator: None Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_filter: None Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_filter_factory: None Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.sst_partitioner_factory: None Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_factory: SkipListFactory Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.table_factory: BlockBasedTable Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d37e61cec0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55d37d652850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.write_buffer_size: 16777216 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_number: 64 Feb 20 02:39:47 localhost 
ceph-osd[31981]: rocksdb: Options.compression: LZ4 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression: Disabled Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.prefix_extractor: nullptr Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.num_levels: 7 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.window_bits: -14 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.level: 32767 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.strategy: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: 
Options.compression_opts.max_dict_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.enabled: false Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.target_file_size_base: 67108864 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.target_file_size_multiplier: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.arena_block_size: 1048576 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.disable_auto_compactions: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 
32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.inplace_update_support: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_huge_page_size: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bloom_locality: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_successive_merges: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.paranoid_file_checks: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.force_consistency_checks: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.report_bg_io_stats: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.ttl: 2592000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.enable_blob_files: false Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.min_blob_size: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_file_size: 268435456 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_compression_type: NoCompression Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.enable_blob_garbage_collection: false Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: 
Options.blob_garbage_collection_force_threshold: 1.000000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_file_starting_level: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0) Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]: Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.merge_operator: None Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_filter: None Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_filter_factory: None Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.sst_partitioner_factory: None Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_factory: SkipListFactory Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.table_factory: BlockBasedTable Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d37e61cec0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55d37d652850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 
block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.write_buffer_size: 16777216 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_number: 64 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression: LZ4 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression: Disabled Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.prefix_extractor: nullptr Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.num_levels: 7 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 20 02:39:47 localhost 
ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.window_bits: -14 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.level: 32767 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.strategy: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.enabled: false Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.target_file_size_base: 67108864 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.target_file_size_multiplier: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: 
Options.max_bytes_for_level_multiplier: 8.000000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.arena_block_size: 1048576 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.disable_auto_compactions: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 20 02:39:47 
localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.inplace_update_support: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_huge_page_size: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bloom_locality: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_successive_merges: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.paranoid_file_checks: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.force_consistency_checks: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.report_bg_io_stats: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.ttl: 2592000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 
20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.enable_blob_files: false Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.min_blob_size: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_file_size: 268435456 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_compression_type: NoCompression Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.enable_blob_garbage_collection: false Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_file_starting_level: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1) Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]: Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.merge_operator: None Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_filter: None Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_filter_factory: None Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.sst_partitioner_factory: None Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_factory: SkipListFactory Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.table_factory: BlockBasedTable Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: table_factory 
options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d37e61cec0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55d37d652850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.write_buffer_size: 16777216 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_number: 64 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression: LZ4 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression: Disabled Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.prefix_extractor: nullptr Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.num_levels: 7 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: 
Options.max_write_buffer_number_to_maintain: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.window_bits: -14 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.level: 32767 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.strategy: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.enabled: false Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 20 02:39:47 localhost 
ceph-osd[31981]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.target_file_size_base: 67108864 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.target_file_size_multiplier: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.arena_block_size: 1048576 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 20 02:39:47 localhost ceph-osd[31981]: 
rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.disable_auto_compactions: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.inplace_update_support: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_huge_page_size: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: 
Options.bloom_locality: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_successive_merges: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.paranoid_file_checks: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.force_consistency_checks: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.report_bg_io_stats: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.ttl: 2592000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.enable_blob_files: false Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.min_blob_size: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_file_size: 268435456 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_compression_type: NoCompression Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.enable_blob_garbage_collection: false Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_file_starting_level: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2) Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/column_family.cc:630] --------------- Options for 
column family [p-2]: Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.merge_operator: None Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_filter: None Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_filter_factory: None Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.sst_partitioner_factory: None Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_factory: SkipListFactory Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.table_factory: BlockBasedTable Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d37e61cec0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55d37d652850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: 
Options.write_buffer_size: 16777216 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_number: 64 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression: LZ4 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression: Disabled Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.prefix_extractor: nullptr Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.num_levels: 7 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.window_bits: -14 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.level: 
32767 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.strategy: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.enabled: false Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.target_file_size_base: 67108864 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.target_file_size_multiplier: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 20 02:39:47 localhost 
ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.arena_block_size: 1048576 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.disable_auto_compactions: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 20 
02:39:47 localhost ceph-osd[31981]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.inplace_update_support: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_huge_page_size: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bloom_locality: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_successive_merges: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.paranoid_file_checks: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.force_consistency_checks: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.report_bg_io_stats: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.ttl: 2592000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.enable_blob_files: false Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.min_blob_size: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_file_size: 268435456 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_compression_type: NoCompression Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.enable_blob_garbage_collection: false Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: 
Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_file_starting_level: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0) Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]: Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.merge_operator: None Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_filter: None Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_filter_factory: None Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.sst_partitioner_factory: None Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_factory: SkipListFactory Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.table_factory: BlockBasedTable Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d37e61d0e0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55d37d6522d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 
strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.write_buffer_size: 16777216 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_number: 64 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression: LZ4 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression: Disabled Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.prefix_extractor: nullptr Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.num_levels: 7 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 20 02:39:47 localhost 
ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.window_bits: -14 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.level: 32767 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.strategy: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.enabled: false Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.target_file_size_base: 67108864 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.target_file_size_multiplier: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 20 02:39:47 localhost ceph-osd[31981]: 
rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.arena_block_size: 1048576 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.disable_auto_compactions: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 20 02:39:47 localhost 
ceph-osd[31981]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.inplace_update_support: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_huge_page_size: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bloom_locality: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_successive_merges: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.paranoid_file_checks: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.force_consistency_checks: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.report_bg_io_stats: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.ttl: 2592000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: 
Options.periodic_compaction_seconds: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.enable_blob_files: false Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.min_blob_size: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_file_size: 268435456 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_compression_type: NoCompression Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.enable_blob_garbage_collection: false Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_file_starting_level: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1) Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]: Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.merge_operator: None Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_filter: None Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_filter_factory: None Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.sst_partitioner_factory: None Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_factory: SkipListFactory Feb 20 02:39:47 
localhost ceph-osd[31981]: rocksdb: Options.table_factory: BlockBasedTable Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d37e61d0e0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55d37d6522d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.write_buffer_size: 16777216 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_number: 64 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression: LZ4 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression: Disabled Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.prefix_extractor: nullptr Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.num_levels: 7 Feb 20 02:39:47 localhost 
ceph-osd[31981]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.level: 32767
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.enabled: false
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.window_bits: -14
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.level: 32767
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.strategy: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.parallel_threads: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.enabled: false
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_stop_writes_trigger: 36
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.target_file_size_base: 67108864
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.target_file_size_multiplier: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_compaction_bytes: 1677721600
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.arena_block_size: 1048576
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.disable_auto_compactions: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.inplace_update_support: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.inplace_update_num_locks: 10000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_whole_key_filtering: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_huge_page_size: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bloom_locality: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_successive_merges: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.optimize_filters_for_hits: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.paranoid_file_checks: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.force_consistency_checks: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.report_bg_io_stats: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.ttl: 2592000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.periodic_compaction_seconds: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.preclude_last_level_data_seconds: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.preserve_internal_time_seconds: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.enable_blob_files: false
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.min_blob_size: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_file_size: 268435456
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_compression_type: NoCompression
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.enable_blob_garbage_collection: false
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_compaction_readahead_size: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_file_starting_level: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2)
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.merge_operator: None
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_filter: None
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_filter_factory: None
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.sst_partitioner_factory: None
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_factory: SkipListFactory
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.table_factory: BlockBasedTable
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d37e61d0e0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55d37d6522d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.write_buffer_size: 16777216
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_number: 64
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression: LZ4
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression: Disabled
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.prefix_extractor: nullptr
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.num_levels: 7
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.level: 32767
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.enabled: false
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.window_bits: -14
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.level: 32767
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.strategy: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.parallel_threads: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.enabled: false
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_stop_writes_trigger: 36
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.target_file_size_base: 67108864
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.target_file_size_multiplier: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_compaction_bytes: 1677721600
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.arena_block_size: 1048576
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.disable_auto_compactions: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.inplace_update_support: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.inplace_update_num_locks: 10000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_whole_key_filtering: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_huge_page_size: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bloom_locality: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_successive_merges: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.optimize_filters_for_hits: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.paranoid_file_checks: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.force_consistency_checks: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.report_bg_io_stats: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.ttl: 2592000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.periodic_compaction_seconds: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.preclude_last_level_data_seconds: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.preserve_internal_time_seconds: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.enable_blob_files: false
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.min_blob_size: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_file_size: 268435456
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_compression_type: NoCompression
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.enable_blob_garbage_collection: false
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_compaction_readahead_size: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_file_starting_level: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L)
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P)
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: c00a6f7c-4ed2-487f-a672-2199ada8ae0f
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771573187286369, "job": 1, "event": "recovery_started", "wal_files": [31]}
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771573187286697, "job": 1, "event": "recovery_finished"}
Feb 20 02:39:47 localhost ceph-osd[31981]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Feb 20 02:39:47 localhost ceph-osd[31981]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Feb 20 02:39:47 localhost ceph-osd[31981]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Feb 20 02:39:47 localhost ceph-osd[31981]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Feb 20 02:39:47 localhost ceph-osd[31981]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Feb 20 02:39:47 localhost ceph-osd[31981]: freelist init
Feb 20 02:39:47 localhost ceph-osd[31981]: freelist _read_cfg
Feb 20 02:39:47 localhost ceph-osd[31981]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Feb 20 02:39:47 localhost ceph-osd[31981]: bluefs umount
Feb 20 02:39:47 localhost ceph-osd[31981]: bdev(0x55d37d67b180 /var/lib/ceph/osd/ceph-2/block) close
Feb 20 02:39:47 localhost podman[32323]:
Feb 20 02:39:47 localhost podman[32323]: 2026-02-20 07:39:47.512774756 +0000 UTC m=+0.063180179 container create 763a1b8eed09982c94dc8f7c1f3ed2ed576957919520f1f1daa84f4b300b4392 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-5-activate-test, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, version=7, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, name=rhceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , vcs-type=git, release=1770267347, ceph=True, io.buildah.version=1.42.2)
Feb 20 02:39:47 localhost ceph-osd[31981]: bdev(0x55d37d67b180 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 20 02:39:47 localhost ceph-osd[31981]: bdev(0x55d37d67b180 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 20 02:39:47 localhost ceph-osd[31981]: bdev(0x55d37d67b180 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 20 02:39:47 localhost ceph-osd[31981]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 7.0 GiB
Feb 20 02:39:47 localhost ceph-osd[31981]: bluefs mount
Feb 20 02:39:47 localhost ceph-osd[31981]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Feb 20 02:39:47 localhost ceph-osd[31981]: bluefs mount shared_bdev_used = 4718592
Feb 20 02:39:47 localhost ceph-osd[31981]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: RocksDB version: 7.9.2
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Git sha 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Compile date 2026-02-06 00:00:00
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: DB SUMMARY
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: DB Session ID: 37A4U4SPYC3WBAF4P4TV
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: CURRENT file: CURRENT
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: IDENTITY file: IDENTITY
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: MANIFEST file: MANIFEST-000032 size: 1007 Bytes
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: SST files in db.slow dir, Total Num: 0, files:
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ;
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.error_if_exists: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.create_if_missing: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.paranoid_checks: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.flush_verify_memtable_count: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.track_and_verify_wals_in_manifest: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.env: 0x55d37d90e8c0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.fs: LegacyFileSystem
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.info_log: 0x55d37e6c63a0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_file_opening_threads: 16
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.statistics: (nil)
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.use_fsync: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_log_file_size: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_manifest_file_size: 1073741824
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.log_file_time_to_roll: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.keep_log_file_num: 1000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.recycle_log_file_num: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.allow_fallocate: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.allow_mmap_reads: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.allow_mmap_writes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.use_direct_reads: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.create_missing_column_families: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.db_log_dir:
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.wal_dir: db.wal
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.table_cache_numshardbits: 6
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.WAL_ttl_seconds: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.WAL_size_limit_MB: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.manifest_preallocation_size: 4194304
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.is_fd_close_on_exec: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.advise_random_on_open: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.db_write_buffer_size: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.write_buffer_manager: 0x55d37d6654a0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.access_hint_on_compaction_start: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.random_access_max_buffer_size: 1048576
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.use_adaptive_mutex: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.rate_limiter: (nil)
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.wal_recovery_mode: 2
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.enable_thread_tracking: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.enable_pipelined_write: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.unordered_write: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.allow_concurrent_memtable_write: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.enable_write_thread_adaptive_yield: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.write_thread_max_yield_usec: 100
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.write_thread_slow_yield_usec: 3
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.row_cache: None
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.wal_filter: None
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.avoid_flush_during_recovery: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.allow_ingest_behind: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.two_write_queues: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.manual_wal_flush: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.wal_compression: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.atomic_flush: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.avoid_unnecessary_blocking_io: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.persist_stats_to_disk: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.write_dbid_to_manifest: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.log_readahead_size: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.file_checksum_gen_factory: Unknown
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.best_efforts_recovery: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bgerror_resume_count: 2147483647
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bgerror_resume_retry_interval: 1000000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.allow_data_in_errors: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.db_host_id: __hostname__
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.enforce_single_del_contracts: true
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_background_jobs: 4
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_background_compactions: -1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_subcompactions: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.avoid_flush_during_shutdown: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.writable_file_max_buffer_size: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.delayed_write_rate : 16777216
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_total_wal_size: 1073741824
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.stats_dump_period_sec: 600
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.stats_persist_period_sec: 600
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.stats_history_buffer_size: 1048576
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_open_files: -1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bytes_per_sync: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.wal_bytes_per_sync: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.strict_bytes_per_sync: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_readahead_size: 2097152
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_background_flushes: -1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Compression algorithms supported:
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: #011kZSTD supported: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: #011kXpressCompression supported: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: #011kBZip2Compression supported: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: #011kLZ4Compression supported: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: #011kZlibCompression supported: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: #011kLZ4HCCompression supported: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: #011kSnappyCompression supported: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Fast CRC32 supported: Supported on x86
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: DMutex implementation: pthread_mutex_t
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default)
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.merge_operator: .T:int64_array.b:bitwise_xor
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_filter: None
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_filter_factory: None
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.sst_partitioner_factory: None
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_factory: SkipListFactory
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.table_factory: BlockBasedTable
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d37e6c64e0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55d37d6522d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.write_buffer_size: 16777216
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_number: 64
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression: LZ4
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression: Disabled
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.prefix_extractor: nullptr
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.num_levels: 7
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.level: 32767
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.enabled: false
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.window_bits: -14
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.level: 32767
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.strategy: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.parallel_threads: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.enabled: false
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb:
Options.level0_file_num_compaction_trigger: 8 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.target_file_size_base: 67108864 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.target_file_size_multiplier: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.arena_block_size: 1048576 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: 
Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.disable_auto_compactions: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.table_properties_collectors: Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.inplace_update_support: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_huge_page_size: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bloom_locality: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_successive_merges: 0 Feb 20 02:39:47 
localhost ceph-osd[31981]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.paranoid_file_checks: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.force_consistency_checks: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.report_bg_io_stats: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.ttl: 2592000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.enable_blob_files: false Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.min_blob_size: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_file_size: 268435456 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_compression_type: NoCompression Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.enable_blob_garbage_collection: false Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_file_starting_level: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0) Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]: Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 20 
02:39:47 localhost ceph-osd[31981]: rocksdb: Options.merge_operator: None Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_filter: None Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_filter_factory: None Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.sst_partitioner_factory: None Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_factory: SkipListFactory Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.table_factory: BlockBasedTable Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d37e6c64e0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55d37d6522d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.write_buffer_size: 16777216 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_number: 64 Feb 20 02:39:47 
localhost ceph-osd[31981]: rocksdb: Options.compression: LZ4 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression: Disabled Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.prefix_extractor: nullptr Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.num_levels: 7 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.window_bits: -14 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.level: 32767 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.strategy: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: 
rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.enabled: false Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.target_file_size_base: 67108864 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.target_file_size_multiplier: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.arena_block_size: 1048576 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.disable_auto_compactions: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 
32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.inplace_update_support: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_huge_page_size: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bloom_locality: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_successive_merges: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.paranoid_file_checks: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.force_consistency_checks: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.report_bg_io_stats: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.ttl: 2592000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.enable_blob_files: false Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.min_blob_size: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_file_size: 268435456 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_compression_type: NoCompression Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.enable_blob_garbage_collection: false Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: 
Options.blob_garbage_collection_force_threshold: 1.000000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_file_starting_level: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1) Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]: Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.merge_operator: None Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_filter: None Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_filter_factory: None Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.sst_partitioner_factory: None Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_factory: SkipListFactory Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.table_factory: BlockBasedTable Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d37e6c64e0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55d37d6522d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 
block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.write_buffer_size: 16777216 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_number: 64 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression: LZ4 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression: Disabled Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.prefix_extractor: nullptr Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.num_levels: 7 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 20 02:39:47 localhost 
ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.window_bits: -14 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.level: 32767 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.strategy: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.enabled: false Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.target_file_size_base: 67108864 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.target_file_size_multiplier: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: 
Options.max_bytes_for_level_multiplier: 8.000000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.arena_block_size: 1048576 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.disable_auto_compactions: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 20 02:39:47 
localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.inplace_update_support: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_huge_page_size: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bloom_locality: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_successive_merges: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.paranoid_file_checks: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.force_consistency_checks: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.report_bg_io_stats: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.ttl: 2592000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 
20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.preserve_internal_time_seconds: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.enable_blob_files: false
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.min_blob_size: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_file_size: 268435456
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_compression_type: NoCompression
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.enable_blob_garbage_collection: false
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_compaction_readahead_size: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_file_starting_level: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2)
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.merge_operator: None
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_filter: None
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_filter_factory: None
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.sst_partitioner_factory: None
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_factory: SkipListFactory
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.table_factory: BlockBasedTable
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d37e6c64e0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55d37d6522d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.write_buffer_size: 16777216
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_number: 64
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression: LZ4
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression: Disabled
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.prefix_extractor: nullptr
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.num_levels: 7
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.level: 32767
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.enabled: false
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.window_bits: -14
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.level: 32767
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.strategy: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.parallel_threads: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.enabled: false
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_stop_writes_trigger: 36
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.target_file_size_base: 67108864
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.target_file_size_multiplier: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_compaction_bytes: 1677721600
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.arena_block_size: 1048576
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.disable_auto_compactions: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.inplace_update_support: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.inplace_update_num_locks: 10000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_whole_key_filtering: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_huge_page_size: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bloom_locality: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_successive_merges: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.optimize_filters_for_hits: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.paranoid_file_checks: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.force_consistency_checks: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.report_bg_io_stats: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.ttl: 2592000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.periodic_compaction_seconds: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.preclude_last_level_data_seconds: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.preserve_internal_time_seconds: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.enable_blob_files: false
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.min_blob_size: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_file_size: 268435456
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_compression_type: NoCompression
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.enable_blob_garbage_collection: false
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 02:39:47 localhost systemd[1]: Started libpod-conmon-763a1b8eed09982c94dc8f7c1f3ed2ed576957919520f1f1daa84f4b300b4392.scope.
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_compaction_readahead_size: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_file_starting_level: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0)
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.merge_operator: None
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_filter: None
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_filter_factory: None
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.sst_partitioner_factory: None
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_factory: SkipListFactory
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.table_factory: BlockBasedTable
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d37e6c64e0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55d37d6522d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.write_buffer_size: 16777216
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_number: 64
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression: LZ4
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression: Disabled
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.prefix_extractor: nullptr
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.num_levels: 7
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.level: 32767
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.enabled: false
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.window_bits: -14
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.level: 32767
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.strategy: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.parallel_threads: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.enabled: false
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_stop_writes_trigger: 36
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.target_file_size_base: 67108864
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.target_file_size_multiplier: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_compaction_bytes: 1677721600
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.arena_block_size: 1048576
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.disable_auto_compactions: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.inplace_update_support: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.inplace_update_num_locks: 10000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_whole_key_filtering: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_huge_page_size: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bloom_locality: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_successive_merges: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.optimize_filters_for_hits: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.paranoid_file_checks: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.force_consistency_checks: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.report_bg_io_stats: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.ttl: 2592000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.periodic_compaction_seconds: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.preclude_last_level_data_seconds: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.preserve_internal_time_seconds: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.enable_blob_files: false
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.min_blob_size: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_file_size: 268435456
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_compression_type: NoCompression
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.enable_blob_garbage_collection: false
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_compaction_readahead_size: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_file_starting_level: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1)
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.merge_operator: None
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_filter: None
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_filter_factory: None
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.sst_partitioner_factory: None
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_factory: SkipListFactory
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.table_factory: BlockBasedTable
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d37e6c64e0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55d37d6522d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.write_buffer_size: 16777216
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_number: 64
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression: LZ4
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression: Disabled
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.prefix_extractor: nullptr
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.num_levels: 7
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.level: 32767
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.enabled: false
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.window_bits: -14
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.level: 32767
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.strategy: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.parallel_threads: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.enabled: false
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_stop_writes_trigger: 36
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.target_file_size_base: 67108864
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.target_file_size_multiplier: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_compaction_bytes: 1677721600
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.arena_block_size: 1048576
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.disable_auto_compactions: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.inplace_update_support: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.inplace_update_num_locks: 10000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_whole_key_filtering: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_huge_page_size: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bloom_locality: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_successive_merges: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.optimize_filters_for_hits: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.paranoid_file_checks: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.force_consistency_checks: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.report_bg_io_stats: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.ttl: 2592000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.periodic_compaction_seconds: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.preclude_last_level_data_seconds: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.preserve_internal_time_seconds: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.enable_blob_files: false
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.min_blob_size: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_file_size: 268435456
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_compression_type: NoCompression
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.enable_blob_garbage_collection: false
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_compaction_readahead_size: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_file_starting_level: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2)
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.merge_operator: None
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_filter: None
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_filter_factory: None
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.sst_partitioner_factory: None
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_factory: SkipListFactory
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.table_factory: BlockBasedTable
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d37e6c64e0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55d37d6522d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.write_buffer_size: 16777216
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_number: 64
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression: LZ4
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression: Disabled
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.prefix_extractor: nullptr
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.num_levels: 7
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.level: 32767
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.enabled: false
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.window_bits: -14
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.level: 32767
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.strategy: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.parallel_threads: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.enabled: false
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_stop_writes_trigger: 36
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.target_file_size_base: 67108864
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.target_file_size_multiplier: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_compaction_bytes: 1677721600
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.arena_block_size: 1048576
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.disable_auto_compactions: 0
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20
02:39:47 localhost ceph-osd[31981]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.inplace_update_support: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_huge_page_size: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bloom_locality: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_successive_merges: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.paranoid_file_checks: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.force_consistency_checks: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.report_bg_io_stats: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.ttl: 2592000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.enable_blob_files: false Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.min_blob_size: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_file_size: 268435456 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_compression_type: NoCompression Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.enable_blob_garbage_collection: false Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: 
Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_file_starting_level: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0) Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]: Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.merge_operator: None Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_filter: None Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_filter_factory: None Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.sst_partitioner_factory: None Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_factory: SkipListFactory Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.table_factory: BlockBasedTable Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d37e6c6720)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55d37d653610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 
strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.write_buffer_size: 16777216 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_number: 64 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression: LZ4 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression: Disabled Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.prefix_extractor: nullptr Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.num_levels: 7 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 20 02:39:47 localhost 
ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.window_bits: -14 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.level: 32767 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.strategy: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.enabled: false Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.target_file_size_base: 67108864 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.target_file_size_multiplier: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 20 02:39:47 localhost ceph-osd[31981]: 
rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.arena_block_size: 1048576 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.disable_auto_compactions: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 20 02:39:47 localhost 
ceph-osd[31981]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.inplace_update_support: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_huge_page_size: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bloom_locality: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_successive_merges: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.paranoid_file_checks: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.force_consistency_checks: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.report_bg_io_stats: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.ttl: 2592000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: 
Options.periodic_compaction_seconds: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.enable_blob_files: false Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.min_blob_size: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_file_size: 268435456 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_compression_type: NoCompression Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.enable_blob_garbage_collection: false Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_file_starting_level: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1) Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]: Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.merge_operator: None Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_filter: None Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_filter_factory: None Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.sst_partitioner_factory: None Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_factory: SkipListFactory Feb 20 02:39:47 
localhost ceph-osd[31981]: rocksdb: Options.table_factory: BlockBasedTable Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d37e6c6720)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55d37d653610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.write_buffer_size: 16777216 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_number: 64 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression: LZ4 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression: Disabled Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.prefix_extractor: nullptr Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.num_levels: 7 Feb 20 02:39:47 localhost 
ceph-osd[31981]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.window_bits: -14 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.level: 32767 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.strategy: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.enabled: false Feb 20 
02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.target_file_size_base: 67108864 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.target_file_size_multiplier: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.arena_block_size: 1048576 Feb 20 02:39:47 localhost 
ceph-osd[31981]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.disable_auto_compactions: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.inplace_update_support: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 20 02:39:47 localhost 
ceph-osd[31981]: rocksdb: Options.memtable_huge_page_size: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bloom_locality: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_successive_merges: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.paranoid_file_checks: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.force_consistency_checks: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.report_bg_io_stats: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.ttl: 2592000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.enable_blob_files: false Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.min_blob_size: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_file_size: 268435456 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_compression_type: NoCompression Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.enable_blob_garbage_collection: false Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_file_starting_level: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, 
name: O-2) Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]: Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.merge_operator: None Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_filter: None Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_filter_factory: None Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.sst_partitioner_factory: None Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_factory: SkipListFactory Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.table_factory: BlockBasedTable Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d37e6c6720)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55d37d653610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 
initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.write_buffer_size: 16777216 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_number: 64 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression: LZ4 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression: Disabled Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.prefix_extractor: nullptr Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.num_levels: 7 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 20 02:39:47 localhost ceph-osd[31981]: 
rocksdb: Options.compression_opts.window_bits: -14 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.level: 32767 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.strategy: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.enabled: false Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.target_file_size_base: 67108864 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.target_file_size_multiplier: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 20 02:39:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e3c6b8e8cede4ce596f663339f8d196bd0085d7480a537f76e718bd0710d8d5/merged/rootfs supports timestamps until 2038 (0x7fffffff) Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.arena_block_size: 1048576 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.disable_auto_compactions: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 20 02:39:47 localhost systemd[1]: Started libcrun container. 
Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.inplace_update_support: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.memtable_huge_page_size: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.bloom_locality: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.max_successive_merges: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.paranoid_file_checks: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.force_consistency_checks: 1 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.report_bg_io_stats: 0 Feb 20 02:39:47 
localhost ceph-osd[31981]: rocksdb: Options.ttl: 2592000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.enable_blob_files: false Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.min_blob_size: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_file_size: 268435456 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_compression_type: NoCompression Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.enable_blob_garbage_collection: false Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.blob_file_starting_level: 0 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L) Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P) Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, 
log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: c00a6f7c-4ed2-487f-a672-2199ada8ae0f Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771573187565000, "job": 1, "event": "recovery_started", "wal_files": [31]} Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2 Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: 
EVENT_LOG_v1 {"time_micros": 1771573187572615, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1261, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771573187, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c00a6f7c-4ed2-487f-a672-2199ada8ae0f", "db_session_id": "37A4U4SPYC3WBAF4P4TV", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}} Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771573187579329, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1609, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 
36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771573187, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c00a6f7c-4ed2-487f-a672-2199ada8ae0f", "db_session_id": "37A4U4SPYC3WBAF4P4TV", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}} Feb 20 02:39:47 localhost podman[32323]: 2026-02-20 07:39:47.483737163 +0000 UTC m=+0.034142636 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771573187584023, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1290, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, 
"comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771573187, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c00a6f7c-4ed2-487f-a672-2199ada8ae0f", "db_session_id": "37A4U4SPYC3WBAF4P4TV", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}} Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/db_impl/db_impl_open.cc:1432] Failed to truncate log #31: IO error: No such file or directory: While open a file for appending: db.wal/000031.log: No such file or directory Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771573187589378, "job": 1, "event": "recovery_finished"} Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/version_set.cc:5047] Creating manifest 40 Feb 20 02:39:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e3c6b8e8cede4ce596f663339f8d196bd0085d7480a537f76e718bd0710d8d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Feb 20 02:39:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e3c6b8e8cede4ce596f663339f8d196bd0085d7480a537f76e718bd0710d8d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Feb 20 02:39:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e3c6b8e8cede4ce596f663339f8d196bd0085d7480a537f76e718bd0710d8d5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55d37e4ba700 Feb 20 02:39:47 localhost 
ceph-osd[31981]: rocksdb: DB pointer 0x55d37d6ada00 Feb 20 02:39:47 localhost ceph-osd[31981]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0 Feb 20 02:39:47 localhost ceph-osd[31981]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4 Feb 20 02:39:47 localhost ceph-osd[31981]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 20 02:39:47 localhost ceph-osd[31981]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 
1 0.008 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.008 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.008 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.008 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d37d6522d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 6.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size 
Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d37d6522d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 6.7e-05 secs_since: 0#012Block 
cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 
memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d37d6522d0#2 capacity: 460.80 MB usag Feb 20 02:39:47 localhost ceph-osd[31981]: /builddir/build/BUILD/ceph-18.2.1/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs Feb 20 02:39:47 localhost ceph-osd[31981]: /builddir/build/BUILD/ceph-18.2.1/src/cls/hello/cls_hello.cc:316: loading cls_hello Feb 20 02:39:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e3c6b8e8cede4ce596f663339f8d196bd0085d7480a537f76e718bd0710d8d5/merged/var/lib/ceph/osd/ceph-5 supports timestamps until 2038 (0x7fffffff) Feb 20 02:39:47 localhost ceph-osd[31981]: _get_class not permitted to load lua Feb 20 02:39:47 localhost ceph-osd[31981]: _get_class not permitted to load sdk Feb 20 02:39:47 localhost ceph-osd[31981]: _get_class not permitted to load test_remote_reads Feb 20 02:39:47 localhost podman[32323]: 2026-02-20 07:39:47.632015961 +0000 UTC m=+0.182421354 container init 763a1b8eed09982c94dc8f7c1f3ed2ed576957919520f1f1daa84f4b300b4392 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-5-activate-test, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , distribution-scope=public, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, release=1770267347, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, RELEASE=main, io.openshift.tags=rhceph ceph, version=7, 
io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, CEPH_POINT_RELEASE=, architecture=x86_64, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 02:39:47 localhost ceph-osd[31981]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients Feb 20 02:39:47 localhost ceph-osd[31981]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons Feb 20 02:39:47 localhost ceph-osd[31981]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds Feb 20 02:39:47 localhost ceph-osd[31981]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature Feb 20 02:39:47 localhost ceph-osd[31981]: osd.2 0 load_pgs Feb 20 02:39:47 localhost ceph-osd[31981]: osd.2 0 load_pgs opened 0 pgs Feb 20 02:39:47 localhost ceph-osd[31981]: osd.2 0 log_to_monitors true Feb 20 02:39:47 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-2[31977]: 2026-02-20T07:39:47.632+0000 7f4378586a80 -1 osd.2 0 log_to_monitors true Feb 20 02:39:47 localhost podman[32323]: 2026-02-20 07:39:47.643500627 +0000 UTC m=+0.193906020 container start 763a1b8eed09982c94dc8f7c1f3ed2ed576957919520f1f1daa84f4b300b4392 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-5-activate-test, org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, build-date=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, release=1770267347, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, GIT_BRANCH=main, vcs-type=git, 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, name=rhceph, GIT_CLEAN=True) Feb 20 02:39:47 localhost podman[32323]: 2026-02-20 07:39:47.643609152 +0000 UTC m=+0.194014545 container attach 763a1b8eed09982c94dc8f7c1f3ed2ed576957919520f1f1daa84f4b300b4392 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-5-activate-test, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, ceph=True, io.buildah.version=1.42.2, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, version=7, build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, release=1770267347, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, 
vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph) Feb 20 02:39:47 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-5-activate-test[32424]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID] Feb 20 02:39:47 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-5-activate-test[32424]: [--no-systemd] [--no-tmpfs] Feb 20 02:39:47 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-5-activate-test[32424]: ceph-volume activate: error: unrecognized arguments: --bad-option Feb 20 02:39:47 localhost systemd[1]: libpod-763a1b8eed09982c94dc8f7c1f3ed2ed576957919520f1f1daa84f4b300b4392.scope: Deactivated successfully. Feb 20 02:39:47 localhost podman[32323]: 2026-02-20 07:39:47.861213329 +0000 UTC m=+0.411618802 container died 763a1b8eed09982c94dc8f7c1f3ed2ed576957919520f1f1daa84f4b300b4392 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-5-activate-test, io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , vcs-type=git, GIT_CLEAN=True, name=rhceph, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, architecture=x86_64, version=7, com.redhat.component=rhceph-container, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main) Feb 20 02:39:47 localhost podman[32559]: 2026-02-20 07:39:47.948952245 +0000 UTC m=+0.075958096 container remove 763a1b8eed09982c94dc8f7c1f3ed2ed576957919520f1f1daa84f4b300b4392 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-5-activate-test, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , release=1770267347, com.redhat.component=rhceph-container, build-date=2026-02-09T10:25:24Z, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, GIT_BRANCH=main, distribution-scope=public, architecture=x86_64, ceph=True, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git) Feb 20 02:39:47 localhost systemd[1]: libpod-conmon-763a1b8eed09982c94dc8f7c1f3ed2ed576957919520f1f1daa84f4b300b4392.scope: Deactivated successfully. 
Feb 20 02:39:48 localhost systemd[1]: var-lib-containers-storage-overlay-145c64611fa0db0de2b7414131e868115c7fb90578e7aaa9065b1cc235848d9b-merged.mount: Deactivated successfully. Feb 20 02:39:48 localhost systemd[1]: Reloading. Feb 20 02:39:48 localhost systemd-sysv-generator[32616]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 02:39:48 localhost systemd-rc-local-generator[32610]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 02:39:48 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 02:39:48 localhost systemd[1]: Reloading. Feb 20 02:39:48 localhost systemd-rc-local-generator[32651]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 02:39:48 localhost systemd-sysv-generator[32656]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 02:39:48 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 20 02:39:48 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : purged_snaps scrub starts Feb 20 02:39:48 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : purged_snaps scrub ok Feb 20 02:39:48 localhost ceph-osd[31981]: osd.2 0 done with init, starting boot process Feb 20 02:39:48 localhost ceph-osd[31981]: osd.2 0 start_boot Feb 20 02:39:48 localhost ceph-osd[31981]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1 Feb 20 02:39:48 localhost ceph-osd[31981]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0 Feb 20 02:39:48 localhost ceph-osd[31981]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3 Feb 20 02:39:48 localhost ceph-osd[31981]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10 Feb 20 02:39:48 localhost ceph-osd[31981]: osd.2 0 bench count 12288000 bsize 4 KiB Feb 20 02:39:48 localhost systemd[1]: Starting Ceph osd.5 for a8557ee9-b55d-5519-942c-cf8f6172f1d8... Feb 20 02:39:49 localhost podman[32716]: Feb 20 02:39:49 localhost podman[32716]: 2026-02-20 07:39:49.093730821 +0000 UTC m=+0.093788636 container create 5f784ba3a634ac5bb5fef14211741deae6aecbe24e2a67bb33e8d431c5387841 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-5-activate, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, release=1770267347, io.openshift.expose-services=, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, version=7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, GIT_BRANCH=main, vcs-type=git, 
com.redhat.component=rhceph-container, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, RELEASE=main) Feb 20 02:39:49 localhost systemd[1]: Started libcrun container. Feb 20 02:39:49 localhost podman[32716]: 2026-02-20 07:39:49.045062195 +0000 UTC m=+0.045120040 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 02:39:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/794131f632a17b8bf79717b8e02cef243023f63a36c34862a06942a41e50ab6a/merged/rootfs supports timestamps until 2038 (0x7fffffff) Feb 20 02:39:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/794131f632a17b8bf79717b8e02cef243023f63a36c34862a06942a41e50ab6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Feb 20 02:39:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/794131f632a17b8bf79717b8e02cef243023f63a36c34862a06942a41e50ab6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Feb 20 02:39:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/794131f632a17b8bf79717b8e02cef243023f63a36c34862a06942a41e50ab6a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Feb 20 02:39:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/794131f632a17b8bf79717b8e02cef243023f63a36c34862a06942a41e50ab6a/merged/var/lib/ceph/osd/ceph-5 supports timestamps until 2038 (0x7fffffff) Feb 20 02:39:49 localhost podman[32716]: 2026-02-20 07:39:49.205767273 +0000 UTC m=+0.205825148 container 
init 5f784ba3a634ac5bb5fef14211741deae6aecbe24e2a67bb33e8d431c5387841 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-5-activate, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, GIT_CLEAN=True, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, GIT_BRANCH=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, version=7, io.buildah.version=1.42.2, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, build-date=2026-02-09T10:25:24Z) Feb 20 02:39:49 localhost podman[32716]: 2026-02-20 07:39:49.221940353 +0000 UTC m=+0.221998188 container start 5f784ba3a634ac5bb5fef14211741deae6aecbe24e2a67bb33e8d431c5387841 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-5-activate, url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, io.openshift.tags=rhceph ceph, distribution-scope=public, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.buildah.version=1.42.2, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, ceph=True, RELEASE=main, com.redhat.component=rhceph-container, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2026-02-09T10:25:24Z, architecture=x86_64, maintainer=Guillaume Abrioux , name=rhceph, vcs-type=git) Feb 20 02:39:49 localhost podman[32716]: 2026-02-20 07:39:49.222743351 +0000 UTC m=+0.222801196 container attach 5f784ba3a634ac5bb5fef14211741deae6aecbe24e2a67bb33e8d431c5387841 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-5-activate, io.buildah.version=1.42.2, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, GIT_CLEAN=True, RELEASE=main, build-date=2026-02-09T10:25:24Z, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, release=1770267347, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, maintainer=Guillaume Abrioux , vcs-type=git, org.opencontainers.image.created=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, 
architecture=x86_64, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, version=7, vendor=Red Hat, Inc., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14) Feb 20 02:39:49 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-5-activate[32728]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5 Feb 20 02:39:49 localhost bash[32716]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5 Feb 20 02:39:49 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-5-activate[32728]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-5 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1 Feb 20 02:39:49 localhost bash[32716]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-5 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1 Feb 20 02:39:49 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-5-activate[32728]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1 Feb 20 02:39:49 localhost bash[32716]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1 Feb 20 02:39:49 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-5-activate[32728]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1 Feb 20 02:39:49 localhost bash[32716]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1 Feb 20 02:39:49 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-5-activate[32728]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-5/block Feb 20 02:39:49 localhost bash[32716]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-5/block Feb 20 02:39:49 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-5-activate[32728]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5 Feb 20 02:39:49 localhost bash[32716]: Running command: 
/usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5 Feb 20 02:39:49 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-5-activate[32728]: --> ceph-volume raw activate successful for osd ID: 5 Feb 20 02:39:49 localhost bash[32716]: --> ceph-volume raw activate successful for osd ID: 5 Feb 20 02:39:49 localhost systemd[1]: libpod-5f784ba3a634ac5bb5fef14211741deae6aecbe24e2a67bb33e8d431c5387841.scope: Deactivated successfully. Feb 20 02:39:49 localhost podman[32716]: 2026-02-20 07:39:49.875064928 +0000 UTC m=+0.875122793 container died 5f784ba3a634ac5bb5fef14211741deae6aecbe24e2a67bb33e8d431c5387841 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-5-activate, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, release=1770267347, build-date=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, com.redhat.component=rhceph-container, ceph=True, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , RELEASE=main, description=Red Hat Ceph Storage 7, name=rhceph, vcs-type=git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14) Feb 20 02:39:49 localhost systemd[1]: 
var-lib-containers-storage-overlay-794131f632a17b8bf79717b8e02cef243023f63a36c34862a06942a41e50ab6a-merged.mount: Deactivated successfully. Feb 20 02:39:49 localhost podman[32843]: 2026-02-20 07:39:49.970199876 +0000 UTC m=+0.085233898 container remove 5f784ba3a634ac5bb5fef14211741deae6aecbe24e2a67bb33e8d431c5387841 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-5-activate, name=rhceph, distribution-scope=public, maintainer=Guillaume Abrioux , ceph=True, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, vendor=Red Hat, Inc., GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, version=7, release=1770267347, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, architecture=x86_64, io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2026-02-09T10:25:24Z) Feb 20 02:39:50 localhost podman[32903]: Feb 20 02:39:50 localhost podman[32903]: 2026-02-20 07:39:50.300695536 +0000 UTC m=+0.078130560 container create ea22b604e3a7b9b60846c88f6a7d19f0bb3981a58a0a248c34ccd609952fd02e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-5, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, 
GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, RELEASE=main, description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, name=rhceph, release=1770267347, io.openshift.expose-services=, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, com.redhat.component=rhceph-container, GIT_BRANCH=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14) Feb 20 02:39:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dfbfdf821903a8a8befccd274e1c5d106f6a9bfaf2e9345fc42cc68cb01e342/merged/rootfs supports timestamps until 2038 (0x7fffffff) Feb 20 02:39:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dfbfdf821903a8a8befccd274e1c5d106f6a9bfaf2e9345fc42cc68cb01e342/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Feb 20 02:39:50 localhost podman[32903]: 2026-02-20 07:39:50.27263267 +0000 UTC m=+0.050067694 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 02:39:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dfbfdf821903a8a8befccd274e1c5d106f6a9bfaf2e9345fc42cc68cb01e342/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Feb 20 02:39:50 localhost kernel: xfs filesystem being remounted at 
/var/lib/containers/storage/overlay/5dfbfdf821903a8a8befccd274e1c5d106f6a9bfaf2e9345fc42cc68cb01e342/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Feb 20 02:39:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dfbfdf821903a8a8befccd274e1c5d106f6a9bfaf2e9345fc42cc68cb01e342/merged/var/lib/ceph/osd/ceph-5 supports timestamps until 2038 (0x7fffffff) Feb 20 02:39:50 localhost podman[32903]: 2026-02-20 07:39:50.421786579 +0000 UTC m=+0.199221603 container init ea22b604e3a7b9b60846c88f6a7d19f0bb3981a58a0a248c34ccd609952fd02e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-5, ceph=True, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1770267347, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, architecture=x86_64, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, io.buildah.version=1.42.2, RELEASE=main, distribution-scope=public, name=rhceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-09T10:25:24Z) Feb 20 02:39:50 localhost podman[32903]: 2026-02-20 07:39:50.439126854 +0000 UTC m=+0.216561878 container start 
ea22b604e3a7b9b60846c88f6a7d19f0bb3981a58a0a248c34ccd609952fd02e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-5, architecture=x86_64, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, name=rhceph, build-date=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , io.openshift.expose-services=) Feb 20 02:39:50 localhost bash[32903]: ea22b604e3a7b9b60846c88f6a7d19f0bb3981a58a0a248c34ccd609952fd02e Feb 20 02:39:50 localhost systemd[1]: Started Ceph osd.5 for a8557ee9-b55d-5519-942c-cf8f6172f1d8. 
Feb 20 02:39:50 localhost ceph-osd[32921]: set uid:gid to 167:167 (ceph:ceph) Feb 20 02:39:50 localhost ceph-osd[32921]: ceph version 18.2.1-381.el9cp (984f410e2a30899deb131725765b62212b1621db) reef (stable), process ceph-osd, pid 2 Feb 20 02:39:50 localhost ceph-osd[32921]: pidfile_write: ignore empty --pid-file Feb 20 02:39:50 localhost ceph-osd[32921]: bdev(0x55bae83f2e00 /var/lib/ceph/osd/ceph-5/block) open path /var/lib/ceph/osd/ceph-5/block Feb 20 02:39:50 localhost ceph-osd[32921]: bdev(0x55bae83f2e00 /var/lib/ceph/osd/ceph-5/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-5/block failed: (22) Invalid argument Feb 20 02:39:50 localhost ceph-osd[32921]: bdev(0x55bae83f2e00 /var/lib/ceph/osd/ceph-5/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Feb 20 02:39:50 localhost ceph-osd[32921]: bluestore(/var/lib/ceph/osd/ceph-5) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06 Feb 20 02:39:50 localhost ceph-osd[32921]: bdev(0x55bae83f3180 /var/lib/ceph/osd/ceph-5/block) open path /var/lib/ceph/osd/ceph-5/block Feb 20 02:39:50 localhost ceph-osd[32921]: bdev(0x55bae83f3180 /var/lib/ceph/osd/ceph-5/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-5/block failed: (22) Invalid argument Feb 20 02:39:50 localhost ceph-osd[32921]: bdev(0x55bae83f3180 /var/lib/ceph/osd/ceph-5/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Feb 20 02:39:50 localhost ceph-osd[32921]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-5/block size 7.0 GiB Feb 20 02:39:50 localhost ceph-osd[32921]: bdev(0x55bae83f3180 /var/lib/ceph/osd/ceph-5/block) close Feb 20 02:39:50 localhost ceph-osd[32921]: bdev(0x55bae83f2e00 /var/lib/ceph/osd/ceph-5/block) close Feb 20 02:39:50 localhost ceph-osd[31981]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 34.435 iops: 8815.281 elapsed_sec: 0.340 
Feb 20 02:39:50 localhost ceph-osd[31981]: log_channel(cluster) log [WRN] : OSD bench result of 8815.280979 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd]. Feb 20 02:39:50 localhost ceph-osd[31981]: osd.2 0 waiting for initial osdmap Feb 20 02:39:50 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-2[31977]: 2026-02-20T07:39:50.865+0000 7f4374d1a640 -1 osd.2 0 waiting for initial osdmap Feb 20 02:39:50 localhost ceph-osd[31981]: osd.2 10 crush map has features 288514050185494528, adjusting msgr requires for clients Feb 20 02:39:50 localhost ceph-osd[31981]: osd.2 10 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons Feb 20 02:39:50 localhost ceph-osd[31981]: osd.2 10 crush map has features 3314932999778484224, adjusting msgr requires for osds Feb 20 02:39:50 localhost ceph-osd[31981]: osd.2 10 check_osdmap_features require_osd_release unknown -> reef Feb 20 02:39:50 localhost ceph-osd[31981]: osd.2 10 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory Feb 20 02:39:50 localhost ceph-osd[31981]: osd.2 10 set_numa_affinity not setting numa affinity Feb 20 02:39:50 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-2[31977]: 2026-02-20T07:39:50.878+0000 7f436fb2f640 -1 osd.2 10 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory Feb 20 02:39:50 localhost ceph-osd[31981]: osd.2 10 _collect_metadata loop3: no unique device id for loop3: fallback method has no model nor serial Feb 20 02:39:51 localhost ceph-osd[32921]: starting osd.5 osd_data /var/lib/ceph/osd/ceph-5 /var/lib/ceph/osd/ceph-5/journal Feb 20 02:39:51 localhost ceph-osd[32921]: load: jerasure load: lrc Feb 20 
02:39:51 localhost ceph-osd[32921]: bdev(0x55bae83f2e00 /var/lib/ceph/osd/ceph-5/block) open path /var/lib/ceph/osd/ceph-5/block Feb 20 02:39:51 localhost ceph-osd[32921]: bdev(0x55bae83f2e00 /var/lib/ceph/osd/ceph-5/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-5/block failed: (22) Invalid argument Feb 20 02:39:51 localhost ceph-osd[32921]: bdev(0x55bae83f2e00 /var/lib/ceph/osd/ceph-5/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Feb 20 02:39:51 localhost ceph-osd[32921]: bluestore(/var/lib/ceph/osd/ceph-5) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06 Feb 20 02:39:51 localhost ceph-osd[32921]: bdev(0x55bae83f2e00 /var/lib/ceph/osd/ceph-5/block) close Feb 20 02:39:51 localhost ceph-osd[32921]: bdev(0x55bae83f2e00 /var/lib/ceph/osd/ceph-5/block) open path /var/lib/ceph/osd/ceph-5/block Feb 20 02:39:51 localhost ceph-osd[32921]: bdev(0x55bae83f2e00 /var/lib/ceph/osd/ceph-5/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-5/block failed: (22) Invalid argument Feb 20 02:39:51 localhost ceph-osd[32921]: bdev(0x55bae83f2e00 /var/lib/ceph/osd/ceph-5/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Feb 20 02:39:51 localhost ceph-osd[32921]: bluestore(/var/lib/ceph/osd/ceph-5) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06 Feb 20 02:39:51 localhost ceph-osd[32921]: bdev(0x55bae83f2e00 /var/lib/ceph/osd/ceph-5/block) close Feb 20 02:39:51 localhost podman[33013]: Feb 20 02:39:51 localhost podman[33013]: 2026-02-20 07:39:51.230730101 +0000 UTC m=+0.069541711 container create 16877e4d7bf65f9003d20391b2f49549d58f0603e93693f7c80363d5351f576b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_lehmann, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph 
Storage 7 on RHEL 9, GIT_BRANCH=main, maintainer=Guillaume Abrioux , vcs-type=git, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, version=7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container) Feb 20 02:39:51 localhost systemd[1]: Started libpod-conmon-16877e4d7bf65f9003d20391b2f49549d58f0603e93693f7c80363d5351f576b.scope. Feb 20 02:39:51 localhost systemd[1]: Started libcrun container. 
Feb 20 02:39:51 localhost podman[33013]: 2026-02-20 07:39:51.289250826 +0000 UTC m=+0.128062446 container init 16877e4d7bf65f9003d20391b2f49549d58f0603e93693f7c80363d5351f576b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_lehmann, version=7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, build-date=2026-02-09T10:25:24Z, io.openshift.expose-services=, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_BRANCH=main, vendor=Red Hat, Inc., name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Feb 20 02:39:51 localhost podman[33013]: 2026-02-20 07:39:51.298775329 +0000 UTC m=+0.137586969 container start 16877e4d7bf65f9003d20391b2f49549d58f0603e93693f7c80363d5351f576b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_lehmann, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, 
GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, GIT_BRANCH=main, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.42.2, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., name=rhceph, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Feb 20 02:39:51 localhost podman[33013]: 2026-02-20 07:39:51.299564986 +0000 UTC m=+0.138376606 container attach 16877e4d7bf65f9003d20391b2f49549d58f0603e93693f7c80363d5351f576b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_lehmann, build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.42.2, architecture=x86_64, org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, description=Red Hat Ceph 
Storage 7, distribution-scope=public, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., GIT_CLEAN=True, version=7, name=rhceph, ceph=True, GIT_BRANCH=main, maintainer=Guillaume Abrioux ) Feb 20 02:39:51 localhost bold_lehmann[33029]: 167 167 Feb 20 02:39:51 localhost systemd[1]: libpod-16877e4d7bf65f9003d20391b2f49549d58f0603e93693f7c80363d5351f576b.scope: Deactivated successfully. Feb 20 02:39:51 localhost podman[33013]: 2026-02-20 07:39:51.202696086 +0000 UTC m=+0.041507696 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 02:39:51 localhost podman[33013]: 2026-02-20 07:39:51.302188781 +0000 UTC m=+0.141000401 container died 16877e4d7bf65f9003d20391b2f49549d58f0603e93693f7c80363d5351f576b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_lehmann, version=7, release=1770267347, name=rhceph, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, distribution-scope=public, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux ) Feb 20 02:39:51 localhost ceph-osd[32921]: mClockScheduler: 
set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second Feb 20 02:39:51 localhost ceph-osd[32921]: osd.5:0.OSDShard using op scheduler mclock_scheduler, cutoff=196 Feb 20 02:39:51 localhost ceph-osd[32921]: bdev(0x55bae83f2e00 /var/lib/ceph/osd/ceph-5/block) open path /var/lib/ceph/osd/ceph-5/block Feb 20 02:39:51 localhost ceph-osd[32921]: bdev(0x55bae83f2e00 /var/lib/ceph/osd/ceph-5/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-5/block failed: (22) Invalid argument Feb 20 02:39:51 localhost ceph-osd[32921]: bdev(0x55bae83f2e00 /var/lib/ceph/osd/ceph-5/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Feb 20 02:39:51 localhost ceph-osd[32921]: bluestore(/var/lib/ceph/osd/ceph-5) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06 Feb 20 02:39:51 localhost ceph-osd[32921]: bdev(0x55bae83f3180 /var/lib/ceph/osd/ceph-5/block) open path /var/lib/ceph/osd/ceph-5/block Feb 20 02:39:51 localhost ceph-osd[32921]: bdev(0x55bae83f3180 /var/lib/ceph/osd/ceph-5/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-5/block failed: (22) Invalid argument Feb 20 02:39:51 localhost ceph-osd[32921]: bdev(0x55bae83f3180 /var/lib/ceph/osd/ceph-5/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Feb 20 02:39:51 localhost ceph-osd[32921]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-5/block size 7.0 GiB Feb 20 02:39:51 localhost ceph-osd[32921]: bluefs mount Feb 20 02:39:51 localhost ceph-osd[32921]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000 Feb 20 02:39:51 localhost ceph-osd[32921]: bluefs mount shared_bdev_used = 0 Feb 20 02:39:51 localhost ceph-osd[32921]: bluestore(/var/lib/ceph/osd/ceph-5) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540 Feb 20 02:39:51 localhost 
ceph-osd[32921]: rocksdb: RocksDB version: 7.9.2 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Git sha 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Compile date 2026-02-06 00:00:00 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: DB SUMMARY Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: DB Session ID: CQ94N9TDRUMNQZ7VIMV1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: CURRENT file: CURRENT Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: IDENTITY file: IDENTITY Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: MANIFEST file: MANIFEST-000032 size: 1007 Bytes Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: SST files in db.slow dir, Total Num: 0, files: Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.error_if_exists: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.create_if_missing: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.paranoid_checks: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.flush_verify_memtable_count: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.env: 0x55bae8686cb0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.fs: LegacyFileSystem Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.info_log: 0x55bae9388b80 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_file_opening_threads: 16 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.statistics: (nil) Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.use_fsync: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_log_file_size: 0 
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_manifest_file_size: 1073741824 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.log_file_time_to_roll: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.keep_log_file_num: 1000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.recycle_log_file_num: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.allow_fallocate: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.allow_mmap_reads: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.allow_mmap_writes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.use_direct_reads: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.create_missing_column_families: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.db_log_dir: Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.wal_dir: db.wal Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.table_cache_numshardbits: 6 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.WAL_ttl_seconds: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.WAL_size_limit_MB: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.manifest_preallocation_size: 4194304 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.is_fd_close_on_exec: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.advise_random_on_open: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.db_write_buffer_size: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.write_buffer_manager: 0x55bae83dc140 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.access_hint_on_compaction_start: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.random_access_max_buffer_size: 
1048576 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.use_adaptive_mutex: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.rate_limiter: (nil) Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.wal_recovery_mode: 2 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.enable_thread_tracking: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.enable_pipelined_write: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.unordered_write: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.allow_concurrent_memtable_write: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.write_thread_max_yield_usec: 100 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.write_thread_slow_yield_usec: 3 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.row_cache: None Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.wal_filter: None Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.avoid_flush_during_recovery: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.allow_ingest_behind: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.two_write_queues: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.manual_wal_flush: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.wal_compression: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.atomic_flush: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.persist_stats_to_disk: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.write_dbid_to_manifest: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.log_readahead_size: 0 Feb 20 02:39:51 localhost 
ceph-osd[32921]: rocksdb: Options.file_checksum_gen_factory: Unknown Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.best_efforts_recovery: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bgerror_resume_count: 2147483647 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.allow_data_in_errors: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.db_host_id: __hostname__ Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.enforce_single_del_contracts: true Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_background_jobs: 4 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_background_compactions: -1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_subcompactions: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.avoid_flush_during_shutdown: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.writable_file_max_buffer_size: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.delayed_write_rate : 16777216 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_total_wal_size: 1073741824 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.stats_dump_period_sec: 600 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.stats_persist_period_sec: 600 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.stats_history_buffer_size: 1048576 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_open_files: -1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bytes_per_sync: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.wal_bytes_per_sync: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.strict_bytes_per_sync: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: 
Options.compaction_readahead_size: 2097152 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_background_flushes: -1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Compression algorithms supported: Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: #011kZSTD supported: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: #011kXpressCompression supported: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: #011kBZip2Compression supported: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: #011kZSTDNotFinalCompression supported: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: #011kLZ4Compression supported: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: #011kZlibCompression supported: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: #011kLZ4HCCompression supported: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: #011kSnappyCompression supported: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Fast CRC32 supported: Supported on x86 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: DMutex implementation: pthread_mutex_t Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default) Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.merge_operator: .T:int64_array.b:bitwise_xor Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_filter: None Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_filter_factory: None Feb 
20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.sst_partitioner_factory: None Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_factory: SkipListFactory Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.table_factory: BlockBasedTable Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bae9388d40)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55bae83ca850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.write_buffer_size: 16777216 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_number: 64 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression: LZ4 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression: Disabled Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.prefix_extractor: nullptr Feb 20 02:39:51 
localhost ceph-osd[32921]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.num_levels: 7 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.window_bits: -14 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.level: 32767 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.strategy: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.enabled: false Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.target_file_size_base: 67108864 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.target_file_size_multiplier: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 20 02:39:51 localhost 
ceph-osd[32921]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.arena_block_size: 1048576 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.disable_auto_compactions: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.table_properties_collectors: Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.inplace_update_support: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 20 02:39:51 localhost 
ceph-osd[32921]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_huge_page_size: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bloom_locality: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_successive_merges: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.paranoid_file_checks: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.force_consistency_checks: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.report_bg_io_stats: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.ttl: 2592000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.enable_blob_files: false Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.min_blob_size: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_file_size: 268435456 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_compression_type: NoCompression Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.enable_blob_garbage_collection: false Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_file_starting_level: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 20 02:39:51 localhost ceph-osd[32921]: 
rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0) Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]: Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.merge_operator: None Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_filter: None Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_filter_factory: None Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.sst_partitioner_factory: None Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_factory: SkipListFactory Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.table_factory: BlockBasedTable Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bae9388d40)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55bae83ca850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 
max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.write_buffer_size: 16777216 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_number: 64 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression: LZ4 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression: Disabled Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.prefix_extractor: nullptr Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.num_levels: 7 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: 
Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.window_bits: -14 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.level: 32767 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.strategy: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.enabled: false Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.target_file_size_base: 67108864 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.target_file_size_multiplier: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.arena_block_size: 1048576 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.disable_auto_compactions: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: 
Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.inplace_update_support: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_huge_page_size: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bloom_locality: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_successive_merges: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.paranoid_file_checks: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.force_consistency_checks: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.report_bg_io_stats: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.ttl: 2592000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.enable_blob_files: false Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.min_blob_size: 0 Feb 20 02:39:51 localhost 
ceph-osd[32921]: rocksdb: Options.blob_file_size: 268435456
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_compression_type: NoCompression
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.enable_blob_garbage_collection: false
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_compaction_readahead_size: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_file_starting_level: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1)
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.merge_operator: None
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_filter: None
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_filter_factory: None
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.sst_partitioner_factory: None
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_factory: SkipListFactory
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.table_factory: BlockBasedTable
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bae9388d40)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55bae83ca850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.write_buffer_size: 16777216
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_number: 64
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression: LZ4
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression: Disabled
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.prefix_extractor: nullptr
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.num_levels: 7
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.level: 32767
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.enabled: false
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.window_bits: -14
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.level: 32767
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.strategy: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.parallel_threads: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.enabled: false
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_stop_writes_trigger: 36
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.target_file_size_base: 67108864
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.target_file_size_multiplier: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_compaction_bytes: 1677721600
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.arena_block_size: 1048576
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.disable_auto_compactions: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.inplace_update_support: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.inplace_update_num_locks: 10000
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_whole_key_filtering: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_huge_page_size: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bloom_locality: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_successive_merges: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.optimize_filters_for_hits: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.paranoid_file_checks: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.force_consistency_checks: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.report_bg_io_stats: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.ttl: 2592000
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.periodic_compaction_seconds: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.preclude_last_level_data_seconds: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.preserve_internal_time_seconds: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.enable_blob_files: false
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.min_blob_size: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_file_size: 268435456
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_compression_type: NoCompression
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.enable_blob_garbage_collection: false
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_compaction_readahead_size: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_file_starting_level: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2)
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.merge_operator: None
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_filter: None
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_filter_factory: None
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.sst_partitioner_factory: None
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_factory: SkipListFactory
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.table_factory: BlockBasedTable
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bae9388d40)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55bae83ca850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.write_buffer_size: 16777216
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_number: 64
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression: LZ4
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression: Disabled
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.prefix_extractor: nullptr
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.num_levels: 7
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.level: 32767
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.enabled: false
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.window_bits: -14
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.level: 32767
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.strategy: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.parallel_threads: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.enabled: false
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_stop_writes_trigger: 36
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.target_file_size_base: 67108864
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.target_file_size_multiplier: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_compaction_bytes: 1677721600
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.arena_block_size: 1048576
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.disable_auto_compactions: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.inplace_update_support: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.inplace_update_num_locks: 10000
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_whole_key_filtering: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_huge_page_size: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bloom_locality: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_successive_merges: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.optimize_filters_for_hits: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.paranoid_file_checks: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.force_consistency_checks: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.report_bg_io_stats: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.ttl: 2592000
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.periodic_compaction_seconds: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.preclude_last_level_data_seconds: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.preserve_internal_time_seconds: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.enable_blob_files: false
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.min_blob_size: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_file_size: 268435456
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_compression_type: NoCompression
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.enable_blob_garbage_collection: false
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_compaction_readahead_size: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_file_starting_level: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0)
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.merge_operator: None
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_filter: None
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_filter_factory: None
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.sst_partitioner_factory: None
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_factory: SkipListFactory
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.table_factory: BlockBasedTable
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bae9388d40)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55bae83ca850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.write_buffer_size: 16777216
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_number: 64
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression: LZ4
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression: Disabled
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.prefix_extractor: nullptr
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.num_levels: 7
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.level: 32767
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.enabled: false
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.window_bits: -14
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.level: 32767
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.strategy: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.parallel_threads: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.enabled: false
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_stop_writes_trigger: 36
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.target_file_size_base: 67108864
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.target_file_size_multiplier: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_compaction_bytes: 1677721600
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.arena_block_size: 1048576
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.disable_auto_compactions: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.inplace_update_support: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.inplace_update_num_locks: 10000
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_whole_key_filtering: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_huge_page_size: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bloom_locality: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_successive_merges: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.optimize_filters_for_hits: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.paranoid_file_checks: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.force_consistency_checks: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.report_bg_io_stats: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.ttl: 2592000
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.periodic_compaction_seconds: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.preclude_last_level_data_seconds: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.preserve_internal_time_seconds: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.enable_blob_files: false
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.min_blob_size: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_file_size: 268435456
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_compression_type: NoCompression
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.enable_blob_garbage_collection: false
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_compaction_readahead_size: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_file_starting_level: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1)
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.merge_operator: None
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_filter: None
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_filter_factory: None
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.sst_partitioner_factory: None
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_factory: SkipListFactory
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.table_factory: BlockBasedTable
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bae9388d40)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55bae83ca850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.write_buffer_size: 16777216
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_number: 64
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression: LZ4
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression: Disabled
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.prefix_extractor: nullptr
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.num_levels: 7
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.window_bits: -14 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.level: 32767 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.strategy: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.enabled: false Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: 
Options.level0_slowdown_writes_trigger: 20 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.target_file_size_base: 67108864 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.target_file_size_multiplier: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.arena_block_size: 1048576 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: 
Options.disable_auto_compactions: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.inplace_update_support: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_huge_page_size: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bloom_locality: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_successive_merges: 0 Feb 20 02:39:51 localhost 
ceph-osd[32921]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.paranoid_file_checks: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.force_consistency_checks: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.report_bg_io_stats: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.ttl: 2592000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.enable_blob_files: false Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.min_blob_size: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_file_size: 268435456 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_compression_type: NoCompression Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.enable_blob_garbage_collection: false Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_file_starting_level: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2) Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]: Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 20 02:39:51 
localhost ceph-osd[32921]: rocksdb: Options.merge_operator: None Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_filter: None Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_filter_factory: None Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.sst_partitioner_factory: None Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_factory: SkipListFactory Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.table_factory: BlockBasedTable Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bae9388d40)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55bae83ca850
  block_cache_name: BinnedLRUCache
  block_cache_options:
  capacity : 483183820
  num_shard_bits : 4
  strict_capacity_limit : 0
  high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.write_buffer_size: 16777216 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_number: 64 Feb 20 02:39:51 localhost 
ceph-osd[32921]: rocksdb: Options.compression: LZ4 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression: Disabled Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.prefix_extractor: nullptr Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.num_levels: 7 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.window_bits: -14 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.level: 32767 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.strategy: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: 
Options.compression_opts.max_dict_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.enabled: false Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.target_file_size_base: 67108864 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.target_file_size_multiplier: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.arena_block_size: 1048576 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.disable_auto_compactions: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 
32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.inplace_update_support: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_huge_page_size: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bloom_locality: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_successive_merges: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.paranoid_file_checks: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.force_consistency_checks: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.report_bg_io_stats: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.ttl: 2592000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.enable_blob_files: false Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.min_blob_size: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_file_size: 268435456 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_compression_type: NoCompression Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.enable_blob_garbage_collection: false Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: 
Options.blob_garbage_collection_force_threshold: 1.000000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_file_starting_level: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0) Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]: Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.merge_operator: None Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_filter: None Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_filter_factory: None Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.sst_partitioner_factory: None Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_factory: SkipListFactory Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.table_factory: BlockBasedTable Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bae9388f60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55bae83ca2d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
  capacity : 536870912
  num_shard_bits : 4
  strict_capacity_limit : 0
  high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.write_buffer_size: 16777216 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_number: 64 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression: LZ4 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression: Disabled Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.prefix_extractor: nullptr Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.num_levels: 7 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 20 02:39:51 localhost 
ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.window_bits: -14 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.level: 32767 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.strategy: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.enabled: false Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.target_file_size_base: 67108864 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.target_file_size_multiplier: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: 
Options.max_bytes_for_level_multiplier: 8.000000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.arena_block_size: 1048576 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.disable_auto_compactions: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 20 02:39:51 
localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.inplace_update_support: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_huge_page_size: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bloom_locality: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_successive_merges: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.paranoid_file_checks: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.force_consistency_checks: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.report_bg_io_stats: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.ttl: 2592000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 
20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.enable_blob_files: false Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.min_blob_size: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_file_size: 268435456 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_compression_type: NoCompression Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.enable_blob_garbage_collection: false Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_file_starting_level: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1) Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]: Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.merge_operator: None Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_filter: None Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_filter_factory: None Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.sst_partitioner_factory: None Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_factory: SkipListFactory Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.table_factory: BlockBasedTable Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: table_factory 
options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bae9388f60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55bae83ca2d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
  capacity : 536870912
  num_shard_bits : 4
  strict_capacity_limit : 0
  high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.write_buffer_size: 16777216 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_number: 64 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression: LZ4 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression: Disabled Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.prefix_extractor: nullptr Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.num_levels: 7 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: 
Options.max_write_buffer_number_to_maintain: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.window_bits: -14 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.level: 32767 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.strategy: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.enabled: false Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 20 02:39:51 localhost 
ceph-osd[32921]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.target_file_size_base: 67108864 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.target_file_size_multiplier: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.arena_block_size: 1048576 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 20 02:39:51 localhost ceph-osd[32921]: 
rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.disable_auto_compactions: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.inplace_update_support: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_huge_page_size: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: 
Options.bloom_locality: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_successive_merges: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.paranoid_file_checks: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.force_consistency_checks: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.report_bg_io_stats: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.ttl: 2592000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.enable_blob_files: false Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.min_blob_size: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_file_size: 268435456 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_compression_type: NoCompression Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.enable_blob_garbage_collection: false Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_file_starting_level: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2) Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/column_family.cc:630] --------------- Options for 
column family [O-2]: Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.merge_operator: None Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_filter: None Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_filter_factory: None Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.sst_partitioner_factory: None Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_factory: SkipListFactory Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.table_factory: BlockBasedTable Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bae9388f60)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55bae83ca2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: 
Options.write_buffer_size: 16777216 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_number: 64 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression: LZ4 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression: Disabled Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.prefix_extractor: nullptr Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.num_levels: 7 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.window_bits: -14 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.level: 
32767 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.strategy: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.enabled: false Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.target_file_size_base: 67108864 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.target_file_size_multiplier: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 20 02:39:51 localhost 
ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.arena_block_size: 1048576 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.disable_auto_compactions: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 20 
02:39:51 localhost ceph-osd[32921]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.inplace_update_support: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_huge_page_size: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bloom_locality: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_successive_merges: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.paranoid_file_checks: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.force_consistency_checks: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.report_bg_io_stats: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.ttl: 2592000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.enable_blob_files: false Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.min_blob_size: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_file_size: 268435456 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_compression_type: NoCompression Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.enable_blob_garbage_collection: false Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: 
Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_file_starting_level: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L) Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P) Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5 Feb 20 02:39:51 
localhost ceph-osd[32921]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: c8e7abe3-7880-4ba3-835f-2c0d7ddf01eb Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771573191336372, "job": 1, "event": "recovery_started", "wal_files": [31]} Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771573191336654, "job": 1, "event": "recovery_finished"} Feb 20 02:39:51 localhost ceph-osd[32921]: bluestore(/var/lib/ceph/osd/ceph-5) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0 Feb 20 02:39:51 localhost ceph-osd[32921]: bluestore(/var/lib/ceph/osd/ceph-5) _open_super_meta old nid_max 1025 Feb 20 02:39:51 localhost ceph-osd[32921]: bluestore(/var/lib/ceph/osd/ceph-5) _open_super_meta old blobid_max 10240 Feb 20 02:39:51 localhost 
ceph-osd[32921]: bluestore(/var/lib/ceph/osd/ceph-5) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Feb 20 02:39:51 localhost ceph-osd[32921]: bluestore(/var/lib/ceph/osd/ceph-5) _open_super_meta min_alloc_size 0x1000
Feb 20 02:39:51 localhost ceph-osd[32921]: freelist init
Feb 20 02:39:51 localhost ceph-osd[32921]: freelist _read_cfg
Feb 20 02:39:51 localhost ceph-osd[32921]: bluestore(/var/lib/ceph/osd/ceph-5) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Feb 20 02:39:51 localhost ceph-osd[32921]: bluefs umount
Feb 20 02:39:51 localhost ceph-osd[32921]: bdev(0x55bae83f3180 /var/lib/ceph/osd/ceph-5/block) close
Feb 20 02:39:51 localhost systemd[1]: var-lib-containers-storage-overlay-0952efa25250480a8d5f0b48c7a307b7d255cdfee47385bf403d7f164e7c20af-merged.mount: Deactivated successfully.
Feb 20 02:39:51 localhost podman[33034]: 2026-02-20 07:39:51.443271867 +0000 UTC m=+0.123675749 container remove 16877e4d7bf65f9003d20391b2f49549d58f0603e93693f7c80363d5351f576b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_lehmann, vcs-type=git, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, build-date=2026-02-09T10:25:24Z, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, com.redhat.component=rhceph-container, io.buildah.version=1.42.2, ceph=True, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, RELEASE=main, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph) Feb 20 02:39:51 localhost systemd[1]: libpod-conmon-16877e4d7bf65f9003d20391b2f49549d58f0603e93693f7c80363d5351f576b.scope: Deactivated successfully. 
Feb 20 02:39:51 localhost ceph-osd[32921]: bdev(0x55bae83f3180 /var/lib/ceph/osd/ceph-5/block) open path /var/lib/ceph/osd/ceph-5/block Feb 20 02:39:51 localhost ceph-osd[32921]: bdev(0x55bae83f3180 /var/lib/ceph/osd/ceph-5/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-5/block failed: (22) Invalid argument Feb 20 02:39:51 localhost ceph-osd[32921]: bdev(0x55bae83f3180 /var/lib/ceph/osd/ceph-5/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Feb 20 02:39:51 localhost ceph-osd[32921]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-5/block size 7.0 GiB Feb 20 02:39:51 localhost ceph-osd[32921]: bluefs mount Feb 20 02:39:51 localhost ceph-osd[32921]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000 Feb 20 02:39:51 localhost ceph-osd[32921]: bluefs mount shared_bdev_used = 4718592 Feb 20 02:39:51 localhost ceph-osd[32921]: bluestore(/var/lib/ceph/osd/ceph-5) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: RocksDB version: 7.9.2 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Git sha 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Compile date 2026-02-06 00:00:00 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: DB SUMMARY Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: DB Session ID: CQ94N9TDRUMNQZ7VIMV0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: CURRENT file: CURRENT Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: IDENTITY file: IDENTITY Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: MANIFEST file: MANIFEST-000032 size: 1007 Bytes Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: SST files in db.slow dir, Total Num: 0, files: Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; Feb 20 
02:39:51 localhost ceph-osd[32921]: rocksdb: Options.error_if_exists: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.create_if_missing: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.paranoid_checks: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.flush_verify_memtable_count: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.env: 0x55bae8518310 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.fs: LegacyFileSystem Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.info_log: 0x55bae9389c80 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_file_opening_threads: 16 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.statistics: (nil) Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.use_fsync: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_log_file_size: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_manifest_file_size: 1073741824 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.log_file_time_to_roll: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.keep_log_file_num: 1000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.recycle_log_file_num: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.allow_fallocate: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.allow_mmap_reads: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.allow_mmap_writes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.use_direct_reads: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.create_missing_column_families: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: 
Options.db_log_dir: Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.wal_dir: db.wal Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.table_cache_numshardbits: 6 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.WAL_ttl_seconds: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.WAL_size_limit_MB: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.manifest_preallocation_size: 4194304 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.is_fd_close_on_exec: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.advise_random_on_open: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.db_write_buffer_size: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.write_buffer_manager: 0x55bae83dd540 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.access_hint_on_compaction_start: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.random_access_max_buffer_size: 1048576 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.use_adaptive_mutex: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.rate_limiter: (nil) Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.wal_recovery_mode: 2 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.enable_thread_tracking: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.enable_pipelined_write: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.unordered_write: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.allow_concurrent_memtable_write: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.write_thread_max_yield_usec: 100 Feb 20 02:39:51 localhost 
ceph-osd[32921]: rocksdb: Options.write_thread_slow_yield_usec: 3
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.row_cache: None
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.wal_filter: None
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.avoid_flush_during_recovery: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.allow_ingest_behind: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.two_write_queues: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.manual_wal_flush: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.wal_compression: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.atomic_flush: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.avoid_unnecessary_blocking_io: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.persist_stats_to_disk: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.write_dbid_to_manifest: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.log_readahead_size: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.file_checksum_gen_factory: Unknown
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.best_efforts_recovery: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bgerror_resume_count: 2147483647
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bgerror_resume_retry_interval: 1000000
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.allow_data_in_errors: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.db_host_id: __hostname__
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.enforce_single_del_contracts: true
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_background_jobs: 4
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_background_compactions: -1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_subcompactions: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.avoid_flush_during_shutdown: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.writable_file_max_buffer_size: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.delayed_write_rate : 16777216
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_total_wal_size: 1073741824
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.stats_dump_period_sec: 600
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.stats_persist_period_sec: 600
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.stats_history_buffer_size: 1048576
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_open_files: -1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bytes_per_sync: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.wal_bytes_per_sync: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.strict_bytes_per_sync: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_readahead_size: 2097152
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_background_flushes: -1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Compression algorithms supported:
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: #011kZSTD supported: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: #011kXpressCompression supported: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: #011kBZip2Compression supported: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: #011kLZ4Compression supported: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: #011kZlibCompression supported: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: #011kLZ4HCCompression supported: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: #011kSnappyCompression supported: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Fast CRC32 supported: Supported on x86
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: DMutex implementation: pthread_mutex_t
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default)
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.merge_operator: .T:int64_array.b:bitwise_xor
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_filter: None
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_filter_factory: None
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.sst_partitioner_factory: None
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_factory: SkipListFactory
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.table_factory: BlockBasedTable
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bae93c9600)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55bae83ca2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.write_buffer_size: 16777216
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_number: 64
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression: LZ4
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression: Disabled
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.prefix_extractor: nullptr
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.num_levels: 7
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.level: 32767
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.enabled: false
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.window_bits: -14
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.level: 32767
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.strategy: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.parallel_threads: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.enabled: false
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_stop_writes_trigger: 36
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.target_file_size_base: 67108864
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.target_file_size_multiplier: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_compaction_bytes: 1677721600
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.arena_block_size: 1048576
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.disable_auto_compactions: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.table_properties_collectors:
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.inplace_update_support: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.inplace_update_num_locks: 10000
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_whole_key_filtering: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_huge_page_size: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bloom_locality: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_successive_merges: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.optimize_filters_for_hits: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.paranoid_file_checks: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.force_consistency_checks: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.report_bg_io_stats: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.ttl: 2592000
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.periodic_compaction_seconds: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.preclude_last_level_data_seconds: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.preserve_internal_time_seconds: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.enable_blob_files: false
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.min_blob_size: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_file_size: 268435456
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_compression_type: NoCompression
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.enable_blob_garbage_collection: false
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_compaction_readahead_size: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_file_starting_level: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0)
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.merge_operator: None
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_filter: None
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_filter_factory: None
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.sst_partitioner_factory: None
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_factory: SkipListFactory
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.table_factory: BlockBasedTable
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bae93c9600)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55bae83ca2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.write_buffer_size: 16777216
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_number: 64
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression: LZ4
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression: Disabled
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.prefix_extractor: nullptr
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.num_levels: 7
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.level: 32767
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.enabled: false
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.window_bits: -14
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.level: 32767
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.strategy: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.parallel_threads: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.enabled: false
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_stop_writes_trigger: 36
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.target_file_size_base: 67108864
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.target_file_size_multiplier: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_compaction_bytes: 1677721600
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.arena_block_size: 1048576
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.disable_auto_compactions: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.inplace_update_support: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.inplace_update_num_locks: 10000
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_whole_key_filtering: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_huge_page_size: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bloom_locality: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_successive_merges: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.optimize_filters_for_hits: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.paranoid_file_checks: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.force_consistency_checks: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.report_bg_io_stats: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.ttl: 2592000
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.periodic_compaction_seconds: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.preclude_last_level_data_seconds: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.preserve_internal_time_seconds: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.enable_blob_files: false
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.min_blob_size: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_file_size: 268435456
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_compression_type: NoCompression
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.enable_blob_garbage_collection: false
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_compaction_readahead_size: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_file_starting_level: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1)
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.merge_operator: None
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_filter: None
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_filter_factory: None
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.sst_partitioner_factory: None
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_factory: SkipListFactory
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.table_factory: BlockBasedTable
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bae93c9600)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55bae83ca2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.write_buffer_size: 16777216
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_number: 64
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression: LZ4
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression: Disabled
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.prefix_extractor: nullptr
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.num_levels: 7
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.level: 32767
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.enabled: false
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.window_bits: -14
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.level: 32767
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.strategy: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.parallel_threads: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.enabled: false
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_stop_writes_trigger: 36
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.target_file_size_base: 67108864
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.target_file_size_multiplier: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_compaction_bytes: 1677721600
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.arena_block_size: 1048576
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.disable_auto_compactions: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.inplace_update_support: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.inplace_update_num_locks: 10000
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_whole_key_filtering: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_huge_page_size: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bloom_locality: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_successive_merges: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.optimize_filters_for_hits: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.paranoid_file_checks: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.force_consistency_checks: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.report_bg_io_stats: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.ttl: 2592000
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.periodic_compaction_seconds: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.preclude_last_level_data_seconds: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.preserve_internal_time_seconds: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.enable_blob_files: false
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.min_blob_size: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_file_size: 268435456
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_compression_type: NoCompression
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.enable_blob_garbage_collection: false
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_compaction_readahead_size: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_file_starting_level: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2)
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.merge_operator: None
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_filter: None
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_filter_factory: None
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.sst_partitioner_factory: None
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_factory: SkipListFactory
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.table_factory: BlockBasedTable
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bae93c9600)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55bae83ca2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.write_buffer_size: 16777216
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_number: 64
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression: LZ4
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression: Disabled
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.prefix_extractor: nullptr
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.num_levels: 7
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.level: 32767
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.enabled: false
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.window_bits: -14
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.level: 32767
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.strategy: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.parallel_threads: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.enabled: false
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_stop_writes_trigger: 36
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.target_file_size_base: 67108864
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.target_file_size_multiplier: 1
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Feb 20 02:39:51 localhost ceph-osd[32921]: 
rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.arena_block_size: 1048576 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.disable_auto_compactions: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 20 02:39:51 localhost 
ceph-osd[32921]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.inplace_update_support: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_huge_page_size: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bloom_locality: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_successive_merges: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.paranoid_file_checks: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.force_consistency_checks: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.report_bg_io_stats: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.ttl: 2592000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: 
Options.periodic_compaction_seconds: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.enable_blob_files: false Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.min_blob_size: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_file_size: 268435456 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_compression_type: NoCompression Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.enable_blob_garbage_collection: false Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_file_starting_level: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0) Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]: Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.merge_operator: None Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_filter: None Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_filter_factory: None Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.sst_partitioner_factory: None Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_factory: SkipListFactory Feb 20 02:39:51 
localhost ceph-osd[32921]: rocksdb: Options.table_factory: BlockBasedTable Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bae93c9600)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55bae83ca2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.write_buffer_size: 16777216 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_number: 64 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression: LZ4 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression: Disabled Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.prefix_extractor: nullptr Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.num_levels: 7 Feb 20 02:39:51 localhost 
ceph-osd[32921]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.window_bits: -14 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.level: 32767 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.strategy: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.enabled: false Feb 20 
02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.target_file_size_base: 67108864 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.target_file_size_multiplier: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.arena_block_size: 1048576 Feb 20 02:39:51 localhost 
ceph-osd[32921]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.disable_auto_compactions: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.inplace_update_support: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 20 02:39:51 localhost 
ceph-osd[32921]: rocksdb: Options.memtable_huge_page_size: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bloom_locality: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_successive_merges: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.paranoid_file_checks: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.force_consistency_checks: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.report_bg_io_stats: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.ttl: 2592000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.enable_blob_files: false Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.min_blob_size: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_file_size: 268435456 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_compression_type: NoCompression Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.enable_blob_garbage_collection: false Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_file_starting_level: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, 
name: p-1) Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]: Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.merge_operator: None Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_filter: None Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_filter_factory: None Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.sst_partitioner_factory: None Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_factory: SkipListFactory Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.table_factory: BlockBasedTable Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bae93c9600)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55bae83ca2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 
initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.write_buffer_size: 16777216 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_number: 64 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression: LZ4 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression: Disabled Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.prefix_extractor: nullptr Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.num_levels: 7 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 20 02:39:51 localhost ceph-osd[32921]: 
rocksdb: Options.compression_opts.window_bits: -14 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.level: 32767 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.strategy: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.enabled: false Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.target_file_size_base: 67108864 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.target_file_size_multiplier: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.arena_block_size: 1048576 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.disable_auto_compactions: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: 
Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.inplace_update_support: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_huge_page_size: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bloom_locality: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_successive_merges: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.paranoid_file_checks: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.force_consistency_checks: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.report_bg_io_stats: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.ttl: 2592000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.enable_blob_files: false Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.min_blob_size: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_file_size: 268435456 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: 
Options.blob_compression_type: NoCompression Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.enable_blob_garbage_collection: false Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_file_starting_level: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2) Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]: Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.merge_operator: None Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_filter: None Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_filter_factory: None Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.sst_partitioner_factory: None Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_factory: SkipListFactory Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.table_factory: BlockBasedTable Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bae93c9600)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 
checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55bae83ca2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.write_buffer_size: 16777216 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_number: 64 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression: LZ4 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression: Disabled Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.prefix_extractor: nullptr Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.num_levels: 7 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: 
Options.bottommost_compression_opts.strategy: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.window_bits: -14 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.level: 32767 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.strategy: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.enabled: false Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.target_file_size_base: 67108864 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: 
Options.target_file_size_multiplier: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.arena_block_size: 1048576 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.disable_auto_compactions: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: 
Options.compaction_options_universal.size_ratio: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.inplace_update_support: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_huge_page_size: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bloom_locality: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_successive_merges: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.paranoid_file_checks: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.force_consistency_checks: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: 
Options.report_bg_io_stats: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.ttl: 2592000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.enable_blob_files: false Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.min_blob_size: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_file_size: 268435456 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_compression_type: NoCompression Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.enable_blob_garbage_collection: false Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_file_starting_level: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0) Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]: Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.merge_operator: None Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_filter: None Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_filter_factory: None Feb 20 02:39:51 localhost ceph-osd[32921]: 
rocksdb: Options.sst_partitioner_factory: None Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_factory: SkipListFactory Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.table_factory: BlockBasedTable Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bae93c93c0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55bae83cb610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.write_buffer_size: 16777216 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_number: 64 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression: LZ4 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression: Disabled Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.prefix_extractor: nullptr Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: 
Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.num_levels: 7 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.window_bits: -14 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.level: 32767 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.strategy: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 20 02:39:51 localhost 
ceph-osd[32921]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.enabled: false Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.target_file_size_base: 67108864 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.target_file_size_multiplier: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: 
Options.ignore_max_compaction_bytes_for_input: true Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.arena_block_size: 1048576 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.disable_auto_compactions: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.inplace_update_support: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: 
Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_huge_page_size: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bloom_locality: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_successive_merges: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.paranoid_file_checks: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.force_consistency_checks: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.report_bg_io_stats: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.ttl: 2592000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.enable_blob_files: false Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.min_blob_size: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_file_size: 268435456 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_compression_type: NoCompression Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.enable_blob_garbage_collection: false Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_file_starting_level: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: 
Options.experimental_mempurge_threshold: 0.000000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1) Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]: Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.merge_operator: None Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_filter: None Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_filter_factory: None Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.sst_partitioner_factory: None Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_factory: SkipListFactory Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.table_factory: BlockBasedTable Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bae93c93c0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55bae83cb610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 
read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.write_buffer_size: 16777216 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_number: 64 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression: LZ4 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression: Disabled Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.prefix_extractor: nullptr Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.num_levels: 7 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: 
Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.window_bits: -14 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.level: 32767 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.strategy: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.enabled: false Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.target_file_size_base: 67108864 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.target_file_size_multiplier: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.arena_block_size: 1048576 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.disable_auto_compactions: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: 
Options.compaction_options_universal.compression_size_percent: -1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.inplace_update_support: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_huge_page_size: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bloom_locality: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_successive_merges: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.paranoid_file_checks: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.force_consistency_checks: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.report_bg_io_stats: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.ttl: 2592000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: 
Options.enable_blob_files: false Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.min_blob_size: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_file_size: 268435456 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_compression_type: NoCompression Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.enable_blob_garbage_collection: false Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_file_starting_level: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2) Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]: Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.merge_operator: None Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_filter: None Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_filter_factory: None Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.sst_partitioner_factory: None Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_factory: SkipListFactory Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.table_factory: BlockBasedTable Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bae93c93c0)#012 cache_index_and_filter_blocks: 1#012 
cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55bae83cb610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.write_buffer_size: 16777216 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_number: 64 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression: LZ4 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression: Disabled Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.prefix_extractor: nullptr Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.num_levels: 7 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.window_bits: -14 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.level: 32767 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.strategy: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.enabled: false Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: 
Options.level0_slowdown_writes_trigger: 20 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.target_file_size_base: 67108864 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.target_file_size_multiplier: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.arena_block_size: 1048576 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: 
Options.disable_auto_compactions: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.inplace_update_support: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.memtable_huge_page_size: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.bloom_locality: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.max_successive_merges: 0 Feb 20 02:39:51 localhost 
ceph-osd[32921]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.paranoid_file_checks: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.force_consistency_checks: 1 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.report_bg_io_stats: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.ttl: 2592000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.enable_blob_files: false Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.min_blob_size: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_file_size: 268435456 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_compression_type: NoCompression Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.enable_blob_garbage_collection: false Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.blob_file_starting_level: 0 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L) Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, 
name: P) Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: c8e7abe3-7880-4ba3-835f-2c0d7ddf01eb 
Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771573191606705, "job": 1, "event": "recovery_started", "wal_files": [31]} Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771573191612986, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1261, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771573191, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c8e7abe3-7880-4ba3-835f-2c0d7ddf01eb", "db_session_id": "CQ94N9TDRUMNQZ7VIMV0", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}} Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771573191618705, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1608, 
"file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 467, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771573191, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c8e7abe3-7880-4ba3-835f-2c0d7ddf01eb", "db_session_id": "CQ94N9TDRUMNQZ7VIMV0", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}} Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771573191623937, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1290, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, 
"num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771573191, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c8e7abe3-7880-4ba3-835f-2c0d7ddf01eb", "db_session_id": "CQ94N9TDRUMNQZ7VIMV0", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}} Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/db_impl/db_impl_open.cc:1432] Failed to truncate log #31: IO error: No such file or directory: While open a file for appending: db.wal/000031.log: No such file or directory Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771573191628727, "job": 1, "event": "recovery_finished"} Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/version_set.cc:5047] Creating manifest 40 Feb 20 02:39:51 localhost podman[33249]: Feb 20 02:39:51 localhost podman[33249]: 2026-02-20 07:39:51.659943209 +0000 UTC m=+0.080789817 container create cecdf1bdc165435982f7a14d38200c8e38c8efe6269ca1c4fa1de447331c13f1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_moore, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., GIT_BRANCH=main, io.openshift.tags=rhceph ceph, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, 
summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, CEPH_POINT_RELEASE=, name=rhceph, ceph=True, vcs-type=git, io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container) Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55bae8492380 Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: DB pointer 0x55bae92e5a00 Feb 20 02:39:51 localhost ceph-osd[32921]: bluestore(/var/lib/ceph/osd/ceph-5) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0 Feb 20 02:39:51 localhost ceph-osd[32921]: bluestore(/var/lib/ceph/osd/ceph-5) _upgrade_super from 4, latest 4 Feb 20 02:39:51 localhost ceph-osd[32921]: bluestore(/var/lib/ceph/osd/ceph-5) _upgrade_super done Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 20 02:39:51 localhost ceph-osd[32921]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, 
ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative 
compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55bae83ca2d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 
interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55bae83ca2d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55bae83ca2d0#2 capacity: 460.80 MB usag Feb 20 02:39:51 localhost ceph-osd[32921]: /builddir/build/BUILD/ceph-18.2.1/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs Feb 20 02:39:51 localhost ceph-osd[32921]: /builddir/build/BUILD/ceph-18.2.1/src/cls/hello/cls_hello.cc:316: loading cls_hello Feb 20 02:39:51 localhost ceph-osd[32921]: _get_class not permitted to load lua Feb 20 02:39:51 localhost ceph-osd[32921]: _get_class not permitted to load sdk Feb 20 02:39:51 localhost ceph-osd[32921]: _get_class not permitted to load test_remote_reads Feb 20 02:39:51 localhost ceph-osd[32921]: osd.5 0 crush map has features 288232575208783872, adjusting msgr requires for clients Feb 20 02:39:51 localhost ceph-osd[32921]: osd.5 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons Feb 20 02:39:51 localhost ceph-osd[32921]: osd.5 0 crush map has features 288232575208783872, adjusting msgr 
requires for osds Feb 20 02:39:51 localhost ceph-osd[32921]: osd.5 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature Feb 20 02:39:51 localhost ceph-osd[32921]: osd.5 0 load_pgs Feb 20 02:39:51 localhost ceph-osd[32921]: osd.5 0 load_pgs opened 0 pgs Feb 20 02:39:51 localhost ceph-osd[32921]: osd.5 0 log_to_monitors true Feb 20 02:39:51 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-5[32917]: 2026-02-20T07:39:51.672+0000 7fdc7d689a80 -1 osd.5 0 log_to_monitors true Feb 20 02:39:51 localhost systemd[1]: Started libpod-conmon-cecdf1bdc165435982f7a14d38200c8e38c8efe6269ca1c4fa1de447331c13f1.scope. Feb 20 02:39:51 localhost systemd[1]: Started libcrun container. Feb 20 02:39:51 localhost podman[33249]: 2026-02-20 07:39:51.63454525 +0000 UTC m=+0.055391858 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 02:39:51 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e636290b8d9bc61bd3c65c232b5040d1b9534872877b5aaa9a5fa98fd4cf0a57/merged/rootfs supports timestamps until 2038 (0x7fffffff) Feb 20 02:39:51 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e636290b8d9bc61bd3c65c232b5040d1b9534872877b5aaa9a5fa98fd4cf0a57/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Feb 20 02:39:51 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e636290b8d9bc61bd3c65c232b5040d1b9534872877b5aaa9a5fa98fd4cf0a57/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Feb 20 02:39:51 localhost podman[33249]: 2026-02-20 07:39:51.772063085 +0000 UTC m=+0.192909693 container init cecdf1bdc165435982f7a14d38200c8e38c8efe6269ca1c4fa1de447331c13f1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_moore, build-date=2026-02-09T10:25:24Z, release=1770267347, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, 
name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, RELEASE=main, io.buildah.version=1.42.2, vendor=Red Hat, Inc., distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14) Feb 20 02:39:51 localhost podman[33249]: 2026-02-20 07:39:51.781925434 +0000 UTC m=+0.202772042 container start cecdf1bdc165435982f7a14d38200c8e38c8efe6269ca1c4fa1de447331c13f1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_moore, CEPH_POINT_RELEASE=, GIT_CLEAN=True, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, maintainer=Guillaume Abrioux , io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, vcs-type=git, name=rhceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, 
io.buildah.version=1.42.2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, ceph=True, release=1770267347, io.openshift.tags=rhceph ceph) Feb 20 02:39:51 localhost podman[33249]: 2026-02-20 07:39:51.782201538 +0000 UTC m=+0.203048146 container attach cecdf1bdc165435982f7a14d38200c8e38c8efe6269ca1c4fa1de447331c13f1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_moore, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., ceph=True, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1770267347, architecture=x86_64, io.openshift.expose-services=, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2) Feb 20 02:39:51 localhost ceph-osd[31981]: osd.2 11 state: booting -> active Feb 20 02:39:52 localhost goofy_moore[33479]: { Feb 20 02:39:52 localhost goofy_moore[33479]: "8337af89-0ad5-4d35-920a-9f8a5f21920c": { Feb 20 02:39:52 localhost 
goofy_moore[33479]: "ceph_fsid": "a8557ee9-b55d-5519-942c-cf8f6172f1d8", Feb 20 02:39:52 localhost goofy_moore[33479]: "device": "/dev/mapper/ceph_vg1-ceph_lv1", Feb 20 02:39:52 localhost goofy_moore[33479]: "osd_id": 5, Feb 20 02:39:52 localhost goofy_moore[33479]: "osd_uuid": "8337af89-0ad5-4d35-920a-9f8a5f21920c", Feb 20 02:39:52 localhost goofy_moore[33479]: "type": "bluestore" Feb 20 02:39:52 localhost goofy_moore[33479]: }, Feb 20 02:39:52 localhost goofy_moore[33479]: "be635a35-706a-471d-ae03-188a9acf1be1": { Feb 20 02:39:52 localhost goofy_moore[33479]: "ceph_fsid": "a8557ee9-b55d-5519-942c-cf8f6172f1d8", Feb 20 02:39:52 localhost goofy_moore[33479]: "device": "/dev/mapper/ceph_vg0-ceph_lv0", Feb 20 02:39:52 localhost goofy_moore[33479]: "osd_id": 2, Feb 20 02:39:52 localhost goofy_moore[33479]: "osd_uuid": "be635a35-706a-471d-ae03-188a9acf1be1", Feb 20 02:39:52 localhost goofy_moore[33479]: "type": "bluestore" Feb 20 02:39:52 localhost goofy_moore[33479]: } Feb 20 02:39:52 localhost goofy_moore[33479]: } Feb 20 02:39:52 localhost systemd[1]: libpod-cecdf1bdc165435982f7a14d38200c8e38c8efe6269ca1c4fa1de447331c13f1.scope: Deactivated successfully. 
Feb 20 02:39:52 localhost podman[33249]: 2026-02-20 07:39:52.347836439 +0000 UTC m=+0.768683077 container died cecdf1bdc165435982f7a14d38200c8e38c8efe6269ca1c4fa1de447331c13f1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_moore, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, io.buildah.version=1.42.2, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, name=rhceph, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1770267347, distribution-scope=public, GIT_CLEAN=True, GIT_BRANCH=main) Feb 20 02:39:52 localhost systemd[1]: tmp-crun.a9xsbx.mount: Deactivated successfully. Feb 20 02:39:52 localhost systemd[1]: var-lib-containers-storage-overlay-e636290b8d9bc61bd3c65c232b5040d1b9534872877b5aaa9a5fa98fd4cf0a57-merged.mount: Deactivated successfully. 
Feb 20 02:39:52 localhost podman[33515]: 2026-02-20 07:39:52.443404818 +0000 UTC m=+0.082002085 container remove cecdf1bdc165435982f7a14d38200c8e38c8efe6269ca1c4fa1de447331c13f1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_moore, vcs-type=git, com.redhat.component=rhceph-container, release=1770267347, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, distribution-scope=public, version=7, io.buildah.version=1.42.2, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_CLEAN=True, ceph=True, name=rhceph) Feb 20 02:39:52 localhost systemd[1]: libpod-conmon-cecdf1bdc165435982f7a14d38200c8e38c8efe6269ca1c4fa1de447331c13f1.scope: Deactivated successfully. 
Feb 20 02:39:52 localhost sshd[33531]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:39:52 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Feb 20 02:39:52 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Feb 20 02:39:52 localhost ceph-osd[32921]: osd.5 0 done with init, starting boot process
Feb 20 02:39:52 localhost ceph-osd[32921]: osd.5 0 start_boot
Feb 20 02:39:52 localhost ceph-osd[32921]: osd.5 0 maybe_override_options_for_qos osd_max_backfills set to 1
Feb 20 02:39:52 localhost ceph-osd[32921]: osd.5 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Feb 20 02:39:52 localhost ceph-osd[32921]: osd.5 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Feb 20 02:39:52 localhost ceph-osd[32921]: osd.5 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Feb 20 02:39:52 localhost ceph-osd[32921]: osd.5 0 bench count 12288000 bsize 4 KiB
Feb 20 02:39:53 localhost ceph-osd[31981]: osd.2 13 crush map has features 288514051259236352, adjusting msgr requires for clients
Feb 20 02:39:53 localhost ceph-osd[31981]: osd.2 13 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Feb 20 02:39:53 localhost ceph-osd[31981]: osd.2 13 crush map has features 3314933000852226048, adjusting msgr requires for osds
Feb 20 02:39:54 localhost systemd[1]: tmp-crun.Oj6Ep6.mount: Deactivated successfully.
Feb 20 02:39:54 localhost podman[33646]: 2026-02-20 07:39:54.074159023 +0000 UTC m=+0.093213598 container exec 17bcd3d9a6436ea04160ed4c4e04d839e51c8678402dc4e38301f79a98b3dc30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202, distribution-scope=public, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.42.2, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, release=1770267347, ceph=True, architecture=x86_64, com.redhat.component=rhceph-container, GIT_BRANCH=main, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., io.openshift.expose-services=) Feb 20 02:39:54 localhost podman[33646]: 2026-02-20 07:39:54.178844795 +0000 UTC m=+0.197899360 container exec_died 17bcd3d9a6436ea04160ed4c4e04d839e51c8678402dc4e38301f79a98b3dc30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, 
com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, RELEASE=main, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.42.2, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, GIT_CLEAN=True, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph)
Feb 20 02:39:55 localhost ceph-osd[32921]: osd.5 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 31.699 iops: 8114.932 elapsed_sec: 0.370
Feb 20 02:39:55 localhost ceph-osd[32921]: log_channel(cluster) log [WRN] : OSD bench result of 8114.931779 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.5. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
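Annotation: the warning above says the startup bench result (8114.9 IOPS) fell outside the accepted 50–500 IOPS window, so the mclock scheduler kept the default capacity of 315 IOPS for osd.5, and it recommends measuring real capacity externally and overriding the option. A hedged sketch of that override (the 315 shown is the current default from the log; 4500 is a placeholder — substitute the value you actually measured with fio, and use `_ssd` instead of `_hdd` if the device class is SSD):

```shell
# Placeholder value — replace 4500 with the fio-measured IOPS for this OSD's device.
ceph config set osd.5 osd_mclock_max_capacity_iops_hdd 4500

# Confirm the override is in effect for osd.5.
ceph config get osd.5 osd_mclock_max_capacity_iops_hdd
```

This is a config fragment against a live cluster, so it is shown for illustration rather than as a runnable test.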
Feb 20 02:39:55 localhost ceph-osd[32921]: osd.5 0 waiting for initial osdmap
Feb 20 02:39:55 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-5[32917]: 2026-02-20T07:39:55.110+0000 7fdc79608640 -1 osd.5 0 waiting for initial osdmap
Feb 20 02:39:55 localhost ceph-osd[32921]: osd.5 14 crush map has features 288514051259236352, adjusting msgr requires for clients
Feb 20 02:39:55 localhost ceph-osd[32921]: osd.5 14 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Feb 20 02:39:55 localhost ceph-osd[32921]: osd.5 14 crush map has features 3314933000852226048, adjusting msgr requires for osds
Feb 20 02:39:55 localhost ceph-osd[32921]: osd.5 14 check_osdmap_features require_osd_release unknown -> reef
Feb 20 02:39:55 localhost ceph-osd[32921]: osd.5 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Feb 20 02:39:55 localhost ceph-osd[32921]: osd.5 14 set_numa_affinity not setting numa affinity
Feb 20 02:39:55 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-5[32917]: 2026-02-20T07:39:55.129+0000 7fdc74c32640 -1 osd.5 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Feb 20 02:39:55 localhost ceph-osd[32921]: osd.5 14 _collect_metadata loop4: no unique device id for loop4: fallback method has no model nor serial
Feb 20 02:39:55 localhost ceph-osd[32921]: osd.5 15 state: booting -> active
Feb 20 02:39:56 localhost podman[33844]:
Feb 20 02:39:56 localhost podman[33844]: 2026-02-20 07:39:56.115376234 +0000 UTC m=+0.068118653 container create d3e731d81aad809068a76830208130841c2bc7a2d379fba29c3e3877af2159f7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_bose, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, distribution-scope=public, build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, architecture=x86_64, release=1770267347,
cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, name=rhceph, ceph=True, GIT_BRANCH=main, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, io.buildah.version=1.42.2, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc.) Feb 20 02:39:56 localhost systemd[1]: Started libpod-conmon-d3e731d81aad809068a76830208130841c2bc7a2d379fba29c3e3877af2159f7.scope. Feb 20 02:39:56 localhost systemd[1]: Started libcrun container. 
Feb 20 02:39:56 localhost podman[33844]: 2026-02-20 07:39:56.185557984 +0000 UTC m=+0.138300403 container init d3e731d81aad809068a76830208130841c2bc7a2d379fba29c3e3877af2159f7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_bose, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, name=rhceph, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, version=7, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, com.redhat.component=rhceph-container, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, release=1770267347, org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, RELEASE=main, GIT_CLEAN=True) Feb 20 02:39:56 localhost podman[33844]: 2026-02-20 07:39:56.090120202 +0000 UTC m=+0.042862651 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 02:39:56 localhost podman[33844]: 2026-02-20 07:39:56.195011414 +0000 UTC m=+0.147753833 container start d3e731d81aad809068a76830208130841c2bc7a2d379fba29c3e3877af2159f7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_bose, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., 
io.buildah.version=1.42.2, release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, vcs-type=git, name=rhceph, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, version=7, GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, architecture=x86_64, ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git) Feb 20 02:39:56 localhost podman[33844]: 2026-02-20 07:39:56.195750569 +0000 UTC m=+0.148493008 container attach d3e731d81aad809068a76830208130841c2bc7a2d379fba29c3e3877af2159f7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_bose, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , io.openshift.expose-services=, CEPH_POINT_RELEASE=, name=rhceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, distribution-scope=public, description=Red Hat Ceph Storage 7, version=7, 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.42.2, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, release=1770267347, com.redhat.component=rhceph-container, architecture=x86_64, GIT_CLEAN=True, GIT_BRANCH=main, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main) Feb 20 02:39:56 localhost tender_bose[33859]: 167 167 Feb 20 02:39:56 localhost systemd[1]: libpod-d3e731d81aad809068a76830208130841c2bc7a2d379fba29c3e3877af2159f7.scope: Deactivated successfully. Feb 20 02:39:56 localhost podman[33844]: 2026-02-20 07:39:56.201944544 +0000 UTC m=+0.154686993 container died d3e731d81aad809068a76830208130841c2bc7a2d379fba29c3e3877af2159f7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_bose, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.42.2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, name=rhceph, GIT_CLEAN=True, ceph=True) Feb 20 02:39:56 localhost systemd[1]: 
var-lib-containers-storage-overlay-7e335a71f95b8672a340ef8847e873e617d648c629d4d580453c88aef0f30984-merged.mount: Deactivated successfully. Feb 20 02:39:56 localhost podman[33864]: 2026-02-20 07:39:56.29303858 +0000 UTC m=+0.080909462 container remove d3e731d81aad809068a76830208130841c2bc7a2d379fba29c3e3877af2159f7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_bose, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, architecture=x86_64, org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, version=7, release=1770267347, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, distribution-scope=public, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 20 02:39:56 localhost systemd[1]: libpod-conmon-d3e731d81aad809068a76830208130841c2bc7a2d379fba29c3e3877af2159f7.scope: Deactivated successfully. 
Feb 20 02:39:56 localhost podman[33884]: Feb 20 02:39:56 localhost podman[33884]: 2026-02-20 07:39:56.494718329 +0000 UTC m=+0.070459984 container create 0a82513d36bc8117a0120ebca688837ec42c4ed98c9b6338aa83e9b64d8f8e18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_bell, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., version=7, build-date=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, distribution-scope=public, io.buildah.version=1.42.2, vcs-type=git, ceph=True, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, RELEASE=main, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, com.redhat.component=rhceph-container, GIT_CLEAN=True) Feb 20 02:39:56 localhost systemd[1]: Started libpod-conmon-0a82513d36bc8117a0120ebca688837ec42c4ed98c9b6338aa83e9b64d8f8e18.scope. Feb 20 02:39:56 localhost systemd[1]: Started libcrun container. 
Feb 20 02:39:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bda89c822e9d1308d3c1c6eafbd92d8c5982a74a1c7483e539dadaaf8a689664/merged/rootfs supports timestamps until 2038 (0x7fffffff) Feb 20 02:39:56 localhost podman[33884]: 2026-02-20 07:39:56.467579117 +0000 UTC m=+0.043320842 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 02:39:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bda89c822e9d1308d3c1c6eafbd92d8c5982a74a1c7483e539dadaaf8a689664/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Feb 20 02:39:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bda89c822e9d1308d3c1c6eafbd92d8c5982a74a1c7483e539dadaaf8a689664/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Feb 20 02:39:56 localhost podman[33884]: 2026-02-20 07:39:56.597302251 +0000 UTC m=+0.173043906 container init 0a82513d36bc8117a0120ebca688837ec42c4ed98c9b6338aa83e9b64d8f8e18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_bell, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, build-date=2026-02-09T10:25:24Z, RELEASE=main, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, io.openshift.tags=rhceph ceph, vcs-type=git, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, distribution-scope=public, io.buildah.version=1.42.2) Feb 20 02:39:56 localhost podman[33884]: 2026-02-20 07:39:56.607642363 +0000 UTC m=+0.183384028 container start 0a82513d36bc8117a0120ebca688837ec42c4ed98c9b6338aa83e9b64d8f8e18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_bell, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., distribution-scope=public, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, ceph=True, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, version=7, vcs-type=git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, build-date=2026-02-09T10:25:24Z, release=1770267347, io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True) Feb 20 02:39:56 localhost podman[33884]: 2026-02-20 07:39:56.608963217 +0000 UTC m=+0.184704952 container attach 0a82513d36bc8117a0120ebca688837ec42c4ed98c9b6338aa83e9b64d8f8e18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_bell, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, 
build-date=2026-02-09T10:25:24Z, vcs-type=git, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, release=1770267347, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7) Feb 20 02:39:56 localhost ceph-osd[32921]: osd.5 pg_epoch: 15 pg[1.0( empty local-lis/les=0/0 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=15) [1,5,3] r=1 lpr=15 pi=[13,15)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 20 02:39:57 localhost systemd[26547]: Starting Mark boot as successful... Feb 20 02:39:57 localhost systemd[26547]: Finished Mark boot as successful. 
Feb 20 02:39:57 localhost laughing_bell[33900]: [
Feb 20 02:39:57 localhost laughing_bell[33900]: {
Feb 20 02:39:57 localhost laughing_bell[33900]: "available": false,
Feb 20 02:39:57 localhost laughing_bell[33900]: "ceph_device": false,
Feb 20 02:39:57 localhost laughing_bell[33900]: "device_id": "QEMU_DVD-ROM_QM00001",
Feb 20 02:39:57 localhost laughing_bell[33900]: "lsm_data": {},
Feb 20 02:39:57 localhost laughing_bell[33900]: "lvs": [],
Feb 20 02:39:57 localhost laughing_bell[33900]: "path": "/dev/sr0",
Feb 20 02:39:57 localhost laughing_bell[33900]: "rejected_reasons": [
Feb 20 02:39:57 localhost laughing_bell[33900]: "Has a FileSystem",
Feb 20 02:39:57 localhost laughing_bell[33900]: "Insufficient space (<5GB)"
Feb 20 02:39:57 localhost laughing_bell[33900]: ],
Feb 20 02:39:57 localhost laughing_bell[33900]: "sys_api": {
Feb 20 02:39:57 localhost laughing_bell[33900]: "actuators": null,
Feb 20 02:39:57 localhost laughing_bell[33900]: "device_nodes": "sr0",
Feb 20 02:39:57 localhost laughing_bell[33900]: "human_readable_size": "482.00 KB",
Feb 20 02:39:57 localhost laughing_bell[33900]: "id_bus": "ata",
Feb 20 02:39:57 localhost laughing_bell[33900]: "model": "QEMU DVD-ROM",
Feb 20 02:39:57 localhost laughing_bell[33900]: "nr_requests": "2",
Feb 20 02:39:57 localhost laughing_bell[33900]: "partitions": {},
Feb 20 02:39:57 localhost laughing_bell[33900]: "path": "/dev/sr0",
Feb 20 02:39:57 localhost laughing_bell[33900]: "removable": "1",
Feb 20 02:39:57 localhost laughing_bell[33900]: "rev": "2.5+",
Feb 20 02:39:57 localhost laughing_bell[33900]: "ro": "0",
Feb 20 02:39:57 localhost laughing_bell[33900]: "rotational": "1",
Feb 20 02:39:57 localhost laughing_bell[33900]: "sas_address": "",
Feb 20 02:39:57 localhost laughing_bell[33900]: "sas_device_handle": "",
Feb 20 02:39:57 localhost laughing_bell[33900]: "scheduler_mode": "mq-deadline",
Feb 20 02:39:57 localhost laughing_bell[33900]: "sectors": 0,
Feb 20 02:39:57 localhost laughing_bell[33900]:
"sectorsize": "2048",
Feb 20 02:39:57 localhost laughing_bell[33900]: "size": 493568.0,
Feb 20 02:39:57 localhost laughing_bell[33900]: "support_discard": "0",
Feb 20 02:39:57 localhost laughing_bell[33900]: "type": "disk",
Feb 20 02:39:57 localhost laughing_bell[33900]: "vendor": "QEMU"
Feb 20 02:39:57 localhost laughing_bell[33900]: }
Feb 20 02:39:57 localhost laughing_bell[33900]: }
Feb 20 02:39:57 localhost laughing_bell[33900]: ]
Feb 20 02:39:57 localhost systemd[1]: libpod-0a82513d36bc8117a0120ebca688837ec42c4ed98c9b6338aa83e9b64d8f8e18.scope: Deactivated successfully.
Feb 20 02:39:57 localhost podman[33884]: 2026-02-20 07:39:57.406941215 +0000 UTC m=+0.982682890 container died 0a82513d36bc8117a0120ebca688837ec42c4ed98c9b6338aa83e9b64d8f8e18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_bell, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_CLEAN=True, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, name=rhceph, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, RELEASE=main, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, ceph=True)
Feb 20 02:39:57 localhost
systemd[1]: var-lib-containers-storage-overlay-bda89c822e9d1308d3c1c6eafbd92d8c5982a74a1c7483e539dadaaf8a689664-merged.mount: Deactivated successfully. Feb 20 02:39:57 localhost podman[35214]: 2026-02-20 07:39:57.543873419 +0000 UTC m=+0.127612591 container remove 0a82513d36bc8117a0120ebca688837ec42c4ed98c9b6338aa83e9b64d8f8e18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_bell, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, vcs-type=git, version=7, RELEASE=main, build-date=2026-02-09T10:25:24Z, release=1770267347, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, ceph=True, GIT_BRANCH=main, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public) Feb 20 02:39:57 localhost systemd[1]: libpod-conmon-0a82513d36bc8117a0120ebca688837ec42c4ed98c9b6338aa83e9b64d8f8e18.scope: Deactivated successfully. 
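Annotation: the laughing_bell container above emitted `ceph-volume inventory`-style JSON, in which each device carries an `available` flag and a `rejected_reasons` list (here `/dev/sr0` is rejected for having a filesystem and being under 5 GB). A minimal sketch of splitting such an inventory into usable and rejected devices (the sample entry is trimmed to the keys used; variable names are illustrative):

```python
import json

# Inventory entry mirroring the one logged above, trimmed to the relevant keys.
inventory = json.loads("""
[
  {
    "available": false,
    "path": "/dev/sr0",
    "rejected_reasons": ["Has a FileSystem", "Insufficient space (<5GB)"]
  }
]
""")

# Paths of devices ceph-volume would accept for a new OSD.
usable = [d["path"] for d in inventory if d["available"]]
# Rejected devices, keyed by path, with the reasons ceph-volume reported.
rejected = {d["path"]: d["rejected_reasons"] for d in inventory if not d["available"]}
print(usable)
print(rejected)
```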
Feb 20 02:40:00 localhost sshd[35243]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:40:06 localhost podman[35344]: 2026-02-20 07:40:06.373642763 +0000 UTC m=+0.076899879 container exec 17bcd3d9a6436ea04160ed4c4e04d839e51c8678402dc4e38301f79a98b3dc30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=Guillaume Abrioux , version=7, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, RELEASE=main, release=1770267347, io.buildah.version=1.42.2, vcs-type=git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.created=2026-02-09T10:25:24Z, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git) Feb 20 02:40:06 localhost podman[35344]: 2026-02-20 07:40:06.477789853 +0000 UTC m=+0.181046919 container exec_died 17bcd3d9a6436ea04160ed4c4e04d839e51c8678402dc4e38301f79a98b3dc30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, 
com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, release=1770267347, build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , io.openshift.expose-services=, io.buildah.version=1.42.2, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True) Feb 20 02:40:12 localhost sshd[35424]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:40:51 localhost sshd[35426]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:41:04 localhost sshd[35428]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:41:08 localhost podman[35527]: 2026-02-20 07:41:08.257241696 +0000 UTC m=+0.081155845 container exec 17bcd3d9a6436ea04160ed4c4e04d839e51c8678402dc4e38301f79a98b3dc30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, vcs-type=git, build-date=2026-02-09T10:25:24Z, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, 
distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.buildah.version=1.42.2, io.openshift.expose-services=, ceph=True, architecture=x86_64, version=7, release=1770267347, io.k8s.description=Red Hat Ceph Storage 7) Feb 20 02:41:08 localhost podman[35527]: 2026-02-20 07:41:08.385833648 +0000 UTC m=+0.209747747 container exec_died 17bcd3d9a6436ea04160ed4c4e04d839e51c8678402dc4e38301f79a98b3dc30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, release=1770267347, CEPH_POINT_RELEASE=, io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, version=7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, build-date=2026-02-09T10:25:24Z, io.openshift.expose-services=, com.redhat.component=rhceph-container, vcs-type=git, description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, 
GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 02:41:09 localhost sshd[35636]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:41:17 localhost sshd[35669]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:41:18 localhost systemd[1]: session-13.scope: Deactivated successfully.
Feb 20 02:41:18 localhost systemd[1]: session-13.scope: Consumed 21.515s CPU time.
Feb 20 02:41:18 localhost systemd-logind[760]: Session 13 logged out. Waiting for processes to exit.
Feb 20 02:41:18 localhost systemd-logind[760]: Removed session 13.
Feb 20 02:41:23 localhost sshd[35671]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:41:25 localhost sshd[35673]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:41:38 localhost sshd[35675]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:42:04 localhost sshd[35677]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:42:12 localhost sshd[35756]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:42:30 localhost sshd[35758]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:43:03 localhost sshd[35762]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:43:09 localhost sshd[35764]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:43:22 localhost systemd[26547]: Created slice User Background Tasks Slice.
Feb 20 02:43:22 localhost systemd[26547]: Starting Cleanup of User's Temporary Files and Directories...
Feb 20 02:43:22 localhost systemd[26547]: Finished Cleanup of User's Temporary Files and Directories.
Feb 20 02:43:23 localhost sshd[35843]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:43:36 localhost sshd[35845]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:43:44 localhost sshd[35847]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:43:47 localhost sshd[35849]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:44:10 localhost sshd[35851]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:44:24 localhost sshd[35930]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:44:37 localhost sshd[35932]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:44:45 localhost sshd[35934]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:44:49 localhost sshd[35936]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:44:49 localhost systemd-logind[760]: New session 27 of user zuul.
Feb 20 02:44:49 localhost systemd[1]: Started Session 27 of User zuul.
Feb 20 02:44:50 localhost python3[35984]: ansible-ansible.legacy.ping Invoked with data=pong
Feb 20 02:44:51 localhost python3[36029]: ansible-setup Invoked with gather_subset=['!facter', '!ohai'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 02:44:51 localhost python3[36049]: ansible-user Invoked with name=tripleo-admin generate_ssh_key=False state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005625202.localdomain update_password=always uid=None group=None groups=None comment=None home=None shell=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Feb 20 02:44:52 localhost python3[36105]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/tripleo-admin follow=False get_checksum=True
checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 02:44:52 localhost python3[36148]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/tripleo-admin mode=288 owner=root group=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771573491.7693934-66443-132209530030321/source _original_basename=tmpp0nlk9pr follow=False checksum=b3e7ecdcc699d217c6b083a91b07208207813d93 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:44:52 localhost python3[36178]: ansible-file Invoked with path=/home/tripleo-admin state=directory owner=tripleo-admin group=tripleo-admin mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:44:53 localhost python3[36194]: ansible-file Invoked with path=/home/tripleo-admin/.ssh state=directory owner=tripleo-admin group=tripleo-admin mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:44:53 localhost python3[36210]: ansible-file Invoked with path=/home/tripleo-admin/.ssh/authorized_keys state=touch owner=tripleo-admin group=tripleo-admin mode=384 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:44:54 localhost python3[36226]: 
ansible-lineinfile Invoked with path=/home/tripleo-admin/.ssh/authorized_keys line=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDMsWl2iDqidUqzeXIedhPUnMaIwstb6f7CXhkhrGMXj2w21P81bPYesLJBAmIee3BaCMGYsrUpWu7u92b6J2nM7sdG3Lo3YZqie0IUZG/NpfJ+OEJsLIQ5zFT7jGNI77bmiCBX4jibhRJuDVtuombA5bly/q1x7yqRZ5UuwmTahAz49MPmun69R00RpbzOjt8OgQQ6ZdjDclItely6T7y+tDL3UemjBYcKGNckzXB1nWJEjviTyIXiLLqJdcxWarPXzxewMUoyRiTPQU6uRQ1VhuBRFyiofrACRWtFHfLEuWMVqQAL1OtJD5P+KSuVKHkT/MAFSdo0OptGxMmFN5ZvbruOyab6MovPK/9sTaFrhzT6tE78w+fEIFe4kPvM2bw80sKzO9yrkvqs1LL9ToVf9r2TufxRZVCibTFSa9Cw0U9yILFdCrlCZ2GMFNTCFuM7vQDAKwrKndvATAxKrm+D8Xme2+SVCv3oKJtx0m4D6gc9/IMzFhC7Ibh6bJaSj2s= zuul-build-sshkey#012 regexp=Generated by TripleO state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:44:54 localhost sshd[36227]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:44:55 localhost python3[36242]: ansible-ping Invoked with data=pong Feb 20 02:44:57 localhost sshd[36243]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:45:06 localhost sshd[36245]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:45:06 localhost systemd-logind[760]: New session 28 of user tripleo-admin. Feb 20 02:45:06 localhost systemd[1]: Created slice User Slice of UID 1003. Feb 20 02:45:06 localhost systemd[1]: Starting User Runtime Directory /run/user/1003... Feb 20 02:45:06 localhost systemd[1]: Finished User Runtime Directory /run/user/1003. Feb 20 02:45:06 localhost systemd[1]: Starting User Manager for UID 1003... Feb 20 02:45:06 localhost systemd[36249]: Queued start job for default target Main User Target. Feb 20 02:45:06 localhost systemd[36249]: Created slice User Application Slice. Feb 20 02:45:06 localhost systemd[36249]: Started Mark boot as successful after the user session has run 2 minutes. 
Feb 20 02:45:06 localhost systemd[36249]: Started Daily Cleanup of User's Temporary Directories.
Feb 20 02:45:06 localhost systemd[36249]: Reached target Paths.
Feb 20 02:45:06 localhost systemd[36249]: Reached target Timers.
Feb 20 02:45:06 localhost systemd[36249]: Starting D-Bus User Message Bus Socket...
Feb 20 02:45:06 localhost systemd[36249]: Starting Create User's Volatile Files and Directories...
Feb 20 02:45:06 localhost systemd[36249]: Finished Create User's Volatile Files and Directories.
Feb 20 02:45:06 localhost systemd[36249]: Listening on D-Bus User Message Bus Socket.
Feb 20 02:45:06 localhost systemd[36249]: Reached target Sockets.
Feb 20 02:45:06 localhost systemd[36249]: Reached target Basic System.
Feb 20 02:45:06 localhost systemd[36249]: Reached target Main User Target.
Feb 20 02:45:06 localhost systemd[36249]: Startup finished in 126ms.
Feb 20 02:45:06 localhost systemd[1]: Started User Manager for UID 1003.
Feb 20 02:45:06 localhost systemd[1]: Started Session 28 of User tripleo-admin.
Feb 20 02:45:07 localhost python3[36311]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all', 'min'] gather_timeout=45 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 02:45:12 localhost python3[36331]: ansible-selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config
Feb 20 02:45:12 localhost python3[36347]: ansible-tempfile Invoked with state=file suffix=tmphosts prefix=ansible.
path=None Feb 20 02:45:13 localhost python3[36395]: ansible-ansible.legacy.copy Invoked with remote_src=True src=/etc/hosts dest=/tmp/ansible.92jt54i2tmphosts mode=preserve backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:45:13 localhost python3[36425]: ansible-blockinfile Invoked with state=absent path=/tmp/ansible.92jt54i2tmphosts block= marker=# {mark} marker_begin=HEAT_HOSTS_START - Do not edit manually within this section! marker_end=HEAT_HOSTS_END create=False backup=False unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:45:14 localhost python3[36471]: ansible-blockinfile Invoked with create=True path=/tmp/ansible.92jt54i2tmphosts insertbefore=BOF block=172.17.0.106 np0005625202.localdomain np0005625202#012172.18.0.106 np0005625202.storage.localdomain np0005625202.storage#012172.20.0.106 np0005625202.storagemgmt.localdomain np0005625202.storagemgmt#012172.17.0.106 np0005625202.internalapi.localdomain np0005625202.internalapi#012172.19.0.106 np0005625202.tenant.localdomain np0005625202.tenant#012192.168.122.106 np0005625202.ctlplane.localdomain np0005625202.ctlplane#012172.17.0.107 np0005625203.localdomain np0005625203#012172.18.0.107 np0005625203.storage.localdomain np0005625203.storage#012172.20.0.107 np0005625203.storagemgmt.localdomain np0005625203.storagemgmt#012172.17.0.107 np0005625203.internalapi.localdomain np0005625203.internalapi#012172.19.0.107 np0005625203.tenant.localdomain np0005625203.tenant#012192.168.122.107 np0005625203.ctlplane.localdomain np0005625203.ctlplane#012172.17.0.108 np0005625204.localdomain np0005625204#012172.18.0.108 np0005625204.storage.localdomain 
np0005625204.storage#012172.20.0.108 np0005625204.storagemgmt.localdomain np0005625204.storagemgmt#012172.17.0.108 np0005625204.internalapi.localdomain np0005625204.internalapi#012172.19.0.108 np0005625204.tenant.localdomain np0005625204.tenant#012192.168.122.108 np0005625204.ctlplane.localdomain np0005625204.ctlplane#012172.17.0.103 np0005625199.localdomain np0005625199#012172.18.0.103 np0005625199.storage.localdomain np0005625199.storage#012172.20.0.103 np0005625199.storagemgmt.localdomain np0005625199.storagemgmt#012172.17.0.103 np0005625199.internalapi.localdomain np0005625199.internalapi#012172.19.0.103 np0005625199.tenant.localdomain np0005625199.tenant#012192.168.122.103 np0005625199.ctlplane.localdomain np0005625199.ctlplane#012172.17.0.104 np0005625200.localdomain np0005625200#012172.18.0.104 np0005625200.storage.localdomain np0005625200.storage#012172.20.0.104 np0005625200.storagemgmt.localdomain np0005625200.storagemgmt#012172.17.0.104 np0005625200.internalapi.localdomain np0005625200.internalapi#012172.19.0.104 np0005625200.tenant.localdomain np0005625200.tenant#012192.168.122.104 np0005625200.ctlplane.localdomain np0005625200.ctlplane#012172.17.0.105 np0005625201.localdomain np0005625201#012172.18.0.105 np0005625201.storage.localdomain np0005625201.storage#012172.20.0.105 np0005625201.storagemgmt.localdomain np0005625201.storagemgmt#012172.17.0.105 np0005625201.internalapi.localdomain np0005625201.internalapi#012172.19.0.105 np0005625201.tenant.localdomain np0005625201.tenant#012192.168.122.105 np0005625201.ctlplane.localdomain np0005625201.ctlplane#012#012192.168.122.100 undercloud.ctlplane.localdomain undercloud.ctlplane#012192.168.122.99 overcloud.ctlplane.localdomain#012172.18.0.217 overcloud.storage.localdomain#012172.20.0.250 overcloud.storagemgmt.localdomain#012172.17.0.130 overcloud.internalapi.localdomain#012172.21.0.142 overcloud.localdomain#012 marker=# {mark} marker_begin=START_HOST_ENTRIES_FOR_STACK: overcloud 
marker_end=END_HOST_ENTRIES_FOR_STACK: overcloud state=present backup=False unsafe_writes=False insertafter=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:45:15 localhost python3[36520]: ansible-ansible.legacy.command Invoked with _raw_params=cp "/tmp/ansible.92jt54i2tmphosts" "/etc/hosts" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 02:45:15 localhost python3[36543]: ansible-file Invoked with path=/tmp/ansible.92jt54i2tmphosts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:45:16 localhost python3[36568]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides rhosp-release _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 02:45:17 localhost python3[36585]: ansible-ansible.legacy.dnf Invoked with name=['rhosp-release'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Feb 20 02:45:19 localhost sshd[36587]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:45:21 localhost python3[36606]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q 
--whatprovides driverctl lvm2 jq nftables openvswitch openstack-heat-agents openstack-selinux os-net-config python3-libselinux python3-pyyaml puppet-tripleo rsync tmpwatch sysstat iproute-tc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 02:45:22 localhost python3[36623]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'jq', 'nftables', 'openvswitch', 'openstack-heat-agents', 'openstack-selinux', 'os-net-config', 'python3-libselinux', 'python3-pyyaml', 'puppet-tripleo', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 20 02:45:26 localhost sshd[36628]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:45:48 localhost sshd[38177]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:46:07 localhost sshd[38271]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:46:11 localhost sshd[38294]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:46:31 localhost kernel: SELinux: Converting 2700 SID table entries...
Feb 20 02:46:31 localhost kernel: SELinux: policy capability network_peer_controls=1
Feb 20 02:46:31 localhost kernel: SELinux: policy capability open_perms=1
Feb 20 02:46:31 localhost kernel: SELinux: policy capability extended_socket_class=1
Feb 20 02:46:31 localhost kernel: SELinux: policy capability always_check_network=0
Feb 20 02:46:31 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Feb 20 02:46:31 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 20 02:46:31 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Feb 20 02:46:31 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=6 res=1
Feb 20 02:46:32 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 20 02:46:32 localhost systemd[1]: Starting man-db-cache-update.service...
Feb 20 02:46:32 localhost systemd[1]: Reloading.
Feb 20 02:46:32 localhost systemd-rc-local-generator[38544]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 02:46:32 localhost systemd-sysv-generator[38553]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 02:46:32 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 02:46:32 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Feb 20 02:46:32 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 20 02:46:32 localhost systemd[1]: Finished man-db-cache-update.service.
Feb 20 02:46:32 localhost systemd[1]: run-ra666dfc31da440239821c6dec749eef7.service: Deactivated successfully.
Feb 20 02:46:33 localhost python3[38993]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 jq nftables openvswitch openstack-heat-agents openstack-selinux os-net-config python3-libselinux python3-pyyaml puppet-tripleo rsync tmpwatch sysstat iproute-tc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 02:46:35 localhost python3[39132]: ansible-ansible.legacy.systemd Invoked with name=openvswitch enabled=True state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 02:46:35 localhost systemd[1]: Reloading.
Feb 20 02:46:35 localhost systemd-rc-local-generator[39157]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 02:46:35 localhost systemd-sysv-generator[39161]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 02:46:35 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 02:46:37 localhost python3[39186]: ansible-file Invoked with path=/var/lib/heat-config/tripleo-config-download state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 02:46:37 localhost python3[39202]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides openstack-network-scripts _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 02:46:38 localhost python3[39219]: ansible-systemd Invoked with name=NetworkManager enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Feb 20 02:46:38 localhost python3[39237]: ansible-ini_file Invoked with path=/etc/NetworkManager/NetworkManager.conf state=present no_extra_spaces=True section=main option=dns value=none backup=True exclusive=True allow_no_value=False create=True unsafe_writes=False values=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 02:46:39 localhost python3[39255]: ansible-ini_file Invoked with path=/etc/NetworkManager/NetworkManager.conf state=present no_extra_spaces=True section=main option=rc-manager value=unmanaged backup=True exclusive=True allow_no_value=False create=True unsafe_writes=False values=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 02:46:39 localhost python3[39273]: ansible-ansible.legacy.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 20 02:46:39 localhost systemd[1]: Reloading Network Manager...
Feb 20 02:46:39 localhost NetworkManager[5967]: [1771573599.6977] audit: op="reload" arg="0" pid=39276 uid=0 result="success"
Feb 20 02:46:39 localhost NetworkManager[5967]: [1771573599.6986] config: signal: SIGHUP,config-files,values,values-user,no-auto-default,dns-mode,rc-manager (/etc/NetworkManager/NetworkManager.conf (lib: 00-server.conf) (run: 15-carrier-timeout.conf))
Feb 20 02:46:39 localhost NetworkManager[5967]: [1771573599.6987] dns-mgr: init: dns=none,systemd-resolved rc-manager=unmanaged
Feb 20 02:46:39 localhost systemd[1]: Reloaded Network Manager.
Feb 20 02:46:40 localhost python3[39292]: ansible-ansible.legacy.command Invoked with _raw_params=ln -f -s /usr/share/openstack-puppet/modules/* /etc/puppet/modules/ _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 02:46:40 localhost python3[39309]: ansible-stat Invoked with path=/usr/bin/ansible-playbook follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 20 02:46:41 localhost python3[39327]: ansible-stat Invoked with path=/usr/bin/ansible-playbook-3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 20 02:46:41 localhost python3[39343]: ansible-file Invoked with state=link src=/usr/bin/ansible-playbook path=/usr/bin/ansible-playbook-3 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 02:46:42 localhost python3[39359]: ansible-tempfile Invoked with state=file prefix=ansible.
suffix= path=None Feb 20 02:46:42 localhost python3[39375]: ansible-stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 20 02:46:43 localhost python3[39391]: ansible-blockinfile Invoked with path=/tmp/ansible.7d9_ahai block=[192.168.122.106]*,[np0005625202.ctlplane.localdomain]*,[172.17.0.106]*,[np0005625202.internalapi.localdomain]*,[172.18.0.106]*,[np0005625202.storage.localdomain]*,[172.20.0.106]*,[np0005625202.storagemgmt.localdomain]*,[172.19.0.106]*,[np0005625202.tenant.localdomain]*,[np0005625202.localdomain]*,[np0005625202]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDDr8sejencX7nSCX6AegGtTuiZL3yclu/L7ZVN4B6dKPdmHqVr33QJD40sEk28GHpx8BrkPU2Qj1de9H6mGtrlwhmJr7Pccg/YqzKoTCQD5rZQ4youU8H70As6YX5ZlXyulwI1SH70XjMm37x4ptKALFOjRnHg0WIXah/tAmzrY/orh+/eCcns7APVjN9B1o+MqP4r47WrWrGU/KxtsHc6dflWxZW7BWUCCNS0e3C4yWLRjy8Hhj7Qkpssv/UBcj+olVHadUUOYiaQZ5Y33MjxwIg8o1MuC7C1dNIn8eXOXXiA8jd/lJd9kImrCGUtkVqj8VQgsMh4vRYMD+0SNLYRDVwxdemOzJYgwQhgiWZ0G+cVhnTBpMmXyIws2OpOKU8R3HjTC3jz+BxvjwEvMDoQfpGgsHB9NCXnkQzs2F8EA8LpA823Ef1SMgPdDCaQzvN5oQPZkWAPMVHvq31xpN9q+KXg/bg0uDaIZXUxW2rGnem7pFS78rRUGL6MfSMn1zs=#012[192.168.122.107]*,[np0005625203.ctlplane.localdomain]*,[172.17.0.107]*,[np0005625203.internalapi.localdomain]*,[172.18.0.107]*,[np0005625203.storage.localdomain]*,[172.20.0.107]*,[np0005625203.storagemgmt.localdomain]*,[172.19.0.107]*,[np0005625203.tenant.localdomain]*,[np0005625203.localdomain]*,[np0005625203]* ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCtf1NXQ3EGQGdpLLLxuODKBdTGwqsiHL2QZ6zcfpGAa7EhDIxuEcLboqOGjQO0FM3u+kl2gIgKF0UsY5Vjcv4mDCMp7A7srq7TVo5lE5cCppbbXr0/PH2L/naHU3W+W83aT5RE17XPJ0Acn3W51WFBoICCCc4jjWTGmkNEgurKBJmdr0n8NeIcUWZ7Abrs/N2xzNftEFIjAPwebxgEwgCx0hMbdjTFhKbB/V7CjKaCU/UjirWMW5aDQJQEfrCM9u4NHuGaWKzJgar4/shNHaRvkCDbVrRPTCyfNebE04J/R42X3yWmvww4TMZVpRROd/u6Pgg1P2tbPGfQ0XvS0rfY6W4/VnHcyRDqxILH5BoeCAbTuVFmR0hbQu9fNbNxTP+o+na9mHEbNxbhcREnkal8+M0l11YftCRkr4132JITxe7y93gN/dwxE3nJLHLXRuRskWc3GTDT2MVU2Sj64yizD9KOM3oiMBXdPbNbgZywu3hqQvpO00GVg6QRjEJoiFc=#012[192.168.122.108]*,[np0005625204.ctlplane.localdomain]*,[172.17.0.108]*,[np0005625204.internalapi.localdomain]*,[172.18.0.108]*,[np0005625204.storage.localdomain]*,[172.20.0.108]*,[np0005625204.storagemgmt.localdomain]*,[172.19.0.108]*,[np0005625204.tenant.localdomain]*,[np0005625204.localdomain]*,[np0005625204]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDAo6exxFtNk/Y5qEGYenJyhnCsS7iZmCGsFaQtJElNSeTTX9a1P0P2EmjtHolRxnljCZ2X8HgWx/irhJvWLoS+dzF5l+KcyQy83+048h51mbnj7zV2uG9i8LkO0egs1uBBp5E+hauHMsuf0nIDFl45W86ZXuf+MfFEKCInhjB5gfE9tTjwmKwKhgO1DE7Vpx3OYy1FHkq0YDBCqQHuuhYPrLZPjfVv3vGOaHH/XCsxX3h8/ixsZbobD56dDBKF/8CFyC/guH8pNUhZHG0dEhz5BT8PcE2Q/M9pPttzmRQksfg9+q7lVy9eCoOVpzqfTgjE1cm5yISwuMZzaNxwjJKB54EWpfl5xxnkC14B+xdvowxpl1PcMNZ0q1fWofJF4TrJAwWCUYZf45aUV2yb5R8WavUT0pX32xmd4zFbXusoafiw2FcgnxoGz3N4ZgIxTPPmgUe13blr1SK44huXWPioaolFBo82xVVFHc+01vfLF3xvs86d6EpqpLH+yaCeUjE=#012[192.168.122.103]*,[np0005625199.ctlplane.localdomain]*,[172.17.0.103]*,[np0005625199.internalapi.localdomain]*,[172.18.0.103]*,[np0005625199.storage.localdomain]*,[172.20.0.103]*,[np0005625199.storagemgmt.localdomain]*,[172.19.0.103]*,[np0005625199.tenant.localdomain]*,[np0005625199.localdomain]*,[np0005625199]* ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDrnsozeOPJKYg9sx2Tj6QOLRhujK5RVh5RZQ3sb0pk+DbWHQKqS1YvJUg2hV4WxbxPnNUCBtJ+RZ8lVm6RLM+hc3ffe2sOMOz5upO/hTlIpBSfJpQORkiNW+XIXdDVxgE418veFd2hASFmiCmKoFSKXsvnmFU9oTEpja1plcXSqCobFMVYKlhcRo66O0ySlGOR+o3Ar2yNJQjFErEGvZLoDEa/VlA6zreYmTaIsnlUDie0gbm5teTlsCcEYkvWcTzcfOEX2kXQRQbS5qlPtGg7c+KMv5e40rE+2QOigLmOOPVGwNYuLuhb/EHT0C8hK8otW4tiXxBlSZ5ONKY6YYQOpy7krNkWRxNXzK0LfXo2bt2apDaMzebPOvuBj1YyBiLpa6/aLvS/dtGolQNPDpFivPbP/mSpat1qTs0W3/2HyBovwWSGJDW8MMYxbZJ0Z6tnuOwdrPTdkhIibfW9wxgL7EHrDYrGx5CvA2vUM4KDKRntz/cCMGE/zKacSJ48nNk=#012[192.168.122.104]*,[np0005625200.ctlplane.localdomain]*,[172.17.0.104]*,[np0005625200.internalapi.localdomain]*,[172.18.0.104]*,[np0005625200.storage.localdomain]*,[172.20.0.104]*,[np0005625200.storagemgmt.localdomain]*,[172.19.0.104]*,[np0005625200.tenant.localdomain]*,[np0005625200.localdomain]*,[np0005625200]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDW88346W6zU6nxCpqapHtIr5nRG8Jn9LFit3r5klBfauCkmAGONb4X8IwKjo8MD9etebUVbo6aX9gBMBMSs7bSoHzsEQuMLpBDrweSbahQj+gqZ5TmQ/xvwbhws04z3/IJxapAk2xWu7khVGjvOPUE1CROkP+1LiGktQ6Xj1ar1TbLNud2Dq/R5ZalbpK0OT3+no3x0oAJT3W649tW4nmCWcNaxykPsLREsUlH2qVoceAzLEDCSde9/1TONc/URyB4acVqmEwJDHeX51bh31tpQwp/WSe0vKQ6eUw63Tmpn+dRI9xbnFhc6mgGAPcEw7cAUkM7oM6bYMSvVxYDmzMhuXUU/9i3mdMnDBkMyZ5Oed6ZSmFQIJe5k7cz3783d35ZXfl/HsYMqoZ3lmDgbeS59pQrI+BldKyv3sTnoCDahfcmzmiHssxqa7tT5KOuR444q7Nj6wJEIZMEEJEHtMlh1iSBRJZOEOaKjo7h+jV7KMe75aPRasvu9K1v0dqyG6U=#012[192.168.122.105]*,[np0005625201.ctlplane.localdomain]*,[172.17.0.105]*,[np0005625201.internalapi.localdomain]*,[172.18.0.105]*,[np0005625201.storage.localdomain]*,[172.20.0.105]*,[np0005625201.storagemgmt.localdomain]*,[172.19.0.105]*,[np0005625201.tenant.localdomain]*,[np0005625201.localdomain]*,[np0005625201]* ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCyGkX26ECIsvqnvJegedSF6KicDAAqjaifawEd//OuK9zdHIWqO3XmlEszZqWPsdQhPFkelfzXR+sy3gbPNv+yjT7phsw1sq7zHXeogQFlP5iOQZrf6hCnfXxVk2ckIXMT0UJVZ8FCTwsQi+HKkR/IEj08pR7EjrXGWxHkjv5wNj76spF3FJxtwycS4+KzY3UFy7gYWVn2jB0ha966YgjHMPhzQnT33W9myxGH33M1L5ZCGlfH19hLnqTUNMfzIfw3afxHkL5BFZbhthUPmIfLdLtKmZEkpSTBO/CrNA6CmMfY6xnT78hmwXytEQ+jeiRdKXdr9xQ2j6wVmPzckFKBsBYRe4DprKGt93fnKS9Z6A3Sv626DyZgDa8/NXbtAaBxtyix5Vdt872hYvCzYyB/OuSV6PR5DOq8z3fquOwgtka3rA6qL5gxhFJcO5TqtBM76DzOLd9OLM9bIO1yK9sCmbYynMojkXylzhDfcI8kytS5xs9FJEfwTElZRHkEIQE=#012 create=True state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:46:43 localhost python3[39407]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.7d9_ahai' > /etc/ssh/ssh_known_hosts _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 02:46:44 localhost python3[39425]: ansible-file Invoked with path=/tmp/ansible.7d9_ahai state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:46:45 localhost python3[39441]: ansible-file Invoked with path=/var/log/journal state=directory mode=0750 owner=root group=root setype=var_log_t recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 20 02:46:45 localhost 
sshd[39442]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:46:45 localhost python3[39459]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active cloud-init.service || systemctl is-enabled cloud-init.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 02:46:46 localhost sshd[39462]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:46:46 localhost sshd[39464]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:46:46 localhost python3[39480]: ansible-ansible.legacy.command Invoked with _raw_params=cat /proc/cmdline | grep -q cloud-init=disabled _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 02:46:47 localhost python3[39500]: ansible-community.general.cloud_init_data_facts Invoked with filter=status Feb 20 02:46:49 localhost python3[39637]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides tuned tuned-profiles-cpu-partitioning _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 02:46:49 localhost python3[39654]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Feb 20 02:46:52 localhost dbus-broker-launch[751]: Noticed file-system modification, trigger reload. 
Feb 20 02:46:52 localhost dbus-broker-launch[751]: Noticed file-system modification, trigger reload. Feb 20 02:46:52 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Feb 20 02:46:52 localhost systemd[1]: Starting man-db-cache-update.service... Feb 20 02:46:52 localhost systemd[1]: Reloading. Feb 20 02:46:53 localhost systemd-rc-local-generator[39714]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 02:46:53 localhost systemd-sysv-generator[39719]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 02:46:53 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 02:46:53 localhost systemd[1]: Queuing reload/restart jobs for marked units… Feb 20 02:46:53 localhost systemd[1]: Stopping Dynamic System Tuning Daemon... Feb 20 02:46:53 localhost systemd[1]: tuned.service: Deactivated successfully. Feb 20 02:46:53 localhost systemd[1]: Stopped Dynamic System Tuning Daemon. Feb 20 02:46:53 localhost systemd[1]: tuned.service: Consumed 1.673s CPU time. Feb 20 02:46:53 localhost systemd[1]: Starting Dynamic System Tuning Daemon... Feb 20 02:46:53 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Feb 20 02:46:53 localhost systemd[1]: Finished man-db-cache-update.service. Feb 20 02:46:53 localhost systemd[1]: run-r57f936abe70e417f9f16c0998e3bb160.service: Deactivated successfully. Feb 20 02:46:54 localhost systemd[1]: Started Dynamic System Tuning Daemon. Feb 20 02:46:54 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Feb 20 02:46:54 localhost systemd[1]: Starting man-db-cache-update.service... 
Feb 20 02:46:54 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Feb 20 02:46:54 localhost systemd[1]: Finished man-db-cache-update.service. Feb 20 02:46:54 localhost systemd[1]: run-re41bf73d9b6d4ef790321573b3d096b4.service: Deactivated successfully. Feb 20 02:46:55 localhost python3[40091]: ansible-systemd Invoked with name=tuned state=restarted enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 02:46:55 localhost systemd[1]: Stopping Dynamic System Tuning Daemon... Feb 20 02:46:56 localhost systemd[1]: tuned.service: Deactivated successfully. Feb 20 02:46:56 localhost systemd[1]: Stopped Dynamic System Tuning Daemon. Feb 20 02:46:56 localhost systemd[1]: Starting Dynamic System Tuning Daemon... Feb 20 02:46:57 localhost systemd[1]: Started Dynamic System Tuning Daemon. Feb 20 02:46:57 localhost python3[40286]: ansible-ansible.legacy.command Invoked with _raw_params=which tuned-adm _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 02:46:58 localhost python3[40303]: ansible-slurp Invoked with src=/etc/tuned/active_profile Feb 20 02:46:58 localhost python3[40319]: ansible-stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 20 02:46:59 localhost python3[40335]: ansible-ansible.legacy.command Invoked with _raw_params=tuned-adm profile throughput-performance _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 02:47:00 localhost python3[40355]: ansible-ansible.legacy.command Invoked with _raw_params=cat /proc/cmdline _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 02:47:01 
localhost python3[40372]: ansible-stat Invoked with path=/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 20 02:47:03 localhost python3[40388]: ansible-replace Invoked with regexp=TRIPLEO_HEAT_TEMPLATE_KERNEL_ARGS dest=/etc/default/grub replace= path=/etc/default/grub backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:47:09 localhost python3[40404]: ansible-file Invoked with path=/etc/puppet/hieradata state=directory mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:47:09 localhost python3[40452]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hiera.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 02:47:10 localhost python3[40497]: ansible-ansible.legacy.copy Invoked with mode=384 dest=/etc/puppet/hiera.yaml src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573629.4901607-70977-224237717496757/source _original_basename=tmp1awqb54f follow=False checksum=aaf3699defba931d532f4955ae152f505046749a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:47:10 localhost python3[40527]: ansible-file Invoked with src=/etc/puppet/hiera.yaml dest=/etc/hiera.yaml state=link force=True path=/etc/hiera.yaml recurse=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:47:11 localhost python3[40575]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/all_nodes.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 02:47:11 localhost python3[40618]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573631.0805807-71083-206411477995713/source dest=/etc/puppet/hieradata/all_nodes.json _original_basename=overcloud.json follow=False checksum=5387ef5e5a4b3d23a203db65b8a130e906dc0536 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:47:12 localhost python3[40680]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/bootstrap_node.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 02:47:12 localhost python3[40723]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573631.9848113-71143-258192569555024/source dest=/etc/puppet/hieradata/bootstrap_node.json mode=None follow=False _original_basename=bootstrap_node.j2 checksum=b3e2a3c34ad78c32d8298bcfb96fa0bd48de4c29 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:47:13 localhost python3[40785]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/vip_data.json follow=False get_checksum=True checksum_algorithm=sha1 
get_md5=False get_mime=True get_attributes=True Feb 20 02:47:13 localhost python3[40828]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573632.8716998-71143-242155769859724/source dest=/etc/puppet/hieradata/vip_data.json mode=None follow=False _original_basename=vip_data.j2 checksum=9360c8b01c30dc9677a403a9f11e562b9309fb54 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:47:14 localhost python3[40890]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/net_ip_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 02:47:14 localhost python3[40933]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573633.7777505-71143-27424634562869/source dest=/etc/puppet/hieradata/net_ip_map.json mode=None follow=False _original_basename=net_ip_map.j2 checksum=68b5a56a66cb10764ef3288009ad5e9b7e8faf12 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:47:15 localhost python3[40995]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/cloud_domain.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 02:47:15 localhost python3[41038]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573634.7955718-71143-158797200779776/source dest=/etc/puppet/hieradata/cloud_domain.json mode=None follow=False _original_basename=cloud_domain.j2 checksum=5dd835a63e6a03d74797c2e2eadf4bea1cecd9d9 backup=False force=True unsafe_writes=False 
content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:47:15 localhost python3[41100]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/fqdn.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 02:47:16 localhost python3[41143]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573635.6245496-71143-240966908447842/source dest=/etc/puppet/hieradata/fqdn.json mode=None follow=False _original_basename=fqdn.j2 checksum=231cec7f0a750f648786c92b5d135ad78624eb14 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:47:16 localhost python3[41205]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_names.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 02:47:17 localhost python3[41248]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573636.5061026-71143-266904874029247/source dest=/etc/puppet/hieradata/service_names.json mode=None follow=False _original_basename=service_names.j2 checksum=ff586b96402d8ae133745cf06f17e772b2f22d52 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:47:17 localhost python3[41310]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_configs.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 02:47:17 
localhost python3[41383]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573637.341571-71143-187074707240493/source dest=/etc/puppet/hieradata/service_configs.json mode=None follow=False _original_basename=service_configs.j2 checksum=105f529004e67673ca4edd886c338642e88dedf6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:47:18 localhost python3[41478]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 02:47:18 localhost python3[41521]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573638.1349652-71143-226064926275952/source dest=/etc/puppet/hieradata/extraconfig.json mode=None follow=False _original_basename=extraconfig.j2 checksum=5f36b2ea290645ee34d943220a14b54ee5ea5be5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:47:19 localhost python3[41598]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/role_extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 02:47:19 localhost python3[41641]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573638.9459925-71143-200763047922129/source dest=/etc/puppet/hieradata/role_extraconfig.json mode=None follow=False _original_basename=role_extraconfig.j2 checksum=34875968bf996542162e620523f9dcfb3deac331 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER 
validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:47:20 localhost python3[41703]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ovn_chassis_mac_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 02:47:20 localhost python3[41746]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573639.815246-71143-113363684661893/source dest=/etc/puppet/hieradata/ovn_chassis_mac_map.json mode=None follow=False _original_basename=ovn_chassis_mac_map.j2 checksum=8ccaf9e0d43c223722f20aa2895ef10bd75a5f13 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:47:21 localhost python3[41776]: ansible-stat Invoked with path={'src': '/etc/puppet/hieradata/ansible_managed.json'} follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 20 02:47:22 localhost python3[41824]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ansible_managed.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 02:47:22 localhost systemd[36249]: Starting Mark boot as successful... Feb 20 02:47:22 localhost systemd[36249]: Finished Mark boot as successful. 
Feb 20 02:47:22 localhost python3[41867]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/ansible_managed.json owner=root group=root mode=0644 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573641.8155923-71993-153181378534255/source _original_basename=tmpe6yuwap8 follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:47:25 localhost sshd[41883]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:47:26 localhost python3[41900]: ansible-setup Invoked with gather_subset=['!all', '!min', 'network'] filter=['ansible_default_ipv4'] gather_timeout=10 fact_path=/etc/ansible/facts.d Feb 20 02:47:27 localhost python3[41961]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 38.102.83.1 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 02:47:32 localhost python3[41978]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 192.168.122.10 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 02:47:34 localhost sshd[41980]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:47:37 localhost python3[41997]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 192.168.122.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 02:47:37 localhost python3[42020]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.18.0.106 | head -1 | sed -nr "s/.* 
dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 02:47:38 localhost python3[42043]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.20.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 02:47:39 localhost python3[42066]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.17.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 02:47:39 localhost python3[42089]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.19.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 02:47:43 localhost sshd[42097]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:47:54 localhost sshd[42099]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:48:04 localhost sshd[42101]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:48:15 localhost sshd[42103]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:48:21 localhost python3[42182]: ansible-file Invoked with path=/etc/puppet/hieradata state=directory mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S 
unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:48:21 localhost python3[42230]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hiera.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 02:48:21 localhost python3[42248]: ansible-ansible.legacy.file Invoked with mode=384 dest=/etc/puppet/hiera.yaml _original_basename=tmp5k3wwurn recurse=False state=file path=/etc/puppet/hiera.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:48:22 localhost python3[42293]: ansible-file Invoked with src=/etc/puppet/hiera.yaml dest=/etc/hiera.yaml state=link force=True path=/etc/hiera.yaml recurse=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:48:22 localhost python3[42341]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/all_nodes.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 02:48:23 localhost python3[42359]: ansible-ansible.legacy.file Invoked with dest=/etc/puppet/hieradata/all_nodes.json _original_basename=overcloud.json recurse=False state=file path=/etc/puppet/hieradata/all_nodes.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None 
group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:48:23 localhost python3[42421]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/bootstrap_node.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 02:48:23 localhost python3[42439]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/bootstrap_node.json _original_basename=bootstrap_node.j2 recurse=False state=file path=/etc/puppet/hieradata/bootstrap_node.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:48:24 localhost python3[42501]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/vip_data.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 02:48:24 localhost python3[42519]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/vip_data.json _original_basename=vip_data.j2 recurse=False state=file path=/etc/puppet/hieradata/vip_data.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:48:25 localhost python3[42581]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/net_ip_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 02:48:25 localhost python3[42599]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/net_ip_map.json _original_basename=net_ip_map.j2 recurse=False state=file 
path=/etc/puppet/hieradata/net_ip_map.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:48:25 localhost python3[42661]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/cloud_domain.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 02:48:26 localhost python3[42679]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/cloud_domain.json _original_basename=cloud_domain.j2 recurse=False state=file path=/etc/puppet/hieradata/cloud_domain.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:48:26 localhost sshd[42726]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:48:26 localhost python3[42743]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/fqdn.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 02:48:26 localhost python3[42761]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/fqdn.json _original_basename=fqdn.j2 recurse=False state=file path=/etc/puppet/hieradata/fqdn.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:48:27 localhost python3[42823]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_names.json follow=False 
get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 02:48:27 localhost python3[42841]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/service_names.json _original_basename=service_names.j2 recurse=False state=file path=/etc/puppet/hieradata/service_names.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:48:28 localhost python3[42903]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_configs.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 02:48:28 localhost python3[42921]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/service_configs.json _original_basename=service_configs.j2 recurse=False state=file path=/etc/puppet/hieradata/service_configs.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:48:28 localhost python3[42983]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 02:48:29 localhost python3[43001]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/extraconfig.json _original_basename=extraconfig.j2 recurse=False state=file path=/etc/puppet/hieradata/extraconfig.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None 
access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 02:48:29 localhost python3[43063]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/role_extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 02:48:29 localhost python3[43081]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/role_extraconfig.json _original_basename=role_extraconfig.j2 recurse=False state=file path=/etc/puppet/hieradata/role_extraconfig.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 02:48:30 localhost python3[43143]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ovn_chassis_mac_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 02:48:30 localhost python3[43161]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/ovn_chassis_mac_map.json _original_basename=ovn_chassis_mac_map.j2 recurse=False state=file path=/etc/puppet/hieradata/ovn_chassis_mac_map.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 02:48:31 localhost python3[43191]: ansible-stat Invoked with path={'src': '/etc/puppet/hieradata/ansible_managed.json'} follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 20 02:48:31 localhost python3[43239]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ansible_managed.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 02:48:32 localhost python3[43257]: ansible-ansible.legacy.file Invoked with owner=root group=root mode=0644 dest=/etc/puppet/hieradata/ansible_managed.json _original_basename=tmp3fd17kma recurse=False state=file path=/etc/puppet/hieradata/ansible_managed.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 02:48:36 localhost python3[43287]: ansible-dnf Invoked with name=['firewalld'] state=absent allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 20 02:48:39 localhost sshd[43289]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:48:41 localhost python3[43306]: ansible-ansible.builtin.systemd Invoked with name=iptables.service state=stopped enabled=False daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 02:48:41 localhost python3[43324]: ansible-ansible.builtin.systemd Invoked with name=ip6tables.service state=stopped enabled=False daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 02:48:41 localhost python3[43342]: ansible-ansible.builtin.systemd Invoked with name=nftables state=started enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 02:48:42 localhost systemd[1]: Reloading.
Feb 20 02:48:42 localhost systemd-rc-local-generator[43367]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 02:48:42 localhost systemd-sysv-generator[43370]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 02:48:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 02:48:42 localhost systemd[1]: Starting Netfilter Tables...
Feb 20 02:48:42 localhost systemd[1]: Finished Netfilter Tables.
Feb 20 02:48:43 localhost python3[43432]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 02:48:43 localhost python3[43475]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573722.739761-74754-262690677517164/source _original_basename=iptables.nft follow=False checksum=ede9860c99075946a7bc827210247aac639bc84a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 02:48:43 localhost python3[43505]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 02:48:44 localhost python3[43523]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 02:48:45 localhost python3[43572]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-jumps.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 02:48:45 localhost python3[43615]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-jumps.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573724.6665618-74863-218007600012538/source mode=None follow=False _original_basename=jump-chain.j2 checksum=eec306c3276262a27663d76bd0ea526457445afa backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 02:48:45 localhost python3[43677]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-update-jumps.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 02:48:46 localhost python3[43720]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-update-jumps.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573725.592642-74926-22909942784495/source mode=None follow=False _original_basename=jump-chain.j2 checksum=eec306c3276262a27663d76bd0ea526457445afa backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 02:48:46 localhost python3[43782]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-flushes.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 02:48:47 localhost python3[43825]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-flushes.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573726.5248337-75108-151540441593768/source mode=None follow=False _original_basename=flush-chain.j2 checksum=e8e7b8db0d61a7fe393441cc91613f470eb34a6e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 02:48:47 localhost python3[43887]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-chains.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 02:48:48 localhost python3[43930]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-chains.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573727.399342-75191-109591310481145/source mode=None follow=False _original_basename=chains.j2 checksum=e60ee651f5014e83924f4e901ecc8e25b1906610 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 02:48:49 localhost python3[43992]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-rules.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 02:48:49 localhost python3[44035]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-rules.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573728.3223462-75250-266680581144597/source mode=None follow=False _original_basename=ruleset.j2 checksum=0444e4206083f91e2fb2aabfa2928244c2db35ed backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 02:48:49 localhost python3[44065]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/nftables/tripleo-chains.nft /etc/nftables/tripleo-flushes.nft /etc/nftables/tripleo-rules.nft /etc/nftables/tripleo-update-jumps.nft /etc/nftables/tripleo-jumps.nft | nft -c -f - _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 02:48:50 localhost python3[44130]: ansible-ansible.builtin.blockinfile Invoked with path=/etc/sysconfig/nftables.conf backup=False validate=nft -c -f %s block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/tripleo-chains.nft"#012include "/etc/nftables/tripleo-rules.nft"#012include "/etc/nftables/tripleo-jumps.nft"#012 state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 02:48:50 localhost python3[44147]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/tripleo-chains.nft _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 02:48:51 localhost python3[44164]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/nftables/tripleo-flushes.nft /etc/nftables/tripleo-rules.nft /etc/nftables/tripleo-update-jumps.nft | nft -f - _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 02:48:51 localhost python3[44183]: ansible-file Invoked with mode=0750 path=/var/log/containers/collectd setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 02:48:51 localhost python3[44199]: ansible-file Invoked with mode=0755 path=/var/lib/container-user-scripts/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 02:48:52 localhost python3[44215]: ansible-file Invoked with mode=0750 path=/var/log/containers/ceilometer setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 02:48:52 localhost python3[44231]: ansible-seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Feb 20 02:48:53 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=7 res=1
Feb 20 02:48:54 localhost python3[44251]: ansible-community.general.sefcontext Invoked with setype=container_file_t state=present target=/etc/iscsi(/.*)? ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None
Feb 20 02:48:54 localhost kernel: SELinux: Converting 2704 SID table entries...
Feb 20 02:48:54 localhost kernel: SELinux: policy capability network_peer_controls=1
Feb 20 02:48:54 localhost kernel: SELinux: policy capability open_perms=1
Feb 20 02:48:54 localhost kernel: SELinux: policy capability extended_socket_class=1
Feb 20 02:48:54 localhost kernel: SELinux: policy capability always_check_network=0
Feb 20 02:48:54 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Feb 20 02:48:54 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 20 02:48:54 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Feb 20 02:48:55 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=8 res=1
Feb 20 02:48:55 localhost python3[44272]: ansible-community.general.sefcontext Invoked with setype=container_file_t state=present target=/etc/target(/.*)? ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None
Feb 20 02:48:56 localhost kernel: SELinux: Converting 2704 SID table entries...
Feb 20 02:48:56 localhost kernel: SELinux: policy capability network_peer_controls=1
Feb 20 02:48:56 localhost kernel: SELinux: policy capability open_perms=1
Feb 20 02:48:56 localhost kernel: SELinux: policy capability extended_socket_class=1
Feb 20 02:48:56 localhost kernel: SELinux: policy capability always_check_network=0
Feb 20 02:48:56 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Feb 20 02:48:56 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 20 02:48:56 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Feb 20 02:48:56 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=9 res=1
Feb 20 02:48:56 localhost python3[44293]: ansible-community.general.sefcontext Invoked with setype=container_file_t state=present target=/var/lib/iscsi(/.*)? ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None
Feb 20 02:48:57 localhost sshd[44296]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:48:57 localhost kernel: SELinux: Converting 2704 SID table entries...
Feb 20 02:48:57 localhost kernel: SELinux: policy capability network_peer_controls=1
Feb 20 02:48:57 localhost kernel: SELinux: policy capability open_perms=1
Feb 20 02:48:57 localhost kernel: SELinux: policy capability extended_socket_class=1
Feb 20 02:48:57 localhost kernel: SELinux: policy capability always_check_network=0
Feb 20 02:48:57 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Feb 20 02:48:57 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 20 02:48:57 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Feb 20 02:48:57 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=10 res=1
Feb 20 02:48:57 localhost python3[44316]: ansible-file Invoked with path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 02:48:58 localhost python3[44332]: ansible-file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 02:48:58 localhost python3[44348]: ansible-file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 02:48:58 localhost python3[44364]: ansible-stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 20 02:48:59 localhost python3[44380]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-enabled --quiet iscsi.service _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 02:49:00 localhost python3[44397]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 20 02:49:03 localhost python3[44414]: ansible-file Invoked with path=/etc/modules-load.d state=directory mode=493 owner=root group=root setype=etc_t recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 02:49:04 localhost python3[44462]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-tripleo.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 02:49:04 localhost python3[44505]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573743.9826958-76057-279518761997172/source dest=/etc/modules-load.d/99-tripleo.conf mode=420 owner=root group=root setype=etc_t follow=False _original_basename=tripleo-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 20 02:49:05 localhost python3[44535]: ansible-systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 20 02:49:05 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 20 02:49:05 localhost systemd[1]: Stopped Load Kernel Modules.
Feb 20 02:49:05 localhost systemd[1]: Stopping Load Kernel Modules...
Feb 20 02:49:05 localhost systemd[1]: Starting Load Kernel Modules...
Feb 20 02:49:05 localhost kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 20 02:49:05 localhost kernel: Bridge firewalling registered
Feb 20 02:49:05 localhost systemd-modules-load[44538]: Inserted module 'br_netfilter'
Feb 20 02:49:05 localhost systemd-modules-load[44538]: Module 'msr' is built in
Feb 20 02:49:05 localhost systemd[1]: Finished Load Kernel Modules.
Feb 20 02:49:05 localhost sshd[44542]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:49:05 localhost python3[44591]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-tripleo.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 02:49:06 localhost python3[44634]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573745.4255807-76135-83471776494526/source dest=/etc/sysctl.d/99-tripleo.conf mode=420 owner=root group=root setype=etc_t follow=False _original_basename=tripleo-sysctl.conf.j2 checksum=cddb9401fdafaaf28a4a94b98448f98ae93c94c9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 20 02:49:06 localhost python3[44664]: ansible-sysctl Invoked with name=fs.aio-max-nr value=1048576 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Feb 20 02:49:06 localhost python3[44681]: ansible-sysctl Invoked with name=fs.inotify.max_user_instances value=1024 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Feb 20 02:49:07 localhost python3[44699]: ansible-sysctl Invoked with name=kernel.pid_max value=1048576 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Feb 20 02:49:07 localhost python3[44717]: ansible-sysctl Invoked with name=net.bridge.bridge-nf-call-arptables value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Feb 20 02:49:07 localhost python3[44734]: ansible-sysctl Invoked with name=net.bridge.bridge-nf-call-ip6tables value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Feb 20 02:49:08 localhost python3[44751]: ansible-sysctl Invoked with name=net.bridge.bridge-nf-call-iptables value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Feb 20 02:49:08 localhost python3[44768]: ansible-sysctl Invoked with name=net.ipv4.conf.all.rp_filter value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Feb 20 02:49:08 localhost python3[44786]: ansible-sysctl Invoked with name=net.ipv4.ip_forward value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Feb 20 02:49:08 localhost python3[44804]: ansible-sysctl Invoked with name=net.ipv4.ip_local_reserved_ports value=35357,49000-49001 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Feb 20 02:49:09 localhost python3[44822]: ansible-sysctl Invoked with name=net.ipv4.ip_nonlocal_bind value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Feb 20 02:49:09 localhost python3[44840]: ansible-sysctl Invoked with name=net.ipv4.neigh.default.gc_thresh1 value=1024 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Feb 20 02:49:09 localhost python3[44858]: ansible-sysctl Invoked with name=net.ipv4.neigh.default.gc_thresh2 value=2048 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Feb 20 02:49:10 localhost python3[44876]: ansible-sysctl Invoked with name=net.ipv4.neigh.default.gc_thresh3 value=4096 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Feb 20 02:49:10 localhost python3[44894]: ansible-sysctl Invoked with name=net.ipv6.conf.all.disable_ipv6 value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Feb 20 02:49:10 localhost python3[44911]: ansible-sysctl Invoked with name=net.ipv6.conf.all.forwarding value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Feb 20 02:49:11 localhost python3[44928]: ansible-sysctl Invoked with name=net.ipv6.conf.default.disable_ipv6 value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Feb 20 02:49:11 localhost python3[44945]: ansible-sysctl Invoked with name=net.ipv6.conf.lo.disable_ipv6 value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Feb 20 02:49:11 localhost python3[44962]: ansible-sysctl Invoked with name=net.ipv6.ip_nonlocal_bind value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Feb 20 02:49:12 localhost python3[44980]: ansible-systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 20 02:49:12 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 20 02:49:12 localhost systemd[1]: Stopped Apply Kernel Variables.
Feb 20 02:49:12 localhost systemd[1]: Stopping Apply Kernel Variables...
Feb 20 02:49:12 localhost systemd[1]: Starting Apply Kernel Variables...
Feb 20 02:49:12 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 20 02:49:12 localhost systemd[1]: Finished Apply Kernel Variables.
Feb 20 02:49:12 localhost python3[45000]: ansible-file Invoked with mode=0750 path=/var/log/containers/metrics_qdr setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 02:49:13 localhost python3[45016]: ansible-file Invoked with path=/var/lib/metrics_qdr setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 02:49:13 localhost python3[45032]: ansible-file Invoked with mode=0750 path=/var/log/containers/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 02:49:13 localhost python3[45048]: ansible-stat Invoked with path=/var/lib/nova/instances follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 20 02:49:14 localhost python3[45064]: ansible-file Invoked with path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 02:49:14 localhost sshd[45081]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:49:14 localhost python3[45080]: ansible-file Invoked with path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 02:49:14 localhost python3[45098]: ansible-file Invoked with path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 02:49:14 localhost python3[45114]: ansible-file Invoked with path=/var/lib/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 02:49:15 localhost python3[45130]: ansible-file Invoked with path=/etc/tmpfiles.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 02:49:15 localhost python3[45178]: ansible-ansible.legacy.stat Invoked with path=/etc/tmpfiles.d/run-nova.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 02:49:16 localhost python3[45221]: ansible-ansible.legacy.copy Invoked with dest=/etc/tmpfiles.d/run-nova.conf src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573755.3754723-76513-56305373586130/source _original_basename=tmpk3sn5xyn follow=False checksum=f834349098718ec09c7562bcb470b717a83ff411 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 02:49:16 localhost python3[45251]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-tmpfiles --create _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 02:49:17 localhost sshd[45253]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:49:17 localhost python3[45269]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 02:49:17 localhost python3[45317]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/delay-nova-compute follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 02:49:18 localhost python3[45360]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/nova/delay-nova-compute mode=493 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573757.6431026-76752-47292125167249/source _original_basename=tmpzyqc71mv follow=False checksum=f07ad3e8cf3766b3b3b07ae8278826a0ef3bb5e3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 02:49:18 localhost python3[45390]: ansible-file Invoked with mode=0750 path=/var/log/containers/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 02:49:19 localhost python3[45407]: ansible-file Invoked with path=/etc/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 02:49:19 localhost python3[45423]: ansible-file Invoked with path=/etc/libvirt/secrets setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 02:49:20 localhost python3[45439]: ansible-file Invoked with path=/etc/libvirt/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 02:49:20 localhost python3[45455]: ansible-file Invoked with path=/var/lib/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 02:49:20 localhost python3[45471]: ansible-file Invoked with path=/var/cache/libvirt state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 02:49:20 localhost python3[45487]: ansible-file Invoked with path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 02:49:21 localhost python3[45503]: ansible-file Invoked with path=/run/libvirt state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 02:49:21 localhost python3[45519]: ansible-file Invoked with mode=0770 path=/var/log/containers/libvirt/swtpm setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 02:49:22 localhost python3[45535]: ansible-group Invoked with gid=107 name=qemu state=present system=False local=False non_unique=False
Feb 20 02:49:22 localhost python3[45587]: ansible-user Invoked with comment=qemu user group=qemu name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005625202.localdomain update_password=always groups=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Feb 20 02:49:23 localhost python3[45631]: ansible-file Invoked with group=qemu owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None serole=None selevel=None attributes=None
Feb 20 02:49:23 localhost python3[45677]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/rpm -q libvirt-daemon _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 02:49:23 localhost python3[45753]: ansible-ansible.legacy.stat Invoked with path=/etc/tmpfiles.d/run-libvirt.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 02:49:24 localhost python3[45816]: ansible-ansible.legacy.copy Invoked with dest=/etc/tmpfiles.d/run-libvirt.conf src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573763.5546124-77107-99790089572630/source _original_basename=tmpcedtm338 follow=False checksum=57f3ff94c666c6aae69ae22e23feb750cf9e8b13 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 02:49:24 localhost python3[45846]: ansible-seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Feb 20 02:49:25 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=11 res=1
Feb 20 02:49:25 localhost python3[45990]: ansible-file Invoked with path=/etc/crypto-policies/local.d/gnutls-qemu.config state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 02:49:26 localhost python3[46006]: ansible-file Invoked with path=/run/libvirt setype=virt_var_run_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 02:49:26 localhost python3[46022]: ansible-seboolean Invoked with name=logrotate_read_inside_containers persistent=True state=True ignore_selinux_state=False
Feb 20 02:49:27 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=12 res=1
Feb 20 02:49:28 localhost python3[46042]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False
disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Feb 20 02:49:31 localhost python3[46059]: ansible-setup Invoked with gather_subset=['!all', '!min', 'network'] filter=['ansible_interfaces'] gather_timeout=10 fact_path=/etc/ansible/facts.d Feb 20 02:49:31 localhost python3[46120]: ansible-file Invoked with path=/etc/containers/networks state=directory recurse=True mode=493 owner=root group=root force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:49:32 localhost python3[46136]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 02:49:32 localhost python3[46196]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 02:49:33 localhost python3[46239]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573772.3020816-77401-212377449584722/source dest=/etc/containers/networks/podman.json mode=0644 owner=root group=root follow=False _original_basename=podman_network_config.j2 checksum=28053f82527db8581f82a875a0ad593032bae19a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None 
remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:49:33 localhost python3[46301]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 02:49:34 localhost python3[46346]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573773.2812576-77429-199217573263781/source dest=/etc/containers/registries.conf owner=root group=root setype=etc_t mode=0644 follow=False _original_basename=registries.conf.j2 checksum=710a00cfb11a4c3eba9c028ef1984a9fea9ba83a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None attributes=None Feb 20 02:49:34 localhost python3[46376]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=containers option=pids_limit value=4096 backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None Feb 20 02:49:34 localhost python3[46392]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=engine option=events_logger value="journald" backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None Feb 20 02:49:35 localhost python3[46408]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=engine option=runtime value="crun" backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None 
serole=None selevel=None attributes=None Feb 20 02:49:35 localhost python3[46424]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=network option=network_backend value="netavark" backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None Feb 20 02:49:36 localhost python3[46472]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 02:49:36 localhost python3[46515]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573775.7637625-77639-246541883353240/source _original_basename=tmp1m4ia5qj follow=False checksum=0bfbc70e9a4740c9004b9947da681f723d529c83 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:49:36 localhost python3[46545]: ansible-file Invoked with mode=0750 path=/var/log/containers/rsyslog setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 20 02:49:37 localhost python3[46561]: ansible-file Invoked with path=/var/lib/rsyslog.container setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None 
access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 20 02:49:37 localhost python3[46577]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Feb 20 02:49:41 localhost python3[46626]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 02:49:41 localhost python3[46671]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573781.0132437-77864-134000536140809/source validate=/usr/sbin/sshd -T -f %s mode=None follow=False _original_basename=sshd_config_block.j2 checksum=913c99ed7d5c33615bfb07a6792a4ef143dcfd2b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:49:43 localhost python3[46702]: ansible-systemd Invoked with name=sshd state=restarted enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 02:49:43 localhost systemd[1]: Stopping OpenSSH server daemon... Feb 20 02:49:43 localhost systemd[1]: sshd.service: Deactivated successfully. Feb 20 02:49:43 localhost systemd[1]: Stopped OpenSSH server daemon. 
Feb 20 02:49:43 localhost systemd[1]: sshd.service: Consumed 7.537s CPU time, read 1.9M from disk, written 328.0K to disk. Feb 20 02:49:43 localhost systemd[1]: Stopped target sshd-keygen.target. Feb 20 02:49:43 localhost systemd[1]: Stopping sshd-keygen.target... Feb 20 02:49:43 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Feb 20 02:49:43 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Feb 20 02:49:43 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Feb 20 02:49:43 localhost systemd[1]: Reached target sshd-keygen.target. Feb 20 02:49:43 localhost systemd[1]: Starting OpenSSH server daemon... Feb 20 02:49:43 localhost sshd[46706]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:49:43 localhost systemd[1]: Started OpenSSH server daemon. 
Feb 20 02:49:43 localhost python3[46722]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ntpd.service || systemctl is-enabled ntpd.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 02:49:44 localhost python3[46740]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ntpd.service || systemctl is-enabled ntpd.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 02:49:45 localhost python3[46758]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Feb 20 02:49:47 localhost ceph-osd[31981]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 20 02:49:47 localhost ceph-osd[31981]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 3257 writes, 16K keys, 3257 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s#012Cumulative WAL: 3257 writes, 144 syncs, 22.62 writes per sync, written: 0.01 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3257 writes, 16K keys, 3257 commit groups, 1.0 writes per commit group, ingest: 14.65 MB, 0.02 MB/s#012Interval WAL: 3257 writes, 144 syncs, 22.62 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** 
Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.008 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.008 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.008 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 
memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d37d6522d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 9.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 
MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d37d6522d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 9.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 
0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memt Feb 20 02:49:48 localhost python3[46807]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 02:49:48 localhost sshd[46826]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:49:48 localhost python3[46825]: ansible-ansible.legacy.file Invoked with owner=root group=root mode=420 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:49:49 localhost python3[46857]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 02:49:50 localhost sshd[46908]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:49:50 localhost python3[46907]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/chrony-online.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 02:49:50 localhost python3[46927]: ansible-ansible.legacy.file Invoked with dest=/etc/systemd/system/chrony-online.service 
_original_basename=chrony-online.service recurse=False state=file path=/etc/systemd/system/chrony-online.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:49:50 localhost python3[46957]: ansible-systemd Invoked with state=started name=chrony-online.service enabled=True daemon-reload=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 02:49:50 localhost systemd[1]: Reloading. Feb 20 02:49:51 localhost systemd-sysv-generator[46983]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 02:49:51 localhost systemd-rc-local-generator[46978]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 02:49:51 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 02:49:51 localhost systemd[1]: Starting chronyd online sources service... Feb 20 02:49:51 localhost chronyc[46997]: 200 OK Feb 20 02:49:51 localhost systemd[1]: chrony-online.service: Deactivated successfully. Feb 20 02:49:51 localhost systemd[1]: Finished chronyd online sources service. 
Feb 20 02:49:51 localhost ceph-osd[32921]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 20 02:49:51 localhost ceph-osd[32921]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 3387 writes, 16K keys, 3387 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.03 MB/s#012Cumulative WAL: 3387 writes, 198 syncs, 17.11 writes per sync, written: 0.01 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3387 writes, 16K keys, 3387 commit groups, 1.0 writes per commit group, ingest: 15.26 MB, 0.03 MB/s#012Interval WAL: 3387 writes, 198 syncs, 17.11 writes per sync, written: 0.01 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 
0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55bae83ca2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) 
Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55bae83ca2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 
0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memt Feb 20 02:49:51 localhost python3[47013]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc makestep _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 02:49:51 localhost chronyd[26327]: System clock was stepped by -0.000084 seconds Feb 20 02:49:52 localhost python3[47030]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc waitsync 30 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 02:49:52 localhost 
python3[47047]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc makestep _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 02:49:52 localhost chronyd[26327]: System clock was stepped by -0.000000 seconds Feb 20 02:49:52 localhost python3[47064]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc waitsync 30 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 02:49:53 localhost python3[47081]: ansible-timezone Invoked with name=UTC hwclock=None Feb 20 02:49:53 localhost systemd[1]: Starting Time & Date Service... Feb 20 02:49:53 localhost systemd[1]: Started Time & Date Service. Feb 20 02:49:54 localhost python3[47101]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides tuned tuned-profiles-cpu-partitioning _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 02:49:54 localhost python3[47118]: ansible-ansible.legacy.command Invoked with _raw_params=which tuned-adm _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 02:49:55 localhost python3[47135]: ansible-slurp Invoked with src=/etc/tuned/active_profile Feb 20 02:49:56 localhost python3[47151]: ansible-stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 20 02:49:56 localhost python3[47167]: ansible-file Invoked with mode=0750 path=/var/log/containers/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None 
src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 20 02:49:57 localhost python3[47183]: ansible-file Invoked with path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 20 02:49:57 localhost python3[47231]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/neutron-cleanup follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 02:49:58 localhost python3[47274]: ansible-ansible.legacy.copy Invoked with dest=/usr/libexec/neutron-cleanup force=True mode=0755 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573797.4460874-78944-259941440772045/source _original_basename=tmprrwfe8ok follow=False checksum=f9cc7d1e91fbae49caa7e35eb2253bba146a73b4 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:49:58 localhost python3[47336]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/neutron-cleanup.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 02:49:58 localhost python3[47379]: ansible-ansible.legacy.copy Invoked with dest=/usr/lib/systemd/system/neutron-cleanup.service force=True src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573798.2827985-78998-134338990886247/source _original_basename=tmpv1vkhzvo follow=False checksum=6b6cd9f074903a28d054eb530a10c7235d0c39fc backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None 
directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:49:59 localhost python3[47409]: ansible-ansible.legacy.systemd Invoked with enabled=True name=neutron-cleanup daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None Feb 20 02:49:59 localhost systemd[1]: Reloading. Feb 20 02:49:59 localhost systemd-rc-local-generator[47437]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 02:49:59 localhost systemd-sysv-generator[47441]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 02:49:59 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 20 02:50:00 localhost python3[47463]: ansible-file Invoked with mode=0750 path=/var/log/containers/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 20 02:50:00 localhost python3[47479]: ansible-ansible.legacy.command Invoked with _raw_params=ip netns add ns_temp _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 02:50:00 localhost python3[47496]: ansible-ansible.legacy.command Invoked with _raw_params=ip netns delete ns_temp _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 02:50:00 localhost systemd[1]: run-netns-ns_temp.mount: Deactivated successfully. 
Feb 20 02:50:01 localhost python3[47513]: ansible-file Invoked with path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 20 02:50:01 localhost python3[47529]: ansible-file Invoked with path=/var/lib/neutron/kill_scripts state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:50:02 localhost python3[47577]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 02:50:02 localhost python3[47620]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=493 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573801.874061-79179-16051832279141/source _original_basename=tmpkd91um8v follow=False checksum=2f369fbe8f83639cdfd4efc53e7feb4ee77d1ed7 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:50:17 localhost sshd[47635]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:50:22 localhost systemd[36249]: Created slice User Background Tasks Slice. Feb 20 02:50:22 localhost systemd[36249]: Starting Cleanup of User's Temporary Files and Directories... 
Feb 20 02:50:22 localhost systemd[36249]: Finished Cleanup of User's Temporary Files and Directories. Feb 20 02:50:23 localhost sshd[47638]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:50:23 localhost systemd[1]: systemd-timedated.service: Deactivated successfully. Feb 20 02:50:27 localhost python3[47734]: ansible-file Invoked with path=/var/log/containers state=directory setype=container_file_t selevel=s0 mode=488 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Feb 20 02:50:28 localhost python3[47750]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory selevel=s0 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None setype=None attributes=None Feb 20 02:50:28 localhost python3[47766]: ansible-file Invoked with path=/var/lib/tripleo-config state=directory setype=container_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Feb 20 02:50:29 localhost python3[47782]: ansible-file Invoked with path=/var/lib/container-startup-configs.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None 
setype=None attributes=None Feb 20 02:50:29 localhost python3[47798]: ansible-file Invoked with path=/var/lib/docker-container-startup-configs.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:50:29 localhost python3[47814]: ansible-community.general.sefcontext Invoked with target=/var/lib/container-config-scripts(/.*)? setype=container_file_t state=present ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None Feb 20 02:50:30 localhost kernel: SELinux: Converting 2707 SID table entries... Feb 20 02:50:30 localhost kernel: SELinux: policy capability network_peer_controls=1 Feb 20 02:50:30 localhost kernel: SELinux: policy capability open_perms=1 Feb 20 02:50:30 localhost kernel: SELinux: policy capability extended_socket_class=1 Feb 20 02:50:30 localhost kernel: SELinux: policy capability always_check_network=0 Feb 20 02:50:30 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Feb 20 02:50:30 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 20 02:50:30 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Feb 20 02:50:30 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=13 res=1 Feb 20 02:50:31 localhost python3[47835]: ansible-file Invoked with path=/var/lib/container-config-scripts state=directory setype=container_file_t recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 20 02:50:32 localhost python3[47972]: 
ansible-container_startup_config Invoked with config_base_dir=/var/lib/tripleo-config/container-startup-config config_data={'step_1': {'metrics_qdr': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 'metrics_qdr_init_logs': {'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}}, 'step_2': {'create_haproxy_wrapper': {'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, 'create_virtlogd_wrapper': {'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, 'nova_compute_init_log': {'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': 
['/var/log/containers/nova:/var/log/nova:z']}, 'nova_virtqemud_init_logs': {'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}}, 'step_3': {'ceilometer_init_log': {'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 'collectd': {'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 'iscsid': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 'nova_statedir_owner': {'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, 'nova_virtlogd_wrapper': {'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 
'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': [ Feb 20 02:50:33 localhost rsyslogd[759]: message too long (31243) with configured size 8096, begin of message is: ansible-container_startup_config Invoked with config_base_dir=/var/lib/tripleo-c [v8.2102.0-111.el9 try https://www.rsyslog.com/e/2445 ] Feb 20 02:50:33 localhost sshd[47989]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:50:33 localhost python3[47988]: ansible-file Invoked with path=/var/lib/kolla/config_files state=directory setype=container_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Feb 20 02:50:33 localhost python3[48006]: ansible-file Invoked with path=/var/lib/config-data mode=493 state=directory setype=container_file_t selevel=s0 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Feb 20 02:50:34 localhost python3[48022]: ansible-tripleo_container_configs Invoked with config_data={'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json': {'command': '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /var/log/ceilometer/ipmi.log', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/ceilometer_agent_compute.json': {'command': '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log', 'config_files': [{'dest': '/', 'merge': True, 
'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/collectd.json': {'command': '/usr/sbin/collectd -f', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/', 'merge': False, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/etc/collectd.d'}], 'permissions': [{'owner': 'collectd:collectd', 'path': '/var/log/collectd', 'recurse': True}, {'owner': 'collectd:collectd', 'path': '/scripts', 'recurse': True}, {'owner': 'collectd:collectd', 'path': '/config-scripts', 'recurse': True}]}, '/var/lib/kolla/config_files/iscsid.json': {'command': '/usr/sbin/iscsid -f', 'config_files': [{'dest': '/etc/iscsi/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-iscsid/'}]}, '/var/lib/kolla/config_files/logrotate-crond.json': {'command': '/usr/sbin/crond -s -n', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/metrics_qdr.json': {'command': '/usr/sbin/qdrouterd -c /etc/qpid-dispatch/qdrouterd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/', 'merge': True, 'optional': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-tls/*'}], 'permissions': [{'owner': 'qdrouterd:qdrouterd', 'path': '/var/lib/qdrouterd', 'recurse': True}, {'optional': True, 'owner': 'qdrouterd:qdrouterd', 'path': '/etc/pki/tls/certs/metrics_qdr.crt'}, {'optional': True, 'owner': 'qdrouterd:qdrouterd', 'path': '/etc/pki/tls/private/metrics_qdr.key'}]}, '/var/lib/kolla/config_files/nova-migration-target.json': {'command': 'dumb-init --single-child -- /usr/sbin/sshd -D -p 2022', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': 
'/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ssh/', 'owner': 'root', 'perm': '0600', 'source': '/host-ssh/ssh_host_*_key'}]}, '/var/lib/kolla/config_files/nova_compute.json': {'command': '/var/lib/nova/delay-nova-compute --delay 180 --nova-binary /usr/bin/nova-compute ', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/iscsi/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-iscsid/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/var/log/nova', 'recurse': True}, {'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json': {'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_wait_for_compute_service.py', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'nova:nova', 'path': '/var/log/nova', 'recurse': True}]}, '/var/lib/kolla/config_files/nova_virtlogd.json': {'command': '/usr/local/bin/virtlogd_wrapper', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtnodedevd.json': {'command': '/usr/sbin/virtnodedevd --config /etc/libvirt/virtnodedevd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': 
'/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtproxyd.json': {'command': '/usr/sbin/virtproxyd --config /etc/libvirt/virtproxyd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtqemud.json': {'command': '/usr/sbin/virtqemud --config /etc/libvirt/virtqemud.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtsecretd.json': {'command': '/usr/sbin/virtsecretd --config /etc/libvirt/virtsecretd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtstoraged.json': {'command': '/usr/sbin/virtstoraged --config /etc/libvirt/virtstoraged.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 
'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/ovn_controller.json': {'command': '/usr/bin/ovn-controller --pidfile --log-file unix:/run/openvswitch/db.sock ', 'permissions': [{'owner': 'root:root', 'path': '/var/log/openvswitch', 'recurse': True}, {'owner': 'root:root', 'path': '/var/log/ovn', 'recurse': True}]}, '/var/lib/kolla/config_files/ovn_metadata_agent.json': {'command': '/usr/bin/networking-ovn-metadata-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/networking-ovn/networking-ovn-metadata-agent.ini --log-file=/var/log/neutron/ovn-metadata-agent.log', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'neutron:neutron', 'path': '/var/log/neutron', 'recurse': True}, {'owner': 'neutron:neutron', 'path': '/var/lib/neutron', 'recurse': True}, {'optional': True, 'owner': 'neutron:neutron', 'path': '/etc/pki/tls/certs/ovn_metadata.crt', 'perm': '0644'}, {'optional': True, 'owner': 'neutron:neutron', 'path': '/etc/pki/tls/private/ovn_metadata.key', 'perm': '0644'}]}, '/var/lib/kolla/config_files/rsyslog.json': {'command': '/usr/sbin/rsyslogd -n -iNONE', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'root:root', 'path': '/var/lib/rsyslog', 'recurse': True}, {'owner': 'root:root', 'path': '/var/log/rsyslog', 'recurse': True}]}} Feb 20 02:50:39 localhost python3[48070]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 02:50:40 localhost python3[48113]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 
src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573839.4433165-80835-266176308440672/source _original_basename=tmp96370j_2 follow=False checksum=dfdcc7695edd230e7a2c06fc7b739bfa56506d8f backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:50:40 localhost python3[48143]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_1 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 20 02:50:42 localhost python3[48266]: ansible-file Invoked with path=/var/lib/container-puppet state=directory setype=container_file_t selevel=s0 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Feb 20 02:50:44 localhost python3[48387]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6 Feb 20 02:50:46 localhost python3[48403]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q lvm2 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 02:50:47 localhost python3[48420]: ansible-ansible.legacy.dnf Invoked with name=['systemd-container'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False 
update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Feb 20 02:50:51 localhost dbus-broker-launch[751]: Noticed file-system modification, trigger reload. Feb 20 02:50:51 localhost dbus-broker-launch[18460]: Noticed file-system modification, trigger reload. Feb 20 02:50:51 localhost dbus-broker-launch[751]: Noticed file-system modification, trigger reload. Feb 20 02:50:51 localhost dbus-broker-launch[18460]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored Feb 20 02:50:51 localhost dbus-broker-launch[18460]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored Feb 20 02:50:51 localhost dbus-broker-launch[751]: Noticed file-system modification, trigger reload. Feb 20 02:50:51 localhost dbus-broker-launch[751]: Noticed file-system modification, trigger reload. Feb 20 02:50:51 localhost systemd[1]: Reexecuting. Feb 20 02:50:52 localhost systemd[1]: systemd 252-14.el9_2.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 20 02:50:52 localhost systemd[1]: Detected virtualization kvm. Feb 20 02:50:52 localhost systemd[1]: Detected architecture x86-64. Feb 20 02:50:52 localhost systemd-rc-local-generator[48475]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 02:50:52 localhost systemd-sysv-generator[48480]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 02:50:52 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 02:50:58 localhost sshd[48495]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:50:58 localhost sshd[48496]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:51:00 localhost kernel: SELinux: Converting 2707 SID table entries... Feb 20 02:51:00 localhost kernel: SELinux: policy capability network_peer_controls=1 Feb 20 02:51:00 localhost kernel: SELinux: policy capability open_perms=1 Feb 20 02:51:00 localhost kernel: SELinux: policy capability extended_socket_class=1 Feb 20 02:51:00 localhost kernel: SELinux: policy capability always_check_network=0 Feb 20 02:51:00 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Feb 20 02:51:00 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 20 02:51:00 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Feb 20 02:51:00 localhost dbus-broker-launch[751]: Noticed file-system modification, trigger reload. Feb 20 02:51:00 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=14 res=1 Feb 20 02:51:00 localhost dbus-broker-launch[751]: Noticed file-system modification, trigger reload. Feb 20 02:51:01 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Feb 20 02:51:01 localhost systemd[1]: Starting man-db-cache-update.service... Feb 20 02:51:01 localhost systemd[1]: Reloading. Feb 20 02:51:01 localhost systemd-rc-local-generator[48587]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 02:51:01 localhost systemd-sysv-generator[48590]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 02:51:01 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 02:51:01 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Feb 20 02:51:01 localhost systemd[1]: Queuing reload/restart jobs for marked units… Feb 20 02:51:01 localhost systemd[1]: Stopping Journal Service... Feb 20 02:51:01 localhost systemd-journald[618]: Received SIGTERM from PID 1 (systemd). Feb 20 02:51:01 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files... Feb 20 02:51:01 localhost systemd-journald[618]: Journal stopped Feb 20 02:51:02 localhost systemd[1]: systemd-journald.service: Deactivated successfully. Feb 20 02:51:02 localhost systemd[1]: Stopped Journal Service. Feb 20 02:51:02 localhost systemd[1]: systemd-journald.service: Consumed 1.947s CPU time. Feb 20 02:51:02 localhost systemd[1]: Starting Journal Service... Feb 20 02:51:02 localhost systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 20 02:51:02 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files. Feb 20 02:51:02 localhost systemd[1]: systemd-udevd.service: Consumed 3.066s CPU time. Feb 20 02:51:02 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files... Feb 20 02:51:02 localhost systemd-journald[48906]: Journal started Feb 20 02:51:02 localhost systemd-journald[48906]: Runtime Journal (/run/log/journal/01f46965e72fd8a157841feaa66c8d52) is 12.3M, max 314.7M, 302.4M free. Feb 20 02:51:02 localhost systemd[1]: Started Journal Service. Feb 20 02:51:02 localhost systemd-journald[48906]: Field hash table of /run/log/journal/01f46965e72fd8a157841feaa66c8d52/system.journal has a fill level at 75.4 (251 of 333 items), suggesting rotation. 
Feb 20 02:51:02 localhost systemd-journald[48906]: /run/log/journal/01f46965e72fd8a157841feaa66c8d52/system.journal: Journal header limits reached or header out-of-date, rotating. Feb 20 02:51:02 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Feb 20 02:51:02 localhost sshd[48938]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:51:02 localhost systemd-udevd[48910]: Using default interface naming scheme 'rhel-9.0'. Feb 20 02:51:02 localhost systemd[1]: Started Rule-based Manager for Device Events and Files. Feb 20 02:51:02 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Feb 20 02:51:02 localhost systemd[1]: Reloading. Feb 20 02:51:02 localhost systemd-rc-local-generator[49488]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 02:51:02 localhost systemd-sysv-generator[49492]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 02:51:02 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 02:51:02 localhost systemd[1]: Queuing reload/restart jobs for marked units… Feb 20 02:51:02 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Feb 20 02:51:02 localhost systemd[1]: Finished man-db-cache-update.service. Feb 20 02:51:02 localhost systemd[1]: man-db-cache-update.service: Consumed 1.334s CPU time. Feb 20 02:51:02 localhost systemd[1]: run-r5f353545f5874fde984e1d813fb67f52.service: Deactivated successfully. Feb 20 02:51:02 localhost systemd[1]: run-r8dac03df6b9040e496120c92aa80e9c2.service: Deactivated successfully. 
Feb 20 02:51:04 localhost python3[49915]: ansible-sysctl Invoked with name=vm.unprivileged_userfaultfd reload=True state=present sysctl_file=/etc/sysctl.d/99-tripleo-postcopy.conf sysctl_set=True value=1 ignoreerrors=False Feb 20 02:51:04 localhost python3[49934]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ksm.service || systemctl is-enabled ksm.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 02:51:05 localhost python3[49952]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None Feb 20 02:51:05 localhost python3[49952]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 --format json Feb 20 02:51:05 localhost python3[49952]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 -q --tls-verify=false Feb 20 02:51:07 localhost sshd[50003]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:51:10 localhost sshd[50018]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:51:12 localhost podman[49965]: 2026-02-20 07:51:05.794787219 +0000 UTC m=+0.048291552 image pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Feb 20 02:51:12 localhost python3[49952]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 
591bb9fb46a70e9f840f28502388406078442df6b6701a3c17990ee75e333673 --format json Feb 20 02:51:13 localhost python3[50071]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None Feb 20 02:51:13 localhost python3[50071]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 --format json Feb 20 02:51:13 localhost python3[50071]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 -q --tls-verify=false Feb 20 02:51:20 localhost podman[50084]: 2026-02-20 07:51:13.400671494 +0000 UTC m=+0.041715064 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 Feb 20 02:51:20 localhost python3[50071]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect d59b33e7fb841c47a47a12b18fb68b11debd968b4596c63f3177ecc7400fb1bc --format json Feb 20 02:51:21 localhost python3[50187]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None 
auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None Feb 20 02:51:21 localhost python3[50187]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 --format json Feb 20 02:51:21 localhost python3[50187]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 -q --tls-verify=false Feb 20 02:51:29 localhost podman[50409]: 2026-02-20 07:51:29.834498499 +0000 UTC m=+0.093835791 container exec 17bcd3d9a6436ea04160ed4c4e04d839e51c8678402dc4e38301f79a98b3dc30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , GIT_CLEAN=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., ceph=True, version=7, build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, vcs-type=git, release=1770267347, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14) Feb 20 02:51:29 localhost podman[50409]: 2026-02-20 07:51:29.963867047 +0000 UTC 
m=+0.223204339 container exec_died 17bcd3d9a6436ea04160ed4c4e04d839e51c8678402dc4e38301f79a98b3dc30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202, io.buildah.version=1.42.2, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , RELEASE=main, release=1770267347, com.redhat.component=rhceph-container, architecture=x86_64, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7) Feb 20 02:51:32 localhost sshd[50768]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:51:35 localhost podman[50199]: 2026-02-20 07:51:21.149429971 +0000 UTC m=+0.047350243 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Feb 20 02:51:35 localhost python3[50187]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 6eddd23e1e6adfbfa713a747123707c02f92ffdbf1913da92f171aba1d6d7856 --format json Feb 20 02:51:36 localhost python3[51536]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 validate_certs=False 
tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None Feb 20 02:51:36 localhost python3[51536]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 --format json Feb 20 02:51:36 localhost python3[51536]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 -q --tls-verify=false Feb 20 02:51:48 localhost podman[51550]: 2026-02-20 07:51:36.21615868 +0000 UTC m=+0.044071760 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Feb 20 02:51:48 localhost python3[51536]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 2c8610235afe953aa46efb141a5a988799548b22280d65a7e7ab21889422df37 --format json Feb 20 02:51:49 localhost python3[51786]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None Feb 20 02:51:49 localhost python3[51786]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 --format json Feb 20 02:51:49 localhost python3[51786]: 
ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 -q --tls-verify=false Feb 20 02:51:52 localhost sshd[51837]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:51:52 localhost sshd[51839]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:51:57 localhost podman[51799]: 2026-02-20 07:51:49.446109226 +0000 UTC m=+0.025060446 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 Feb 20 02:51:57 localhost python3[51786]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 9ab5aab6d0c3ec80926032b7acf4cec1d4710f1c2daccd17ae4daa64399ec237 --format json Feb 20 02:51:57 localhost python3[51894]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None Feb 20 02:51:57 localhost python3[51894]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 --format json Feb 20 02:51:57 localhost python3[51894]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 -q --tls-verify=false Feb 20 02:52:02 localhost podman[51908]: 2026-02-20 07:51:57.834407637 +0000 UTC m=+0.032361392 image pull registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 Feb 20 02:52:02 localhost python3[51894]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 
4853142d85dba3766b28d28ae195b26f7242230fe3646e9590a7aee2dc2e0dfa --format json Feb 20 02:52:02 localhost python3[51984]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None Feb 20 02:52:02 localhost python3[51984]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 --format json Feb 20 02:52:02 localhost python3[51984]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 -q --tls-verify=false Feb 20 02:52:05 localhost podman[51996]: 2026-02-20 07:52:02.943175392 +0000 UTC m=+0.049972596 image pull registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 Feb 20 02:52:05 localhost python3[51984]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 9ac6ea63c0fb4851145e847f9ced2f20804afc8472907b63a82d5866f5cf608a --format json Feb 20 02:52:05 localhost python3[52075]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None Feb 20 
02:52:05 localhost python3[52075]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 --format json Feb 20 02:52:05 localhost python3[52075]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 -q --tls-verify=false Feb 20 02:52:06 localhost sshd[52101]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:52:07 localhost podman[52088]: 2026-02-20 07:52:05.652793081 +0000 UTC m=+0.042086335 image pull registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 Feb 20 02:52:07 localhost python3[52075]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect ba1a08ea1c1207b471b1f02cee16ff456b8a812662cce16906d16de330a66d63 --format json Feb 20 02:52:09 localhost python3[52166]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None Feb 20 02:52:09 localhost python3[52166]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 --format json Feb 20 02:52:09 localhost python3[52166]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 -q --tls-verify=false Feb 20 02:52:12 localhost podman[52178]: 2026-02-20 07:52:09.71691247 +0000 UTC m=+0.033978856 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 Feb 20 
02:52:12 localhost python3[52166]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 8576d3a17e57ea28f29435f132f583320941b5aa7bf0aa02e998b09a094d1fe8 --format json Feb 20 02:52:12 localhost python3[52256]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None Feb 20 02:52:12 localhost python3[52256]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 --format json Feb 20 02:52:12 localhost python3[52256]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 -q --tls-verify=false Feb 20 02:52:16 localhost podman[52268]: 2026-02-20 07:52:12.687873249 +0000 UTC m=+0.042717352 image pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Feb 20 02:52:16 localhost python3[52256]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 7fcbf63c0504494c8fcaa07583f909a06486472a0982aeac9554c6fdbeb04c9a --format json Feb 20 02:52:16 localhost python3[52358]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 
'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None Feb 20 02:52:16 localhost python3[52358]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 --format json Feb 20 02:52:16 localhost python3[52358]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 -q --tls-verify=false Feb 20 02:52:18 localhost podman[52371]: 2026-02-20 07:52:16.971218246 +0000 UTC m=+0.041592473 image pull registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 Feb 20 02:52:19 localhost python3[52358]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 72ddf109f135b64d3116af7b84caaa358dc72e2e60f4c8753fa54fa65b76ba35 --format json Feb 20 02:52:19 localhost python3[52448]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_1 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 20 02:52:21 localhost ansible-async_wrapper.py[52620]: Invoked with 188063216325 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573940.7842624-83497-38246200609178/AnsiballZ_command.py _ Feb 20 02:52:21 localhost ansible-async_wrapper.py[52623]: Starting module and watcher Feb 20 02:52:21 localhost ansible-async_wrapper.py[52623]: Start watching 52624 (3600) Feb 20 02:52:21 localhost ansible-async_wrapper.py[52624]: Start module (52624) Feb 20 02:52:21 localhost ansible-async_wrapper.py[52620]: Return async_wrapper task started. Feb 20 02:52:21 localhost python3[52644]: ansible-ansible.legacy.async_status Invoked with jid=188063216325.52620 mode=status _async_dir=/tmp/.ansible_async Feb 20 02:52:25 localhost puppet-user[52628]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. 
It should be converted to version 5 Feb 20 02:52:25 localhost puppet-user[52628]: (file: /etc/puppet/hiera.yaml) Feb 20 02:52:25 localhost puppet-user[52628]: Warning: Undefined variable '::deploy_config_name'; Feb 20 02:52:25 localhost puppet-user[52628]: (file & line not available) Feb 20 02:52:25 localhost puppet-user[52628]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Feb 20 02:52:25 localhost puppet-user[52628]: (file & line not available) Feb 20 02:52:25 localhost puppet-user[52628]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8) Feb 20 02:52:25 localhost puppet-user[52628]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69) Feb 20 02:52:25 localhost puppet-user[52628]: Notice: Compiled catalog for np0005625202.localdomain in environment production in 0.16 seconds Feb 20 02:52:25 localhost puppet-user[52628]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Exec[directory-create-etc-my.cnf.d]/returns: executed successfully Feb 20 02:52:25 localhost puppet-user[52628]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/File[/etc/my.cnf.d/tripleo.cnf]/ensure: created Feb 20 02:52:25 localhost puppet-user[52628]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully Feb 20 02:52:25 localhost puppet-user[52628]: Notice: Applied catalog in 0.10 seconds Feb 20 02:52:25 localhost puppet-user[52628]: Application: Feb 20 02:52:25 localhost puppet-user[52628]: Initial environment: production Feb 20 02:52:25 localhost puppet-user[52628]: Converged environment: production Feb 20 02:52:25 localhost puppet-user[52628]: Run mode: user Feb 20 02:52:25 localhost puppet-user[52628]: 
Changes: Feb 20 02:52:25 localhost puppet-user[52628]: Total: 3 Feb 20 02:52:25 localhost puppet-user[52628]: Events: Feb 20 02:52:25 localhost puppet-user[52628]: Success: 3 Feb 20 02:52:25 localhost puppet-user[52628]: Total: 3 Feb 20 02:52:25 localhost puppet-user[52628]: Resources: Feb 20 02:52:25 localhost puppet-user[52628]: Changed: 3 Feb 20 02:52:25 localhost puppet-user[52628]: Out of sync: 3 Feb 20 02:52:25 localhost puppet-user[52628]: Total: 10 Feb 20 02:52:25 localhost puppet-user[52628]: Time: Feb 20 02:52:25 localhost puppet-user[52628]: Filebucket: 0.00 Feb 20 02:52:25 localhost puppet-user[52628]: Schedule: 0.00 Feb 20 02:52:25 localhost puppet-user[52628]: File: 0.00 Feb 20 02:52:25 localhost puppet-user[52628]: Exec: 0.01 Feb 20 02:52:25 localhost puppet-user[52628]: Augeas: 0.06 Feb 20 02:52:25 localhost puppet-user[52628]: Transaction evaluation: 0.09 Feb 20 02:52:25 localhost puppet-user[52628]: Catalog application: 0.10 Feb 20 02:52:25 localhost puppet-user[52628]: Config retrieval: 0.21 Feb 20 02:52:25 localhost puppet-user[52628]: Last run: 1771573945 Feb 20 02:52:25 localhost puppet-user[52628]: Total: 0.10 Feb 20 02:52:25 localhost puppet-user[52628]: Version: Feb 20 02:52:25 localhost puppet-user[52628]: Config: 1771573945 Feb 20 02:52:25 localhost puppet-user[52628]: Puppet: 7.10.0 Feb 20 02:52:25 localhost ansible-async_wrapper.py[52624]: Module complete (52624) Feb 20 02:52:26 localhost ansible-async_wrapper.py[52623]: Done in kid B. 
Feb 20 02:52:32 localhost python3[52771]: ansible-ansible.legacy.async_status Invoked with jid=188063216325.52620 mode=status _async_dir=/tmp/.ansible_async Feb 20 02:52:32 localhost python3[52787]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Feb 20 02:52:33 localhost python3[52803]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 20 02:52:33 localhost python3[52851]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 02:52:34 localhost python3[52894]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/container-puppet/puppetlabs/facter.conf setype=svirt_sandbox_file_t selevel=s0 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573953.4333942-83875-251386423875295/source _original_basename=tmpejklz34t follow=False checksum=53908622cb869db5e2e2a68e737aa2ab1a872111 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None attributes=None Feb 20 02:52:34 localhost sshd[52925]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:52:34 localhost python3[52924]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False 
_original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:52:36 localhost python3[53073]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None Feb 20 02:52:36 localhost python3[53092]: ansible-file Invoked with path=/var/lib/tripleo-config/container-puppet-config mode=448 recurse=True setype=container_file_t force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 20 02:52:37 localhost python3[53108]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=False puppet_config=/var/lib/container-puppet/container-puppet.json short_hostname=np0005625202 step=1 update_config_hash_only=False Feb 20 02:52:37 localhost python3[53124]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None 
attributes=None Feb 20 02:52:38 localhost sshd[53129]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:52:38 localhost python3[53141]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_1 config_pattern=container-puppet-*.json config_overrides={} debug=True Feb 20 02:52:39 localhost python3[53158]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None Feb 20 02:52:39 localhost python3[53200]: ansible-tripleo_container_manage Invoked with config_id=tripleo_puppet_step1 config_dir=/var/lib/tripleo-config/container-puppet-config/step_1 config_patterns=container-puppet-*.json config_overrides={} concurrency=6 log_base_path=/var/log/containers/stdouts debug=False Feb 20 02:52:40 localhost podman[53355]: 2026-02-20 07:52:40.280574803 +0000 UTC m=+0.081331368 container create 8a9ad986c241526fad2cf345c81d310f4e05afc2bd33d72b9414fbc10d387176 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, name=rhosp-rhel9/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, container_name=container-puppet-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_puppet_step1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, release=1766032510, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, build-date=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}) Feb 20 02:52:40 localhost podman[53392]: 2026-02-20 07:52:40.312939717 +0000 UTC m=+0.072420580 container create d7a2a217e25b0f33eb5242180313bc2582ac679366075b508b167037632646be (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, 
name=container-puppet-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, io.openshift.expose-services=, url=https://www.redhat.com, io.buildah.version=1.41.5, config_id=tripleo_puppet_step1, org.opencontainers.image.created=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, batch=17.1_20260112.1, version=17.1.13, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp-rhel9/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, container_name=container-puppet-collectd) Feb 20 02:52:40 localhost systemd[1]: Started libpod-conmon-8a9ad986c241526fad2cf345c81d310f4e05afc2bd33d72b9414fbc10d387176.scope. Feb 20 02:52:40 localhost systemd[1]: Started libpod-conmon-d7a2a217e25b0f33eb5242180313bc2582ac679366075b508b167037632646be.scope. Feb 20 02:52:40 localhost systemd[1]: Started libcrun container. Feb 20 02:52:40 localhost systemd[1]: Started libcrun container. 
Feb 20 02:52:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/510b2333d3b26b9741758bde44c866cfc115ef5ee7ed5f374c439f31cb23c6ee/merged/tmp/iscsi.host supports timestamps until 2038 (0x7fffffff) Feb 20 02:52:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/510b2333d3b26b9741758bde44c866cfc115ef5ee7ed5f374c439f31cb23c6ee/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Feb 20 02:52:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bb7c653b8391bad6fee73ed98f3b53e3f877ae1d16c587da33487041fbee72e/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Feb 20 02:52:40 localhost podman[53355]: 2026-02-20 07:52:40.246255151 +0000 UTC m=+0.047011726 image pull registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 Feb 20 02:52:40 localhost podman[53415]: 2026-02-20 07:52:40.352518628 +0000 UTC m=+0.079114001 container create 09eddf8d2a3327275e08715543eb8d8dcd1ffb1d0607e508c05475611cf6cfba (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, architecture=x86_64, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude 
tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-libvirt-container, container_name=container-puppet-nova_libvirt, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:31:49Z, org.opencontainers.image.created=2026-01-12T23:31:49Z, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_puppet_step1, name=rhosp-rhel9/openstack-nova-libvirt, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20260112.1) Feb 20 02:52:40 localhost podman[53392]: 2026-02-20 07:52:40.283309015 +0000 UTC m=+0.042789888 image pull registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 Feb 20 02:52:40 localhost systemd[1]: Started libpod-conmon-09eddf8d2a3327275e08715543eb8d8dcd1ffb1d0607e508c05475611cf6cfba.scope. Feb 20 02:52:40 localhost podman[53416]: 2026-02-20 07:52:40.390521671 +0000 UTC m=+0.111977189 container create 2680c422ce90902583040fafb278bd8fbf1419b41b0e0b23406eefd5f8133c27 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, release=1766032510, maintainer=OpenStack TripleO Team, architecture=x86_64, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.13, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, config_id=tripleo_puppet_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, container_name=container-puppet-metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:14Z, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:14Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 20 02:52:40 localhost podman[53355]: 2026-02-20 07:52:40.403079068 +0000 UTC m=+0.203835643 container init 8a9ad986c241526fad2cf345c81d310f4e05afc2bd33d72b9414fbc10d387176 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, 
version=17.1.13, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, config_id=tripleo_puppet_step1, com.redhat.component=openstack-iscsid-container, architecture=x86_64, build-date=2026-01-12T22:34:43Z, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-type=git, container_name=container-puppet-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', 
'/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 02:52:40 localhost podman[53415]: 2026-02-20 07:52:40.315719101 +0000 UTC m=+0.042314484 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Feb 20 02:52:40 localhost systemd[1]: Started libcrun container. Feb 20 02:52:40 localhost systemd[1]: Started libpod-conmon-2680c422ce90902583040fafb278bd8fbf1419b41b0e0b23406eefd5f8133c27.scope. 
Feb 20 02:52:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23efe7e57a27c4f0ce8bc52bd63242119e7b6e66fedfb0710e6330683936d6b4/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Feb 20 02:52:40 localhost podman[53355]: 2026-02-20 07:52:40.434245296 +0000 UTC m=+0.235001861 container start 8a9ad986c241526fad2cf345c81d310f4e05afc2bd33d72b9414fbc10d387176 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, vcs-type=git, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_puppet_step1, batch=17.1_20260112.1, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-iscsid, version=17.1.13, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', 
'/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, build-date=2026-01-12T22:34:43Z, vcs-ref=705339545363fec600102567c4e923938e0f43b3, url=https://www.redhat.com, io.buildah.version=1.41.5, container_name=container-puppet-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 02:52:40 localhost podman[53355]: 2026-02-20 07:52:40.434452972 +0000 UTC m=+0.235209577 container attach 8a9ad986c241526fad2cf345c81d310f4e05afc2bd33d72b9414fbc10d387176 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.expose-services=, container_name=container-puppet-iscsid, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, vcs-ref=705339545363fec600102567c4e923938e0f43b3, summary=Red Hat OpenStack 
Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_puppet_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, maintainer=OpenStack TripleO Team, release=1766032510, name=rhosp-rhel9/openstack-iscsid, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, distribution-scope=public, build-date=2026-01-12T22:34:43Z) 
Feb 20 02:52:40 localhost podman[53415]: 2026-02-20 07:52:40.44036551 +0000 UTC m=+0.166960893 container init 09eddf8d2a3327275e08715543eb8d8dcd1ffb1d0607e508c05475611cf6cfba (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-libvirt-container, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.13, distribution-scope=public, vcs-type=git, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, build-date=2026-01-12T23:31:49Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_puppet_step1, architecture=x86_64, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-nova-libvirt, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:31:49Z, io.buildah.version=1.41.5, container_name=container-puppet-nova_libvirt, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 
'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 02:52:40 localhost podman[53415]: 2026-02-20 07:52:40.447298999 +0000 UTC m=+0.173894372 container start 09eddf8d2a3327275e08715543eb8d8dcd1ffb1d0607e508c05475611cf6cfba (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, 
batch=17.1_20260112.1, com.redhat.component=openstack-nova-libvirt-container, config_id=tripleo_puppet_step1, url=https://www.redhat.com, build-date=2026-01-12T23:31:49Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-libvirt, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=container-puppet-nova_libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, version=17.1.13, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:31:49Z, distribution-scope=public, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably 
treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vendor=Red Hat, Inc.) 
Feb 20 02:52:40 localhost podman[53415]: 2026-02-20 07:52:40.447508275 +0000 UTC m=+0.174103668 container attach 09eddf8d2a3327275e08715543eb8d8dcd1ffb1d0607e508c05475611cf6cfba (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, vcs-type=git, batch=17.1_20260112.1, io.buildah.version=1.41.5, container_name=container-puppet-nova_libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1766032510, vendor=Red Hat, Inc., version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, managed_by=tripleo_ansible, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude 
tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:31:49Z, org.opencontainers.image.created=2026-01-12T23:31:49Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, config_id=tripleo_puppet_step1, name=rhosp-rhel9/openstack-nova-libvirt, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-libvirt) Feb 20 02:52:40 localhost systemd[1]: Started libcrun container. 
Feb 20 02:52:40 localhost podman[53414]: 2026-02-20 07:52:40.453564187 +0000 UTC m=+0.183894292 container create da937c122ae9ec974296da72995889983984b71a5bb38194e619c17ee7d51352 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, architecture=x86_64, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_puppet_step1, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 cron, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, io.buildah.version=1.41.5, com.redhat.component=openstack-cron-container, version=17.1.13, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, vcs-type=git, container_name=container-puppet-crond, io.openshift.expose-services=, vendor=Red Hat, Inc.) Feb 20 02:52:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0c636c55771df3938b7f1637ba1e09d65433942af38d5ac503d6cf3fb2c1e8f/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Feb 20 02:52:40 localhost podman[53416]: 2026-02-20 07:52:40.365179049 +0000 UTC m=+0.086634587 image pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Feb 20 02:52:40 localhost podman[53414]: 2026-02-20 07:52:40.371935972 +0000 UTC m=+0.102266097 image pull registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 Feb 20 02:52:41 localhost systemd[1]: Started libpod-conmon-da937c122ae9ec974296da72995889983984b71a5bb38194e619c17ee7d51352.scope. Feb 20 02:52:41 localhost systemd[1]: Started libcrun container. 
Feb 20 02:52:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5ba41a88f6fbd1db9cf9dc9638be96d2c3f12ce29d9389a5988c114e992de06/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Feb 20 02:52:41 localhost podman[53392]: 2026-02-20 07:52:41.362447878 +0000 UTC m=+1.121928771 container init d7a2a217e25b0f33eb5242180313bc2582ac679366075b508b167037632646be (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-collectd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.13, build-date=2026-01-12T22:10:15Z, config_id=tripleo_puppet_step1, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-type=git, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, container_name=container-puppet-collectd, tcib_managed=true) Feb 20 02:52:41 localhost systemd[1]: tmp-crun.JJL3uI.mount: Deactivated successfully. 
Feb 20 02:52:41 localhost podman[53416]: 2026-02-20 07:52:41.37182675 +0000 UTC m=+1.093282268 container init 2680c422ce90902583040fafb278bd8fbf1419b41b0e0b23406eefd5f8133c27 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, name=rhosp-rhel9/openstack-qdrouterd, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-qdrouterd-container, build-date=2026-01-12T22:10:14Z, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, 
maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, config_id=tripleo_puppet_step1, vcs-type=git, batch=17.1_20260112.1, io.buildah.version=1.41.5, io.openshift.expose-services=, container_name=container-puppet-metrics_qdr, vendor=Red Hat, Inc., version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Feb 20 02:52:41 localhost podman[53392]: 2026-02-20 07:52:41.376039456 +0000 UTC m=+1.135520319 container start d7a2a217e25b0f33eb5242180313bc2582ac679366075b508b167037632646be (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=container-puppet-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, release=1766032510, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, batch=17.1_20260112.1, vcs-type=git, tcib_managed=true, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-collectd, config_id=tripleo_puppet_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc.) 
Feb 20 02:52:41 localhost podman[53392]: 2026-02-20 07:52:41.378427878 +0000 UTC m=+1.137908821 container attach d7a2a217e25b0f33eb5242180313bc2582ac679366075b508b167037632646be (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_puppet_step1, container_name=container-puppet-collectd, version=17.1.13, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, 
description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, distribution-scope=public, release=1766032510, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, architecture=x86_64, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-collectd, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 20 02:52:41 localhost systemd[1]: tmp-crun.GOYpfU.mount: Deactivated successfully. Feb 20 02:52:41 localhost podman[53416]: 2026-02-20 07:52:41.389279275 +0000 UTC m=+1.110734823 container start 2680c422ce90902583040fafb278bd8fbf1419b41b0e0b23406eefd5f8133c27 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, build-date=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, config_id=tripleo_puppet_step1, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': 
False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=container-puppet-metrics_qdr, distribution-scope=public, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 02:52:41 localhost podman[53416]: 2026-02-20 07:52:41.389748668 +0000 UTC m=+1.111204276 container attach 
2680c422ce90902583040fafb278bd8fbf1419b41b0e0b23406eefd5f8133c27 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, name=rhosp-rhel9/openstack-qdrouterd, build-date=2026-01-12T22:10:14Z, container_name=container-puppet-metrics_qdr, vendor=Red Hat, Inc., config_id=tripleo_puppet_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, architecture=x86_64, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.5, release=1766032510, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z) Feb 20 02:52:41 localhost podman[53414]: 2026-02-20 07:52:41.400483372 +0000 UTC m=+1.130813517 container init da937c122ae9ec974296da72995889983984b71a5bb38194e619c17ee7d51352 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., container_name=container-puppet-crond, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, 
config_id=tripleo_puppet_step1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, name=rhosp-rhel9/openstack-cron, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, batch=17.1_20260112.1, version=17.1.13, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, distribution-scope=public) Feb 20 02:52:41 localhost podman[53414]: 2026-02-20 07:52:41.411811232 +0000 UTC m=+1.142141377 container start da937c122ae9ec974296da72995889983984b71a5bb38194e619c17ee7d51352 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, name=rhosp-rhel9/openstack-cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
org.opencontainers.image.created=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1766032510, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.component=openstack-cron-container, config_id=tripleo_puppet_step1, version=17.1.13, vendor=Red Hat, Inc., 
maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, vcs-type=git, container_name=container-puppet-crond, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Feb 20 02:52:41 localhost podman[53414]: 2026-02-20 07:52:41.412091521 +0000 UTC m=+1.142421666 container attach da937c122ae9ec974296da72995889983984b71a5bb38194e619c17ee7d51352 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, vendor=Red Hat, Inc., config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', 
'/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.expose-services=, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, architecture=x86_64, container_name=container-puppet-crond, config_id=tripleo_puppet_step1, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, name=rhosp-rhel9/openstack-cron, batch=17.1_20260112.1, url=https://www.redhat.com, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team) Feb 20 02:52:43 localhost podman[53274]: 2026-02-20 07:52:40.151127509 +0000 UTC m=+0.040253991 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1 Feb 20 02:52:43 localhost ovs-vsctl[53712]: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory) Feb 20 02:52:43 localhost puppet-user[53537]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. 
It should be converted to version 5 Feb 20 02:52:43 localhost puppet-user[53537]: (file: /etc/puppet/hiera.yaml) Feb 20 02:52:43 localhost puppet-user[53537]: Warning: Undefined variable '::deploy_config_name'; Feb 20 02:52:43 localhost puppet-user[53537]: (file & line not available) Feb 20 02:52:43 localhost puppet-user[53537]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Feb 20 02:52:43 localhost puppet-user[53537]: (file & line not available) Feb 20 02:52:43 localhost podman[53781]: 2026-02-20 07:52:43.187763676 +0000 UTC m=+0.063800820 container create c4ea6ef284e8a50d14adb863f7d4026f24e8a16f7e7cc72b18d1c6503bcca49c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_puppet_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, vendor=Red Hat, Inc., architecture=x86_64, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-central, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-central-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, release=1766032510, name=rhosp-rhel9/openstack-ceilometer-central, url=https://www.redhat.com, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, vcs-type=git, container_name=container-puppet-ceilometer, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:07:24Z, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:07:24Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 20 02:52:43 localhost puppet-user[53537]: Notice: Accepting previously invalid value for target type 'Integer' Feb 20 02:52:43 localhost puppet-user[53506]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. 
It should be converted to version 5 Feb 20 02:52:43 localhost puppet-user[53506]: (file: /etc/puppet/hiera.yaml) Feb 20 02:52:43 localhost puppet-user[53506]: Warning: Undefined variable '::deploy_config_name'; Feb 20 02:52:43 localhost puppet-user[53506]: (file & line not available) Feb 20 02:52:43 localhost puppet-user[53505]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Feb 20 02:52:43 localhost puppet-user[53505]: (file: /etc/puppet/hiera.yaml) Feb 20 02:52:43 localhost puppet-user[53505]: Warning: Undefined variable '::deploy_config_name'; Feb 20 02:52:43 localhost puppet-user[53505]: (file & line not available) Feb 20 02:52:43 localhost puppet-user[53547]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Feb 20 02:52:43 localhost puppet-user[53547]: (file: /etc/puppet/hiera.yaml) Feb 20 02:52:43 localhost puppet-user[53537]: Notice: Compiled catalog for np0005625202.localdomain in environment production in 0.11 seconds Feb 20 02:52:43 localhost puppet-user[53547]: Warning: Undefined variable '::deploy_config_name'; Feb 20 02:52:43 localhost puppet-user[53547]: (file & line not available) Feb 20 02:52:43 localhost systemd[1]: Started libpod-conmon-c4ea6ef284e8a50d14adb863f7d4026f24e8a16f7e7cc72b18d1c6503bcca49c.scope. Feb 20 02:52:43 localhost puppet-user[53506]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Feb 20 02:52:43 localhost puppet-user[53506]: (file & line not available) Feb 20 02:52:43 localhost systemd[1]: Started libcrun container. 
Feb 20 02:52:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02f694049235d5fdc28df409377b9c3a853a57405bcbd0a4d5b6ef2444b51ca4/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Feb 20 02:52:43 localhost puppet-user[53505]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Feb 20 02:52:43 localhost puppet-user[53505]: (file & line not available) Feb 20 02:52:43 localhost podman[53781]: 2026-02-20 07:52:43.252819243 +0000 UTC m=+0.128856387 container init c4ea6ef284e8a50d14adb863f7d4026f24e8a16f7e7cc72b18d1c6503bcca49c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=container-puppet-ceilometer, org.opencontainers.image.created=2026-01-12T23:07:24Z, release=1766032510, config_id=tripleo_puppet_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, com.redhat.component=openstack-ceilometer-central-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, version=17.1.13, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, tcib_managed=true, distribution-scope=public, vendor=Red Hat, Inc., build-date=2026-01-12T23:07:24Z, batch=17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude 
tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-central, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-central, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 02:52:43 localhost puppet-user[53547]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. 
See https://puppet.com/docs/puppet/7.10/deprecated_language.html Feb 20 02:52:43 localhost puppet-user[53547]: (file & line not available) Feb 20 02:52:43 localhost puppet-user[53537]: Notice: /Stage[main]/Qdr::Config/File[/var/lib/qdrouterd]/owner: owner changed 'qdrouterd' to 'root' Feb 20 02:52:43 localhost podman[53781]: 2026-02-20 07:52:43.156581318 +0000 UTC m=+0.032618482 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1 Feb 20 02:52:43 localhost puppet-user[53537]: Notice: /Stage[main]/Qdr::Config/File[/var/lib/qdrouterd]/group: group changed 'qdrouterd' to 'root' Feb 20 02:52:43 localhost puppet-user[53537]: Notice: /Stage[main]/Qdr::Config/File[/var/lib/qdrouterd]/mode: mode changed '0700' to '0755' Feb 20 02:52:43 localhost podman[53781]: 2026-02-20 07:52:43.260227296 +0000 UTC m=+0.136264440 container start c4ea6ef284e8a50d14adb863f7d4026f24e8a16f7e7cc72b18d1c6503bcca49c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, config_id=tripleo_puppet_step1, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, vendor=Red Hat, Inc., batch=17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:24Z, distribution-scope=public, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude 
tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, maintainer=OpenStack TripleO Team, container_name=container-puppet-ceilometer, release=1766032510, version=17.1.13, build-date=2026-01-12T23:07:24Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-central, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ceilometer-central-container, name=rhosp-rhel9/openstack-ceilometer-central, tcib_managed=true) Feb 20 02:52:43 localhost podman[53781]: 2026-02-20 07:52:43.260416752 +0000 UTC m=+0.136453896 container attach 
c4ea6ef284e8a50d14adb863f7d4026f24e8a16f7e7cc72b18d1c6503bcca49c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, tcib_managed=true, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-ceilometer-central-container, container_name=container-puppet-ceilometer, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, name=rhosp-rhel9/openstack-ceilometer-central, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., build-date=2026-01-12T23:07:24Z, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:24Z, url=https://www.redhat.com, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ceilometer-central, batch=17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_puppet_step1, version=17.1.13, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude 
tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}) Feb 20 02:52:43 localhost puppet-user[53537]: Notice: /Stage[main]/Qdr::Config/File[/etc/qpid-dispatch/ssl]/ensure: created Feb 20 02:52:43 localhost puppet-user[53537]: Notice: /Stage[main]/Qdr::Config/File[qdrouterd.conf]/content: content changed '{sha256}89e10d8896247f992c5f0baf027c25a8ca5d0441be46d8859d9db2067ea74cd3' to '{sha256}c70f714c3fd7955f3fe902b4f2e47cf6277434b83af45bd0b1815db8f91156ca' Feb 20 02:52:43 localhost puppet-user[53537]: Notice: /Stage[main]/Qdr::Config/File[/var/log/qdrouterd]/ensure: created Feb 20 02:52:43 localhost puppet-user[53537]: Notice: /Stage[main]/Qdr::Config/File[/var/log/qdrouterd/metrics_qdr.log]/ensure: created Feb 20 02:52:43 localhost puppet-user[53535]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. 
It should be converted to version 5 Feb 20 02:52:43 localhost puppet-user[53535]: (file: /etc/puppet/hiera.yaml) Feb 20 02:52:43 localhost puppet-user[53535]: Warning: Undefined variable '::deploy_config_name'; Feb 20 02:52:43 localhost puppet-user[53535]: (file & line not available) Feb 20 02:52:43 localhost puppet-user[53547]: Notice: Compiled catalog for np0005625202.localdomain in environment production in 0.08 seconds Feb 20 02:52:43 localhost puppet-user[53506]: Notice: Compiled catalog for np0005625202.localdomain in environment production in 0.12 seconds Feb 20 02:52:43 localhost puppet-user[53537]: Notice: Applied catalog in 0.05 seconds Feb 20 02:52:43 localhost puppet-user[53537]: Application: Feb 20 02:52:43 localhost puppet-user[53537]: Initial environment: production Feb 20 02:52:43 localhost puppet-user[53537]: Converged environment: production Feb 20 02:52:43 localhost puppet-user[53537]: Run mode: user Feb 20 02:52:43 localhost puppet-user[53537]: Changes: Feb 20 02:52:43 localhost puppet-user[53537]: Total: 7 Feb 20 02:52:43 localhost puppet-user[53537]: Events: Feb 20 02:52:43 localhost puppet-user[53537]: Success: 7 Feb 20 02:52:43 localhost puppet-user[53537]: Total: 7 Feb 20 02:52:43 localhost puppet-user[53537]: Resources: Feb 20 02:52:43 localhost puppet-user[53537]: Skipped: 13 Feb 20 02:52:43 localhost puppet-user[53537]: Changed: 5 Feb 20 02:52:43 localhost puppet-user[53537]: Out of sync: 5 Feb 20 02:52:43 localhost puppet-user[53537]: Total: 20 Feb 20 02:52:43 localhost puppet-user[53537]: Time: Feb 20 02:52:43 localhost puppet-user[53537]: File: 0.01 Feb 20 02:52:43 localhost puppet-user[53537]: Transaction evaluation: 0.02 Feb 20 02:52:43 localhost puppet-user[53537]: Catalog application: 0.05 Feb 20 02:52:43 localhost puppet-user[53537]: Config retrieval: 0.14 Feb 20 02:52:43 localhost puppet-user[53537]: Last run: 1771573963 Feb 20 02:52:43 localhost puppet-user[53537]: Total: 0.05 Feb 20 02:52:43 localhost puppet-user[53537]: 
Version: Feb 20 02:52:43 localhost puppet-user[53537]: Config: 1771573963 Feb 20 02:52:43 localhost puppet-user[53537]: Puppet: 7.10.0 Feb 20 02:52:43 localhost puppet-user[53535]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Feb 20 02:52:43 localhost puppet-user[53535]: (file & line not available) Feb 20 02:52:43 localhost puppet-user[53547]: Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/File[/etc/logrotate-crond.conf]/ensure: defined content as '{sha256}1c3202f58bd2ae16cb31badcbb7f0d4e6697157b987d1887736ad96bb73d70b0' Feb 20 02:52:43 localhost puppet-user[53547]: Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/Cron[logrotate-crond]/ensure: created Feb 20 02:52:43 localhost puppet-user[53547]: Notice: Applied catalog in 0.04 seconds Feb 20 02:52:43 localhost puppet-user[53547]: Application: Feb 20 02:52:43 localhost puppet-user[53547]: Initial environment: production Feb 20 02:52:43 localhost puppet-user[53547]: Converged environment: production Feb 20 02:52:43 localhost puppet-user[53547]: Run mode: user Feb 20 02:52:43 localhost puppet-user[53547]: Changes: Feb 20 02:52:43 localhost puppet-user[53547]: Total: 2 Feb 20 02:52:43 localhost puppet-user[53547]: Events: Feb 20 02:52:43 localhost puppet-user[53547]: Success: 2 Feb 20 02:52:43 localhost puppet-user[53547]: Total: 2 Feb 20 02:52:43 localhost puppet-user[53547]: Resources: Feb 20 02:52:43 localhost puppet-user[53547]: Changed: 2 Feb 20 02:52:43 localhost puppet-user[53547]: Out of sync: 2 Feb 20 02:52:43 localhost puppet-user[53547]: Skipped: 7 Feb 20 02:52:43 localhost puppet-user[53547]: Total: 9 Feb 20 02:52:43 localhost puppet-user[53547]: Time: Feb 20 02:52:43 localhost puppet-user[53547]: Cron: 0.01 Feb 20 02:52:43 localhost puppet-user[53547]: File: 0.02 Feb 20 02:52:43 localhost puppet-user[53547]: Transaction evaluation: 0.03 Feb 20 02:52:43 localhost puppet-user[53547]: 
Catalog application: 0.04 Feb 20 02:52:43 localhost puppet-user[53547]: Config retrieval: 0.10 Feb 20 02:52:43 localhost puppet-user[53547]: Last run: 1771573963 Feb 20 02:52:43 localhost puppet-user[53547]: Total: 0.04 Feb 20 02:52:43 localhost puppet-user[53547]: Version: Feb 20 02:52:43 localhost puppet-user[53547]: Config: 1771573963 Feb 20 02:52:43 localhost puppet-user[53547]: Puppet: 7.10.0 Feb 20 02:52:43 localhost puppet-user[53506]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Exec[reset-iscsi-initiator-name]/returns: executed successfully Feb 20 02:52:43 localhost puppet-user[53506]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/File[/etc/iscsi/.initiator_reset]/ensure: created Feb 20 02:52:43 localhost puppet-user[53506]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Exec[sync-iqn-to-host]/returns: executed successfully Feb 20 02:52:43 localhost puppet-user[53505]: Warning: Scope(Class[Nova]): The os_region_name parameter is deprecated and will be removed \ Feb 20 02:52:43 localhost puppet-user[53505]: in a future release. Use nova::cinder::os_region_name instead Feb 20 02:52:43 localhost puppet-user[53505]: Warning: Scope(Class[Nova]): The catalog_info parameter is deprecated and will be removed \ Feb 20 02:52:43 localhost puppet-user[53505]: in a future release. Use nova::cinder::catalog_info instead Feb 20 02:52:43 localhost puppet-user[53505]: Warning: Unknown variable: '::nova::compute::verify_glance_signatures'. (file: /etc/puppet/modules/nova/manifests/glance.pp, line: 62, column: 41) Feb 20 02:52:43 localhost systemd[1]: libpod-2680c422ce90902583040fafb278bd8fbf1419b41b0e0b23406eefd5f8133c27.scope: Deactivated successfully. Feb 20 02:52:43 localhost systemd[1]: libpod-2680c422ce90902583040fafb278bd8fbf1419b41b0e0b23406eefd5f8133c27.scope: Consumed 2.074s CPU time. 
Feb 20 02:52:43 localhost podman[53416]: 2026-02-20 07:52:43.619342219 +0000 UTC m=+3.340797797 container died 2680c422ce90902583040fafb278bd8fbf1419b41b0e0b23406eefd5f8133c27 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, distribution-scope=public, version=17.1.13, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-type=git, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_puppet_step1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.component=openstack-qdrouterd-container, name=rhosp-rhel9/openstack-qdrouterd, release=1766032510, architecture=x86_64, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=container-puppet-metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc.) Feb 20 02:52:43 localhost puppet-user[53535]: Notice: Compiled catalog for np0005625202.localdomain in environment production in 0.38 seconds Feb 20 02:52:43 localhost puppet-user[53505]: Warning: Unknown variable: '::nova::compute::libvirt::remove_unused_base_images'. (file: /etc/puppet/modules/nova/manifests/compute/image_cache.pp, line: 44, column: 5) Feb 20 02:52:43 localhost puppet-user[53505]: Warning: Unknown variable: '::nova::compute::libvirt::remove_unused_original_minimum_age_seconds'. (file: /etc/puppet/modules/nova/manifests/compute/image_cache.pp, line: 48, column: 5) Feb 20 02:52:43 localhost puppet-user[53505]: Warning: Unknown variable: '::nova::compute::libvirt::remove_unused_resized_minimum_age_seconds'. (file: /etc/puppet/modules/nova/manifests/compute/image_cache.pp, line: 52, column: 5) Feb 20 02:52:43 localhost systemd[1]: libpod-da937c122ae9ec974296da72995889983984b71a5bb38194e619c17ee7d51352.scope: Deactivated successfully. 
Feb 20 02:52:43 localhost systemd[1]: libpod-da937c122ae9ec974296da72995889983984b71a5bb38194e619c17ee7d51352.scope: Consumed 2.100s CPU time. Feb 20 02:52:43 localhost podman[53414]: 2026-02-20 07:52:43.649556518 +0000 UTC m=+3.379886633 container died da937c122ae9ec974296da72995889983984b71a5bb38194e619c17ee7d51352 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=container-puppet-crond, release=1766032510, batch=17.1_20260112.1, tcib_managed=true, config_id=tripleo_puppet_step1, org.opencontainers.image.created=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:15Z, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, architecture=x86_64, managed_by=tripleo_ansible, version=17.1.13, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team) Feb 20 02:52:43 localhost puppet-user[53505]: Warning: Scope(Class[Tripleo::Profile::Base::Nova::Compute]): The keymgr_backend parameter has been deprecated Feb 20 02:52:43 localhost systemd[1]: tmp-crun.APWQva.mount: Deactivated successfully. Feb 20 02:52:43 localhost puppet-user[53505]: Warning: Scope(Class[Nova::Compute]): vcpu_pin_set is deprecated, instead use cpu_dedicated_set or cpu_shared_set. Feb 20 02:52:43 localhost puppet-user[53505]: Warning: Scope(Class[Nova::Compute]): verify_glance_signatures is deprecated. Use the same parameter in nova::glance Feb 20 02:52:43 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2680c422ce90902583040fafb278bd8fbf1419b41b0e0b23406eefd5f8133c27-userdata-shm.mount: Deactivated successfully. 
Feb 20 02:52:43 localhost puppet-user[53535]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/content: content changed '{sha256}aea388a73ebafc7e07a81ddb930a91099211f660eee55fbf92c13007a77501e5' to '{sha256}2523d01ee9c3022c0e9f61d896b1474a168e18472aee141cc278e69fe13f41c1' Feb 20 02:52:43 localhost puppet-user[53535]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/owner: owner changed 'collectd' to 'root' Feb 20 02:52:43 localhost puppet-user[53535]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/group: group changed 'collectd' to 'root' Feb 20 02:52:43 localhost puppet-user[53535]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/mode: mode changed '0644' to '0640' Feb 20 02:52:43 localhost podman[54086]: 2026-02-20 07:52:43.729960357 +0000 UTC m=+0.099306099 container cleanup 2680c422ce90902583040fafb278bd8fbf1419b41b0e0b23406eefd5f8133c27 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, vcs-type=git, name=rhosp-rhel9/openstack-qdrouterd, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, batch=17.1_20260112.1, distribution-scope=public, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_puppet_step1, tcib_managed=true, version=17.1.13, build-date=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=container-puppet-metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, maintainer=OpenStack TripleO Team, 
summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 02:52:43 localhost podman[54101]: 2026-02-20 07:52:43.736645158 +0000 UTC m=+0.079960506 container cleanup da937c122ae9ec974296da72995889983984b71a5bb38194e619c17ee7d51352 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, config_id=tripleo_puppet_step1, tcib_managed=true, 
release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-cron-container, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, container_name=container-puppet-crond, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-type=git, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 02:52:43 localhost systemd[1]: libpod-conmon-2680c422ce90902583040fafb278bd8fbf1419b41b0e0b23406eefd5f8133c27.scope: Deactivated successfully. Feb 20 02:52:43 localhost systemd[1]: libpod-conmon-da937c122ae9ec974296da72995889983984b71a5bb38194e619c17ee7d51352.scope: Deactivated successfully. Feb 20 02:52:43 localhost python3[53200]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-metrics_qdr --conmon-pidfile /run/container-puppet-metrics_qdr.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005625202 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=metrics_qdr --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::metrics::qdr#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-metrics_qdr --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': 
['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-metrics_qdr.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Feb 20 02:52:43 localhost python3[53200]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-crond --conmon-pidfile 
/run/container-puppet-crond.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005625202 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=crond --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::logging::logrotate --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-crond --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-crond.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro 
--volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 Feb 20 02:52:43 localhost puppet-user[53506]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Augeas[chap_algs in /etc/iscsi/iscsid.conf]/returns: executed successfully Feb 20 02:52:43 localhost puppet-user[53506]: Notice: Applied catalog in 0.44 seconds Feb 20 02:52:43 localhost puppet-user[53506]: Application: Feb 20 02:52:43 localhost puppet-user[53506]: Initial environment: production Feb 20 02:52:43 localhost puppet-user[53506]: Converged environment: production Feb 20 02:52:43 localhost puppet-user[53506]: Run mode: user Feb 20 02:52:43 localhost puppet-user[53506]: Changes: Feb 20 02:52:43 localhost puppet-user[53506]: Total: 4 Feb 20 02:52:43 localhost puppet-user[53506]: Events: Feb 20 02:52:43 localhost puppet-user[53506]: Success: 4 Feb 20 02:52:43 localhost puppet-user[53506]: Total: 4 Feb 20 02:52:43 localhost puppet-user[53506]: Resources: Feb 20 02:52:43 localhost puppet-user[53506]: Changed: 4 Feb 20 02:52:43 localhost puppet-user[53506]: Out of sync: 4 Feb 20 02:52:43 localhost puppet-user[53506]: Skipped: 8 Feb 20 02:52:43 localhost puppet-user[53506]: Total: 13 Feb 20 02:52:43 localhost puppet-user[53506]: Time: Feb 20 02:52:43 
localhost puppet-user[53506]: File: 0.00 Feb 20 02:52:43 localhost puppet-user[53506]: Exec: 0.05 Feb 20 02:52:43 localhost puppet-user[53506]: Config retrieval: 0.16 Feb 20 02:52:43 localhost puppet-user[53506]: Augeas: 0.39 Feb 20 02:52:43 localhost puppet-user[53506]: Transaction evaluation: 0.44 Feb 20 02:52:43 localhost puppet-user[53506]: Catalog application: 0.44 Feb 20 02:52:43 localhost puppet-user[53506]: Last run: 1771573963 Feb 20 02:52:43 localhost puppet-user[53506]: Total: 0.44 Feb 20 02:52:43 localhost puppet-user[53506]: Version: Feb 20 02:52:43 localhost puppet-user[53506]: Config: 1771573963 Feb 20 02:52:43 localhost puppet-user[53506]: Puppet: 7.10.0 Feb 20 02:52:43 localhost puppet-user[53535]: Notice: /Stage[main]/Collectd::Config/File[collectd.d]/owner: owner changed 'collectd' to 'root' Feb 20 02:52:43 localhost puppet-user[53535]: Notice: /Stage[main]/Collectd::Config/File[collectd.d]/group: group changed 'collectd' to 'root' Feb 20 02:52:43 localhost puppet-user[53535]: Notice: /Stage[main]/Collectd::Config/File[collectd.d]/mode: mode changed '0755' to '0750' Feb 20 02:52:43 localhost puppet-user[53535]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-cpu.conf]/ensure: removed Feb 20 02:52:43 localhost puppet-user[53535]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-interface.conf]/ensure: removed Feb 20 02:52:43 localhost puppet-user[53535]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-load.conf]/ensure: removed Feb 20 02:52:43 localhost puppet-user[53535]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-memory.conf]/ensure: removed Feb 20 02:52:43 localhost puppet-user[53535]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-syslog.conf]/ensure: removed Feb 20 02:52:43 localhost puppet-user[53535]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/apache.conf]/ensure: removed 
Feb 20 02:52:43 localhost puppet-user[53535]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/dns.conf]/ensure: removed Feb 20 02:52:43 localhost puppet-user[53535]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ipmi.conf]/ensure: removed Feb 20 02:52:43 localhost puppet-user[53535]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/mcelog.conf]/ensure: removed Feb 20 02:52:43 localhost puppet-user[53535]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/mysql.conf]/ensure: removed Feb 20 02:52:43 localhost puppet-user[53535]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ovs-events.conf]/ensure: removed Feb 20 02:52:43 localhost puppet-user[53535]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ovs-stats.conf]/ensure: removed Feb 20 02:52:43 localhost puppet-user[53535]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ping.conf]/ensure: removed Feb 20 02:52:43 localhost puppet-user[53535]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/pmu.conf]/ensure: removed Feb 20 02:52:43 localhost puppet-user[53535]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/rdt.conf]/ensure: removed Feb 20 02:52:43 localhost puppet-user[53535]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/sensors.conf]/ensure: removed Feb 20 02:52:43 localhost puppet-user[53535]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/snmp.conf]/ensure: removed Feb 20 02:52:43 localhost puppet-user[53535]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/write_prometheus.conf]/ensure: removed Feb 20 02:52:43 localhost puppet-user[53535]: Notice: /Stage[main]/Collectd::Plugin::Python/File[/usr/lib/python3.9/site-packages]/mode: mode changed '0755' to '0750' Feb 20 02:52:43 localhost puppet-user[53535]: Notice: /Stage[main]/Collectd::Plugin::Python/Collectd::Plugin[python]/File[python.load]/ensure: defined content as 
'{sha256}0163924a0099dd43fe39cb85e836df147fd2cfee8197dc6866d3c384539eb6ee' Feb 20 02:52:43 localhost puppet-user[53535]: Notice: /Stage[main]/Collectd::Plugin::Python/Concat[/etc/collectd.d/python-config.conf]/File[/etc/collectd.d/python-config.conf]/ensure: defined content as '{sha256}2e5fb20e60b30f84687fc456a37fc62451000d2d85f5bbc1b3fca3a5eac9deeb' Feb 20 02:52:43 localhost puppet-user[53535]: Notice: /Stage[main]/Collectd::Plugin::Logfile/Collectd::Plugin[logfile]/File[logfile.load]/ensure: defined content as '{sha256}07bbda08ef9b824089500bdc6ac5a86e7d1ef2ae3ed4ed423c0559fe6361e5af' Feb 20 02:52:43 localhost puppet-user[53505]: Warning: Scope(Class[Nova::Compute::Libvirt]): nova::compute::libvirt::images_type will be required if rbd ephemeral storage is used. Feb 20 02:52:43 localhost puppet-user[53535]: Notice: /Stage[main]/Collectd::Plugin::Amqp1/Collectd::Plugin[amqp1]/File[amqp1.load]/ensure: defined content as '{sha256}dee3f10cb1ff461ac3f1e743a5ef3f06993398c6c829895de1dae7f242a64b39' Feb 20 02:52:43 localhost puppet-user[53535]: Notice: /Stage[main]/Collectd::Plugin::Ceph/Collectd::Plugin[ceph]/File[ceph.load]/ensure: defined content as '{sha256}c796abffda2e860875295b4fc11cc95c6032b4e13fa8fb128e839a305aa1676c' Feb 20 02:52:43 localhost puppet-user[53535]: Notice: /Stage[main]/Collectd::Plugin::Cpu/Collectd::Plugin[cpu]/File[cpu.load]/ensure: defined content as '{sha256}67d4c8bf6bf5785f4cb6b596712204d9eacbcebbf16fe289907195d4d3cb0e34' Feb 20 02:52:43 localhost puppet-user[53535]: Notice: /Stage[main]/Collectd::Plugin::Df/Collectd::Plugin[df]/File[df.load]/ensure: defined content as '{sha256}edeb4716d96fc9dca2c6adfe07bae70ba08c6af3944a3900581cba0f08f3c4ba' Feb 20 02:52:43 localhost puppet-user[53535]: Notice: /Stage[main]/Collectd::Plugin::Disk/Collectd::Plugin[disk]/File[disk.load]/ensure: defined content as '{sha256}1d0cb838278f3226fcd381f0fc2e0e1abaf0d590f4ba7bcb2fc6ec113d3ebde7' Feb 20 02:52:43 localhost puppet-user[53535]: Notice: 
/Stage[main]/Collectd::Plugin::Hugepages/Collectd::Plugin[hugepages]/File[hugepages.load]/ensure: defined content as '{sha256}9b9f35b65a73da8d4037e4355a23b678f2cf61997ccf7a5e1adf2a7ce6415827' Feb 20 02:52:43 localhost puppet-user[53535]: Notice: /Stage[main]/Collectd::Plugin::Hugepages/Collectd::Plugin[hugepages]/File[older_hugepages.load]/ensure: removed Feb 20 02:52:43 localhost puppet-user[53535]: Notice: /Stage[main]/Collectd::Plugin::Interface/Collectd::Plugin[interface]/File[interface.load]/ensure: defined content as '{sha256}b76b315dc312e398940fe029c6dbc5c18d2b974ff7527469fc7d3617b5222046' Feb 20 02:52:43 localhost puppet-user[53535]: Notice: /Stage[main]/Collectd::Plugin::Load/Collectd::Plugin[load]/File[load.load]/ensure: defined content as '{sha256}af2403f76aebd2f10202d66d2d55e1a8d987eed09ced5a3e3873a4093585dc31' Feb 20 02:52:43 localhost puppet-user[53535]: Notice: /Stage[main]/Collectd::Plugin::Memory/Collectd::Plugin[memory]/File[memory.load]/ensure: defined content as '{sha256}0f270425ee6b05fc9440ee32b9afd1010dcbddd9b04ca78ff693858f7ecb9d0e' Feb 20 02:52:43 localhost puppet-user[53535]: Notice: /Stage[main]/Collectd::Plugin::Unixsock/Collectd::Plugin[unixsock]/File[unixsock.load]/ensure: defined content as '{sha256}9d1ec1c51ba386baa6f62d2e019dbd6998ad924bf868b3edc2d24d3dc3c63885' Feb 20 02:52:43 localhost puppet-user[53535]: Notice: /Stage[main]/Collectd::Plugin::Uptime/Collectd::Plugin[uptime]/File[uptime.load]/ensure: defined content as '{sha256}f7a26c6369f904d0ca1af59627ebea15f5e72160bcacdf08d217af282b42e5c0' Feb 20 02:52:43 localhost puppet-user[53535]: Notice: /Stage[main]/Collectd::Plugin::Virt/Collectd::Plugin[virt]/File[virt.load]/ensure: defined content as '{sha256}9a2bcf913f6bf8a962a0ff351a9faea51ae863cc80af97b77f63f8ab68941c62' Feb 20 02:52:43 localhost puppet-user[53535]: Notice: /Stage[main]/Collectd::Plugin::Virt/Collectd::Plugin[virt]/File[older_virt.load]/ensure: removed Feb 20 02:52:43 localhost puppet-user[53535]: Notice: Applied 
catalog in 0.26 seconds Feb 20 02:52:43 localhost puppet-user[53535]: Application: Feb 20 02:52:43 localhost puppet-user[53535]: Initial environment: production Feb 20 02:52:43 localhost puppet-user[53535]: Converged environment: production Feb 20 02:52:43 localhost puppet-user[53535]: Run mode: user Feb 20 02:52:43 localhost puppet-user[53535]: Changes: Feb 20 02:52:43 localhost puppet-user[53535]: Total: 43 Feb 20 02:52:43 localhost puppet-user[53535]: Events: Feb 20 02:52:43 localhost puppet-user[53535]: Success: 43 Feb 20 02:52:43 localhost puppet-user[53535]: Total: 43 Feb 20 02:52:43 localhost puppet-user[53535]: Resources: Feb 20 02:52:43 localhost puppet-user[53535]: Skipped: 14 Feb 20 02:52:43 localhost puppet-user[53535]: Changed: 38 Feb 20 02:52:43 localhost puppet-user[53535]: Out of sync: 38 Feb 20 02:52:43 localhost puppet-user[53535]: Total: 82 Feb 20 02:52:43 localhost puppet-user[53535]: Time: Feb 20 02:52:43 localhost puppet-user[53535]: Concat file: 0.00 Feb 20 02:52:43 localhost puppet-user[53535]: File: 0.12 Feb 20 02:52:43 localhost puppet-user[53535]: Transaction evaluation: 0.25 Feb 20 02:52:43 localhost puppet-user[53535]: Catalog application: 0.26 Feb 20 02:52:43 localhost puppet-user[53535]: Config retrieval: 0.45 Feb 20 02:52:43 localhost puppet-user[53535]: Last run: 1771573963 Feb 20 02:52:43 localhost puppet-user[53535]: Concat fragment: 0.00 Feb 20 02:52:43 localhost puppet-user[53535]: Total: 0.26 Feb 20 02:52:43 localhost puppet-user[53535]: Version: Feb 20 02:52:43 localhost puppet-user[53535]: Config: 1771573963 Feb 20 02:52:43 localhost puppet-user[53535]: Puppet: 7.10.0 Feb 20 02:52:44 localhost systemd[1]: libpod-8a9ad986c241526fad2cf345c81d310f4e05afc2bd33d72b9414fbc10d387176.scope: Deactivated successfully. Feb 20 02:52:44 localhost systemd[1]: libpod-8a9ad986c241526fad2cf345c81d310f4e05afc2bd33d72b9414fbc10d387176.scope: Consumed 2.540s CPU time. 
Feb 20 02:52:44 localhost podman[53355]: 2026-02-20 07:52:44.098559145 +0000 UTC m=+3.899315730 container died 8a9ad986c241526fad2cf345c81d310f4e05afc2bd33d72b9414fbc10d387176 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, version=17.1.13, distribution-scope=public, name=rhosp-rhel9/openstack-iscsid, config_id=tripleo_puppet_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, container_name=container-puppet-iscsid, batch=17.1_20260112.1, vendor=Red Hat, Inc., org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-ref=705339545363fec600102567c4e923938e0f43b3, managed_by=tripleo_ansible, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:34:43Z) Feb 20 02:52:44 localhost podman[54238]: 2026-02-20 07:52:44.115443672 +0000 UTC m=+0.062292654 container create b7ea7f1fc00597a71867591c79b1d16649d59863dfe8432162c50d1509590cc7 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, com.redhat.component=openstack-rsyslog-container, build-date=2026-01-12T22:10:09Z, url=https://www.redhat.com, config_id=tripleo_puppet_step1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, distribution-scope=public, io.buildah.version=1.41.5, managed_by=tripleo_ansible, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 rsyslog, release=1766032510, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 
'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp-rhel9/openstack-rsyslog, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=container-puppet-rsyslog, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:09Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 02:52:44 localhost systemd[1]: Started libpod-conmon-b7ea7f1fc00597a71867591c79b1d16649d59863dfe8432162c50d1509590cc7.scope. Feb 20 02:52:44 localhost systemd[1]: Started libcrun container. 
Feb 20 02:52:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a01411a662cc18c483e7d36cf3e51a5ac62cc254f0cb5553632b0450604eba50/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Feb 20 02:52:44 localhost podman[54278]: 2026-02-20 07:52:44.168825578 +0000 UTC m=+0.064170161 container cleanup 8a9ad986c241526fad2cf345c81d310f4e05afc2bd33d72b9414fbc10d387176 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.buildah.version=1.41.5, container_name=container-puppet-iscsid, vcs-type=git, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, build-date=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, config_id=tripleo_puppet_step1, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, tcib_managed=true, vendor=Red Hat, Inc., org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Feb 20 02:52:44 localhost systemd[1]: libpod-conmon-8a9ad986c241526fad2cf345c81d310f4e05afc2bd33d72b9414fbc10d387176.scope: Deactivated successfully. 
Feb 20 02:52:44 localhost python3[53200]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-iscsid --conmon-pidfile /run/container-puppet-iscsid.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005625202 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,iscsid_config --env NAME=iscsid --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::iscsid#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-iscsid --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} 
--log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-iscsid.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/iscsi:/tmp/iscsi.host:z --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 Feb 20 02:52:44 localhost podman[54238]: 2026-02-20 07:52:44.087600465 +0000 UTC m=+0.034449457 image pull registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 Feb 20 02:52:44 localhost podman[54238]: 2026-02-20 07:52:44.248522426 +0000 UTC m=+0.195371438 container init b7ea7f1fc00597a71867591c79b1d16649d59863dfe8432162c50d1509590cc7 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp-rhel9/openstack-rsyslog, container_name=container-puppet-rsyslog, tcib_managed=true, distribution-scope=public, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-rsyslog-container, release=1766032510, build-date=2026-01-12T22:10:09Z, org.opencontainers.image.created=2026-01-12T22:10:09Z, vendor=Red Hat, Inc., batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, config_id=tripleo_puppet_step1, 
description=Red Hat OpenStack Platform 17.1 rsyslog, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=) Feb 20 02:52:44 localhost podman[54238]: 2026-02-20 07:52:44.263666601 +0000 UTC m=+0.210515613 container start b7ea7f1fc00597a71867591c79b1d16649d59863dfe8432162c50d1509590cc7 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:09Z, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, com.redhat.component=openstack-rsyslog-container, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, architecture=x86_64, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-rsyslog, release=1766032510, build-date=2026-01-12T22:10:09Z, container_name=container-puppet-rsyslog, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': 
['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_puppet_step1, description=Red Hat OpenStack Platform 17.1 rsyslog) Feb 20 02:52:44 localhost podman[54238]: 2026-02-20 07:52:44.26396017 +0000 UTC m=+0.210809192 container attach b7ea7f1fc00597a71867591c79b1d16649d59863dfe8432162c50d1509590cc7 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, release=1766032510, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, distribution-scope=public, com.redhat.component=openstack-rsyslog-container, config_id=tripleo_puppet_step1, build-date=2026-01-12T22:10:09Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:09Z, summary=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 rsyslog, container_name=container-puppet-rsyslog, tcib_managed=true, url=https://www.redhat.com, version=17.1.13, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, 
name=rhosp-rhel9/openstack-rsyslog) Feb 20 02:52:44 localhost podman[54288]: 2026-02-20 07:52:44.16721807 +0000 UTC m=+0.037450728 image pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Feb 20 02:52:44 localhost podman[53392]: 2026-02-20 07:52:44.286787118 +0000 UTC m=+4.046267981 container died d7a2a217e25b0f33eb5242180313bc2582ac679366075b508b167037632646be (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=container-puppet-collectd, config_id=tripleo_puppet_step1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, name=rhosp-rhel9/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 
'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 02:52:44 localhost systemd[1]: var-lib-containers-storage-overlay-c5ba41a88f6fbd1db9cf9dc9638be96d2c3f12ce29d9389a5988c114e992de06-merged.mount: Deactivated successfully. Feb 20 02:52:44 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-da937c122ae9ec974296da72995889983984b71a5bb38194e619c17ee7d51352-userdata-shm.mount: Deactivated successfully. Feb 20 02:52:44 localhost systemd[1]: var-lib-containers-storage-overlay-f0c636c55771df3938b7f1637ba1e09d65433942af38d5ac503d6cf3fb2c1e8f-merged.mount: Deactivated successfully. Feb 20 02:52:44 localhost systemd[1]: var-lib-containers-storage-overlay-510b2333d3b26b9741758bde44c866cfc115ef5ee7ed5f374c439f31cb23c6ee-merged.mount: Deactivated successfully. 
Feb 20 02:52:44 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8a9ad986c241526fad2cf345c81d310f4e05afc2bd33d72b9414fbc10d387176-userdata-shm.mount: Deactivated successfully. Feb 20 02:52:44 localhost systemd[1]: libpod-d7a2a217e25b0f33eb5242180313bc2582ac679366075b508b167037632646be.scope: Deactivated successfully. Feb 20 02:52:44 localhost systemd[1]: libpod-d7a2a217e25b0f33eb5242180313bc2582ac679366075b508b167037632646be.scope: Consumed 2.676s CPU time. Feb 20 02:52:44 localhost podman[54288]: 2026-02-20 07:52:44.309208582 +0000 UTC m=+0.179441250 container create a4f2dcfb5085be4ff09007345d487b3769ac2c3559d0c3a596ea6ec0e7b33438 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.buildah.version=1.41.5, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20260112.1, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:36:40Z, managed_by=tripleo_ansible, version=17.1.13, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=container-puppet-ovn_controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, distribution-scope=public, name=rhosp-rhel9/openstack-ovn-controller, url=https://www.redhat.com, config_id=tripleo_puppet_step1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:36:40Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=) Feb 20 02:52:44 localhost systemd[1]: Started libpod-conmon-a4f2dcfb5085be4ff09007345d487b3769ac2c3559d0c3a596ea6ec0e7b33438.scope. Feb 20 02:52:44 localhost systemd[1]: Started libcrun container. 
Feb 20 02:52:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f3741b46d9214fdedf7f62129d14d13429d2d344cb39a1f8b2983025ef50d25/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Feb 20 02:52:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f3741b46d9214fdedf7f62129d14d13429d2d344cb39a1f8b2983025ef50d25/merged/etc/sysconfig/modules supports timestamps until 2038 (0x7fffffff) Feb 20 02:52:44 localhost podman[54288]: 2026-02-20 07:52:44.367519625 +0000 UTC m=+0.237752263 container init a4f2dcfb5085be4ff09007345d487b3769ac2c3559d0c3a596ea6ec0e7b33438 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, name=rhosp-rhel9/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, release=1766032510, vcs-type=git, batch=17.1_20260112.1, build-date=2026-01-12T22:36:40Z, io.buildah.version=1.41.5, container_name=container-puppet-ovn_controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_puppet_step1, architecture=x86_64, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 
'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.expose-services=) Feb 20 02:52:44 localhost podman[54288]: 2026-02-20 07:52:44.373200566 +0000 UTC m=+0.243433214 container start a4f2dcfb5085be4ff09007345d487b3769ac2c3559d0c3a596ea6ec0e7b33438 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, batch=17.1_20260112.1, architecture=x86_64, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, 
config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, build-date=2026-01-12T22:36:40Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, managed_by=tripleo_ansible, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_puppet_step1, container_name=container-puppet-ovn_controller, tcib_managed=true, name=rhosp-rhel9/openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, url=https://www.redhat.com, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.41.5) Feb 20 02:52:44 localhost podman[54288]: 2026-02-20 07:52:44.373549627 +0000 UTC m=+0.243782265 container attach a4f2dcfb5085be4ff09007345d487b3769ac2c3559d0c3a596ea6ec0e7b33438 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, version=17.1.13, build-date=2026-01-12T22:36:40Z, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, release=1766032510, name=rhosp-rhel9/openstack-ovn-controller, vcs-type=git, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, url=https://www.redhat.com, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_puppet_step1, io.buildah.version=1.41.5, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=container-puppet-ovn_controller, org.opencontainers.image.created=2026-01-12T22:36:40Z) Feb 20 02:52:44 localhost podman[54360]: 2026-02-20 07:52:44.383529097 +0000 UTC m=+0.088762581 container cleanup 
d7a2a217e25b0f33eb5242180313bc2582ac679366075b508b167037632646be (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, tcib_managed=true, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_puppet_step1, url=https://www.redhat.com, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=container-puppet-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, batch=17.1_20260112.1, build-date=2026-01-12T22:10:15Z, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Feb 20 02:52:44 localhost systemd[1]: libpod-conmon-d7a2a217e25b0f33eb5242180313bc2582ac679366075b508b167037632646be.scope: Deactivated successfully. Feb 20 02:52:44 localhost python3[53200]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-collectd --conmon-pidfile /run/container-puppet-collectd.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005625202 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,collectd_client_config,exec --env NAME=collectd --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::metrics::collectd --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-collectd --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 
'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-collectd.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro 
registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 Feb 20 02:52:44 localhost puppet-user[53505]: Notice: Compiled catalog for np0005625202.localdomain in environment production in 1.26 seconds Feb 20 02:52:44 localhost puppet-user[53505]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Client/File[/etc/nova/migration/identity]/content: content changed '{sha256}86610d84e745a3992358ae0b747297805d075492e5114c666fa08f8aecce7da0' to '{sha256}37542e92f883a9129d79835364a7293bd4c337025ae650a647285cb3357f99b9' Feb 20 02:52:44 localhost puppet-user[53505]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Client/File_line[nova_ssh_port]/ensure: created Feb 20 02:52:44 localhost puppet-user[53505]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Libvirt/File[/etc/sasl2/libvirt.conf]/content: content changed '{sha256}78510a0d6f14b269ddeb9f9638dfdfba9f976d370ee2ec04ba25352a8af6df35' to '{sha256}6d7bcae773217a30c0772f75d0d1b6d21f5d64e72853f5e3d91bb47799dbb7fe' Feb 20 02:52:44 localhost puppet-user[53505]: Warning: Empty environment setting 'TLS_PASSWORD' Feb 20 02:52:44 localhost puppet-user[53505]: (file: /etc/puppet/modules/tripleo/manifests/profile/base/nova/libvirt.pp, line: 182) Feb 20 02:52:44 localhost puppet-user[53505]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Libvirt/Exec[set libvirt sasl credentials]/returns: executed successfully Feb 20 02:52:44 localhost puppet-user[53505]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Target/File[/etc/nova/migration/authorized_keys]/content: content changed '{sha256}0d05a8832f36c0517b84e9c3ad11069d531c7d2be5297661e5552fd29e3a5e47' to '{sha256}5bbbbc79dd1f184aec3b40a4e5d830cb87a3dca9076a18726a5379ee062cd087' Feb 20 02:52:44 localhost puppet-user[53505]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Target/File_line[nova_migration_logindefs]/ensure: created Feb 20 02:52:44 localhost puppet-user[53505]: Notice: 
/Stage[main]/Nova::Workarounds/Nova_config[workarounds/never_download_image_if_on_rbd]/ensure: created Feb 20 02:52:44 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Workarounds/Nova_config[workarounds/disable_compute_service_check_for_ffu]/ensure: created Feb 20 02:52:44 localhost puppet-user[53505]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ssl_only]/ensure: created Feb 20 02:52:44 localhost puppet-user[53505]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/my_ip]/ensure: created Feb 20 02:52:44 localhost puppet-user[53505]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/host]/ensure: created Feb 20 02:52:44 localhost puppet-user[53505]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/cpu_allocation_ratio]/ensure: created Feb 20 02:52:44 localhost puppet-user[53505]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ram_allocation_ratio]/ensure: created Feb 20 02:52:44 localhost puppet-user[53505]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/disk_allocation_ratio]/ensure: created Feb 20 02:52:44 localhost puppet-user[53505]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/dhcp_domain]/ensure: created Feb 20 02:52:45 localhost puppet-user[53505]: Notice: /Stage[main]/Nova/Nova_config[vif_plug_ovs/ovsdb_connection]/ensure: created Feb 20 02:52:45 localhost puppet-user[53505]: Notice: /Stage[main]/Nova/Nova_config[notifications/notification_format]/ensure: created Feb 20 02:52:45 localhost puppet-user[53505]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/state_path]/ensure: created Feb 20 02:52:45 localhost puppet-user[53505]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/service_down_time]/ensure: created Feb 20 02:52:45 localhost puppet-user[53505]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/rootwrap_config]/ensure: created Feb 20 02:52:45 localhost puppet-user[53505]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/report_interval]/ensure: created Feb 20 02:52:45 localhost puppet-user[53505]: Notice: 
/Stage[main]/Nova/Nova_config[notifications/notify_on_state_change]/ensure: created Feb 20 02:52:45 localhost puppet-user[53505]: Notice: /Stage[main]/Nova/Nova_config[cinder/cross_az_attach]/ensure: created Feb 20 02:52:45 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Glance/Nova_config[glance/valid_interfaces]/ensure: created Feb 20 02:52:45 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_type]/ensure: created Feb 20 02:52:45 localhost puppet-user[54041]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Feb 20 02:52:45 localhost puppet-user[54041]: (file: /etc/puppet/hiera.yaml) Feb 20 02:52:45 localhost puppet-user[54041]: Warning: Undefined variable '::deploy_config_name'; Feb 20 02:52:45 localhost puppet-user[54041]: (file & line not available) Feb 20 02:52:45 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_url]/ensure: created Feb 20 02:52:45 localhost systemd[1]: var-lib-containers-storage-overlay-6bb7c653b8391bad6fee73ed98f3b53e3f877ae1d16c587da33487041fbee72e-merged.mount: Deactivated successfully. Feb 20 02:52:45 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d7a2a217e25b0f33eb5242180313bc2582ac679366075b508b167037632646be-userdata-shm.mount: Deactivated successfully. Feb 20 02:52:45 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/password]/ensure: created Feb 20 02:52:45 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_domain_name]/ensure: created Feb 20 02:52:45 localhost puppet-user[54041]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. 
See https://puppet.com/docs/puppet/7.10/deprecated_language.html Feb 20 02:52:45 localhost puppet-user[54041]: (file & line not available) Feb 20 02:52:45 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_name]/ensure: created Feb 20 02:52:45 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/user_domain_name]/ensure: created Feb 20 02:52:45 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/username]/ensure: created Feb 20 02:52:45 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/region_name]/ensure: created Feb 20 02:52:45 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/valid_interfaces]/ensure: created Feb 20 02:52:45 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/password]/ensure: created Feb 20 02:52:45 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/auth_type]/ensure: created Feb 20 02:52:45 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/auth_url]/ensure: created Feb 20 02:52:45 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/region_name]/ensure: created Feb 20 02:52:45 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/project_name]/ensure: created Feb 20 02:52:45 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/project_domain_name]/ensure: created Feb 20 02:52:45 localhost puppet-user[54041]: Warning: Unknown variable: '::ceilometer::cache_backend'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 145, column: 39) Feb 20 02:52:45 localhost puppet-user[54041]: Warning: Unknown variable: '::ceilometer::memcache_servers'. 
(file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 146, column: 39) Feb 20 02:52:45 localhost puppet-user[54041]: Warning: Unknown variable: '::ceilometer::cache_tls_enabled'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 147, column: 39) Feb 20 02:52:45 localhost puppet-user[54041]: Warning: Unknown variable: '::ceilometer::cache_tls_cafile'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 148, column: 39) Feb 20 02:52:45 localhost puppet-user[54041]: Warning: Unknown variable: '::ceilometer::cache_tls_certfile'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 149, column: 39) Feb 20 02:52:45 localhost puppet-user[54041]: Warning: Unknown variable: '::ceilometer::cache_tls_keyfile'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 150, column: 39) Feb 20 02:52:45 localhost puppet-user[54041]: Warning: Unknown variable: '::ceilometer::cache_tls_allowed_ciphers'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 151, column: 39) Feb 20 02:52:45 localhost puppet-user[54041]: Warning: Unknown variable: '::ceilometer::manage_backend_package'. 
(file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 152, column: 39) Feb 20 02:52:45 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/username]/ensure: created Feb 20 02:52:45 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/user_domain_name]/ensure: created Feb 20 02:52:45 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/os_region_name]/ensure: created Feb 20 02:52:45 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/catalog_info]/ensure: created Feb 20 02:52:45 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/manager_interval]/ensure: created Feb 20 02:52:45 localhost puppet-user[54041]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_password'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 63, column: 25) Feb 20 02:52:45 localhost puppet-user[54041]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_url'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 68, column: 25) Feb 20 02:52:45 localhost puppet-user[54041]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_region'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 69, column: 28) Feb 20 02:52:45 localhost puppet-user[54041]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_user'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 70, column: 25) Feb 20 02:52:45 localhost puppet-user[54041]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_tenant_name'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 71, column: 29) Feb 20 02:52:45 localhost puppet-user[54041]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_cacert'. 
(file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 72, column: 23) Feb 20 02:52:45 localhost puppet-user[54041]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_endpoint_type'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 73, column: 26) Feb 20 02:52:45 localhost puppet-user[54041]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_user_domain_name'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 74, column: 33) Feb 20 02:52:45 localhost puppet-user[54041]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_project_domain_name'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 75, column: 36) Feb 20 02:52:45 localhost puppet-user[54041]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_type'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 76, column: 26) Feb 20 02:52:45 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/remove_unused_base_images]/ensure: created Feb 20 02:52:45 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/remove_unused_original_minimum_age_seconds]/ensure: created Feb 20 02:52:45 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/remove_unused_resized_minimum_age_seconds]/ensure: created Feb 20 02:52:45 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/precache_concurrency]/ensure: created Feb 20 02:52:45 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Vendordata/Nova_config[vendordata_dynamic_auth/project_domain_name]/ensure: created Feb 20 02:52:45 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Vendordata/Nova_config[vendordata_dynamic_auth/user_domain_name]/ensure: created Feb 20 02:52:45 localhost 
puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Provider/Nova_config[compute/provider_config_location]/ensure: created Feb 20 02:52:45 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Provider/File[/etc/nova/provider_config]/ensure: created Feb 20 02:52:45 localhost puppet-user[54041]: Notice: Compiled catalog for np0005625202.localdomain in environment production in 0.37 seconds Feb 20 02:52:45 localhost puppet-user[54041]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/http_timeout]/ensure: created Feb 20 02:52:45 localhost puppet-user[54041]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/host]/ensure: created Feb 20 02:52:45 localhost puppet-user[54041]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[publisher/telemetry_secret]/ensure: created Feb 20 02:52:45 localhost puppet-user[54041]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_name]/ensure: created Feb 20 02:52:45 localhost puppet-user[54041]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_password]/ensure: created Feb 20 02:52:45 localhost puppet-user[54041]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/auth_url]/ensure: created Feb 20 02:52:45 localhost puppet-user[54041]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/region_name]/ensure: created Feb 20 02:52:45 localhost puppet-user[54041]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/username]/ensure: created Feb 20 02:52:45 localhost puppet-user[54041]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/password]/ensure: created Feb 20 02:52:45 localhost puppet-user[54041]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/project_name]/ensure: created Feb 20 02:52:45 localhost puppet-user[54041]: 
Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/interface]/ensure: created Feb 20 02:52:45 localhost puppet-user[54041]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/user_domain_name]/ensure: created Feb 20 02:52:45 localhost puppet-user[54041]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/project_domain_name]/ensure: created Feb 20 02:52:45 localhost puppet-user[54041]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/auth_type]/ensure: created Feb 20 02:52:45 localhost puppet-user[54041]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[compute/instance_discovery_method]/ensure: created Feb 20 02:52:45 localhost puppet-user[54041]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[DEFAULT/polling_namespaces]/ensure: created Feb 20 02:52:45 localhost puppet-user[54041]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[polling/tenant_name_discovery]/ensure: created Feb 20 02:52:45 localhost puppet-user[54041]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[coordination/backend_url]/ensure: created Feb 20 02:52:45 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/use_cow_images]/ensure: created Feb 20 02:52:45 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/mkisofs_cmd]/ensure: created Feb 20 02:52:45 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/force_raw_images]/ensure: created Feb 20 02:52:45 localhost puppet-user[54041]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/backend]/ensure: created Feb 20 02:52:45 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/reserved_host_memory_mb]/ensure: created Feb 20 
02:52:45 localhost puppet-user[54041]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/enabled]/ensure: created Feb 20 02:52:45 localhost puppet-user[54041]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/memcache_servers]/ensure: created Feb 20 02:52:45 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/reserved_huge_pages]/ensure: created Feb 20 02:52:45 localhost puppet-user[54041]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/tls_enabled]/ensure: created Feb 20 02:52:45 localhost puppet-user[54041]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Rabbit[ceilometer_config]/Ceilometer_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created Feb 20 02:52:45 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/resume_guests_state_on_host_boot]/ensure: created Feb 20 02:52:45 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute/Nova_config[key_manager/backend]/ensure: created Feb 20 02:52:45 localhost puppet-user[54041]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Amqp[ceilometer_config]/Ceilometer_config[oslo_messaging_amqp/rpc_address_prefix]/ensure: created Feb 20 02:52:45 localhost puppet-user[54041]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Amqp[ceilometer_config]/Ceilometer_config[oslo_messaging_amqp/notify_address_prefix]/ensure: created Feb 20 02:52:45 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/sync_power_state_interval]/ensure: created Feb 20 02:52:45 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute/Nova_config[compute/consecutive_build_service_disable_threshold]/ensure: created Feb 20 02:52:45 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute/Nova_config[compute/live_migration_wait_for_vif_plug]/ensure: created Feb 20 02:52:45 localhost 
puppet-user[53505]: Notice: /Stage[main]/Nova::Compute/Nova_config[compute/max_disk_devices_to_attach]/ensure: created Feb 20 02:52:46 localhost puppet-user[54041]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/driver]/ensure: created Feb 20 02:52:46 localhost puppet-user[54041]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/transport_url]/ensure: created Feb 20 02:52:46 localhost puppet-user[54041]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/topics]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Vncproxy::Common/Nova_config[vnc/novncproxy_base_url]/ensure: created Feb 20 02:52:46 localhost puppet-user[54041]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Default[ceilometer_config]/Ceilometer_config[DEFAULT/transport_url]/ensure: created Feb 20 02:52:46 localhost puppet-user[54041]: Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/debug]/ensure: created Feb 20 02:52:46 localhost puppet-user[54358]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. 
It should be converted to version 5 Feb 20 02:52:46 localhost puppet-user[54358]: (file: /etc/puppet/hiera.yaml) Feb 20 02:52:46 localhost puppet-user[54358]: Warning: Undefined variable '::deploy_config_name'; Feb 20 02:52:46 localhost puppet-user[54358]: (file & line not available) Feb 20 02:52:46 localhost puppet-user[54041]: Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/log_dir]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/server_proxyclient_address]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/enabled]/ensure: created Feb 20 02:52:46 localhost puppet-user[54358]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Feb 20 02:52:46 localhost puppet-user[54358]: (file & line not available) Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute/Nova_config[spice/enabled]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit]/ensure: created Feb 20 02:52:46 localhost puppet-user[54041]: Notice: Applied catalog in 0.42 seconds Feb 20 02:52:46 localhost puppet-user[54041]: Application: Feb 20 02:52:46 localhost puppet-user[54041]: Initial environment: production Feb 20 02:52:46 localhost puppet-user[54041]: Converged environment: production Feb 20 02:52:46 localhost puppet-user[54041]: Run mode: user Feb 20 02:52:46 localhost puppet-user[54041]: Changes: Feb 20 02:52:46 localhost puppet-user[54041]: Total: 31 Feb 20 02:52:46 localhost puppet-user[54041]: Events: Feb 20 02:52:46 localhost puppet-user[54041]: Success: 31 Feb 20 02:52:46 localhost puppet-user[54041]: Total: 31 Feb 20 02:52:46 localhost puppet-user[54041]: Resources: Feb 20 02:52:46 localhost puppet-user[54041]: Skipped: 22 Feb 20 
02:52:46 localhost puppet-user[54041]: Changed: 31 Feb 20 02:52:46 localhost puppet-user[54041]: Out of sync: 31 Feb 20 02:52:46 localhost puppet-user[54041]: Total: 151 Feb 20 02:52:46 localhost puppet-user[54041]: Time: Feb 20 02:52:46 localhost puppet-user[54041]: Package: 0.02 Feb 20 02:52:46 localhost puppet-user[54041]: Ceilometer config: 0.34 Feb 20 02:52:46 localhost puppet-user[54041]: Transaction evaluation: 0.42 Feb 20 02:52:46 localhost puppet-user[54041]: Catalog application: 0.42 Feb 20 02:52:46 localhost puppet-user[54041]: Config retrieval: 0.44 Feb 20 02:52:46 localhost puppet-user[54041]: Last run: 1771573966 Feb 20 02:52:46 localhost puppet-user[54041]: Resources: 0.00 Feb 20 02:52:46 localhost puppet-user[54041]: Total: 0.42 Feb 20 02:52:46 localhost puppet-user[54041]: Version: Feb 20 02:52:46 localhost puppet-user[54041]: Config: 1771573965 Feb 20 02:52:46 localhost puppet-user[54041]: Puppet: 7.10.0 Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit_period]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_is_fatal]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_timeout]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/default_floating_pool]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/timeout]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_name]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_domain_name]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: 
Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/region_name]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/username]/ensure: created Feb 20 02:52:46 localhost puppet-user[54397]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Feb 20 02:52:46 localhost puppet-user[54397]: (file: /etc/puppet/hiera.yaml) Feb 20 02:52:46 localhost puppet-user[54397]: Warning: Undefined variable '::deploy_config_name'; Feb 20 02:52:46 localhost puppet-user[54397]: (file & line not available) Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/user_domain_name]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/password]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_url]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/valid_interfaces]/ensure: created Feb 20 02:52:46 localhost puppet-user[54397]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. 
See https://puppet.com/docs/puppet/7.10/deprecated_language.html Feb 20 02:52:46 localhost puppet-user[54397]: (file & line not available) Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/ovs_bridge]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/extension_sync_interval]/ensure: created Feb 20 02:52:46 localhost puppet-user[54358]: Notice: Compiled catalog for np0005625202.localdomain in environment production in 0.23 seconds Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_type]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_uri]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_tunnelled]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_inbound_addr]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_permit_post_copy]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_permit_auto_converge]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Migration::Libvirt/Virtproxyd_config[listen_tls]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Migration::Libvirt/Virtproxyd_config[listen_tcp]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/rbd_user]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: 
/Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/rbd_secret_uuid]/ensure: created Feb 20 02:52:46 localhost puppet-user[54358]: Notice: /Stage[main]/Rsyslog::Base/File[/etc/rsyslog.conf]/content: content changed '{sha256}d6f679f6a4eb6f33f9fc20c846cb30bef93811e1c86bc4da1946dc3100b826c3' to '{sha256}7963bd801fadd49a17561f4d3f80738c3f504b413b11c443432d8303138041f2' Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Rbd/File[/etc/nova/secret.xml]/ensure: defined content as '{sha256}64c5f9c37bfdcd550f09aea32895662c8b3e80da678034168cc6138d9da68080' Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_type]/ensure: created Feb 20 02:52:46 localhost puppet-user[54358]: Notice: /Stage[main]/Rsyslog::Config::Global/Rsyslog::Component::Global_config[MaxMessageSize]/Rsyslog::Generate_concat[rsyslog::concat::global_config::MaxMessageSize]/Concat[/etc/rsyslog.d/00_rsyslog.conf]/File[/etc/rsyslog.d/00_rsyslog.conf]/ensure: defined content as '{sha256}a291d5cc6d5884a978161f4c7b5831d43edd07797cc590bae366e7f150b8643b' Feb 20 02:52:46 localhost puppet-user[54358]: Notice: /Stage[main]/Rsyslog::Config::Templates/Rsyslog::Component::Template[rsyslog-node-index]/Rsyslog::Generate_concat[rsyslog::concat::template::rsyslog-node-index]/Concat[/etc/rsyslog.d/50_openstack_logs.conf]/File[/etc/rsyslog.d/50_openstack_logs.conf]/ensure: defined content as '{sha256}173ceb57f9d5ce238d085ca60f32df22cd731f4e1124beb2d4310f850f8cb20e' Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_pool]/ensure: created Feb 20 02:52:46 localhost puppet-user[54358]: Notice: Applied catalog in 0.11 seconds Feb 20 02:52:46 localhost puppet-user[54358]: Application: Feb 20 02:52:46 localhost puppet-user[54358]: Initial environment: production Feb 20 02:52:46 localhost puppet-user[54358]: Converged environment: production Feb 20 02:52:46 localhost 
puppet-user[54358]: Run mode: user Feb 20 02:52:46 localhost puppet-user[54358]: Changes: Feb 20 02:52:46 localhost puppet-user[54358]: Total: 3 Feb 20 02:52:46 localhost puppet-user[54358]: Events: Feb 20 02:52:46 localhost puppet-user[54358]: Success: 3 Feb 20 02:52:46 localhost puppet-user[54358]: Total: 3 Feb 20 02:52:46 localhost puppet-user[54358]: Resources: Feb 20 02:52:46 localhost puppet-user[54358]: Skipped: 11 Feb 20 02:52:46 localhost puppet-user[54358]: Changed: 3 Feb 20 02:52:46 localhost puppet-user[54358]: Out of sync: 3 Feb 20 02:52:46 localhost puppet-user[54358]: Total: 25 Feb 20 02:52:46 localhost puppet-user[54358]: Time: Feb 20 02:52:46 localhost puppet-user[54358]: Concat file: 0.00 Feb 20 02:52:46 localhost puppet-user[54358]: Concat fragment: 0.00 Feb 20 02:52:46 localhost puppet-user[54358]: File: 0.01 Feb 20 02:52:46 localhost puppet-user[54358]: Transaction evaluation: 0.11 Feb 20 02:52:46 localhost puppet-user[54358]: Catalog application: 0.11 Feb 20 02:52:46 localhost puppet-user[54358]: Config retrieval: 0.28 Feb 20 02:52:46 localhost puppet-user[54358]: Last run: 1771573966 Feb 20 02:52:46 localhost puppet-user[54358]: Total: 0.11 Feb 20 02:52:46 localhost puppet-user[54358]: Version: Feb 20 02:52:46 localhost puppet-user[54358]: Config: 1771573966 Feb 20 02:52:46 localhost puppet-user[54358]: Puppet: 7.10.0 Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_ceph_conf]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_glance_store_name]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_glance_copy_poll_interval]/ensure: created Feb 20 02:52:46 localhost puppet-user[54397]: Notice: Compiled catalog for np0005625202.localdomain in environment production in 0.26 seconds Feb 20 02:52:46 localhost systemd[1]: 
libpod-c4ea6ef284e8a50d14adb863f7d4026f24e8a16f7e7cc72b18d1c6503bcca49c.scope: Deactivated successfully. Feb 20 02:52:46 localhost systemd[1]: libpod-c4ea6ef284e8a50d14adb863f7d4026f24e8a16f7e7cc72b18d1c6503bcca49c.scope: Consumed 2.914s CPU time. Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_glance_copy_timeout]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[DEFAULT/compute_driver]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[DEFAULT/preallocate_images]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[vnc/server_listen]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/virt_type]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/cpu_mode]/ensure: created Feb 20 02:52:46 localhost ovs-vsctl[54732]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . 
external_ids:ovn-remote=tcp:172.17.0.103:6642,tcp:172.17.0.104:6642,tcp:172.17.0.105:6642 Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_password]/ensure: created Feb 20 02:52:46 localhost puppet-user[54397]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-remote]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_key]/ensure: created Feb 20 02:52:46 localhost podman[54714]: 2026-02-20 07:52:46.56567119 +0000 UTC m=+0.084228955 container died c4ea6ef284e8a50d14adb863f7d4026f24e8a16f7e7cc72b18d1c6503bcca49c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, name=rhosp-rhel9/openstack-ceilometer-central, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-central, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:07:24Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, build-date=2026-01-12T23:07:24Z, vendor=Red Hat, Inc., org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-central-container, managed_by=tripleo_ansible, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=container-puppet-ceilometer, tcib_managed=true, config_id=tripleo_puppet_step1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, config_data={'security_opt': 
['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, version=17.1.13, io.openshift.expose-services=, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public) Feb 20 02:52:46 localhost ovs-vsctl[54735]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . 
external_ids:ovn-encap-type=geneve Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_partition]/ensure: created Feb 20 02:52:46 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c4ea6ef284e8a50d14adb863f7d4026f24e8a16f7e7cc72b18d1c6503bcca49c-userdata-shm.mount: Deactivated successfully. Feb 20 02:52:46 localhost systemd[1]: var-lib-containers-storage-overlay-02f694049235d5fdc28df409377b9c3a853a57405bcbd0a4d5b6ef2444b51ca4-merged.mount: Deactivated successfully. Feb 20 02:52:46 localhost puppet-user[54397]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-encap-type]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/hw_disk_discard]/ensure: created Feb 20 02:52:46 localhost ovs-vsctl[54740]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-encap-ip=172.19.0.106 Feb 20 02:52:46 localhost puppet-user[54397]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-encap-ip]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/hw_machine_type]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/enabled_perf_events]/ensure: created Feb 20 02:52:46 localhost podman[54714]: 2026-02-20 07:52:46.647532332 +0000 UTC m=+0.166090077 container cleanup c4ea6ef284e8a50d14adb863f7d4026f24e8a16f7e7cc72b18d1c6503bcca49c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 
'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, com.redhat.component=openstack-ceilometer-central-container, url=https://www.redhat.com, distribution-scope=public, container_name=container-puppet-ceilometer, config_id=tripleo_puppet_step1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:07:24Z, maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 
ceilometer-central, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, vcs-type=git, release=1766032510, name=rhosp-rhel9/openstack-ceilometer-central, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-central, architecture=x86_64, batch=17.1_20260112.1, build-date=2026-01-12T23:07:24Z) Feb 20 02:52:46 localhost systemd[1]: libpod-conmon-c4ea6ef284e8a50d14adb863f7d4026f24e8a16f7e7cc72b18d1c6503bcca49c.scope: Deactivated successfully. Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/rx_queue_size]/ensure: created Feb 20 02:52:46 localhost python3[53200]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-ceilometer --conmon-pidfile /run/container-puppet-ceilometer.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005625202 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config --env NAME=ceilometer --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::ceilometer::agent::polling#012include tripleo::profile::base::ceilometer::agent::polling#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-ceilometer --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include 
::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-ceilometer.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume 
/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1 Feb 20 02:52:46 localhost ovs-vsctl[54743]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:hostname=np0005625202.localdomain Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/tx_queue_size]/ensure: created Feb 20 02:52:46 localhost puppet-user[54397]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:hostname]/value: value changed 'np0005625202.novalocal' to 'np0005625202.localdomain' Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/file_backed_memory]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/volume_use_multipath]/ensure: created Feb 20 02:52:46 localhost ovs-vsctl[54761]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-bridge=br-int Feb 20 02:52:46 localhost puppet-user[54397]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-bridge]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/num_pcie_ports]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/mem_stats_period_seconds]/ensure: created Feb 20 02:52:46 localhost ovs-vsctl[54772]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . 
external_ids:ovn-remote-probe-interval=60000 Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/pmem_namespaces]/ensure: created Feb 20 02:52:46 localhost puppet-user[54397]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-remote-probe-interval]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/swtpm_enabled]/ensure: created Feb 20 02:52:46 localhost ovs-vsctl[54776]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-openflow-probe-interval=60 Feb 20 02:52:46 localhost puppet-user[54397]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-openflow-probe-interval]/ensure: created Feb 20 02:52:46 localhost ovs-vsctl[54790]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-monitor-all=true Feb 20 02:52:46 localhost puppet-user[54397]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-monitor-all]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/cpu_model_extra_flags]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/disk_cachemodes]/ensure: created Feb 20 02:52:46 localhost ovs-vsctl[54793]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . 
external_ids:ovn-ofctrl-wait-before-clear=8000 Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtlogd/Virtlogd_config[log_filters]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtlogd/Virtlogd_config[log_outputs]/ensure: created Feb 20 02:52:46 localhost puppet-user[54397]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-ofctrl-wait-before-clear]/ensure: created Feb 20 02:52:46 localhost systemd[1]: libpod-b7ea7f1fc00597a71867591c79b1d16649d59863dfe8432162c50d1509590cc7.scope: Deactivated successfully. Feb 20 02:52:46 localhost systemd[1]: libpod-b7ea7f1fc00597a71867591c79b1d16649d59863dfe8432162c50d1509590cc7.scope: Consumed 2.351s CPU time. Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtproxyd/Virtproxyd_config[log_filters]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtproxyd/Virtproxyd_config[log_outputs]/ensure: created Feb 20 02:52:46 localhost podman[54238]: 2026-02-20 07:52:46.809280648 +0000 UTC m=+2.756129670 container died b7ea7f1fc00597a71867591c79b1d16649d59863dfe8432162c50d1509590cc7 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-rsyslog, config_id=tripleo_puppet_step1, version=17.1.13, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=container-puppet-rsyslog, 
config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:09Z, org.opencontainers.image.created=2026-01-12T22:10:09Z, tcib_managed=true, io.openshift.expose-services=, com.redhat.component=openstack-rsyslog-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, url=https://www.redhat.com, distribution-scope=public, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git) Feb 20 02:52:46 
localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtqemud/Virtqemud_config[log_filters]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtqemud/Virtqemud_config[log_outputs]/ensure: created Feb 20 02:52:46 localhost ovs-vsctl[54802]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-encap-tos=0 Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtnodedevd/Virtnodedevd_config[log_filters]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtnodedevd/Virtnodedevd_config[log_outputs]/ensure: created Feb 20 02:52:46 localhost puppet-user[54397]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-encap-tos]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtstoraged/Virtstoraged_config[log_filters]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtstoraged/Virtstoraged_config[log_outputs]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtsecretd/Virtsecretd_config[log_filters]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtsecretd/Virtsecretd_config[log_outputs]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[unix_sock_group]/ensure: created Feb 20 02:52:46 localhost ovs-vsctl[54809]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . 
external_ids:ovn-chassis-mac-mappings=datacentre:fa:16:3e:b3:da:16 Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[auth_unix_ro]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[auth_unix_rw]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[unix_sock_ro_perms]/ensure: created Feb 20 02:52:46 localhost puppet-user[54397]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-chassis-mac-mappings]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[unix_sock_rw_perms]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[unix_sock_group]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[auth_unix_ro]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[auth_unix_rw]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[unix_sock_ro_perms]/ensure: created Feb 20 02:52:46 localhost ovs-vsctl[54816]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . 
external_ids:ovn-bridge-mappings=datacentre:br-ex Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[unix_sock_rw_perms]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[unix_sock_group]/ensure: created Feb 20 02:52:46 localhost puppet-user[54397]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-bridge-mappings]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[auth_unix_ro]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[auth_unix_rw]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[unix_sock_ro_perms]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[unix_sock_rw_perms]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[unix_sock_group]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[auth_unix_ro]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[auth_unix_rw]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[unix_sock_ro_perms]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[unix_sock_rw_perms]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[unix_sock_group]/ensure: created Feb 20 
02:52:46 localhost ovs-vsctl[54823]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-match-northd-version=false Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[auth_unix_ro]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[auth_unix_rw]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[unix_sock_ro_perms]/ensure: created Feb 20 02:52:46 localhost puppet-user[54397]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-match-northd-version]/ensure: created Feb 20 02:52:46 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[unix_sock_rw_perms]/ensure: created Feb 20 02:52:46 localhost ovs-vsctl[54826]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . 
external_ids:garp-max-timeout-sec=0 Feb 20 02:52:46 localhost puppet-user[54397]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:garp-max-timeout-sec]/ensure: created Feb 20 02:52:46 localhost podman[54801]: 2026-02-20 07:52:46.928871146 +0000 UTC m=+0.106164575 container cleanup b7ea7f1fc00597a71867591c79b1d16649d59863dfe8432162c50d1509590cc7 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, version=17.1.13, release=1766032510, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, com.redhat.component=openstack-rsyslog-container, description=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=container-puppet-rsyslog, config_id=tripleo_puppet_step1, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, name=rhosp-rhel9/openstack-rsyslog, url=https://www.redhat.com, distribution-scope=public, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, architecture=x86_64, build-date=2026-01-12T22:10:09Z, org.opencontainers.image.created=2026-01-12T22:10:09Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog) Feb 20 02:52:46 localhost systemd[1]: libpod-conmon-b7ea7f1fc00597a71867591c79b1d16649d59863dfe8432162c50d1509590cc7.scope: Deactivated successfully. 
Feb 20 02:52:46 localhost python3[53200]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-rsyslog --conmon-pidfile /run/container-puppet-rsyslog.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005625202 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment --env NAME=rsyslog --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::logging::rsyslog --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-rsyslog --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', 
'/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-rsyslog.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1
Feb 20 02:52:46 localhost puppet-user[54397]: Notice: Applied catalog in 0.49 seconds
Feb 20 02:52:46 localhost puppet-user[54397]: Application:
Feb 20 02:52:46 localhost puppet-user[54397]: Initial environment: production
Feb 20 02:52:46 localhost puppet-user[54397]: Converged environment: production
Feb 20 02:52:46 localhost puppet-user[54397]: Run mode: user
Feb 20 02:52:46 localhost puppet-user[54397]: Changes:
Feb 20 02:52:46 localhost puppet-user[54397]: Total: 14
Feb 20 02:52:46 localhost puppet-user[54397]: Events:
Feb 20 02:52:46 localhost puppet-user[54397]: Success: 14
Feb 20 02:52:46 localhost puppet-user[54397]: Total: 14
Feb 20 02:52:46 localhost puppet-user[54397]: Resources:
Feb 20 02:52:46 localhost puppet-user[54397]: Skipped: 12
Feb 20 02:52:46 localhost puppet-user[54397]: Changed: 14
Feb 20 02:52:46 localhost puppet-user[54397]: Out of sync: 14
Feb 20 02:52:46 
localhost puppet-user[54397]: Total: 29
Feb 20 02:52:46 localhost puppet-user[54397]: Time:
Feb 20 02:52:46 localhost puppet-user[54397]: Exec: 0.02
Feb 20 02:52:46 localhost puppet-user[54397]: Config retrieval: 0.29
Feb 20 02:52:46 localhost puppet-user[54397]: Vs config: 0.40
Feb 20 02:52:46 localhost puppet-user[54397]: Transaction evaluation: 0.49
Feb 20 02:52:46 localhost puppet-user[54397]: Catalog application: 0.49
Feb 20 02:52:46 localhost puppet-user[54397]: Last run: 1771573966
Feb 20 02:52:46 localhost puppet-user[54397]: Total: 0.49
Feb 20 02:52:46 localhost puppet-user[54397]: Version:
Feb 20 02:52:46 localhost puppet-user[54397]: Config: 1771573966
Feb 20 02:52:46 localhost puppet-user[54397]: Puppet: 7.10.0
Feb 20 02:52:47 localhost systemd[1]: libpod-a4f2dcfb5085be4ff09007345d487b3769ac2c3559d0c3a596ea6ec0e7b33438.scope: Deactivated successfully.
Feb 20 02:52:47 localhost systemd[1]: libpod-a4f2dcfb5085be4ff09007345d487b3769ac2c3559d0c3a596ea6ec0e7b33438.scope: Consumed 2.839s CPU time. 
Feb 20 02:52:47 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Compute::Libvirt::Qemu/Augeas[qemu-conf-limits]/returns: executed successfully Feb 20 02:52:47 localhost podman[54288]: 2026-02-20 07:52:47.38942761 +0000 UTC m=+3.259660258 container died a4f2dcfb5085be4ff09007345d487b3769ac2c3559d0c3a596ea6ec0e7b33438 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, release=1766032510, config_id=tripleo_puppet_step1, batch=17.1_20260112.1, architecture=x86_64, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, tcib_managed=true, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., container_name=container-puppet-ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:36:40Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.buildah.version=1.41.5, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container) Feb 20 02:52:47 localhost systemd[1]: var-lib-containers-storage-overlay-a01411a662cc18c483e7d36cf3e51a5ac62cc254f0cb5553632b0450604eba50-merged.mount: Deactivated successfully. Feb 20 02:52:47 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b7ea7f1fc00597a71867591c79b1d16649d59863dfe8432162c50d1509590cc7-userdata-shm.mount: Deactivated successfully. 
Feb 20 02:52:47 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Migration::Qemu/Augeas[qemu-conf-migration-ports]/returns: executed successfully
Feb 20 02:52:47 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/debug]/ensure: created
Feb 20 02:52:47 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/log_dir]/ensure: created
Feb 20 02:52:47 localhost systemd[1]: tmp-crun.lejFRM.mount: Deactivated successfully.
Feb 20 02:52:47 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a4f2dcfb5085be4ff09007345d487b3769ac2c3559d0c3a596ea6ec0e7b33438-userdata-shm.mount: Deactivated successfully.
Feb 20 02:52:48 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/backend]/ensure: created
Feb 20 02:52:48 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/enabled]/ensure: created
Feb 20 02:52:48 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/memcache_servers]/ensure: created
Feb 20 02:52:48 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/tls_enabled]/ensure: created
Feb 20 02:52:48 localhost puppet-user[53505]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created
Feb 20 02:52:48 localhost puppet-user[53505]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created
Feb 20 02:52:48 localhost podman[54902]: 2026-02-20 07:52:48.511133292 +0000 UTC m=+1.115281430 container cleanup a4f2dcfb5085be4ff09007345d487b3769ac2c3559d0c3a596ea6ec0e7b33438 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, 
com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_id=tripleo_puppet_step1, vendor=Red Hat, Inc., release=1766032510, 
architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, build-date=2026-01-12T22:36:40Z, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, name=rhosp-rhel9/openstack-ovn-controller, managed_by=tripleo_ansible, container_name=container-puppet-ovn_controller, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1) Feb 20 02:52:48 localhost python3[53200]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-ovn_controller --conmon-pidfile /run/container-puppet-ovn_controller.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005625202 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,vs_config,exec --env NAME=ovn_controller --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::neutron::agents::ovn#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-ovn_controller --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': 
['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-ovn_controller.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /etc/sysconfig/modules:/etc/sysconfig/modules --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume 
/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Feb 20 02:52:48 localhost systemd[1]: libpod-conmon-a4f2dcfb5085be4ff09007345d487b3769ac2c3559d0c3a596ea6ec0e7b33438.scope: Deactivated successfully. Feb 20 02:52:48 localhost podman[54435]: 2026-02-20 07:52:44.63361063 +0000 UTC m=+0.031622782 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1 Feb 20 02:52:48 localhost systemd[1]: var-lib-containers-storage-overlay-9f3741b46d9214fdedf7f62129d14d13429d2d344cb39a1f8b2983025ef50d25-merged.mount: Deactivated successfully. Feb 20 02:52:48 localhost puppet-user[53505]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/ssl]/ensure: created Feb 20 02:52:48 localhost podman[54966]: 2026-02-20 07:52:48.767560477 +0000 UTC m=+0.092929667 container create 94e27dc4de5b77d45fdbe287a5ebee93771ede5c638a2e8c4a616f4f369e880c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, org.opencontainers.image.created=2026-01-12T22:57:35Z, version=17.1.13, maintainer=OpenStack TripleO Team, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-server, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 neutron-server, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, name=rhosp-rhel9/openstack-neutron-server, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, url=https://www.redhat.com, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, 
container_name=container-puppet-neutron, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, io.buildah.version=1.41.5, io.openshift.expose-services=, com.redhat.component=openstack-neutron-server-container, release=1766032510, config_id=tripleo_puppet_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2026-01-12T22:57:35Z, vcs-type=git, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f) Feb 20 02:52:48 localhost systemd[1]: Started 
libpod-conmon-94e27dc4de5b77d45fdbe287a5ebee93771ede5c638a2e8c4a616f4f369e880c.scope. Feb 20 02:52:48 localhost podman[54966]: 2026-02-20 07:52:48.718446279 +0000 UTC m=+0.043815489 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1 Feb 20 02:52:48 localhost systemd[1]: Started libcrun container. Feb 20 02:52:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29ab390a091d2afa33c2755ebdc1277500dcd38cf20687fcf657e54262f4a111/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Feb 20 02:52:48 localhost podman[54966]: 2026-02-20 07:52:48.842375007 +0000 UTC m=+0.167744187 container init 94e27dc4de5b77d45fdbe287a5ebee93771ede5c638a2e8c4a616f4f369e880c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-neutron-server, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, summary=Red Hat OpenStack Platform 17.1 neutron-server, com.redhat.component=openstack-neutron-server-container, release=1766032510, io.buildah.version=1.41.5, build-date=2026-01-12T22:57:35Z, vendor=Red Hat, Inc., vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': 
['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, container_name=container-puppet-neutron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, batch=17.1_20260112.1, version=17.1.13, description=Red Hat OpenStack Platform 17.1 neutron-server, config_id=tripleo_puppet_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, vcs-type=git, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:57:35Z, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, distribution-scope=public) Feb 20 02:52:48 localhost podman[54966]: 2026-02-20 07:52:48.853301136 +0000 UTC m=+0.178670326 container start 94e27dc4de5b77d45fdbe287a5ebee93771ede5c638a2e8c4a616f4f369e880c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, version=17.1.13, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, 
description=Red Hat OpenStack Platform 17.1 neutron-server, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-neutron-server-container, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, summary=Red Hat OpenStack Platform 17.1 neutron-server, distribution-scope=public, release=1766032510, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_puppet_step1, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:57:35Z, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, container_name=container-puppet-neutron, io.openshift.expose-services=, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, name=rhosp-rhel9/openstack-neutron-server, org.opencontainers.image.created=2026-01-12T22:57:35Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, url=https://www.redhat.com, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 02:52:48 localhost podman[54966]: 2026-02-20 07:52:48.853629746 +0000 UTC m=+0.178999006 container attach 94e27dc4de5b77d45fdbe287a5ebee93771ede5c638a2e8c4a616f4f369e880c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:57:35Z, url=https://www.redhat.com, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, managed_by=tripleo_ansible, version=17.1.13, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, container_name=container-puppet-neutron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-server, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 
'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, config_id=tripleo_puppet_step1, architecture=x86_64, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-neutron-server-container, release=1766032510, distribution-scope=public, build-date=2026-01-12T22:57:35Z, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 neutron-server, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-server) Feb 20 02:52:48 localhost puppet-user[53505]: Notice: /Stage[main]/Nova/Oslo::Messaging::Default[nova_config]/Nova_config[DEFAULT/transport_url]/ensure: created Feb 20 02:52:48 localhost 
puppet-user[53505]: Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/driver]/ensure: created
Feb 20 02:52:48 localhost puppet-user[53505]: Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/transport_url]/ensure: created
Feb 20 02:52:48 localhost puppet-user[53505]: Notice: /Stage[main]/Nova/Oslo::Concurrency[nova_config]/Nova_config[oslo_concurrency/lock_path]/ensure: created
Feb 20 02:52:48 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/auth_type]/ensure: created
Feb 20 02:52:48 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/region_name]/ensure: created
Feb 20 02:52:49 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/auth_url]/ensure: created
Feb 20 02:52:49 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/username]/ensure: created
Feb 20 02:52:49 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/password]/ensure: created
Feb 20 02:52:49 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/user_domain_name]/ensure: created
Feb 20 02:52:49 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/project_name]/ensure: created
Feb 20 02:52:49 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/project_domain_name]/ensure: created
Feb 20 02:52:49 localhost puppet-user[53505]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/send_service_user_token]/ensure: created
Feb 20 02:52:49 localhost puppet-user[53505]: Notice: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/ensure: defined content as '{sha256}3ccd56cc76ec60fa08fd698d282c9c89b1e8c485a00f47d57569ed8f6f8a16e4'
Feb 20 02:52:49 localhost puppet-user[53505]: Notice: Applied catalog in 4.44 seconds
Feb 20 02:52:49 localhost puppet-user[53505]: Application:
Feb 20 02:52:49 localhost puppet-user[53505]: Initial environment: production
Feb 20 02:52:49 localhost puppet-user[53505]: Converged environment: production
Feb 20 02:52:49 localhost puppet-user[53505]: Run mode: user
Feb 20 02:52:49 localhost puppet-user[53505]: Changes:
Feb 20 02:52:49 localhost puppet-user[53505]: Total: 183
Feb 20 02:52:49 localhost puppet-user[53505]: Events:
Feb 20 02:52:49 localhost puppet-user[53505]: Success: 183
Feb 20 02:52:49 localhost puppet-user[53505]: Total: 183
Feb 20 02:52:49 localhost puppet-user[53505]: Resources:
Feb 20 02:52:49 localhost puppet-user[53505]: Changed: 183
Feb 20 02:52:49 localhost puppet-user[53505]: Out of sync: 183
Feb 20 02:52:49 localhost puppet-user[53505]: Skipped: 57
Feb 20 02:52:49 localhost puppet-user[53505]: Total: 487
Feb 20 02:52:49 localhost puppet-user[53505]: Time:
Feb 20 02:52:49 localhost puppet-user[53505]: Concat file: 0.00
Feb 20 02:52:49 localhost puppet-user[53505]: Concat fragment: 0.00
Feb 20 02:52:49 localhost puppet-user[53505]: Anchor: 0.00
Feb 20 02:52:49 localhost puppet-user[53505]: File line: 0.00
Feb 20 02:52:49 localhost puppet-user[53505]: Virtlogd config: 0.00
Feb 20 02:52:49 localhost puppet-user[53505]: Virtstoraged config: 0.01
Feb 20 02:52:49 localhost puppet-user[53505]: Virtnodedevd config: 0.01
Feb 20 02:52:49 localhost puppet-user[53505]: Virtsecretd config: 0.02
Feb 20 02:52:49 localhost puppet-user[53505]: Virtqemud config: 0.02
Feb 20 02:52:49 localhost puppet-user[53505]: Exec: 0.02
Feb 20 02:52:49 localhost puppet-user[53505]: Package: 0.02
Feb 20 02:52:49 localhost puppet-user[53505]: File: 0.02
Feb 20 02:52:49 localhost puppet-user[53505]: Virtproxyd config: 0.04
Feb 20 02:52:49 localhost puppet-user[53505]: Augeas: 1.00
Feb 20 02:52:49 localhost puppet-user[53505]: Config retrieval: 1.50
Feb 20 02:52:49 localhost puppet-user[53505]: Last run: 1771573969
Feb 20 02:52:49 localhost puppet-user[53505]: Nova config: 3.09
Feb 20 02:52:49 localhost puppet-user[53505]: Transaction evaluation: 4.43
Feb 20 02:52:49 localhost puppet-user[53505]: Catalog application: 4.44
Feb 20 02:52:49 localhost puppet-user[53505]: Resources: 0.00
Feb 20 02:52:49 localhost puppet-user[53505]: Total: 4.44
Feb 20 02:52:49 localhost puppet-user[53505]: Version:
Feb 20 02:52:49 localhost puppet-user[53505]: Config: 1771573963
Feb 20 02:52:49 localhost puppet-user[53505]: Puppet: 7.10.0
Feb 20 02:52:49 localhost systemd[1]: libpod-09eddf8d2a3327275e08715543eb8d8dcd1ffb1d0607e508c05475611cf6cfba.scope: Deactivated successfully.
Feb 20 02:52:49 localhost systemd[1]: libpod-09eddf8d2a3327275e08715543eb8d8dcd1ffb1d0607e508c05475611cf6cfba.scope: Consumed 8.374s CPU time.
Feb 20 02:52:50 localhost podman[53415]: 2026-02-20 07:52:50.014453615 +0000 UTC m=+9.741049028 container died 09eddf8d2a3327275e08715543eb8d8dcd1ffb1d0607e508c05475611cf6cfba (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_puppet_step1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, container_name=container-puppet-nova_libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-nova-libvirt-container, url=https://www.redhat.com, io.buildah.version=1.41.5, build-date=2026-01-12T23:31:49Z, description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, vendor=Red Hat, Inc., config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 
'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, batch=17.1_20260112.1, tcib_managed=true, name=rhosp-rhel9/openstack-nova-libvirt, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-libvirt) Feb 20 02:52:50 localhost systemd[1]: tmp-crun.Jg2JHX.mount: Deactivated successfully. Feb 20 02:52:50 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-09eddf8d2a3327275e08715543eb8d8dcd1ffb1d0607e508c05475611cf6cfba-userdata-shm.mount: Deactivated successfully. 
Feb 20 02:52:50 localhost podman[55035]: 2026-02-20 07:52:50.194013507 +0000 UTC m=+0.167227562 container cleanup 09eddf8d2a3327275e08715543eb8d8dcd1ffb1d0607e508c05475611cf6cfba (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, name=rhosp-rhel9/openstack-nova-libvirt, managed_by=tripleo_ansible, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-libvirt-container, container_name=container-puppet-nova_libvirt, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:31:49Z, vcs-type=git, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, version=17.1.13, io.buildah.version=1.41.5, config_id=tripleo_puppet_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:31:49Z, batch=17.1_20260112.1, url=https://www.redhat.com, vendor=Red Hat, Inc., distribution-scope=public, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 
'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}) Feb 20 02:52:50 localhost systemd[1]: libpod-conmon-09eddf8d2a3327275e08715543eb8d8dcd1ffb1d0607e508c05475611cf6cfba.scope: Deactivated successfully. 
Feb 20 02:52:50 localhost python3[53200]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-nova_libvirt --conmon-pidfile /run/container-puppet-nova_libvirt.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005625202 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password --env NAME=nova_libvirt --env STEP_CONFIG=include ::tripleo::packages#012# TODO(emilien): figure how to deal with libvirt profile.#012# We'll probably treat it like we do with Neutron plugins.#012# Until then, just include it in the default nova-compute role.#012include tripleo::profile::base::nova::compute::libvirt#012#012include tripleo::profile::base::nova::libvirt#012#012include tripleo::profile::base::nova::compute::libvirt_guests#012#012include tripleo::profile::base::sshd#012include tripleo::profile::base::nova::migration::target --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-nova_libvirt --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt 
profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-nova_libvirt.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw 
--volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Feb 20 02:52:50 localhost systemd[1]: var-lib-containers-storage-overlay-23efe7e57a27c4f0ce8bc52bd63242119e7b6e66fedfb0710e6330683936d6b4-merged.mount: Deactivated successfully. Feb 20 02:52:50 localhost puppet-user[54997]: Error: Facter: error while resolving custom fact "haproxy_version": undefined method `strip' for nil:NilClass Feb 20 02:52:50 localhost puppet-user[54997]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Feb 20 02:52:50 localhost puppet-user[54997]: (file: /etc/puppet/hiera.yaml) Feb 20 02:52:50 localhost puppet-user[54997]: Warning: Undefined variable '::deploy_config_name'; Feb 20 02:52:50 localhost puppet-user[54997]: (file & line not available) Feb 20 02:52:50 localhost puppet-user[54997]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Feb 20 02:52:50 localhost puppet-user[54997]: (file & line not available) Feb 20 02:52:50 localhost puppet-user[54997]: Warning: Unknown variable: 'dhcp_agents_per_net'. 
(file: /etc/puppet/modules/tripleo/manifests/profile/base/neutron.pp, line: 154, column: 37) Feb 20 02:52:51 localhost puppet-user[54997]: Notice: Compiled catalog for np0005625202.localdomain in environment production in 0.61 seconds Feb 20 02:52:51 localhost puppet-user[54997]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/auth_strategy]/ensure: created Feb 20 02:52:51 localhost puppet-user[54997]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/core_plugin]/ensure: created Feb 20 02:52:51 localhost puppet-user[54997]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/host]/ensure: created Feb 20 02:52:51 localhost puppet-user[54997]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dns_domain]/ensure: created Feb 20 02:52:51 localhost puppet-user[54997]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agent_notification]/ensure: created Feb 20 02:52:51 localhost puppet-user[54997]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/allow_overlapping_ips]/ensure: created Feb 20 02:52:51 localhost puppet-user[54997]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/global_physnet_mtu]/ensure: created Feb 20 02:52:51 localhost puppet-user[54997]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/vlan_transparent]/ensure: created Feb 20 02:52:51 localhost puppet-user[54997]: Notice: /Stage[main]/Neutron/Neutron_config[agent/root_helper]/ensure: created Feb 20 02:52:51 localhost puppet-user[54997]: Notice: /Stage[main]/Neutron/Neutron_config[agent/report_interval]/ensure: created Feb 20 02:52:51 localhost puppet-user[54997]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/service_plugins]/ensure: created Feb 20 02:52:51 localhost puppet-user[54997]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/debug]/ensure: created Feb 20 02:52:51 localhost puppet-user[54997]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/nova_metadata_host]/ensure: created Feb 20 
02:52:51 localhost puppet-user[54997]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/nova_metadata_protocol]/ensure: created Feb 20 02:52:51 localhost puppet-user[54997]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/metadata_proxy_shared_secret]/ensure: created Feb 20 02:52:51 localhost puppet-user[54997]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/metadata_workers]/ensure: created Feb 20 02:52:51 localhost puppet-user[54997]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/state_path]/ensure: created Feb 20 02:52:51 localhost puppet-user[54997]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/hwol_qos_enabled]/ensure: created Feb 20 02:52:51 localhost puppet-user[54997]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[agent/root_helper]/ensure: created Feb 20 02:52:51 localhost puppet-user[54997]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovs/ovsdb_connection]/ensure: created Feb 20 02:52:51 localhost puppet-user[54997]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovs/ovsdb_connection_timeout]/ensure: created Feb 20 02:52:51 localhost puppet-user[54997]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovn/ovsdb_probe_interval]/ensure: created Feb 20 02:52:51 localhost puppet-user[54997]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovn/ovn_nb_connection]/ensure: created Feb 20 02:52:51 localhost puppet-user[54997]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovn/ovn_sb_connection]/ensure: created Feb 20 02:52:51 localhost puppet-user[54997]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/transport_url]/ensure: created Feb 20 02:52:51 
localhost puppet-user[54997]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/control_exchange]/ensure: created Feb 20 02:52:51 localhost puppet-user[54997]: Notice: /Stage[main]/Neutron/Oslo::Concurrency[neutron_config]/Neutron_config[oslo_concurrency/lock_path]/ensure: created Feb 20 02:52:51 localhost puppet-user[54997]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/driver]/ensure: created Feb 20 02:52:51 localhost puppet-user[54997]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/transport_url]/ensure: created Feb 20 02:52:51 localhost puppet-user[54997]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created Feb 20 02:52:51 localhost puppet-user[54997]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created Feb 20 02:52:51 localhost puppet-user[54997]: Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/debug]/ensure: created Feb 20 02:52:51 localhost puppet-user[54997]: Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/log_dir]/ensure: created Feb 20 02:52:51 localhost puppet-user[54997]: Notice: Applied catalog in 0.45 seconds Feb 20 02:52:51 localhost puppet-user[54997]: Application: Feb 20 02:52:51 localhost puppet-user[54997]: Initial environment: production Feb 20 02:52:51 localhost puppet-user[54997]: Converged environment: production Feb 20 02:52:51 localhost puppet-user[54997]: Run mode: user Feb 20 02:52:51 localhost puppet-user[54997]: Changes: Feb 20 02:52:51 localhost puppet-user[54997]: Total: 33 Feb 20 02:52:51 localhost puppet-user[54997]: Events: Feb 20 02:52:51 localhost puppet-user[54997]: Success: 33 
Feb 20 02:52:51 localhost puppet-user[54997]: Total: 33 Feb 20 02:52:51 localhost puppet-user[54997]: Resources: Feb 20 02:52:51 localhost puppet-user[54997]: Skipped: 21 Feb 20 02:52:51 localhost puppet-user[54997]: Changed: 33 Feb 20 02:52:51 localhost puppet-user[54997]: Out of sync: 33 Feb 20 02:52:51 localhost puppet-user[54997]: Total: 155 Feb 20 02:52:51 localhost puppet-user[54997]: Time: Feb 20 02:52:51 localhost puppet-user[54997]: Resources: 0.00 Feb 20 02:52:51 localhost puppet-user[54997]: Ovn metadata agent config: 0.02 Feb 20 02:52:51 localhost puppet-user[54997]: Neutron config: 0.36 Feb 20 02:52:51 localhost puppet-user[54997]: Transaction evaluation: 0.44 Feb 20 02:52:51 localhost puppet-user[54997]: Catalog application: 0.45 Feb 20 02:52:51 localhost puppet-user[54997]: Config retrieval: 0.69 Feb 20 02:52:51 localhost puppet-user[54997]: Last run: 1771573971 Feb 20 02:52:51 localhost puppet-user[54997]: Total: 0.45 Feb 20 02:52:51 localhost puppet-user[54997]: Version: Feb 20 02:52:51 localhost puppet-user[54997]: Config: 1771573970 Feb 20 02:52:51 localhost puppet-user[54997]: Puppet: 7.10.0 Feb 20 02:52:52 localhost systemd[1]: libpod-94e27dc4de5b77d45fdbe287a5ebee93771ede5c638a2e8c4a616f4f369e880c.scope: Deactivated successfully. Feb 20 02:52:52 localhost systemd[1]: libpod-94e27dc4de5b77d45fdbe287a5ebee93771ede5c638a2e8c4a616f4f369e880c.scope: Consumed 3.512s CPU time. 
Feb 20 02:52:52 localhost podman[54966]: 2026-02-20 07:52:52.38846194 +0000 UTC m=+3.713831110 container died 94e27dc4de5b77d45fdbe287a5ebee93771ede5c638a2e8c4a616f4f369e880c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-server, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:57:35Z, 
cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-server-container, summary=Red Hat OpenStack Platform 17.1 neutron-server, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, config_id=tripleo_puppet_step1, version=17.1.13, name=rhosp-rhel9/openstack-neutron-server, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, build-date=2026-01-12T22:57:35Z, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, container_name=container-puppet-neutron, distribution-scope=public, release=1766032510, maintainer=OpenStack TripleO Team) Feb 20 02:52:52 localhost systemd[1]: tmp-crun.6ap97I.mount: Deactivated successfully. Feb 20 02:52:52 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-94e27dc4de5b77d45fdbe287a5ebee93771ede5c638a2e8c4a616f4f369e880c-userdata-shm.mount: Deactivated successfully. Feb 20 02:52:52 localhost systemd[1]: var-lib-containers-storage-overlay-29ab390a091d2afa33c2755ebdc1277500dcd38cf20687fcf657e54262f4a111-merged.mount: Deactivated successfully. 
Feb 20 02:52:52 localhost podman[55183]: 2026-02-20 07:52:52.527750379 +0000 UTC m=+0.128455255 container cleanup 94e27dc4de5b77d45fdbe287a5ebee93771ede5c638a2e8c4a616f4f369e880c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-server, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-neutron-server-container, name=rhosp-rhel9/openstack-neutron-server, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:57:35Z, tcib_managed=true, vcs-type=git, release=1766032510, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, build-date=2026-01-12T22:57:35Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, config_id=tripleo_puppet_step1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-server, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, url=https://www.redhat.com, batch=17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, version=17.1.13, container_name=container-puppet-neutron) Feb 20 02:52:52 localhost systemd[1]: libpod-conmon-94e27dc4de5b77d45fdbe287a5ebee93771ede5c638a2e8c4a616f4f369e880c.scope: Deactivated successfully. 
Feb 20 02:52:52 localhost python3[53200]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-neutron --conmon-pidfile /run/container-puppet-neutron.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005625202 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config --env NAME=neutron --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::neutron::ovn_metadata#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-neutron --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625202', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', 
'/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-neutron.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1 Feb 20 02:52:53 localhost python3[55235]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:52:54 localhost python3[55267]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 20 02:52:54 localhost python3[55317]: ansible-ansible.legacy.stat 
Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 02:52:55 localhost python3[55360]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573974.5729885-84402-25482842160415/source dest=/usr/libexec/tripleo-container-shutdown mode=0700 owner=root group=root _original_basename=tripleo-container-shutdown follow=False checksum=7d67b1986212f5548057505748cd74cfcf9c0d35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:52:55 localhost python3[55422]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 02:52:56 localhost python3[55465]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573975.4032044-84402-177632089092488/source dest=/usr/libexec/tripleo-start-podman-container mode=0700 owner=root group=root _original_basename=tripleo-start-podman-container follow=False checksum=536965633b8d3b1ce794269ffb07be0105a560a0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:52:56 localhost python3[55527]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 02:52:56 localhost python3[55570]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573976.199054-84433-73923492353710/source 
dest=/usr/lib/systemd/system/tripleo-container-shutdown.service mode=0644 owner=root group=root _original_basename=tripleo-container-shutdown-service follow=False checksum=66c1d41406ba8714feb9ed0a35259a7a57ef9707 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:52:57 localhost python3[55632]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 02:52:57 localhost python3[55675]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573977.0950418-84451-237833148889266/source dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset mode=0644 owner=root group=root _original_basename=91-tripleo-container-shutdown-preset follow=False checksum=bccb1207dcbcfaa5ca05f83c8f36ce4c2460f081 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:52:58 localhost python3[55705]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 02:52:58 localhost systemd[1]: Reloading. Feb 20 02:52:58 localhost systemd-rc-local-generator[55728]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 02:52:58 localhost systemd-sysv-generator[55733]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Feb 20 02:52:58 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 02:52:58 localhost systemd[1]: Reloading. Feb 20 02:52:58 localhost systemd-rc-local-generator[55766]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 02:52:58 localhost systemd-sysv-generator[55771]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 02:52:58 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 02:52:58 localhost systemd[1]: Starting TripleO Container Shutdown... Feb 20 02:52:58 localhost systemd[1]: Finished TripleO Container Shutdown. Feb 20 02:52:59 localhost python3[55828]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 02:52:59 localhost python3[55871]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573978.9589393-84600-223866577735376/source dest=/usr/lib/systemd/system/netns-placeholder.service mode=0644 owner=root group=root _original_basename=netns-placeholder-service follow=False checksum=8e9c6d5ce3a6e7f71c18780ec899f32f23de4c71 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:53:00 localhost python3[55933]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False 
get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 02:53:00 localhost python3[55976]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573979.8969004-84632-23057297709574/source dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset mode=0644 owner=root group=root _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:53:01 localhost python3[56006]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 02:53:01 localhost systemd[1]: Reloading. Feb 20 02:53:01 localhost systemd-sysv-generator[56036]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 02:53:01 localhost systemd-rc-local-generator[56032]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 02:53:01 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 02:53:01 localhost systemd[1]: Reloading. Feb 20 02:53:01 localhost systemd-rc-local-generator[56069]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 02:53:01 localhost systemd-sysv-generator[56073]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Feb 20 02:53:01 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 02:53:01 localhost sshd[56082]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:53:01 localhost systemd[1]: Starting Create netns directory... Feb 20 02:53:01 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Feb 20 02:53:01 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Feb 20 02:53:01 localhost systemd[1]: Finished Create netns directory. Feb 20 02:53:02 localhost python3[56101]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6 Feb 20 02:53:02 localhost python3[56101]: ansible-container_puppet_config [WARNING] Config change detected for metrics_qdr, new hash: 5f5f3be2be3c6541e811126095b44bf3 Feb 20 02:53:02 localhost python3[56101]: ansible-container_puppet_config [WARNING] Config change detected for collectd, new hash: da9a0dc7b40588672419e3ce10063e21 Feb 20 02:53:02 localhost python3[56101]: ansible-container_puppet_config [WARNING] Config change detected for iscsid, new hash: df79bec7915db2c2cb15f0a47bf8984d Feb 20 02:53:02 localhost python3[56101]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtlogd_wrapper, new hash: ca9e756af36a4b8ed088db0b68d5c381 Feb 20 02:53:02 localhost python3[56101]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtnodedevd, new hash: ca9e756af36a4b8ed088db0b68d5c381 Feb 20 02:53:02 localhost python3[56101]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtproxyd, new hash: ca9e756af36a4b8ed088db0b68d5c381 Feb 20 02:53:02 localhost python3[56101]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtqemud, 
new hash: ca9e756af36a4b8ed088db0b68d5c381 Feb 20 02:53:02 localhost python3[56101]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtsecretd, new hash: ca9e756af36a4b8ed088db0b68d5c381 Feb 20 02:53:02 localhost python3[56101]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtstoraged, new hash: ca9e756af36a4b8ed088db0b68d5c381 Feb 20 02:53:02 localhost python3[56101]: ansible-container_puppet_config [WARNING] Config change detected for rsyslog, new hash: ded727e639ed8db75a0b90424d424624 Feb 20 02:53:02 localhost python3[56101]: ansible-container_puppet_config [WARNING] Config change detected for ceilometer_agent_compute, new hash: 8cdce88e823976bbaa6aae3526d6d0ab Feb 20 02:53:02 localhost python3[56101]: ansible-container_puppet_config [WARNING] Config change detected for ceilometer_agent_ipmi, new hash: 8cdce88e823976bbaa6aae3526d6d0ab Feb 20 02:53:02 localhost python3[56101]: ansible-container_puppet_config [WARNING] Config change detected for logrotate_crond, new hash: 53ed83bb0cae779ff95edb2002262c6f Feb 20 02:53:02 localhost python3[56101]: ansible-container_puppet_config [WARNING] Config change detected for nova_libvirt_init_secret, new hash: ca9e756af36a4b8ed088db0b68d5c381 Feb 20 02:53:02 localhost python3[56101]: ansible-container_puppet_config [WARNING] Config change detected for nova_migration_target, new hash: ca9e756af36a4b8ed088db0b68d5c381 Feb 20 02:53:02 localhost python3[56101]: ansible-container_puppet_config [WARNING] Config change detected for ovn_metadata_agent, new hash: 85da22c155c014a1a90b143a817b4401 Feb 20 02:53:02 localhost python3[56101]: ansible-container_puppet_config [WARNING] Config change detected for nova_compute, new hash: df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381 Feb 20 02:53:02 localhost python3[56101]: ansible-container_puppet_config [WARNING] Config change detected for nova_wait_for_compute_service, new hash: ca9e756af36a4b8ed088db0b68d5c381 Feb 
20 02:53:03 localhost python3[56160]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step1 config_dir=/var/lib/tripleo-config/container-startup-config/step_1 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False Feb 20 02:53:03 localhost podman[56199]: 2026-02-20 07:53:03.804126164 +0000 UTC m=+0.076775721 container create 174ccf8d85d2e74b04dcf29d36e100b7fe87a6ad40bbafe9764e125acad52d7b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, build-date=2026-01-12T22:10:14Z, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.13, container_name=metrics_qdr_init_logs, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, vcs-type=git, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, config_id=tripleo_step1, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': 
['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64) Feb 20 02:53:03 localhost systemd[1]: Started libpod-conmon-174ccf8d85d2e74b04dcf29d36e100b7fe87a6ad40bbafe9764e125acad52d7b.scope. Feb 20 02:53:03 localhost podman[56199]: 2026-02-20 07:53:03.760958145 +0000 UTC m=+0.033607712 image pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Feb 20 02:53:03 localhost systemd[1]: Started libcrun container. Feb 20 02:53:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f30e89b0197b32323080d66a5c2339670a40cc53ac30b9a3c2a0a3585d5c84d/merged/var/log/qdrouterd supports timestamps until 2038 (0x7fffffff) Feb 20 02:53:03 localhost podman[56199]: 2026-02-20 07:53:03.879544012 +0000 UTC m=+0.152193559 container init 174ccf8d85d2e74b04dcf29d36e100b7fe87a6ad40bbafe9764e125acad52d7b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, container_name=metrics_qdr_init_logs, vcs-type=git, managed_by=tripleo_ansible, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.created=2026-01-12T22:10:14Z, version=17.1.13, config_id=tripleo_step1, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Feb 20 02:53:03 localhost podman[56199]: 2026-02-20 07:53:03.891407499 +0000 UTC m=+0.164057016 container start 174ccf8d85d2e74b04dcf29d36e100b7fe87a6ad40bbafe9764e125acad52d7b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, org.opencontainers.image.created=2026-01-12T22:10:14Z, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.41.5, tcib_managed=true, architecture=x86_64, container_name=metrics_qdr_init_logs, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com) Feb 20 02:53:03 localhost podman[56199]: 2026-02-20 07:53:03.891571004 +0000 UTC m=+0.164220521 container attach 174ccf8d85d2e74b04dcf29d36e100b7fe87a6ad40bbafe9764e125acad52d7b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, build-date=2026-01-12T22:10:14Z, version=17.1.13, config_id=tripleo_step1, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1766032510, name=rhosp-rhel9/openstack-qdrouterd, distribution-scope=public, architecture=x86_64, io.buildah.version=1.41.5, container_name=metrics_qdr_init_logs, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, 
io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Feb 20 02:53:03 localhost systemd[1]: libpod-174ccf8d85d2e74b04dcf29d36e100b7fe87a6ad40bbafe9764e125acad52d7b.scope: Deactivated successfully. Feb 20 02:53:03 localhost podman[56199]: 2026-02-20 07:53:03.897710249 +0000 UTC m=+0.170359796 container died 174ccf8d85d2e74b04dcf29d36e100b7fe87a6ad40bbafe9764e125acad52d7b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, container_name=metrics_qdr_init_logs, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vcs-type=git, name=rhosp-rhel9/openstack-qdrouterd, version=17.1.13, build-date=2026-01-12T22:10:14Z) Feb 20 02:53:03 localhost podman[56218]: 2026-02-20 07:53:03.996009015 +0000 UTC m=+0.085653378 container cleanup 174ccf8d85d2e74b04dcf29d36e100b7fe87a6ad40bbafe9764e125acad52d7b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:14Z, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, name=rhosp-rhel9/openstack-qdrouterd, batch=17.1_20260112.1, version=17.1.13, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, url=https://www.redhat.com, config_id=tripleo_step1, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 
'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, container_name=metrics_qdr_init_logs, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Feb 20 02:53:04 localhost systemd[1]: libpod-conmon-174ccf8d85d2e74b04dcf29d36e100b7fe87a6ad40bbafe9764e125acad52d7b.scope: Deactivated successfully. Feb 20 02:53:04 localhost python3[56160]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name metrics_qdr_init_logs --conmon-pidfile /run/metrics_qdr_init_logs.pid --detach=False --label config_id=tripleo_step1 --label container_name=metrics_qdr_init_logs --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/metrics_qdr_init_logs.log --network none --privileged=False --user root --volume /var/log/containers/metrics_qdr:/var/log/qdrouterd:z registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 /bin/bash -c chown -R qdrouterd:qdrouterd /var/log/qdrouterd Feb 20 02:53:04 localhost podman[56294]: 2026-02-20 07:53:04.447867518 +0000 UTC m=+0.089310788 container create 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, io.buildah.version=1.41.5, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, version=17.1.13, batch=17.1_20260112.1, 
vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, name=rhosp-rhel9/openstack-qdrouterd, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, 
org.opencontainers.image.created=2026-01-12T22:10:14Z, container_name=metrics_qdr, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 20 02:53:04 localhost systemd[1]: Started libpod-conmon-6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.scope. Feb 20 02:53:04 localhost systemd[1]: Started libcrun container. Feb 20 02:53:04 localhost podman[56294]: 2026-02-20 07:53:04.402901085 +0000 UTC m=+0.044344385 image pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Feb 20 02:53:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f91179ec110573ad291e73cde7d5952eb6aeb23b95367e17a51a5370c062b6c/merged/var/lib/qdrouterd supports timestamps until 2038 (0x7fffffff) Feb 20 02:53:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f91179ec110573ad291e73cde7d5952eb6aeb23b95367e17a51a5370c062b6c/merged/var/log/qdrouterd supports timestamps until 2038 (0x7fffffff) Feb 20 02:53:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. 
Feb 20 02:53:04 localhost podman[56294]: 2026-02-20 07:53:04.538887486 +0000 UTC m=+0.180330796 container init 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.5, io.openshift.expose-services=, distribution-scope=public, 
batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., url=https://www.redhat.com, container_name=metrics_qdr, build-date=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-type=git, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 20 02:53:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 02:53:04 localhost podman[56294]: 2026-02-20 07:53:04.576550199 +0000 UTC m=+0.217993459 container start 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.13, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20260112.1, build-date=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, config_id=tripleo_step1, name=rhosp-rhel9/openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 02:53:04 localhost python3[56160]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name metrics_qdr --conmon-pidfile /run/metrics_qdr.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=5f5f3be2be3c6541e811126095b44bf3 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step1 --label container_name=metrics_qdr --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/metrics_qdr.log --network host --privileged=False --user qdrouterd --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro --volume /var/lib/metrics_qdr:/var/lib/qdrouterd:z --volume /var/log/containers/metrics_qdr:/var/log/qdrouterd:z 
registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Feb 20 02:53:04 localhost podman[56316]: 2026-02-20 07:53:04.670858106 +0000 UTC m=+0.085975027 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=starting, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, 
io.buildah.version=1.41.5, tcib_managed=true, distribution-scope=public, version=17.1.13, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 02:53:04 localhost systemd[1]: tmp-crun.rf4sTD.mount: Deactivated successfully. Feb 20 02:53:04 localhost systemd[1]: var-lib-containers-storage-overlay-0f30e89b0197b32323080d66a5c2339670a40cc53ac30b9a3c2a0a3585d5c84d-merged.mount: Deactivated successfully. Feb 20 02:53:04 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-174ccf8d85d2e74b04dcf29d36e100b7fe87a6ad40bbafe9764e125acad52d7b-userdata-shm.mount: Deactivated successfully. 
Feb 20 02:53:04 localhost podman[56316]: 2026-02-20 07:53:04.888856734 +0000 UTC m=+0.303973685 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.13, tcib_managed=true, release=1766032510, build-date=2026-01-12T22:10:14Z, io.openshift.expose-services=, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd) Feb 20 02:53:04 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 02:53:05 localhost python3[56388]: ansible-file Invoked with path=/etc/systemd/system/tripleo_metrics_qdr.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:53:05 localhost python3[56404]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_metrics_qdr_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 20 02:53:06 localhost python3[56465]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573985.6038668-84722-15874875093642/source dest=/etc/systemd/system/tripleo_metrics_qdr.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:53:06 
localhost python3[56481]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Feb 20 02:53:06 localhost systemd[1]: Reloading. Feb 20 02:53:06 localhost systemd-sysv-generator[56513]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 02:53:06 localhost systemd-rc-local-generator[56509]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 02:53:06 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 02:53:07 localhost python3[56534]: ansible-systemd Invoked with state=restarted name=tripleo_metrics_qdr.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 02:53:07 localhost systemd[1]: Reloading. Feb 20 02:53:07 localhost systemd-rc-local-generator[56563]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 02:53:07 localhost systemd-sysv-generator[56569]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 02:53:07 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 02:53:07 localhost systemd[1]: Starting metrics_qdr container... Feb 20 02:53:07 localhost systemd[1]: Started metrics_qdr container. 
Feb 20 02:53:08 localhost python3[56616]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks1.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:53:09 localhost python3[56737]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks1.json short_hostname=np0005625202 step=1 update_config_hash_only=False Feb 20 02:53:10 localhost python3[56753]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:53:10 localhost python3[56769]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_1 config_pattern=container-puppet-*.json config_overrides={} debug=True Feb 20 02:53:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. 
Feb 20 02:53:35 localhost podman[56770]: 2026-02-20 07:53:35.451481412 +0000 UTC m=+0.088669655 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, url=https://www.redhat.com, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2026-01-12T22:10:14Z, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, config_id=tripleo_step1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, release=1766032510, container_name=metrics_qdr) Feb 20 02:53:35 localhost podman[56770]: 2026-02-20 07:53:35.647657119 +0000 UTC m=+0.284845292 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:14Z, version=17.1.13, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://www.redhat.com, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, container_name=metrics_qdr, name=rhosp-rhel9/openstack-qdrouterd, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 20 02:53:35 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 02:53:44 localhost sshd[56799]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:54:06 localhost sshd[56878]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:54:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 02:54:06 localhost systemd[1]: tmp-crun.KJfYeR.mount: Deactivated successfully. 
Feb 20 02:54:06 localhost podman[56880]: 2026-02-20 07:54:06.289949198 +0000 UTC m=+0.076967976 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, release=1766032510, managed_by=tripleo_ansible, container_name=metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-type=git, build-date=2026-01-12T22:10:14Z, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, architecture=x86_64, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 
17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, version=17.1.13, config_id=tripleo_step1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-qdrouterd) Feb 20 02:54:06 localhost podman[56880]: 2026-02-20 07:54:06.500008856 +0000 UTC m=+0.287027594 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_id=tripleo_step1, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, vcs-type=git, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, container_name=metrics_qdr, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc.) Feb 20 02:54:06 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 02:54:07 localhost sshd[56911]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:54:12 localhost sshd[56913]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:54:20 localhost sshd[56915]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:54:23 localhost sshd[56917]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:54:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 02:54:37 localhost systemd[1]: tmp-crun.gKzfsr.mount: Deactivated successfully. 
Feb 20 02:54:37 localhost podman[56919]: 2026-02-20 07:54:37.454735845 +0000 UTC m=+0.099881219 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, io.openshift.expose-services=, version=17.1.13, tcib_managed=true, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd) Feb 20 02:54:37 localhost podman[56919]: 2026-02-20 07:54:37.673332637 +0000 UTC m=+0.318477991 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.expose-services=, build-date=2026-01-12T22:10:14Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, name=rhosp-rhel9/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=metrics_qdr, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.5, config_id=tripleo_step1, architecture=x86_64, url=https://www.redhat.com, version=17.1.13, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 20 02:54:37 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 02:54:47 localhost sshd[56948]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:54:56 localhost sshd[57027]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:55:05 localhost sshd[57029]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:55:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. 
Feb 20 02:55:08 localhost podman[57031]: 2026-02-20 07:55:08.48256606 +0000 UTC m=+0.121381081 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.openshift.expose-services=, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp-rhel9/openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:14Z, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5) Feb 20 02:55:08 localhost podman[57031]: 2026-02-20 07:55:08.673371636 +0000 UTC m=+0.312186667 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, release=1766032510, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, version=17.1.13, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, architecture=x86_64, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:14Z, config_id=tripleo_step1, url=https://www.redhat.com, io.k8s.description=Red 
Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Feb 20 02:55:08 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 02:55:11 localhost sshd[57060]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:55:11 localhost sshd[57061]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:55:15 localhost sshd[57064]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:55:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 02:55:39 localhost systemd[1]: tmp-crun.tqxwP1.mount: Deactivated successfully. 
Feb 20 02:55:39 localhost podman[57066]: 2026-02-20 07:55:39.441131618 +0000 UTC m=+0.083566852 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-qdrouterd, config_id=tripleo_step1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.5, version=17.1.13, build-date=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, release=1766032510, container_name=metrics_qdr, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=) Feb 20 02:55:39 localhost podman[57066]: 2026-02-20 07:55:39.644882164 +0000 UTC m=+0.287317428 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.13, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, build-date=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, container_name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com) Feb 20 02:55:39 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 02:55:44 localhost sshd[57095]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:56:05 localhost sshd[57174]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:56:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 02:56:10 localhost systemd[1]: tmp-crun.eDNDrb.mount: Deactivated successfully. 
Feb 20 02:56:10 localhost podman[57176]: 2026-02-20 07:56:10.440094105 +0000 UTC m=+0.083527561 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, url=https://www.redhat.com, version=17.1.13, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1766032510, build-date=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, distribution-scope=public, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5) Feb 20 02:56:10 localhost podman[57176]: 2026-02-20 07:56:10.624650554 +0000 UTC m=+0.268083930 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, url=https://www.redhat.com, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, io.buildah.version=1.41.5, release=1766032510, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.created=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team) Feb 20 02:56:10 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 02:56:34 localhost sshd[57205]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:56:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. 
Feb 20 02:56:41 localhost podman[57208]: 2026-02-20 07:56:41.413613534 +0000 UTC m=+0.057274612 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:14Z, release=1766032510, version=17.1.13, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, batch=17.1_20260112.1, 
com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, container_name=metrics_qdr) Feb 20 02:56:41 localhost podman[57208]: 2026-02-20 07:56:41.589944449 +0000 UTC m=+0.233605497 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., release=1766032510, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp-rhel9/openstack-qdrouterd, distribution-scope=public, architecture=x86_64, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13) Feb 20 02:56:41 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 02:57:07 localhost sshd[57313]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:57:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 02:57:12 localhost systemd[1]: tmp-crun.VU7n8S.mount: Deactivated successfully. 
Feb 20 02:57:12 localhost podman[57315]: 2026-02-20 07:57:12.441048568 +0000 UTC m=+0.081911075 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, batch=17.1_20260112.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, container_name=metrics_qdr, url=https://www.redhat.com, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 20 02:57:12 localhost podman[57315]: 2026-02-20 07:57:12.65796505 +0000 UTC m=+0.298827507 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, architecture=x86_64, io.openshift.expose-services=, name=rhosp-rhel9/openstack-qdrouterd, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1766032510, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, container_name=metrics_qdr, build-date=2026-01-12T22:10:14Z, tcib_managed=true, url=https://www.redhat.com, distribution-scope=public, managed_by=tripleo_ansible, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.created=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 20 02:57:12 localhost sshd[57342]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:57:12 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 02:57:35 localhost sshd[57344]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:57:41 localhost sshd[57346]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:57:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 02:57:43 localhost systemd[1]: tmp-crun.OIbAYb.mount: Deactivated successfully. 
Feb 20 02:57:43 localhost podman[57348]: 2026-02-20 07:57:43.448786765 +0000 UTC m=+0.085598307 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, vcs-type=git, name=rhosp-rhel9/openstack-qdrouterd, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, distribution-scope=public, config_id=tripleo_step1, release=1766032510, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, tcib_managed=true, batch=17.1_20260112.1, vendor=Red Hat, Inc.) Feb 20 02:57:43 localhost podman[57348]: 2026-02-20 07:57:43.662723143 +0000 UTC m=+0.299534645 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, tcib_managed=true, config_id=tripleo_step1, io.buildah.version=1.41.5, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
container_name=metrics_qdr, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 20 02:57:43 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. 
Feb 20 02:58:04 localhost ceph-osd[32921]: osd.5 pg_epoch: 18 pg[2.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [4,5,3] r=1 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 20 02:58:05 localhost sshd[57453]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:58:05 localhost ceph-osd[32921]: osd.5 pg_epoch: 20 pg[3.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [5,4,0] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:06 localhost ceph-osd[32921]: osd.5 pg_epoch: 21 pg[3.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [5,4,0] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:07 localhost ceph-osd[32921]: osd.5 pg_epoch: 21 pg[4.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [3,4,5] r=2 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 20 02:58:08 localhost ceph-osd[31981]: osd.2 pg_epoch: 23 pg[5.0( empty local-lis/les=0/0 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [2,3,4] r=0 lpr=23 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:10 localhost ceph-osd[31981]: osd.2 pg_epoch: 24 pg[5.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [2,3,4] r=0 lpr=23 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 02:58:14 localhost systemd[1]: tmp-crun.ChmVpn.mount: Deactivated successfully. 
Feb 20 02:58:14 localhost podman[57455]: 2026-02-20 07:58:14.450488173 +0000 UTC m=+0.089329891 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, config_id=tripleo_step1, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, container_name=metrics_qdr, io.openshift.expose-services=, version=17.1.13, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:14Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64) Feb 20 02:58:14 localhost podman[57455]: 2026-02-20 07:58:14.681731542 +0000 UTC m=+0.320573290 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-qdrouterd, release=1766032510, build-date=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.41.5, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 20 02:58:14 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. 
Feb 20 02:58:19 localhost ceph-osd[32921]: osd.5 pg_epoch: 31 pg[3.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=10.842928886s) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active pruub 1118.861572266s@ mbc={}] start_peering_interval up [5,4,0] -> [5,4,0], acting [5,4,0] -> [5,4,0], acting_primary 5 -> 5, up_primary 5 -> 5, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:19 localhost ceph-osd[32921]: osd.5 pg_epoch: 31 pg[2.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=31 pruub=8.628171921s) [4,5,3] r=1 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 active pruub 1116.646850586s@ mbc={}] start_peering_interval up [4,5,3] -> [4,5,3], acting [4,5,3] -> [4,5,3], acting_primary 4 -> 4, up_primary 4 -> 4, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:19 localhost ceph-osd[32921]: osd.5 pg_epoch: 31 pg[3.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=10.842928886s) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown pruub 1118.861572266s@ mbc={}] state: transitioning to Primary Feb 20 02:58:19 localhost ceph-osd[32921]: osd.5 pg_epoch: 31 pg[2.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=31 pruub=8.625298500s) [4,5,3] r=1 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1116.646850586s@ mbc={}] state: transitioning to Stray Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[2.1f( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=1 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.1e( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:20 localhost 
ceph-osd[32921]: osd.5 pg_epoch: 32 pg[2.1e( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=1 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.1f( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[2.1d( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=1 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.1c( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[2.1c( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=1 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.1d( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.1a( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[2.1a( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=1 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 
pg_epoch: 32 pg[3.1b( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[2.1b( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=1 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.8( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[2.9( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=1 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.9( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[2.8( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=1 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.6( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.4( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[2.5( empty 
local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=1 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[2.7( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=1 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.5( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.3( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[2.4( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=1 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[2.6( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=1 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[2.2( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=1 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[2.1( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=1 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.7( empty local-lis/les=20/21 n=0 
ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.1( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.2( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[2.b( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=1 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.a( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[2.a( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=1 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[2.3( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=1 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.b( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.d( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 
sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.c( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[2.c( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=1 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.f( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[2.d( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=1 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[2.e( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=1 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[2.f( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=1 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.e( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.11( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 
pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.10( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[2.11( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=1 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.13( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[2.10( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=1 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[2.12( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=1 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.12( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[2.13( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=1 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.15( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 
0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[2.14( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=1 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.17( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.14( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[2.16( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=1 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[2.17( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=1 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[2.15( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=1 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.16( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[2.18( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=1 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] 
state: transitioning to Stray Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.18( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.19( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[2.19( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=1 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.0( empty local-lis/les=31/32 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.17( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.1a( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.18( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.19( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 
pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.14( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.16( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.1b( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.15( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.11( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.10( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.13( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:20 localhost ceph-osd[32921]: 
osd.5 pg_epoch: 32 pg[3.f( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.d( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.e( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.1c( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.c( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.12( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.1( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.3( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 
0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.2( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.5( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.8( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.4( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.6( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.a( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.1d( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.9( empty 
local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.1f( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.b( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.1e( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:20 localhost ceph-osd[32921]: osd.5 pg_epoch: 32 pg[3.7( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:21 localhost ceph-osd[31981]: osd.2 pg_epoch: 33 pg[5.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=33 pruub=12.285961151s) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 active pruub 1126.377929688s@ mbc={}] start_peering_interval up [2,3,4] -> [2,3,4], acting [2,3,4] -> [2,3,4], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:21 localhost ceph-osd[31981]: osd.2 pg_epoch: 33 pg[5.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=33 pruub=12.285961151s) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown pruub 1126.377929688s@ mbc={}] state: transitioning to Primary Feb 20 02:58:21 
localhost ceph-osd[32921]: osd.5 pg_epoch: 33 pg[4.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=33 pruub=9.727592468s) [3,4,5] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active pruub 1119.788696289s@ mbc={}] start_peering_interval up [3,4,5] -> [3,4,5], acting [3,4,5] -> [3,4,5], acting_primary 3 -> 3, up_primary 3 -> 3, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:21 localhost ceph-osd[32921]: osd.5 pg_epoch: 33 pg[4.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=33 pruub=9.722712517s) [3,4,5] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1119.788696289s@ mbc={}] state: transitioning to Stray Feb 20 02:58:22 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 3.0 scrub starts Feb 20 02:58:22 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 3.0 scrub ok Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.1c( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.19( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.16( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.17( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.f( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 
les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.e( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.d( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.1( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.2( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.3( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.1b( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.4( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.b( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 
pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.18( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.5( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.6( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.8( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.1a( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.7( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.a( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.9( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: 
transitioning to Primary Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.c( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.1f( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.1d( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.15( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.1e( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.10( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.11( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.13( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:24 
localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.12( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.14( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:24 localhost ceph-osd[32921]: osd.5 pg_epoch: 34 pg[4.1a( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:24 localhost ceph-osd[32921]: osd.5 pg_epoch: 34 pg[4.18( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:24 localhost ceph-osd[32921]: osd.5 pg_epoch: 34 pg[4.1c( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:24 localhost ceph-osd[32921]: osd.5 pg_epoch: 34 pg[4.f( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:24 localhost ceph-osd[32921]: osd.5 pg_epoch: 34 pg[4.e( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:24 localhost ceph-osd[32921]: osd.5 pg_epoch: 34 pg[4.1( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:24 localhost 
ceph-osd[32921]: osd.5 pg_epoch: 34 pg[4.3( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:24 localhost ceph-osd[32921]: osd.5 pg_epoch: 34 pg[4.2( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:24 localhost ceph-osd[32921]: osd.5 pg_epoch: 34 pg[4.4( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:24 localhost ceph-osd[32921]: osd.5 pg_epoch: 34 pg[4.1d( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:24 localhost ceph-osd[32921]: osd.5 pg_epoch: 34 pg[4.7( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:24 localhost ceph-osd[32921]: osd.5 pg_epoch: 34 pg[4.19( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:24 localhost ceph-osd[32921]: osd.5 pg_epoch: 34 pg[4.6( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:24 localhost ceph-osd[32921]: osd.5 pg_epoch: 34 pg[4.c( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:24 localhost 
ceph-osd[32921]: osd.5 pg_epoch: 34 pg[4.5( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:24 localhost ceph-osd[32921]: osd.5 pg_epoch: 34 pg[4.1b( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:24 localhost ceph-osd[32921]: osd.5 pg_epoch: 34 pg[4.a( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:24 localhost ceph-osd[32921]: osd.5 pg_epoch: 34 pg[4.b( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:24 localhost ceph-osd[32921]: osd.5 pg_epoch: 34 pg[4.d( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:24 localhost ceph-osd[32921]: osd.5 pg_epoch: 34 pg[4.8( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:24 localhost ceph-osd[32921]: osd.5 pg_epoch: 34 pg[4.9( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:24 localhost ceph-osd[32921]: osd.5 pg_epoch: 34 pg[4.16( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:24 localhost 
ceph-osd[32921]: osd.5 pg_epoch: 34 pg[4.14( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:24 localhost ceph-osd[32921]: osd.5 pg_epoch: 34 pg[4.15( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:24 localhost ceph-osd[32921]: osd.5 pg_epoch: 34 pg[4.17( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:24 localhost ceph-osd[32921]: osd.5 pg_epoch: 34 pg[4.12( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:24 localhost ceph-osd[32921]: osd.5 pg_epoch: 34 pg[4.13( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:24 localhost ceph-osd[32921]: osd.5 pg_epoch: 34 pg[4.10( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:24 localhost ceph-osd[32921]: osd.5 pg_epoch: 34 pg[4.11( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:24 localhost ceph-osd[32921]: osd.5 pg_epoch: 34 pg[4.1e( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:24 localhost 
ceph-osd[32921]: osd.5 pg_epoch: 34 pg[4.1f( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.0( empty local-lis/les=33/34 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.10( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.1f( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.12( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.11( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.13( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.17( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 
mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.1e( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.14( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.15( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.16( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.a( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.8( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.c( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 
pg[5.b( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.d( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.4( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.6( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.7( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.5( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.9( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.3( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 active mbc={}] 
state: react AllReplicasActivated Activating complete Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.f( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.1b( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.2( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.e( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.1a( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.1d( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.18( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.1c( empty 
local-lis/les=33/34 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.19( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 34 pg[5.1( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:58:25 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 5.0 scrub starts Feb 20 02:58:25 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 3.17 scrub starts Feb 20 02:58:25 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 3.17 scrub ok Feb 20 02:58:26 localhost ceph-osd[31981]: osd.2 pg_epoch: 35 pg[6.0( empty local-lis/les=0/0 n=0 ec=35/35 lis/c=0/0 les/c/f=0/0/0 sis=35) [0,4,2] r=2 lpr=35 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 20 02:58:27 localhost ceph-osd[32921]: osd.5 pg_epoch: 36 pg[7.0( empty local-lis/les=0/0 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [1,5,3] r=1 lpr=36 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 20 02:58:28 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 3.1a deep-scrub starts Feb 20 02:58:28 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 5.1f scrub starts Feb 20 02:58:28 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 3.1a deep-scrub ok Feb 20 02:58:28 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 5.1f scrub ok Feb 20 02:58:29 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 5.10 scrub starts Feb 20 02:58:29 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 5.10 scrub ok 
Feb 20 02:58:31 localhost sshd[57483]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[4.1d( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [2,1,3] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[2.1c( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39) [2,1,0] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.18( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.639393806s) [4,2,3] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.540771484s@ mbc={}] start_peering_interval up [2,3,4] -> [4,2,3], acting [2,3,4] -> [4,2,3], acting_primary 2 -> 4, up_primary 2 -> 4, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.18( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.639341354s) [4,2,3] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.540771484s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[4.19( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [2,3,1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.1a( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.638484955s) [2,4,3] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.540405273s@ mbc={}] start_peering_interval up [2,3,4] -> [2,4,3], acting [2,3,4] -> [2,4,3], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 
02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.1a( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.638484955s) [2,4,3] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown pruub 1132.540405273s@ mbc={}] state: transitioning to Primary Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[2.1d( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39) [2,4,0] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[4.1c( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [2,3,4] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.1f( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.632340431s) [4,5,3] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.535644531s@ mbc={}] start_peering_interval up [2,3,4] -> [4,5,3], acting [2,3,4] -> [4,5,3], acting_primary 2 -> 4, up_primary 2 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.1f( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.632292747s) [4,5,3] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.535644531s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.1d( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.637246132s) [3,1,5] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.540649414s@ mbc={}] start_peering_interval up [2,3,4] -> [3,1,5], acting [2,3,4] -> [3,1,5], acting_primary 2 -> 3, up_primary 2 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 
pg_epoch: 39 pg[5.1e( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.633142471s) [0,1,2] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.536499023s@ mbc={}] start_peering_interval up [2,3,4] -> [0,1,2], acting [2,3,4] -> [0,1,2], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.c( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.633953094s) [3,4,2] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.537353516s@ mbc={}] start_peering_interval up [2,3,4] -> [3,4,2], acting [2,3,4] -> [3,4,2], acting_primary 2 -> 3, up_primary 2 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.1d( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.637202263s) [3,1,5] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.540649414s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.c( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.633909225s) [3,4,2] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.537353516s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.1e( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.633090973s) [0,1,2] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.536499023s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.9( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.634937286s) [1,5,0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.538452148s@ mbc={}] start_peering_interval up 
[2,3,4] -> [1,5,0], acting [2,3,4] -> [1,5,0], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.9( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.634903908s) [1,5,0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.538452148s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.a( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.633432388s) [0,2,4] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.537109375s@ mbc={}] start_peering_interval up [2,3,4] -> [0,2,4], acting [2,3,4] -> [0,2,4], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.a( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.633313179s) [0,2,4] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.537109375s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[3.8( empty local-lis/les=0/0 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39) [2,0,4] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.6( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.633376122s) [3,1,2] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.537841797s@ mbc={}] start_peering_interval up [2,3,4] -> [3,1,2], acting [2,3,4] -> [3,1,2], acting_primary 2 -> 3, up_primary 2 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.b( empty local-lis/les=33/34 n=0 ec=33/23 
lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.632895470s) [5,0,4] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.537475586s@ mbc={}] start_peering_interval up [2,3,4] -> [5,0,4], acting [2,3,4] -> [5,0,4], acting_primary 2 -> 5, up_primary 2 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.b( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.632811546s) [5,0,4] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.537475586s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.5( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.633675575s) [0,4,5] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.538452148s@ mbc={}] start_peering_interval up [2,3,4] -> [0,4,5], acting [2,3,4] -> [0,4,5], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.5( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.633621216s) [0,4,5] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.538452148s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.6( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.633329391s) [3,1,2] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.537841797s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.7( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.634760857s) [4,3,5] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.538330078s@ mbc={}] start_peering_interval up [2,3,4] -> [4,3,5], acting [2,3,4] -> [4,3,5], acting_primary 2 
-> 4, up_primary 2 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.8( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.632229805s) [2,0,1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.537231445s@ mbc={}] start_peering_interval up [2,3,4] -> [2,0,1], acting [2,3,4] -> [2,0,1], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.4( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.632698059s) [5,3,4] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.537719727s@ mbc={}] start_peering_interval up [2,3,4] -> [5,3,4], acting [2,3,4] -> [5,3,4], acting_primary 2 -> 5, up_primary 2 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.7( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.633406639s) [4,3,5] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.538330078s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.8( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.632229805s) [2,0,1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown pruub 1132.537231445s@ mbc={}] state: transitioning to Primary Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.3( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.633517265s) [0,1,2] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.538574219s@ mbc={}] start_peering_interval up [2,3,4] -> [0,1,2], acting [2,3,4] -> [0,1,2], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> 2, features acting 4540138322906710015 upacting 
4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.3( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.633475304s) [0,1,2] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.538574219s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.1b( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.633544922s) [1,0,2] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.538696289s@ mbc={}] start_peering_interval up [2,3,4] -> [1,0,2], acting [2,3,4] -> [1,0,2], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.2( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.633557320s) [4,0,2] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.538940430s@ mbc={}] start_peering_interval up [2,3,4] -> [4,0,2], acting [2,3,4] -> [4,0,2], acting_primary 2 -> 4, up_primary 2 -> 4, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.1( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.635879517s) [4,3,5] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.541259766s@ mbc={}] start_peering_interval up [2,3,4] -> [4,3,5], acting [2,3,4] -> [4,3,5], acting_primary 2 -> 4, up_primary 2 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.1b( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.633417130s) [1,0,2] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.538696289s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 
39 pg[5.2( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.633514404s) [4,0,2] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.538940430s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.1( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.635842323s) [4,3,5] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.541259766s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.d( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.632160187s) [2,4,0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.537597656s@ mbc={}] start_peering_interval up [2,3,4] -> [2,4,0], acting [2,3,4] -> [2,4,0], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.d( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.632160187s) [2,4,0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown pruub 1132.537597656s@ mbc={}] state: transitioning to Primary Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.4( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.632668495s) [5,3,4] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.537719727s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.e( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.634716988s) [2,0,4] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.540283203s@ mbc={}] start_peering_interval up [2,3,4] -> [2,0,4], acting [2,3,4] -> [2,0,4], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 
02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.f( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.633025169s) [4,2,3] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.538696289s@ mbc={}] start_peering_interval up [2,3,4] -> [4,2,3], acting [2,3,4] -> [4,2,3], acting_primary 2 -> 4, up_primary 2 -> 4, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.10( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.629827499s) [4,5,0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.535644531s@ mbc={}] start_peering_interval up [2,3,4] -> [4,5,0], acting [2,3,4] -> [4,5,0], acting_primary 2 -> 4, up_primary 2 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.f( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.632988930s) [4,2,3] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.538696289s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.e( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.634716988s) [2,0,4] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown pruub 1132.540283203s@ mbc={}] state: transitioning to Primary Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.10( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.629788399s) [4,5,0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.535644531s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.11( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.630021095s) [1,2,0] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.536010742s@ 
mbc={}] start_peering_interval up [2,3,4] -> [1,2,0], acting [2,3,4] -> [1,2,0], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.11( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.629990578s) [1,2,0] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.536010742s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.12( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.629731178s) [5,1,3] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.535766602s@ mbc={}] start_peering_interval up [2,3,4] -> [5,1,3], acting [2,3,4] -> [5,1,3], acting_primary 2 -> 5, up_primary 2 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.12( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.629696846s) [5,1,3] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.535766602s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.14( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.630552292s) [3,2,4] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.536621094s@ mbc={}] start_peering_interval up [2,3,4] -> [3,2,4], acting [2,3,4] -> [3,2,4], acting_primary 2 -> 3, up_primary 2 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.13( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.630022049s) [5,0,1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.536132812s@ mbc={}] start_peering_interval up [2,3,4] -> [5,0,1], acting [2,3,4] -> 
[5,0,1], acting_primary 2 -> 5, up_primary 2 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.13( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.629956245s) [5,0,1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.536132812s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.16( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.630797386s) [1,3,2] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.537109375s@ mbc={}] start_peering_interval up [2,3,4] -> [1,3,2], acting [2,3,4] -> [1,3,2], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.17( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.630057335s) [3,5,4] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.536254883s@ mbc={}] start_peering_interval up [2,3,4] -> [3,5,4], acting [2,3,4] -> [3,5,4], acting_primary 2 -> 3, up_primary 2 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.16( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.630761147s) [1,3,2] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.537109375s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.17( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.630022049s) [3,5,4] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.536254883s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.15( empty local-lis/les=33/34 n=0 
ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.630504608s) [4,3,5] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.536865234s@ mbc={}] start_peering_interval up [2,3,4] -> [4,3,5], acting [2,3,4] -> [4,3,5], acting_primary 2 -> 4, up_primary 2 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.19( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.634807587s) [0,1,5] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.541259766s@ mbc={}] start_peering_interval up [2,3,4] -> [0,1,5], acting [2,3,4] -> [0,1,5], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.1c( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.634739876s) [4,2,0] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.541137695s@ mbc={}] start_peering_interval up [2,3,4] -> [4,2,0], acting [2,3,4] -> [4,2,0], acting_primary 2 -> 4, up_primary 2 -> 4, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.15( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.630455017s) [4,3,5] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.536865234s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.19( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.634769440s) [0,1,5] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.541259766s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.14( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.630216599s) [3,2,4] 
r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.536621094s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[5.1c( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.634654045s) [4,2,0] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.541137695s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[2.5( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39) [2,0,1] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.18( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.631090164s) [4,3,5] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.500244141s@ mbc={}] start_peering_interval up [3,4,5] -> [4,3,5], acting [3,4,5] -> [4,3,5], acting_primary 3 -> 4, up_primary 3 -> 4, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.1c( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.175134659s) [1,3,2] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.044433594s@ mbc={}] start_peering_interval up [5,4,0] -> [1,3,2], acting [5,4,0] -> [1,3,2], acting_primary 5 -> 1, up_primary 5 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.1d( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.178655624s) [2,4,0] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.047973633s@ mbc={}] start_peering_interval up [4,5,3] -> [2,4,0], acting [4,5,3] -> [2,4,0], acting_primary 4 -> 2, up_primary 4 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 
20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.18( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.631000519s) [4,3,5] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.500244141s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.1b( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.637164116s) [4,3,5] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.506591797s@ mbc={}] start_peering_interval up [3,4,5] -> [4,3,5], acting [3,4,5] -> [4,3,5], acting_primary 3 -> 4, up_primary 3 -> 4, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.1c( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.174960136s) [1,3,2] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.044433594s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.1b( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.637008667s) [4,3,5] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.506591797s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.1d( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.178379059s) [2,4,0] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.047973633s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.1c( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.177918434s) [2,1,0] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.047851562s@ mbc={}] start_peering_interval up [4,5,3] -> [2,1,0], acting [4,5,3] -> [2,1,0], acting_primary 4 -> 2, up_primary 4 -> 2, role 1 -> -1, 
features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.1c( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.177843094s) [2,1,0] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.047851562s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.1d( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.179451942s) [5,4,3] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.049560547s@ mbc={}] start_peering_interval up [5,4,0] -> [5,4,3], acting [5,4,0] -> [5,4,3], acting_primary 5 -> 5, up_primary 5 -> 5, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.1a( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.629971504s) [4,3,5] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.500244141s@ mbc={}] start_peering_interval up [3,4,5] -> [4,3,5], acting [3,4,5] -> [4,3,5], acting_primary 3 -> 4, up_primary 3 -> 4, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.1d( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.179451942s) [5,4,3] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown pruub 1133.049560547s@ mbc={}] state: transitioning to Primary Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.1a( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.629929543s) [4,3,5] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.500244141s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.1b( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.177990913s) 
[5,4,3] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.048339844s@ mbc={}] start_peering_interval up [4,5,3] -> [5,4,3], acting [4,5,3] -> [5,4,3], acting_primary 4 -> 5, up_primary 4 -> 5, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.1b( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.177990913s) [5,4,3] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown pruub 1133.048339844s@ mbc={}] state: transitioning to Primary Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.1a( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.172390938s) [5,3,4] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.042846680s@ mbc={}] start_peering_interval up [5,4,0] -> [5,3,4], acting [5,4,0] -> [5,3,4], acting_primary 5 -> 5, up_primary 5 -> 5, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.1a( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.172390938s) [5,3,4] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown pruub 1133.042846680s@ mbc={}] state: transitioning to Primary Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.1d( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.635990143s) [2,1,3] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.506591797s@ mbc={}] start_peering_interval up [3,4,5] -> [2,1,3], acting [3,4,5] -> [2,1,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.1d( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.635948181s) [2,1,3] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 
1128.506591797s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.19( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.636547089s) [2,3,1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.506347656s@ mbc={}] start_peering_interval up [3,4,5] -> [2,3,1], acting [3,4,5] -> [2,3,1], acting_primary 3 -> 2, up_primary 3 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.1a( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.177934647s) [1,5,3] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.048583984s@ mbc={}] start_peering_interval up [4,5,3] -> [1,5,3], acting [4,5,3] -> [1,5,3], acting_primary 4 -> 1, up_primary 4 -> 1, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.1b( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.172500610s) [5,4,3] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.043334961s@ mbc={}] start_peering_interval up [5,4,0] -> [5,4,3], acting [5,4,0] -> [5,4,3], acting_primary 5 -> 5, up_primary 5 -> 5, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.1a( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.177664757s) [1,5,3] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.048583984s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.1b( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.172500610s) [5,4,3] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown pruub 1133.043334961s@ mbc={}] state: transitioning to Primary Feb 20 02:58:31 
localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[4.3( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [2,4,3] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[4.2( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [2,1,3] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[4.1( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [2,1,0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[2.a( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39) [2,3,1] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[2.c( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39) [2,0,1] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.8( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.177695274s) [2,0,4] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.048950195s@ mbc={}] start_peering_interval up [5,4,0] -> [2,0,4], acting [5,4,0] -> [2,0,4], acting_primary 5 -> 2, up_primary 5 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.9( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.178392410s) [5,1,3] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.049682617s@ mbc={}] start_peering_interval up [5,4,0] -> [5,1,3], acting [5,4,0] -> [5,1,3], acting_primary 5 -> 5, up_primary 5 
-> 5, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.8( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.177645683s) [2,0,4] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.048950195s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.9( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.178392410s) [5,1,3] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown pruub 1133.049682617s@ mbc={}] state: transitioning to Primary Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.1c( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.629089355s) [2,3,4] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.500488281s@ mbc={}] start_peering_interval up [3,4,5] -> [2,3,4], acting [3,4,5] -> [2,3,4], acting_primary 3 -> 2, up_primary 3 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[3.e( empty local-lis/les=0/0 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39) [2,4,0] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[2.f( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39) [2,4,0] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.1c( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.629047394s) [2,3,4] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.500488281s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.e( empty local-lis/les=33/34 n=0 ec=33/21 
lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.634733200s) [4,5,0] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.506225586s@ mbc={}] start_peering_interval up [3,4,5] -> [4,5,0], acting [3,4,5] -> [4,5,0], acting_primary 3 -> 4, up_primary 3 -> 4, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.e( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.634703636s) [4,5,0] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.506225586s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.7( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.177531242s) [4,2,3] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.049194336s@ mbc={}] start_peering_interval up [4,5,3] -> [4,2,3], acting [4,5,3] -> [4,2,3], acting_primary 4 -> 4, up_primary 4 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.7( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.177503586s) [4,2,3] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.049194336s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.8( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.176065445s) [1,2,0] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.047973633s@ mbc={}] start_peering_interval up [4,5,3] -> [1,2,0], acting [4,5,3] -> [1,2,0], acting_primary 4 -> 1, up_primary 4 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.1( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.634802818s) [2,1,0] r=-1 lpr=39 
pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.506835938s@ mbc={}] start_peering_interval up [3,4,5] -> [2,1,0], acting [3,4,5] -> [2,1,0], acting_primary 3 -> 2, up_primary 3 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.3( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.633483887s) [2,4,3] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.505493164s@ mbc={}] start_peering_interval up [3,4,5] -> [2,4,3], acting [3,4,5] -> [2,4,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.8( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.175918579s) [1,2,0] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.047973633s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.1( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.634718895s) [2,1,0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.506835938s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.3( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.633433342s) [2,4,3] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.505493164s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.5( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.176489830s) [4,3,5] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.048950195s@ mbc={}] start_peering_interval up [5,4,0] -> [4,3,5], acting [5,4,0] -> [4,3,5], acting_primary 5 -> 4, up_primary 5 -> 4, role 0 -> 2, features acting 
4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.5( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.176769257s) [2,0,1] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.048583984s@ mbc={}] start_peering_interval up [4,5,3] -> [2,0,1], acting [4,5,3] -> [2,0,1], acting_primary 4 -> 2, up_primary 4 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.5( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.176435471s) [4,3,5] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.048950195s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.5( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.176204681s) [2,0,1] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.048583984s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.2( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.633705139s) [2,1,3] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.506225586s@ mbc={}] start_peering_interval up [3,4,5] -> [2,1,3], acting [3,4,5] -> [2,1,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.19( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.633451462s) [2,3,1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.506347656s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.2( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.633234024s) [2,1,3] r=-1 
lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.506225586s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.3( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.175532341s) [4,0,5] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.048461914s@ mbc={}] start_peering_interval up [5,4,0] -> [4,0,5], acting [5,4,0] -> [4,0,5], acting_primary 5 -> 4, up_primary 5 -> 4, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.3( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.175421715s) [4,0,5] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.048461914s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.2( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.176209450s) [1,0,2] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.049316406s@ mbc={}] start_peering_interval up [4,5,3] -> [1,0,2], acting [4,5,3] -> [1,0,2], acting_primary 4 -> 1, up_primary 4 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.2( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.176166534s) [1,0,2] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.049316406s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.6( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.633640289s) [5,3,4] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.506958008s@ mbc={}] start_peering_interval up [3,4,5] -> [5,3,4], acting [3,4,5] -> [5,3,4], acting_primary 3 -> 5, up_primary 3 -> 5, role 2 -> 0, features acting 
4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.3( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.176078796s) [4,3,5] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.049438477s@ mbc={}] start_peering_interval up [4,5,3] -> [4,3,5], acting [4,5,3] -> [4,3,5], acting_primary 4 -> 4, up_primary 4 -> 4, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.6( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.633640289s) [5,3,4] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown pruub 1128.506958008s@ mbc={}] state: transitioning to Primary Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.3( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.176037788s) [4,3,5] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.049438477s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.5( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.633812904s) [1,5,0] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.507324219s@ mbc={}] start_peering_interval up [3,4,5] -> [1,5,0], acting [3,4,5] -> [1,5,0], acting_primary 3 -> 1, up_primary 3 -> 1, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[2.10( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39) [2,0,4] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[5.4( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39) [5,3,4] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary 
Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.5( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.633769989s) [1,5,0] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.507324219s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[2.13( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39) [2,4,3] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[3.15( empty local-lis/les=0/0 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39) [2,1,0] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.b( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.175168037s) [5,1,0] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.049438477s@ mbc={}] start_peering_interval up [4,5,3] -> [5,1,0], acting [4,5,3] -> [5,1,0], acting_primary 4 -> 5, up_primary 4 -> 5, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.a( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.174880981s) [4,3,5] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.049194336s@ mbc={}] start_peering_interval up [5,4,0] -> [4,3,5], acting [5,4,0] -> [4,3,5], acting_primary 5 -> 4, up_primary 5 -> 4, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.c( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.632384300s) [4,3,2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.506469727s@ mbc={}] start_peering_interval up [3,4,5] -> [4,3,2], acting [3,4,5] -> [4,3,2], 
acting_primary 3 -> 4, up_primary 3 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.c( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.632072449s) [4,3,2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.506469727s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.d( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.632692337s) [4,2,3] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.506958008s@ mbc={}] start_peering_interval up [3,4,5] -> [4,2,3], acting [3,4,5] -> [4,2,3], acting_primary 3 -> 4, up_primary 3 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.b( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.175168037s) [5,1,0] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown pruub 1133.049438477s@ mbc={}] state: transitioning to Primary Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.a( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.174628258s) [4,3,5] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.049194336s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.d( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.632417679s) [4,2,3] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.506958008s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[5.b( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39) [5,0,4] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:31 localhost 
ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.c( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.169508934s) [4,3,5] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.044433594s@ mbc={}] start_peering_interval up [5,4,0] -> [4,3,5], acting [5,4,0] -> [4,3,5], acting_primary 5 -> 4, up_primary 5 -> 4, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.d( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.174779892s) [5,1,3] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.050048828s@ mbc={}] start_peering_interval up [4,5,3] -> [5,1,3], acting [4,5,3] -> [5,1,3], acting_primary 4 -> 5, up_primary 4 -> 5, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.d( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.174779892s) [5,1,3] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown pruub 1133.050048828s@ mbc={}] state: transitioning to Primary Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.c( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.169435501s) [4,3,5] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.044433594s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.9( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.631642342s) [5,0,1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.506958008s@ mbc={}] start_peering_interval up [3,4,5] -> [5,0,1], acting [3,4,5] -> [5,0,1], acting_primary 3 -> 5, up_primary 3 -> 5, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.9( empty local-lis/les=33/34 n=0 ec=33/21 
lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.631642342s) [5,0,1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown pruub 1128.506958008s@ mbc={}] state: transitioning to Primary Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.8( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.631656647s) [5,4,3] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.507080078s@ mbc={}] start_peering_interval up [3,4,5] -> [5,4,3], acting [3,4,5] -> [5,4,3], acting_primary 3 -> 5, up_primary 3 -> 5, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.8( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.631656647s) [5,4,3] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown pruub 1128.507080078s@ mbc={}] state: transitioning to Primary Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.11( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.174951553s) [4,3,2] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.050537109s@ mbc={}] start_peering_interval up [4,5,3] -> [4,3,2], acting [4,5,3] -> [4,3,2], acting_primary 4 -> 4, up_primary 4 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[4.1f( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [2,4,3] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.11( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.174844742s) [4,3,2] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.050537109s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[5.13( empty local-lis/les=0/0 n=0 ec=33/23 
lis/c=33/33 les/c/f=34/34/0 sis=39) [5,0,1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[5.12( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39) [5,1,3] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.1f( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.630122185s) [2,4,3] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.507446289s@ mbc={}] start_peering_interval up [3,4,5] -> [2,4,3], acting [3,4,5] -> [2,4,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.19( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.174015999s) [3,4,2] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.051513672s@ mbc={}] start_peering_interval up [4,5,3] -> [3,4,2], acting [4,5,3] -> [3,4,2], acting_primary 4 -> 3, up_primary 4 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.1f( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.630079269s) [2,4,3] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.507446289s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.19( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.173976898s) [3,4,2] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.051513672s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.18( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 
sis=39 pruub=13.165202141s) [3,2,1] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.042846680s@ mbc={}] start_peering_interval up [5,4,0] -> [3,2,1], acting [5,4,0] -> [3,2,1], acting_primary 5 -> 3, up_primary 5 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.a( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.172156334s) [2,3,1] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.049804688s@ mbc={}] start_peering_interval up [4,5,3] -> [2,3,1], acting [4,5,3] -> [2,3,1], acting_primary 4 -> 2, up_primary 4 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.a( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.172124863s) [2,3,1] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.049804688s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.18( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.165167809s) [3,2,1] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.042846680s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.19( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.164972305s) [0,1,2] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.042846680s@ mbc={}] start_peering_interval up [5,4,0] -> [0,1,2], acting [5,4,0] -> [0,1,2], acting_primary 5 -> 0, up_primary 5 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.18( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.173432350s) [5,1,3] r=0 lpr=39 pi=[31,39)/1 crt=0'0 
mlcod 0'0 active pruub 1133.051391602s@ mbc={}] start_peering_interval up [4,5,3] -> [5,1,3], acting [4,5,3] -> [5,1,3], acting_primary 4 -> 5, up_primary 4 -> 5, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.1e( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.629645348s) [0,4,5] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.507690430s@ mbc={}] start_peering_interval up [3,4,5] -> [0,4,5], acting [3,4,5] -> [0,4,5], acting_primary 3 -> 0, up_primary 3 -> 0, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.1e( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.629495621s) [0,4,5] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.507690430s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.18( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.173432350s) [5,1,3] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown pruub 1133.051391602s@ mbc={}] state: transitioning to Primary Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.17( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.172518730s) [1,5,3] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.051025391s@ mbc={}] start_peering_interval up [4,5,3] -> [1,5,3], acting [4,5,3] -> [1,5,3], acting_primary 4 -> 1, up_primary 4 -> 1, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.11( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.629172325s) [3,4,2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.507690430s@ mbc={}] start_peering_interval up [3,4,5] 
-> [3,4,2], acting [3,4,5] -> [3,4,2], acting_primary 3 -> 3, up_primary 3 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.17( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.172473907s) [1,5,3] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.051025391s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.16( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.164644241s) [1,3,5] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.043212891s@ mbc={}] start_peering_interval up [5,4,0] -> [1,3,5], acting [5,4,0] -> [1,3,5], acting_primary 5 -> 1, up_primary 5 -> 1, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.11( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.629117966s) [3,4,2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.507690430s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.16( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.164603233s) [1,3,5] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.043212891s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.10( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.629004478s) [3,2,4] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.507690430s@ mbc={}] start_peering_interval up [3,4,5] -> [3,2,4], acting [3,4,5] -> [3,2,4], acting_primary 3 -> 3, up_primary 3 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 
pg[3.19( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.164061546s) [0,1,2] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.042846680s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.16( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.171875954s) [1,2,0] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.051025391s@ mbc={}] start_peering_interval up [4,5,3] -> [1,2,0], acting [4,5,3] -> [1,2,0], acting_primary 4 -> 1, up_primary 4 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.16( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.171826363s) [1,2,0] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.051025391s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.13( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.628141403s) [4,2,3] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.507446289s@ mbc={}] start_peering_interval up [3,4,5] -> [4,2,3], acting [3,4,5] -> [4,2,3], acting_primary 3 -> 4, up_primary 3 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.13( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.628093719s) [4,2,3] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.507446289s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.10( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.628976822s) [3,2,4] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.507690430s@ mbc={}] state: transitioning to 
Stray Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.14( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.163416862s) [1,2,0] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.042968750s@ mbc={}] start_peering_interval up [5,4,0] -> [1,2,0], acting [5,4,0] -> [1,2,0], acting_primary 5 -> 1, up_primary 5 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.15( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.171243668s) [5,0,4] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.051025391s@ mbc={}] start_peering_interval up [4,5,3] -> [5,0,4], acting [4,5,3] -> [5,0,4], acting_primary 4 -> 5, up_primary 4 -> 5, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.12( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.627851486s) [0,5,4] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.507446289s@ mbc={}] start_peering_interval up [3,4,5] -> [0,5,4], acting [3,4,5] -> [0,5,4], acting_primary 3 -> 0, up_primary 3 -> 0, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.14( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.163369179s) [1,2,0] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.042968750s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.15( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.171243668s) [5,0,4] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown pruub 1133.051025391s@ mbc={}] state: transitioning to Primary Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.12( 
empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.627810478s) [0,5,4] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.507446289s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.14( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.170985222s) [4,2,0] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.050903320s@ mbc={}] start_peering_interval up [4,5,3] -> [4,2,0], acting [4,5,3] -> [4,2,0], acting_primary 4 -> 4, up_primary 4 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.15( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.163430214s) [2,1,0] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.043334961s@ mbc={}] start_peering_interval up [5,4,0] -> [2,1,0], acting [5,4,0] -> [2,1,0], acting_primary 5 -> 2, up_primary 5 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.15( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.163389206s) [2,1,0] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.043334961s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.14( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.170832634s) [4,2,0] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.050903320s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.13( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.170560837s) [2,4,3] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.050781250s@ mbc={}] start_peering_interval up [4,5,3] -> 
[2,4,3], acting [4,5,3] -> [2,4,3], acting_primary 4 -> 2, up_primary 4 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.15( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.627051353s) [5,3,1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.507324219s@ mbc={}] start_peering_interval up [3,4,5] -> [5,3,1], acting [3,4,5] -> [5,3,1], acting_primary 3 -> 5, up_primary 3 -> 5, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.13( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.170528412s) [2,4,3] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.050781250s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.12( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.164093018s) [0,4,5] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.044433594s@ mbc={}] start_peering_interval up [5,4,0] -> [0,4,5], acting [5,4,0] -> [0,4,5], acting_primary 5 -> 0, up_primary 5 -> 0, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.12( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.164048195s) [0,4,5] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.044433594s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.15( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.627051353s) [5,3,1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown pruub 1128.507324219s@ mbc={}] state: transitioning to Primary Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.17( empty 
local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.162190437s) [0,4,5] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.042846680s@ mbc={}] start_peering_interval up [5,4,0] -> [0,4,5], acting [5,4,0] -> [0,4,5], acting_primary 5 -> 0, up_primary 5 -> 0, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.12( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.170155525s) [5,3,1] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.050903320s@ mbc={}] start_peering_interval up [4,5,3] -> [5,3,1], acting [4,5,3] -> [5,3,1], acting_primary 4 -> 5, up_primary 4 -> 5, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.17( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.162074089s) [0,4,5] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.042846680s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.12( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.170155525s) [5,3,1] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown pruub 1133.050903320s@ mbc={}] state: transitioning to Primary Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.13( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.162936211s) [1,3,2] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.043945312s@ mbc={}] start_peering_interval up [5,4,0] -> [1,3,2], acting [5,4,0] -> [1,3,2], acting_primary 5 -> 1, up_primary 5 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.13( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 
pruub=13.162897110s) [1,3,2] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.043945312s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.17( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.626185417s) [3,1,5] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.507446289s@ mbc={}] start_peering_interval up [3,4,5] -> [3,1,5], acting [3,4,5] -> [3,1,5], acting_primary 3 -> 3, up_primary 3 -> 3, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.17( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.625979424s) [3,1,5] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.507446289s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.10( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.162473679s) [1,5,3] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.044067383s@ mbc={}] start_peering_interval up [5,4,0] -> [1,5,3], acting [5,4,0] -> [1,5,3], acting_primary 5 -> 1, up_primary 5 -> 1, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.10( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.162428856s) [1,5,3] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.044067383s@ mbc={}] state: transitioning to Stray Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.10( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.169239044s) [2,0,4] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.051025391s@ mbc={}] start_peering_interval up [4,5,3] -> [2,0,4], acting [4,5,3] -> [2,0,4], acting_primary 4 -> 2, up_primary 4 -> 2, role 
1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.10( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.169205666s) [2,0,4] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.051025391s@ mbc={}] state: transitioning to Stray
Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.16( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.625086784s) [0,4,5] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.507324219s@ mbc={}] start_peering_interval up [3,4,5] -> [0,4,5], acting [3,4,5] -> [0,4,5], acting_primary 3 -> 0, up_primary 3 -> 0, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.e( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.161973000s) [2,4,0] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.044311523s@ mbc={}] start_peering_interval up [5,4,0] -> [2,4,0], acting [5,4,0] -> [2,4,0], acting_primary 5 -> 2, up_primary 5 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.16( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.625027657s) [0,4,5] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.507324219s@ mbc={}] state: transitioning to Stray
Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.e( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.161926270s) [2,4,0] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.044311523s@ mbc={}] state: transitioning to Stray
Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.f( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.168183327s) [2,4,0] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.050537109s@ mbc={}] start_peering_interval up [4,5,3] -> [2,4,0], acting [4,5,3] -> [2,4,0], acting_primary 4 -> 2, up_primary 4 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.e( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.167766571s) [3,2,4] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.050292969s@ mbc={}] start_peering_interval up [4,5,3] -> [3,2,4], acting [4,5,3] -> [3,2,4], acting_primary 4 -> 3, up_primary 4 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.e( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.167733192s) [3,2,4] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.050292969s@ mbc={}] state: transitioning to Stray
Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.f( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.168026924s) [2,4,0] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.050537109s@ mbc={}] state: transitioning to Stray
Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.b( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.624518394s) [0,1,5] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.507202148s@ mbc={}] start_peering_interval up [3,4,5] -> [0,1,5], acting [3,4,5] -> [0,1,5], acting_primary 3 -> 0, up_primary 3 -> 0, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.b( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.624487877s) [0,1,5] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.507202148s@ mbc={}] state: transitioning to Stray
Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.f( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.161475182s) [1,5,0] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.044189453s@ mbc={}] start_peering_interval up [5,4,0] -> [1,5,0], acting [5,4,0] -> [1,5,0], acting_primary 5 -> 1, up_primary 5 -> 1, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.a( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.623826981s) [1,0,2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.506713867s@ mbc={}] start_peering_interval up [3,4,5] -> [1,0,2], acting [3,4,5] -> [1,0,2], acting_primary 3 -> 1, up_primary 3 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.a( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.623757362s) [1,0,2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.506713867s@ mbc={}] state: transitioning to Stray
Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.f( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.161103249s) [1,5,0] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.044189453s@ mbc={}] state: transitioning to Stray
Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.d( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.161032677s) [1,2,3] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.044189453s@ mbc={}] start_peering_interval up [5,4,0] -> [1,2,3], acting [5,4,0] -> [1,2,3], acting_primary 5 -> 1, up_primary 5 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.c( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.166691780s) [2,0,1] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.049926758s@ mbc={}] start_peering_interval up [4,5,3] -> [2,0,1], acting [4,5,3] -> [2,0,1], acting_primary 4 -> 2, up_primary 4 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.d( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.160994530s) [1,2,3] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.044189453s@ mbc={}] state: transitioning to Stray
Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.c( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.166638374s) [2,0,1] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.049926758s@ mbc={}] state: transitioning to Stray
Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.b( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.166305542s) [3,4,5] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.049804688s@ mbc={}] start_peering_interval up [5,4,0] -> [3,4,5], acting [5,4,0] -> [3,4,5], acting_primary 5 -> 3, up_primary 5 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.b( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.166256905s) [3,4,5] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.049804688s@ mbc={}] state: transitioning to Stray
Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.2( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.165176392s) [3,5,1] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.048706055s@ mbc={}] start_peering_interval up [5,4,0] -> [3,5,1], acting [5,4,0] -> [3,5,1], acting_primary 5 -> 3, up_primary 5 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.2( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.165131569s) [3,5,1] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.048706055s@ mbc={}] state: transitioning to Stray
Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.7( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.623002052s) [0,5,4] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.506835938s@ mbc={}] start_peering_interval up [3,4,5] -> [0,5,4], acting [3,4,5] -> [0,5,4], acting_primary 3 -> 0, up_primary 3 -> 0, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.7( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.622957230s) [0,5,4] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.506835938s@ mbc={}] state: transitioning to Stray
Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.1( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.165734291s) [3,5,4] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.049804688s@ mbc={}] start_peering_interval up [4,5,3] -> [3,5,4], acting [4,5,3] -> [3,5,4], acting_primary 4 -> 3, up_primary 4 -> 3, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.1( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.165692329s) [3,5,4] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.049804688s@ mbc={}] state: transitioning to Stray
Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.6( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.164988518s) [3,2,4] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.049194336s@ mbc={}] start_peering_interval up [4,5,3] -> [3,2,4], acting [4,5,3] -> [3,2,4], acting_primary 4 -> 3, up_primary 4 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.1( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.160461426s) [0,4,2] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.044799805s@ mbc={}] start_peering_interval up [5,4,0] -> [0,4,2], acting [5,4,0] -> [0,4,2], acting_primary 5 -> 0, up_primary 5 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.6( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.164953232s) [3,2,4] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.049194336s@ mbc={}] state: transitioning to Stray
Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.7( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.165947914s) [3,5,4] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.050292969s@ mbc={}] start_peering_interval up [5,4,0] -> [3,5,4], acting [5,4,0] -> [3,5,4], acting_primary 5 -> 3, up_primary 5 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.1( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.160392761s) [0,4,2] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.044799805s@ mbc={}] state: transitioning to Stray
Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.4( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.621543884s) [0,5,1] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.506347656s@ mbc={}] start_peering_interval up [3,4,5] -> [0,5,1], acting [3,4,5] -> [0,5,1], acting_primary 3 -> 0, up_primary 3 -> 0, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.7( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.165900230s) [3,5,4] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.050292969s@ mbc={}] state: transitioning to Stray
Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.4( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.621503830s) [0,5,1] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.506347656s@ mbc={}] state: transitioning to Stray
Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.14( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.622123718s) [5,0,1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.507080078s@ mbc={}] start_peering_interval up [3,4,5] -> [5,0,1], acting [3,4,5] -> [5,0,1], acting_primary 3 -> 5, up_primary 3 -> 5, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.4( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.163292885s) [3,2,1] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.048339844s@ mbc={}] start_peering_interval up [4,5,3] -> [3,2,1], acting [4,5,3] -> [3,2,1], acting_primary 4 -> 3, up_primary 4 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.4( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.163839340s) [3,2,1] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.048950195s@ mbc={}] start_peering_interval up [5,4,0] -> [3,2,1], acting [5,4,0] -> [3,2,1], acting_primary 5 -> 3, up_primary 5 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.4( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.163240433s) [3,2,1] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.048339844s@ mbc={}] state: transitioning to Stray
Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.6( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.163403511s) [0,4,5] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.048950195s@ mbc={}] start_peering_interval up [5,4,0] -> [0,4,5], acting [5,4,0] -> [0,4,5], acting_primary 5 -> 0, up_primary 5 -> 0, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.6( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.163343430s) [0,4,5] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.048950195s@ mbc={}] state: transitioning to Stray
Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.f( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.614778519s) [3,2,4] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.500488281s@ mbc={}] start_peering_interval up [3,4,5] -> [3,2,4], acting [3,4,5] -> [3,2,4], acting_primary 3 -> 3, up_primary 3 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.f( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.614606857s) [3,2,4] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.500488281s@ mbc={}] state: transitioning to Stray
Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.4( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.163544655s) [3,2,1] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.048950195s@ mbc={}] state: transitioning to Stray
Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.9( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.161842346s) [3,4,5] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.047851562s@ mbc={}] start_peering_interval up [4,5,3] -> [3,4,5], acting [4,5,3] -> [3,4,5], acting_primary 4 -> 3, up_primary 4 -> 3, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.9( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.161798477s) [3,4,5] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.047851562s@ mbc={}] state: transitioning to Stray
Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[4.14( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.622123718s) [5,0,1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown pruub 1128.507080078s@ mbc={}] state: transitioning to Primary
Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.1e( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.162620544s) [3,5,4] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.049072266s@ mbc={}] start_peering_interval up [4,5,3] -> [3,5,4], acting [4,5,3] -> [3,5,4], acting_primary 4 -> 3, up_primary 4 -> 3, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.1e( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.162555695s) [3,5,4] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.049072266s@ mbc={}] state: transitioning to Stray
Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.1e( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.163717270s) [3,2,4] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.050170898s@ mbc={}] start_peering_interval up [5,4,0] -> [3,2,4], acting [5,4,0] -> [3,2,4], acting_primary 5 -> 3, up_primary 5 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.1f( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.163206100s) [0,1,5] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.049682617s@ mbc={}] start_peering_interval up [5,4,0] -> [0,1,5], acting [5,4,0] -> [0,1,5], acting_primary 5 -> 0, up_primary 5 -> 0, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.1e( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.163673401s) [3,2,4] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.050170898s@ mbc={}] state: transitioning to Stray
Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[3.1f( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.163158417s) [0,1,5] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.049682617s@ mbc={}] state: transitioning to Stray
Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.1f( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.152386665s) [0,4,2] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1133.039184570s@ mbc={}] start_peering_interval up [4,5,3] -> [0,4,2], acting [4,5,3] -> [0,4,2], acting_primary 4 -> 0, up_primary 4 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:58:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[2.1f( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.152334213s) [0,4,2] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.039184570s@ mbc={}] state: transitioning to Stray
Feb 20 02:58:32 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Feb 20 02:58:32 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 5.0 scrub starts
Feb 20 02:58:32 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Feb 20 02:58:32 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[3.19( empty local-lis/les=0/0 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39) [0,1,2] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Feb 20 02:58:32 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[2.19( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39) [3,4,2] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Feb 20 02:58:32 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[3.18( empty local-lis/les=0/0 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39) [3,2,1] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Feb 20 02:58:32 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[5.19( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39) [0,1,5] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Feb 20 02:58:32 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[4.d( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [4,2,3] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Feb 20 02:58:32 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[5.1d( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39) [3,1,5] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Feb 20 02:58:32 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[2.e( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39) [3,2,4] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Feb 20 02:58:32 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[3.1( empty local-lis/les=0/0 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39) [0,4,2] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Feb 20 02:58:32 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[3.1c( empty local-lis/les=0/0 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39) [1,3,2] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Feb 20 02:58:32 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[5.5( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39) [0,4,5] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Feb 20 02:58:32 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[5.1( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39) [4,3,5] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Feb 20 02:58:32 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[5.7( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39) [4,3,5] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Feb 20 02:58:32 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[5.9( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39) [1,5,0] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Feb 20 02:58:32 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[2.2( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39) [1,0,2] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Feb 20 02:58:32 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[5.17( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39) [3,5,4] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Feb 20 02:58:32 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[2.1f( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39) [0,4,2] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Feb 20 02:58:32 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[5.15( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39) [4,3,5] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Feb 20 02:58:32 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[3.1e( empty local-lis/les=0/0 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39) [3,2,4] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Feb 20 02:58:32 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[4.a( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [1,0,2] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Feb 20 02:58:32 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[3.d( empty local-lis/les=0/0 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39) [1,2,3] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Feb 20 02:58:32 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[5.10( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39) [4,5,0] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Feb 20 02:58:32 localhost ceph-osd[32921]: osd.5 pg_epoch: 39 pg[5.1f( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39) [4,5,3] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Feb 20 02:58:32 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[2.4( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39) [3,2,1] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Feb 20 02:58:32 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[3.4( empty local-lis/les=0/0 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39) [3,2,1] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Feb 20 02:58:32 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[2.6( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39) [3,2,4] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Feb 20 02:58:32 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[2.7( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39) [4,2,3] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Feb 20 02:58:32 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[4.c( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [4,3,2] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Feb 20 02:58:32 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[4.f( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [3,2,4] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Feb 20 02:58:32 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[2.8( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39) [1,2,0] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Feb 20 02:58:32 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[4.11( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [3,4,2] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Feb 20 02:58:32 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[2.16( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39) [1,2,0] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Feb 20 02:58:32 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[4.10( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [3,2,4] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Feb 20 02:58:32 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[3.14( empty local-lis/les=0/0 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39) [1,2,0] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Feb 20 02:58:32 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[4.13( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [4,2,3] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Feb 20 02:58:32 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[2.14( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39) [4,2,0] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Feb 20 02:58:32 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[3.13( empty local-lis/les=0/0 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39) [1,3,2] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Feb 20 02:58:32 localhost ceph-osd[31981]: osd.2 pg_epoch: 39 pg[2.11( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39) [4,3,2] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Feb 20 02:58:32 localhost ceph-osd[32921]: osd.5 pg_epoch: 40 pg[3.1d( empty local-lis/les=39/40 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39) [5,4,3] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 20 02:58:32 localhost ceph-osd[32921]: osd.5 pg_epoch: 40 pg[3.1a( empty local-lis/les=39/40 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39) [5,3,4] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 20 02:58:32 localhost ceph-osd[32921]: osd.5 pg_epoch: 40 pg[3.1b( empty local-lis/les=39/40 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39) [5,4,3] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 20 02:58:32 localhost ceph-osd[32921]: osd.5 pg_epoch: 40 pg[3.9( empty local-lis/les=39/40 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39) [5,1,3] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 20 02:58:32 localhost ceph-osd[32921]: osd.5 pg_epoch: 40 pg[4.6( empty local-lis/les=39/40 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [5,3,4] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 20 02:58:32 localhost ceph-osd[32921]: osd.5 pg_epoch: 40 pg[5.4( empty local-lis/les=39/40 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39) [5,3,4] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 20 02:58:32 localhost ceph-osd[32921]: osd.5 pg_epoch: 40 pg[5.b( empty local-lis/les=39/40 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39) [5,0,4] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 20 02:58:32 localhost ceph-osd[32921]: osd.5 pg_epoch: 40 pg[2.1b( empty local-lis/les=39/40 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39) [5,4,3] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 20 02:58:32 localhost ceph-osd[32921]: osd.5 pg_epoch: 40 pg[2.d( empty local-lis/les=39/40 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39) [5,1,3] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 20 02:58:32 localhost ceph-osd[32921]: osd.5 pg_epoch: 40 pg[4.8( empty local-lis/les=39/40 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [5,4,3] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 20 02:58:32 localhost ceph-osd[32921]: osd.5 pg_epoch: 40 pg[2.b( empty local-lis/les=39/40 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39) [5,1,0] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 20 02:58:32 localhost ceph-osd[32921]: osd.5 pg_epoch: 40 pg[4.9( empty local-lis/les=39/40 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [5,0,1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 20 02:58:32 localhost ceph-osd[32921]: osd.5 pg_epoch: 40 pg[2.12( empty local-lis/les=39/40 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39) [5,3,1] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 20 02:58:32 localhost ceph-osd[32921]: osd.5 pg_epoch: 40 pg[4.14( empty local-lis/les=39/40 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [5,0,1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 20 02:58:32 localhost ceph-osd[32921]: osd.5 pg_epoch: 40 pg[5.13( empty local-lis/les=39/40 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39) [5,0,1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 20 02:58:32 localhost ceph-osd[32921]: osd.5 pg_epoch: 40 pg[5.12( empty local-lis/les=39/40 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39) [5,1,3] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 20 02:58:32 localhost ceph-osd[32921]: osd.5 pg_epoch: 40 pg[2.18( empty local-lis/les=39/40 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39) [5,1,3] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 20 02:58:32 localhost ceph-osd[32921]: osd.5 pg_epoch: 40 pg[2.15( empty local-lis/les=39/40 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39) [5,0,4] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 20 02:58:32 localhost ceph-osd[32921]: osd.5 pg_epoch: 40 pg[4.15( empty local-lis/les=39/40 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [5,3,1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 20 02:58:32 localhost ceph-osd[31981]: osd.2 pg_epoch: 40 pg[4.1f( empty local-lis/les=39/40 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [2,4,3] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 20 02:58:32 localhost ceph-osd[31981]: osd.2 pg_epoch: 40 pg[4.1c( empty local-lis/les=39/40 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [2,3,4] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 20 02:58:32 localhost ceph-osd[31981]: osd.2 pg_epoch: 40 pg[2.1d( empty local-lis/les=39/40 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39) [2,4,0] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 20 02:58:32 localhost ceph-osd[31981]: osd.2 pg_epoch: 40 pg[2.f( empty local-lis/les=39/40 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39) [2,4,0] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 20 02:58:32 localhost ceph-osd[31981]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=39/40 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39) [2,4,3] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 20 02:58:32 localhost ceph-osd[31981]: osd.2 pg_epoch: 40 pg[3.e( empty local-lis/les=39/40 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39) [2,4,0] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 20 02:58:32 localhost ceph-osd[31981]: osd.2 pg_epoch: 40 pg[4.19( empty local-lis/les=39/40 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [2,3,1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 20 02:58:32 localhost ceph-osd[31981]: osd.2 pg_epoch: 40 pg[2.c( empty local-lis/les=39/40 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39) [2,0,1] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 20 02:58:32 localhost ceph-osd[31981]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=39/40 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39) [2,0,1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 20 02:58:32 localhost ceph-osd[31981]: osd.2 pg_epoch: 40 pg[2.1c( empty local-lis/les=39/40 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39) [2,1,0] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 20 02:58:32 localhost ceph-osd[31981]: osd.2 pg_epoch: 40 pg[4.2( empty local-lis/les=39/40 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [2,1,3] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 20 02:58:32 localhost ceph-osd[31981]: osd.2 pg_epoch: 40 pg[2.5( empty local-lis/les=39/40 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39) [2,0,1] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 20 02:58:32 localhost ceph-osd[31981]: osd.2 pg_epoch: 40 pg[4.3( empty local-lis/les=39/40 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [2,4,3] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 20 02:58:32 localhost ceph-osd[31981]: osd.2 pg_epoch: 40 pg[4.1( empty local-lis/les=39/40 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [2,1,0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 20 02:58:32 localhost ceph-osd[31981]: osd.2 pg_epoch: 40 pg[2.a( empty local-lis/les=39/40 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39) [2,3,1] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 20 02:58:32 localhost ceph-osd[31981]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=39/40 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39) [2,4,0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 20 02:58:32 localhost ceph-osd[31981]: osd.2 pg_epoch: 40 pg[3.8( empty local-lis/les=39/40 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39) [2,0,4] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 20 02:58:32 localhost ceph-osd[31981]: osd.2 pg_epoch: 40 pg[3.15( empty local-lis/les=39/40 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39) [2,1,0] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 20 02:58:32 localhost ceph-osd[31981]: osd.2 pg_epoch: 40 pg[2.13( empty local-lis/les=39/40 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39) [2,4,3] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 20 02:58:32 localhost ceph-osd[31981]: osd.2 pg_epoch: 40 pg[4.1d( empty local-lis/les=39/40 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [2,1,3] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 20 02:58:32 localhost ceph-osd[31981]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=39/40 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39) [2,0,4] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 20 02:58:32 localhost ceph-osd[31981]: osd.2 pg_epoch: 40 pg[2.10( empty local-lis/les=39/40 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39) [2,0,4] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Feb 20 02:58:34 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Feb 20 02:58:37 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 2.18 deep-scrub starts
Feb 20 02:58:38 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Feb 20 02:58:39 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 4.8 deep-scrub starts
Feb 20 02:58:39 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Feb 20 02:58:39 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 4.8 deep-scrub ok
Feb 20 02:58:39 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Feb 20 02:58:39 localhost sshd[57531]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:58:40 localhost sshd[57533]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:58:40 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 5.d scrub starts
Feb 20 02:58:40 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 5.d scrub ok
Feb 20 02:58:41 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 5.e scrub starts
Feb 20 02:58:41 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 5.e scrub ok
Feb 20 02:58:43 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Feb 20 02:58:43 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Feb 20 02:58:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.
Feb 20 02:58:45 localhost podman[57535]: 2026-02-20 07:58:45.430114043 +0000 UTC m=+0.071108044 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, build-date=2026-01-12T22:10:14Z, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, release=1766032510, config_id=tripleo_step1, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:14Z, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp-rhel9/openstack-qdrouterd, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 20 02:58:45 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 4.9 deep-scrub starts Feb 20 02:58:45 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 4.9 deep-scrub ok Feb 20 02:58:45 localhost podman[57535]: 2026-02-20 07:58:45.661733039 +0000 UTC m=+0.302727020 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp-rhel9/openstack-qdrouterd, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, build-date=2026-01-12T22:10:14Z, config_id=tripleo_step1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 02:58:45 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. 
Feb 20 02:58:49 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 3.15 deep-scrub starts Feb 20 02:58:49 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 2.15 scrub starts Feb 20 02:58:49 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 3.15 deep-scrub ok Feb 20 02:58:50 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 4.15 scrub starts Feb 20 02:58:50 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 4.15 scrub ok Feb 20 02:58:52 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 3.e scrub starts Feb 20 02:58:52 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 3.e scrub ok Feb 20 02:58:53 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 2.1c scrub starts Feb 20 02:58:53 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 2.1c scrub ok Feb 20 02:58:54 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 2.1d deep-scrub starts Feb 20 02:58:54 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 2.1d deep-scrub ok Feb 20 02:58:58 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 2.12 scrub starts Feb 20 02:58:58 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 2.12 scrub ok Feb 20 02:58:59 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 4.1d scrub starts Feb 20 02:58:59 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 4.1d scrub ok Feb 20 02:58:59 localhost python3[57579]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:59:00 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 4.1c deep-scrub starts Feb 20 02:59:00 localhost ceph-osd[31981]: log_channel(cluster) 
log [DBG] : 4.1c deep-scrub ok Feb 20 02:59:01 localhost python3[57595]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:59:02 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 4.19 scrub starts Feb 20 02:59:02 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 2.d deep-scrub starts Feb 20 02:59:02 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 4.19 scrub ok Feb 20 02:59:02 localhost sshd[57596]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:59:03 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 4.1f scrub starts Feb 20 02:59:03 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 4.1f scrub ok Feb 20 02:59:04 localhost python3[57613]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:59:05 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 2.b scrub starts Feb 20 02:59:05 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 2.b scrub ok Feb 20 02:59:06 localhost python3[57661]: ansible-ansible.legacy.stat Invoked with path=/var/lib/tripleo-config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 02:59:06 localhost python3[57704]: ansible-ansible.legacy.copy Invoked with 
src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771574345.8613355-92373-106982938286859/source dest=/var/lib/tripleo-config/ceph/ceph.client.openstack.keyring mode=600 _original_basename=ceph.client.openstack.keyring follow=False checksum=8e2004121a34320613d32710ae37702da8d027e6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:59:07 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 2.13 scrub starts Feb 20 02:59:07 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 2.13 scrub ok Feb 20 02:59:07 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 4.6 scrub starts Feb 20 02:59:07 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 4.6 scrub ok Feb 20 02:59:10 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 2.a deep-scrub starts Feb 20 02:59:10 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 2.a deep-scrub ok Feb 20 02:59:11 localhost python3[57766]: ansible-ansible.legacy.stat Invoked with path=/var/lib/tripleo-config/ceph/ceph.client.manila.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 02:59:11 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 4.1 scrub starts Feb 20 02:59:11 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 2.1b scrub starts Feb 20 02:59:11 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 4.1 scrub ok Feb 20 02:59:11 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 2.1b scrub ok Feb 20 02:59:11 localhost python3[57809]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771574351.0657835-92373-86835836728799/source dest=/var/lib/tripleo-config/ceph/ceph.client.manila.keyring mode=600 _original_basename=ceph.client.manila.keyring follow=False 
checksum=417007d20895a54571330144b727b714177f3d13 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:59:12 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 2.10 deep-scrub starts Feb 20 02:59:12 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 2.10 deep-scrub ok Feb 20 02:59:12 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 3.1d scrub starts Feb 20 02:59:12 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 3.1d scrub ok Feb 20 02:59:13 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 3.1b scrub starts Feb 20 02:59:13 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 3.1b scrub ok Feb 20 02:59:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 02:59:16 localhost podman[57870]: 2026-02-20 07:59:16.441655636 +0000 UTC m=+0.081930369 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, container_name=metrics_qdr, build-date=2026-01-12T22:10:14Z, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., 
io.buildah.version=1.41.5, io.openshift.expose-services=, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1766032510, batch=17.1_20260112.1, config_id=tripleo_step1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Feb 20 02:59:16 localhost python3[57880]: ansible-ansible.legacy.stat Invoked with path=/var/lib/tripleo-config/ceph/ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 02:59:16 localhost podman[57870]: 2026-02-20 07:59:16.638850722 +0000 UTC m=+0.279125455 container 
exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-qdrouterd, 
url=https://www.redhat.com, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, version=17.1.13, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_id=tripleo_step1) Feb 20 02:59:16 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 02:59:16 localhost python3[57942]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771574356.2331393-92373-21437363011411/source dest=/var/lib/tripleo-config/ceph/ceph.conf mode=644 _original_basename=ceph.conf follow=False checksum=2a03ad5f1837679340274b70e67e768ad4c81335 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:59:17 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 2.c scrub starts Feb 20 02:59:17 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 2.c scrub ok Feb 20 02:59:17 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 3.9 scrub starts Feb 20 02:59:17 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 3.9 scrub ok Feb 20 02:59:18 localhost sshd[57957]: main: sshd: ssh-rsa algorithm is disabled Feb 20 02:59:18 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 4.2 deep-scrub starts Feb 20 02:59:18 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 4.2 deep-scrub ok Feb 20 02:59:18 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 4.14 scrub starts Feb 20 02:59:18 localhost 
ceph-osd[32921]: log_channel(cluster) log [DBG] : 4.14 scrub ok Feb 20 02:59:19 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 2.5 scrub starts Feb 20 02:59:19 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 2.5 scrub ok Feb 20 02:59:20 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 5.12 deep-scrub starts Feb 20 02:59:20 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 5.12 deep-scrub ok Feb 20 02:59:22 localhost python3[58006]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 02:59:22 localhost python3[58051]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771574362.3389113-92792-188789927948832/source _original_basename=tmpfii3mdxv follow=False checksum=f17091ee142621a3c8290c8c96b5b52d67b3a864 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:59:23 localhost ceph-osd[31981]: osd.2 pg_epoch: 43 pg[6.0( empty local-lis/les=35/36 n=0 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=15.167323112s) [0,4,2] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1190.889770508s@ mbc={}] start_peering_interval up [0,4,2] -> [0,4,2], acting [0,4,2] -> [0,4,2], acting_primary 0 -> 0, up_primary 0 -> 0, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:59:23 localhost ceph-osd[31981]: osd.2 pg_epoch: 43 pg[6.0( empty local-lis/les=35/36 n=0 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=15.164057732s) [0,4,2] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1190.889770508s@ mbc={}] state: transitioning to Stray Feb 20 02:59:23 localhost 
ceph-osd[32921]: osd.5 pg_epoch: 43 pg[7.0( v 40'39 (0'0,40'39] local-lis/les=36/37 n=22 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=43 pruub=8.489914894s) [1,5,3] r=1 lpr=43 pi=[36,43)/1 luod=0'0 lua=38'37 crt=40'39 lcod 38'38 mlcod 0'0 active pruub 1180.189575195s@ mbc={}] start_peering_interval up [1,5,3] -> [1,5,3], acting [1,5,3] -> [1,5,3], acting_primary 1 -> 1, up_primary 1 -> 1, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:59:23 localhost ceph-osd[32921]: osd.5 pg_epoch: 43 pg[7.0( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=36/37 n=1 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=43 pruub=8.487649918s) [1,5,3] r=1 lpr=43 pi=[36,43)/1 crt=40'39 lcod 38'38 mlcod 0'0 unknown NOTIFY pruub 1180.189575195s@ mbc={}] state: transitioning to Stray Feb 20 02:59:24 localhost python3[58113]: ansible-ansible.legacy.stat Invoked with path=/usr/local/sbin/containers-tmpwatch follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 02:59:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 44 pg[6.1a( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:59:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 44 pg[6.7( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:59:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 44 pg[6.1f( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:59:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 44 pg[6.15( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: 
transitioning to Stray Feb 20 02:59:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 44 pg[6.16( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:59:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 44 pg[6.17( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:59:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 44 pg[6.10( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:59:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 44 pg[6.12( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:59:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 44 pg[6.13( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:59:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 44 pg[6.11( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:59:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 44 pg[6.e( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:59:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 44 pg[6.c( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: 
transitioning to Stray Feb 20 02:59:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 44 pg[6.d( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:59:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 44 pg[6.1b( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:59:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 44 pg[6.2( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:59:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 44 pg[6.f( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:59:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 44 pg[6.1( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:59:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 44 pg[6.18( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:59:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 44 pg[6.3( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:59:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 44 pg[6.8( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning 
to Stray Feb 20 02:59:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 44 pg[6.6( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:59:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 44 pg[6.5( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:59:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 44 pg[6.b( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:59:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 44 pg[6.19( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:59:24 localhost ceph-osd[32921]: osd.5 pg_epoch: 44 pg[7.a( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=36/37 n=1 ec=43/36 lis/c=36/36 les/c/f=37/37/0 sis=43) [1,5,3] r=1 lpr=43 pi=[36,43)/1 crt=40'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:59:24 localhost ceph-osd[32921]: osd.5 pg_epoch: 44 pg[7.b( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=36/37 n=1 ec=43/36 lis/c=36/36 les/c/f=37/37/0 sis=43) [1,5,3] r=1 lpr=43 pi=[36,43)/1 crt=40'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:59:24 localhost ceph-osd[32921]: osd.5 pg_epoch: 44 pg[7.8( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=36/37 n=1 ec=43/36 lis/c=36/36 les/c/f=37/37/0 sis=43) [1,5,3] r=1 lpr=43 pi=[36,43)/1 crt=40'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Feb 20 02:59:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 44 pg[6.9( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=2 lpr=43 pi=[35,43)/1 crt=0'0 
mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 20 02:59:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 44 pg[6.a( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 20 02:59:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 44 pg[6.1e( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 20 02:59:24 localhost ceph-osd[32921]: osd.5 pg_epoch: 44 pg[7.9( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=36/37 n=1 ec=43/36 lis/c=36/36 les/c/f=37/37/0 sis=43) [1,5,3] r=1 lpr=43 pi=[36,43)/1 crt=40'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 20 02:59:24 localhost ceph-osd[32921]: osd.5 pg_epoch: 44 pg[7.f( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=36/37 n=1 ec=43/36 lis/c=36/36 les/c/f=37/37/0 sis=43) [1,5,3] r=1 lpr=43 pi=[36,43)/1 crt=40'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 20 02:59:24 localhost ceph-osd[32921]: osd.5 pg_epoch: 44 pg[7.6( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=36/37 n=2 ec=43/36 lis/c=36/36 les/c/f=37/37/0 sis=43) [1,5,3] r=1 lpr=43 pi=[36,43)/1 crt=40'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 20 02:59:24 localhost ceph-osd[32921]: osd.5 pg_epoch: 44 pg[7.5( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=36/37 n=2 ec=43/36 lis/c=36/36 les/c/f=37/37/0 sis=43) [1,5,3] r=1 lpr=43 pi=[36,43)/1 crt=40'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 20 02:59:24 localhost ceph-osd[32921]: osd.5 pg_epoch: 44 pg[7.4( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=36/37 n=2 ec=43/36 lis/c=36/36 les/c/f=37/37/0 sis=43) [1,5,3] r=1 lpr=43 pi=[36,43)/1 crt=40'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 20 02:59:24 localhost ceph-osd[32921]: osd.5 pg_epoch: 44 pg[7.e( v 40'39 lc 0'0
(0'0,40'39] local-lis/les=36/37 n=1 ec=43/36 lis/c=36/36 les/c/f=37/37/0 sis=43) [1,5,3] r=1 lpr=43 pi=[36,43)/1 crt=40'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 20 02:59:24 localhost ceph-osd[32921]: osd.5 pg_epoch: 44 pg[7.3( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=36/37 n=2 ec=43/36 lis/c=36/36 les/c/f=37/37/0 sis=43) [1,5,3] r=1 lpr=43 pi=[36,43)/1 crt=40'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 20 02:59:24 localhost ceph-osd[32921]: osd.5 pg_epoch: 44 pg[7.7( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=36/37 n=1 ec=43/36 lis/c=36/36 les/c/f=37/37/0 sis=43) [1,5,3] r=1 lpr=43 pi=[36,43)/1 crt=40'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 20 02:59:24 localhost ceph-osd[32921]: osd.5 pg_epoch: 44 pg[7.1( v 40'39 (0'0,40'39] local-lis/les=36/37 n=2 ec=43/36 lis/c=36/36 les/c/f=37/37/0 sis=43) [1,5,3] r=1 lpr=43 pi=[36,43)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 20 02:59:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 44 pg[6.1d( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 20 02:59:24 localhost ceph-osd[32921]: osd.5 pg_epoch: 44 pg[7.2( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=36/37 n=2 ec=43/36 lis/c=36/36 les/c/f=37/37/0 sis=43) [1,5,3] r=1 lpr=43 pi=[36,43)/1 crt=40'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 20 02:59:24 localhost ceph-osd[32921]: osd.5 pg_epoch: 44 pg[7.d( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=36/37 n=1 ec=43/36 lis/c=36/36 les/c/f=37/37/0 sis=43) [1,5,3] r=1 lpr=43 pi=[36,43)/1 crt=40'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 20 02:59:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 44 pg[6.4( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown
NOTIFY mbc={}] state: transitioning to Stray
Feb 20 02:59:24 localhost ceph-osd[32921]: osd.5 pg_epoch: 44 pg[7.c( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=36/37 n=1 ec=43/36 lis/c=36/36 les/c/f=37/37/0 sis=43) [1,5,3] r=1 lpr=43 pi=[36,43)/1 crt=40'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 20 02:59:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 44 pg[6.1c( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 20 02:59:24 localhost ceph-osd[31981]: osd.2 pg_epoch: 44 pg[6.14( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Feb 20 02:59:24 localhost python3[58156]: ansible-ansible.legacy.copy Invoked with dest=/usr/local/sbin/containers-tmpwatch group=root mode=493 owner=root src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771574363.7391233-92865-222776672300423/source _original_basename=tmpddmhu1si follow=False checksum=84397b037dad9813fed388c4bcdd4871f384cd22 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 02:59:24 localhost sshd[58157]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 02:59:25 localhost python3[58188]: ansible-cron Invoked with job=/usr/local/sbin/containers-tmpwatch name=Remove old logs special_time=daily user=root state=present backup=False minute=* hour=* day=* month=* weekday=* disabled=False env=False cron_file=None insertafter=None insertbefore=None
Feb 20 02:59:25 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 5.13 deep-scrub starts
Feb 20 02:59:25 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 5.13 deep-scrub ok
Feb 20 02:59:25 localhost python3[58206]: ansible-stat
Invoked with path=/var/lib/tripleo-config/container-startup-config/step_2 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 20 02:59:27 localhost ansible-async_wrapper.py[58378]: Invoked with 509088309516 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1771574366.6776278-92968-104765458616431/AnsiballZ_command.py _
Feb 20 02:59:27 localhost ansible-async_wrapper.py[58381]: Starting module and watcher
Feb 20 02:59:27 localhost ansible-async_wrapper.py[58381]: Start watching 58382 (3600)
Feb 20 02:59:27 localhost ansible-async_wrapper.py[58382]: Start module (58382)
Feb 20 02:59:27 localhost ansible-async_wrapper.py[58378]: Return async_wrapper task started.
Feb 20 02:59:27 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 5.b scrub starts
Feb 20 02:59:27 localhost python3[58402]: ansible-ansible.legacy.async_status Invoked with jid=509088309516.58378 mode=status _async_dir=/tmp/.ansible_async
Feb 20 02:59:27 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 5.b scrub ok
Feb 20 02:59:28 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Feb 20 02:59:28 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.1( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.977344513s) [2,1,3] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1192.768676758s@ mbc={}] start_peering_interval up [0,4,2] -> [2,1,3], acting [0,4,2] -> [2,1,3], acting_primary 0 -> 2, up_primary 0 -> 2, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.3( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.977355957s) [4,5,0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1192.768676758s@ mbc={}] start_peering_interval up [0,4,2] -> [4,5,0], acting
[0,4,2] -> [4,5,0], acting_primary 0 -> 4, up_primary 0 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.7( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.975776672s) [4,3,2] r=2 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1192.767089844s@ mbc={}] start_peering_interval up [0,4,2] -> [4,3,2], acting [0,4,2] -> [4,3,2], acting_primary 0 -> 4, up_primary 0 -> 4, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.3( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.977271080s) [4,5,0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1192.768676758s@ mbc={}] state: transitioning to Stray
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.1( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.977344513s) [2,1,3] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown pruub 1192.768676758s@ mbc={}] state: transitioning to Primary
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.7( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.975675583s) [4,3,2] r=2 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1192.767089844s@ mbc={}] state: transitioning to Stray
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.1f( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.974117279s) [3,5,1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1192.765991211s@ mbc={}] start_peering_interval up [0,4,2] -> [3,5,1], acting [0,4,2] -> [3,5,1], acting_primary 0 -> 3, up_primary 0 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.1f( empty
local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.974077225s) [3,5,1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1192.765991211s@ mbc={}] state: transitioning to Stray
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.1a( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.973565102s) [4,2,0] r=1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1192.765380859s@ mbc={}] start_peering_interval up [0,4,2] -> [4,2,0], acting [0,4,2] -> [4,2,0], acting_primary 0 -> 4, up_primary 0 -> 4, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.14( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.980652809s) [3,4,5] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1192.772583008s@ mbc={}] start_peering_interval up [0,4,2] -> [3,4,5], acting [0,4,2] -> [3,4,5], acting_primary 0 -> 3, up_primary 0 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.1a( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.973491669s) [4,2,0] r=1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1192.765380859s@ mbc={}] state: transitioning to Stray
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.14( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.980609894s) [3,4,5] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1192.772583008s@ mbc={}] state: transitioning to Stray
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.16( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.975419044s) [0,1,5] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1192.767700195s@ mbc={}] start_peering_interval up [0,4,2] -> [0,1,5],
acting [0,4,2] -> [0,1,5], acting_primary 0 -> 0, up_primary 0 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.15( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.973875046s) [4,5,0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1192.765991211s@ mbc={}] start_peering_interval up [0,4,2] -> [4,5,0], acting [0,4,2] -> [4,5,0], acting_primary 0 -> 4, up_primary 0 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.16( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.975357056s) [0,1,5] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1192.767700195s@ mbc={}] state: transitioning to Stray
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.17( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.974576950s) [5,0,1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1192.766845703s@ mbc={}] start_peering_interval up [0,4,2] -> [5,0,1], acting [0,4,2] -> [5,0,1], acting_primary 0 -> 5, up_primary 0 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.10( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.974278450s) [0,2,4] r=1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1192.766601562s@ mbc={}] start_peering_interval up [0,4,2] -> [0,2,4], acting [0,4,2] -> [0,2,4], acting_primary 0 -> 0, up_primary 0 -> 0, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.15( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.973673820s) [4,5,0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod
0'0 unknown NOTIFY pruub 1192.765991211s@ mbc={}] state: transitioning to Stray
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.17( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.974539757s) [5,0,1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1192.766845703s@ mbc={}] state: transitioning to Stray
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.10( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.974221230s) [0,2,4] r=1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1192.766601562s@ mbc={}] state: transitioning to Stray
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.11( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.975828171s) [3,5,4] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1192.768432617s@ mbc={}] start_peering_interval up [0,4,2] -> [3,5,4], acting [0,4,2] -> [3,5,4], acting_primary 0 -> 3, up_primary 0 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.12( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.974387169s) [5,4,0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1192.766967773s@ mbc={}] start_peering_interval up [0,4,2] -> [5,4,0], acting [0,4,2] -> [5,4,0], acting_primary 0 -> 5, up_primary 0 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.11( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.975780487s) [3,5,4] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1192.768432617s@ mbc={}] state: transitioning to Stray
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.12( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0
sis=45 pruub=10.974349976s) [5,4,0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1192.766967773s@ mbc={}] state: transitioning to Stray
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.13( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.978821754s) [3,2,1] r=1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1192.771728516s@ mbc={}] start_peering_interval up [0,4,2] -> [3,2,1], acting [0,4,2] -> [3,2,1], acting_primary 0 -> 3, up_primary 0 -> 3, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.13( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.978788376s) [3,2,1] r=1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1192.771728516s@ mbc={}] state: transitioning to Stray
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.d( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.976054192s) [1,3,2] r=2 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1192.768920898s@ mbc={}] start_peering_interval up [0,4,2] -> [1,3,2], acting [0,4,2] -> [1,3,2], acting_primary 0 -> 1, up_primary 0 -> 1, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.d( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.976021767s) [1,3,2] r=2 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1192.768920898s@ mbc={}] state: transitioning to Stray
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.c( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.979237556s) [3,1,5] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1192.772216797s@ mbc={}] start_peering_interval up [0,4,2] -> [3,1,5], acting [0,4,2] -> [3,1,5], acting_primary 0 -> 3, up_primary 0 -> 3,
role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.e( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.974530220s) [4,3,2] r=2 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1192.767578125s@ mbc={}] start_peering_interval up [0,4,2] -> [4,3,2], acting [0,4,2] -> [4,3,2], acting_primary 0 -> 4, up_primary 0 -> 4, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.c( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.979186058s) [3,1,5] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1192.772216797s@ mbc={}] state: transitioning to Stray
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.e( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.974473000s) [4,3,2] r=2 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1192.767578125s@ mbc={}] state: transitioning to Stray
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.2( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.975277901s) [1,3,2] r=2 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1192.768432617s@ mbc={}] start_peering_interval up [0,4,2] -> [1,3,2], acting [0,4,2] -> [1,3,2], acting_primary 0 -> 1, up_primary 0 -> 1, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.2( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.975253105s) [1,3,2] r=2 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1192.768432617s@ mbc={}] state: transitioning to Stray
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.18( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45
pruub=10.975271225s) [0,1,2] r=2 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1192.768676758s@ mbc={}] start_peering_interval up [0,4,2] -> [0,1,2], acting [0,4,2] -> [0,1,2], acting_primary 0 -> 0, up_primary 0 -> 0, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.18( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.975227356s) [0,1,2] r=2 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1192.768676758s@ mbc={}] state: transitioning to Stray
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.8( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.975286484s) [1,2,3] r=1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1192.768798828s@ mbc={}] start_peering_interval up [0,4,2] -> [1,2,3], acting [0,4,2] -> [1,2,3], acting_primary 0 -> 1, up_primary 0 -> 1, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.1b( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.974956512s) [5,1,0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1192.768554688s@ mbc={}] start_peering_interval up [0,4,2] -> [5,1,0], acting [0,4,2] -> [5,1,0], acting_primary 0 -> 5, up_primary 0 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.6( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.975252151s) [3,4,5] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1192.768798828s@ mbc={}] start_peering_interval up [0,4,2] -> [3,4,5], acting [0,4,2] -> [3,4,5], acting_primary 0 -> 3, up_primary 0 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45
pg[6.8( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.975242615s) [1,2,3] r=1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1192.768798828s@ mbc={}] state: transitioning to Stray
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.b( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.976854324s) [3,1,2] r=2 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1192.770507812s@ mbc={}] start_peering_interval up [0,4,2] -> [3,1,2], acting [0,4,2] -> [3,1,2], acting_primary 0 -> 3, up_primary 0 -> 3, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.6( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.975181580s) [3,4,5] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1192.768798828s@ mbc={}] state: transitioning to Stray
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.1b( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.974897385s) [5,1,0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1192.768554688s@ mbc={}] state: transitioning to Stray
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.b( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.976805687s) [3,1,2] r=2 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1192.770507812s@ mbc={}] state: transitioning to Stray
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.19( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.974721909s) [1,3,2] r=2 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1192.768798828s@ mbc={}] start_peering_interval up [0,4,2] -> [1,3,2], acting [0,4,2] -> [1,3,2], acting_primary 0 -> 1, up_primary 0 -> 1, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.4( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.976756096s) [3,1,5] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1192.770874023s@ mbc={}] start_peering_interval up [0,4,2] -> [3,1,5], acting [0,4,2] -> [3,1,5], acting_primary 0 -> 3, up_primary 0 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.9( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.975717545s) [0,2,4] r=1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1192.769897461s@ mbc={}] start_peering_interval up [0,4,2] -> [0,2,4], acting [0,4,2] -> [0,2,4], acting_primary 0 -> 0, up_primary 0 -> 0, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.4( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.976720810s) [3,1,5] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1192.770874023s@ mbc={}] state: transitioning to Stray
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.19( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.974680901s) [1,3,2] r=2 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1192.768798828s@ mbc={}] state: transitioning to Stray
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.9( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.975666046s) [0,2,4] r=1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1192.769897461s@ mbc={}] state: transitioning to Stray
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.f( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.974069595s) [3,5,1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub
1192.768432617s@ mbc={}] start_peering_interval up [0,4,2] -> [3,5,1], acting [0,4,2] -> [3,5,1], acting_primary 0 -> 3, up_primary 0 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.f( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.974045753s) [3,5,1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1192.768432617s@ mbc={}] state: transitioning to Stray
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.1e( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.976036072s) [5,1,3] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1192.770385742s@ mbc={}] start_peering_interval up [0,4,2] -> [5,1,3], acting [0,4,2] -> [5,1,3], acting_primary 0 -> 5, up_primary 0 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.1e( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.975977898s) [5,1,3] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1192.770385742s@ mbc={}] state: transitioning to Stray
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.a( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.975389481s) [4,0,2] r=2 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1192.770019531s@ mbc={}] start_peering_interval up [0,4,2] -> [4,0,2], acting [0,4,2] -> [4,0,2], acting_primary 0 -> 4, up_primary 0 -> 4, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.1c( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.977050781s) [5,3,4] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1192.771606445s@ mbc={}] start_peering_interval up [0,4,2] -> [5,3,4],
acting [0,4,2] -> [5,3,4], acting_primary 0 -> 5, up_primary 0 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.1c( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.977016449s) [5,3,4] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1192.771606445s@ mbc={}] state: transitioning to Stray
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.1d( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.976560593s) [3,5,1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1192.771240234s@ mbc={}] start_peering_interval up [0,4,2] -> [3,5,1], acting [0,4,2] -> [3,5,1], acting_primary 0 -> 3, up_primary 0 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.1d( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.976503372s) [3,5,1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1192.771240234s@ mbc={}] state: transitioning to Stray
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.5( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.973896027s) [4,2,0] r=1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1192.768920898s@ mbc={}] start_peering_interval up [0,4,2] -> [4,2,0], acting [0,4,2] -> [4,2,0], acting_primary 0 -> 4, up_primary 0 -> 4, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.5( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.973790169s) [4,2,0] r=1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1192.768920898s@ mbc={}] state: transitioning to Stray
Feb 20 02:59:29 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[6.a( empty
local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.975330353s) [4,0,2] r=2 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1192.770019531s@ mbc={}] state: transitioning to Stray
Feb 20 02:59:29 localhost ceph-osd[32921]: osd.5 pg_epoch: 45 pg[6.1e( empty local-lis/les=0/0 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45) [5,1,3] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Feb 20 02:59:29 localhost ceph-osd[32921]: osd.5 pg_epoch: 45 pg[7.d( v 40'39 (0'0,40'39] local-lis/les=43/44 n=1 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.970548630s) [4,2,3] r=-1 lpr=45 pi=[43,45)/1 luod=0'0 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1188.737915039s@ mbc={}] start_peering_interval up [1,5,3] -> [4,2,3], acting [1,5,3] -> [4,2,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:59:29 localhost ceph-osd[32921]: osd.5 pg_epoch: 45 pg[7.d( v 40'39 (0'0,40'39] local-lis/les=43/44 n=1 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.970490456s) [4,2,3] r=-1 lpr=45 pi=[43,45)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1188.737915039s@ mbc={}] state: transitioning to Stray
Feb 20 02:59:29 localhost ceph-osd[32921]: osd.5 pg_epoch: 45 pg[7.7( v 40'39 (0'0,40'39] local-lis/les=43/44 n=1 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.971279144s) [4,2,3] r=-1 lpr=45 pi=[43,45)/1 luod=0'0 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1188.738891602s@ mbc={}] start_peering_interval up [1,5,3] -> [4,2,3], acting [1,5,3] -> [4,2,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:59:29 localhost ceph-osd[32921]: osd.5 pg_epoch: 45 pg[7.7( v 40'39 (0'0,40'39] local-lis/les=43/44 n=1 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.971220016s) [4,2,3] r=-1 lpr=45 pi=[43,45)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub
1188.738891602s@ mbc={}] state: transitioning to Stray
Feb 20 02:59:29 localhost ceph-osd[32921]: osd.5 pg_epoch: 45 pg[7.5( v 40'39 (0'0,40'39] local-lis/les=43/44 n=2 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.970304489s) [4,2,3] r=-1 lpr=45 pi=[43,45)/1 luod=0'0 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1188.738159180s@ mbc={}] start_peering_interval up [1,5,3] -> [4,2,3], acting [1,5,3] -> [4,2,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:59:29 localhost ceph-osd[32921]: osd.5 pg_epoch: 45 pg[7.3( v 40'39 (0'0,40'39] local-lis/les=43/44 n=2 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.970345497s) [4,2,3] r=-1 lpr=45 pi=[43,45)/1 luod=0'0 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1188.738159180s@ mbc={}] start_peering_interval up [1,5,3] -> [4,2,3], acting [1,5,3] -> [4,2,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 02:59:29 localhost ceph-osd[32921]: osd.5 pg_epoch: 45 pg[7.3( v 40'39 (0'0,40'39] local-lis/les=43/44 n=2 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.970217705s) [4,2,3] r=-1 lpr=45 pi=[43,45)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1188.738159180s@ mbc={}] state: transitioning to Stray
Feb 20 02:59:29 localhost ceph-osd[32921]: osd.5 pg_epoch: 45 pg[7.5( v 40'39 (0'0,40'39] local-lis/les=43/44 n=2 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.970178604s) [4,2,3] r=-1 lpr=45 pi=[43,45)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1188.738159180s@ mbc={}] state: transitioning to Stray
Feb 20 02:59:29 localhost ceph-osd[32921]: osd.5 pg_epoch: 45 pg[7.1( v 40'39 (0'0,40'39] local-lis/les=43/44 n=2 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.969364166s) [4,2,3] r=-1 lpr=45 pi=[43,45)/1 luod=0'0 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1188.737304688s@ mbc={}] start_peering_interval up [1,5,3] -> [4,2,3], acting
[1,5,3] -> [4,2,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:59:29 localhost ceph-osd[32921]: osd.5 pg_epoch: 45 pg[7.9( v 40'39 (0'0,40'39] local-lis/les=43/44 n=1 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.970792770s) [4,2,3] r=-1 lpr=45 pi=[43,45)/1 luod=0'0 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1188.738769531s@ mbc={}] start_peering_interval up [1,5,3] -> [4,2,3], acting [1,5,3] -> [4,2,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:59:29 localhost ceph-osd[32921]: osd.5 pg_epoch: 45 pg[7.1( v 40'39 (0'0,40'39] local-lis/les=43/44 n=2 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.969302177s) [4,2,3] r=-1 lpr=45 pi=[43,45)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1188.737304688s@ mbc={}] state: transitioning to Stray Feb 20 02:59:29 localhost ceph-osd[32921]: osd.5 pg_epoch: 45 pg[7.9( v 40'39 (0'0,40'39] local-lis/les=43/44 n=1 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.970736504s) [4,2,3] r=-1 lpr=45 pi=[43,45)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1188.738769531s@ mbc={}] state: transitioning to Stray Feb 20 02:59:29 localhost ceph-osd[32921]: osd.5 pg_epoch: 45 pg[7.f( v 40'39 (0'0,40'39] local-lis/les=43/44 n=1 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.970877647s) [4,2,3] r=-1 lpr=45 pi=[43,45)/1 luod=0'0 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1188.739013672s@ mbc={}] start_peering_interval up [1,5,3] -> [4,2,3], acting [1,5,3] -> [4,2,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:59:29 localhost ceph-osd[32921]: osd.5 pg_epoch: 45 pg[7.f( v 40'39 (0'0,40'39] local-lis/les=43/44 n=1 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.970828056s) [4,2,3] r=-1 lpr=45 pi=[43,45)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY 
pruub 1188.739013672s@ mbc={}] state: transitioning to Stray Feb 20 02:59:29 localhost ceph-osd[32921]: osd.5 pg_epoch: 45 pg[7.b( v 40'39 (0'0,40'39] local-lis/les=43/44 n=1 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.970205307s) [4,2,3] r=-1 lpr=45 pi=[43,45)/1 luod=0'0 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1188.738403320s@ mbc={}] start_peering_interval up [1,5,3] -> [4,2,3], acting [1,5,3] -> [4,2,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:59:29 localhost ceph-osd[32921]: osd.5 pg_epoch: 45 pg[7.b( v 40'39 (0'0,40'39] local-lis/les=43/44 n=1 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.970073700s) [4,2,3] r=-1 lpr=45 pi=[43,45)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1188.738403320s@ mbc={}] state: transitioning to Stray Feb 20 02:59:29 localhost ceph-osd[32921]: osd.5 pg_epoch: 45 pg[6.17( empty local-lis/les=0/0 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45) [5,0,1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:59:29 localhost ceph-osd[32921]: osd.5 pg_epoch: 45 pg[6.12( empty local-lis/les=0/0 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45) [5,4,0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:59:29 localhost ceph-osd[32921]: osd.5 pg_epoch: 45 pg[6.1c( empty local-lis/les=0/0 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45) [5,3,4] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:59:29 localhost ceph-osd[32921]: osd.5 pg_epoch: 45 pg[6.1b( empty local-lis/les=0/0 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45) [5,1,0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 02:59:29 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 2.18 scrub starts Feb 20 02:59:29 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 
2.18 scrub ok Feb 20 02:59:30 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[7.d( empty local-lis/les=0/0 n=0 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=45) [4,2,3] r=1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 20 02:59:30 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=45) [4,2,3] r=1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 20 02:59:30 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[7.3( empty local-lis/les=0/0 n=0 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=45) [4,2,3] r=1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 20 02:59:30 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=45) [4,2,3] r=1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 20 02:59:30 localhost ceph-osd[32921]: osd.5 pg_epoch: 45 pg[6.3( empty local-lis/les=0/0 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45) [4,5,0] r=1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 20 02:59:30 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[7.f( empty local-lis/les=0/0 n=0 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=45) [4,2,3] r=1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 20 02:59:30 localhost ceph-osd[32921]: osd.5 pg_epoch: 45 pg[6.15( empty local-lis/les=0/0 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45) [4,5,0] r=1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 20 02:59:30 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[7.7( empty local-lis/les=0/0 n=0 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=45) [4,2,3] r=1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 20 02:59:30 localhost ceph-osd[32921]: osd.5 pg_epoch: 45 pg[6.16( 
empty local-lis/les=0/0 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45) [0,1,5] r=2 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 20 02:59:30 localhost ceph-osd[32921]: osd.5 pg_epoch: 45 pg[6.1f( empty local-lis/les=0/0 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45) [3,5,1] r=1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 20 02:59:30 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[7.9( empty local-lis/les=0/0 n=0 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=45) [4,2,3] r=1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 20 02:59:30 localhost ceph-osd[31981]: osd.2 pg_epoch: 45 pg[7.b( empty local-lis/les=0/0 n=0 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=45) [4,2,3] r=1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 20 02:59:30 localhost ceph-osd[32921]: osd.5 pg_epoch: 45 pg[6.c( empty local-lis/les=0/0 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45) [3,1,5] r=2 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 20 02:59:30 localhost ceph-osd[32921]: osd.5 pg_epoch: 45 pg[6.4( empty local-lis/les=0/0 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45) [3,1,5] r=2 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 20 02:59:30 localhost ceph-osd[32921]: osd.5 pg_epoch: 45 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45) [3,5,1] r=1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 20 02:59:30 localhost ceph-osd[32921]: osd.5 pg_epoch: 45 pg[6.14( empty local-lis/les=0/0 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45) [3,4,5] r=2 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 20 02:59:30 localhost ceph-osd[32921]: osd.5 pg_epoch: 45 pg[6.11( empty local-lis/les=0/0 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45) [3,5,4] r=1 
lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 20 02:59:30 localhost ceph-osd[32921]: osd.5 pg_epoch: 45 pg[6.1d( empty local-lis/les=0/0 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45) [3,5,1] r=1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 20 02:59:30 localhost ceph-osd[32921]: osd.5 pg_epoch: 45 pg[6.6( empty local-lis/les=0/0 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45) [3,4,5] r=2 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 20 02:59:30 localhost ceph-osd[31981]: osd.2 pg_epoch: 46 pg[6.1( empty local-lis/les=45/46 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45) [2,1,3] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:59:30 localhost ceph-osd[32921]: osd.5 pg_epoch: 46 pg[6.1e( empty local-lis/les=45/46 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45) [5,1,3] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:59:30 localhost ceph-osd[32921]: osd.5 pg_epoch: 46 pg[6.17( empty local-lis/les=45/46 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45) [5,0,1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:59:30 localhost ceph-osd[32921]: osd.5 pg_epoch: 46 pg[6.1b( empty local-lis/les=45/46 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45) [5,1,0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:59:30 localhost ceph-osd[32921]: osd.5 pg_epoch: 46 pg[6.12( empty local-lis/les=45/46 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45) [5,4,0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:59:30 localhost ceph-osd[32921]: osd.5 pg_epoch: 46 pg[6.1c( empty local-lis/les=45/46 n=0 ec=43/35 
lis/c=43/43 les/c/f=44/44/0 sis=45) [5,3,4] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 02:59:31 localhost puppet-user[58400]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Feb 20 02:59:31 localhost puppet-user[58400]: (file: /etc/puppet/hiera.yaml) Feb 20 02:59:31 localhost puppet-user[58400]: Warning: Undefined variable '::deploy_config_name'; Feb 20 02:59:31 localhost puppet-user[58400]: (file & line not available) Feb 20 02:59:31 localhost puppet-user[58400]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Feb 20 02:59:31 localhost puppet-user[58400]: (file & line not available) Feb 20 02:59:31 localhost puppet-user[58400]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8) Feb 20 02:59:31 localhost puppet-user[58400]: Warning: Unknown variable: '::deployment_type'. 
(file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69) Feb 20 02:59:31 localhost puppet-user[58400]: Notice: Compiled catalog for np0005625202.localdomain in environment production in 0.11 seconds Feb 20 02:59:31 localhost puppet-user[58400]: Notice: Applied catalog in 0.06 seconds Feb 20 02:59:31 localhost puppet-user[58400]: Application: Feb 20 02:59:31 localhost puppet-user[58400]: Initial environment: production Feb 20 02:59:31 localhost puppet-user[58400]: Converged environment: production Feb 20 02:59:31 localhost puppet-user[58400]: Run mode: user Feb 20 02:59:31 localhost puppet-user[58400]: Changes: Feb 20 02:59:31 localhost puppet-user[58400]: Events: Feb 20 02:59:31 localhost puppet-user[58400]: Resources: Feb 20 02:59:31 localhost puppet-user[58400]: Total: 10 Feb 20 02:59:31 localhost puppet-user[58400]: Time: Feb 20 02:59:31 localhost puppet-user[58400]: Schedule: 0.00 Feb 20 02:59:31 localhost puppet-user[58400]: File: 0.00 Feb 20 02:59:31 localhost puppet-user[58400]: Exec: 0.01 Feb 20 02:59:31 localhost puppet-user[58400]: Augeas: 0.02 Feb 20 02:59:31 localhost puppet-user[58400]: Transaction evaluation: 0.04 Feb 20 02:59:31 localhost puppet-user[58400]: Catalog application: 0.06 Feb 20 02:59:31 localhost puppet-user[58400]: Config retrieval: 0.15 Feb 20 02:59:31 localhost puppet-user[58400]: Last run: 1771574371 Feb 20 02:59:31 localhost puppet-user[58400]: Filebucket: 0.00 Feb 20 02:59:31 localhost puppet-user[58400]: Total: 0.07 Feb 20 02:59:31 localhost puppet-user[58400]: Version: Feb 20 02:59:31 localhost puppet-user[58400]: Config: 1771574371 Feb 20 02:59:31 localhost puppet-user[58400]: Puppet: 7.10.0 Feb 20 02:59:31 localhost ansible-async_wrapper.py[58382]: Module complete (58382) Feb 20 02:59:32 localhost ansible-async_wrapper.py[58381]: Done in kid B. 
Feb 20 02:59:34 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 4.3 deep-scrub starts Feb 20 02:59:35 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 2.15 scrub starts Feb 20 02:59:35 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 2.15 scrub ok Feb 20 02:59:37 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 2.f deep-scrub starts Feb 20 02:59:37 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 2.f deep-scrub ok Feb 20 02:59:37 localhost python3[58655]: ansible-ansible.legacy.async_status Invoked with jid=509088309516.58378 mode=status _async_dir=/tmp/.ansible_async Feb 20 02:59:38 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 2.d scrub starts Feb 20 02:59:38 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 2.d scrub ok Feb 20 02:59:38 localhost python3[58672]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Feb 20 02:59:38 localhost python3[58688]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 20 02:59:39 localhost python3[58738]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 02:59:39 localhost ceph-osd[32921]: osd.5 pg_epoch: 47 pg[7.a( v 40'39 (0'0,40'39] local-lis/les=43/44 n=1 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=8.808987617s) [3,5,1] r=1 lpr=47 pi=[43,47)/1 luod=0'0 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 
1196.739501953s@ mbc={}] start_peering_interval up [1,5,3] -> [3,5,1], acting [1,5,3] -> [3,5,1], acting_primary 1 -> 3, up_primary 1 -> 3, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:59:39 localhost ceph-osd[32921]: osd.5 pg_epoch: 47 pg[7.a( v 40'39 (0'0,40'39] local-lis/les=43/44 n=1 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=8.808895111s) [3,5,1] r=1 lpr=47 pi=[43,47)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1196.739501953s@ mbc={}] state: transitioning to Stray Feb 20 02:59:39 localhost ceph-osd[32921]: osd.5 pg_epoch: 47 pg[7.e( v 40'39 (0'0,40'39] local-lis/les=43/44 n=1 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=8.808096886s) [3,5,1] r=1 lpr=47 pi=[43,47)/1 luod=0'0 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1196.739501953s@ mbc={}] start_peering_interval up [1,5,3] -> [3,5,1], acting [1,5,3] -> [3,5,1], acting_primary 1 -> 3, up_primary 1 -> 3, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:59:39 localhost ceph-osd[32921]: osd.5 pg_epoch: 47 pg[7.e( v 40'39 (0'0,40'39] local-lis/les=43/44 n=1 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=8.808045387s) [3,5,1] r=1 lpr=47 pi=[43,47)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1196.739501953s@ mbc={}] state: transitioning to Stray Feb 20 02:59:39 localhost ceph-osd[32921]: osd.5 pg_epoch: 47 pg[7.6( v 40'39 (0'0,40'39] local-lis/les=43/44 n=2 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=8.807971954s) [3,5,1] r=1 lpr=47 pi=[43,47)/1 luod=0'0 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1196.739501953s@ mbc={}] start_peering_interval up [1,5,3] -> [3,5,1], acting [1,5,3] -> [3,5,1], acting_primary 1 -> 3, up_primary 1 -> 3, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:59:39 localhost ceph-osd[32921]: osd.5 pg_epoch: 47 pg[7.2( v 40'39 (0'0,40'39] local-lis/les=43/44 n=2 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=8.806959152s) [3,5,1] r=1 
lpr=47 pi=[43,47)/1 luod=0'0 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1196.738525391s@ mbc={}] start_peering_interval up [1,5,3] -> [3,5,1], acting [1,5,3] -> [3,5,1], acting_primary 1 -> 3, up_primary 1 -> 3, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:59:39 localhost ceph-osd[32921]: osd.5 pg_epoch: 47 pg[7.6( v 40'39 (0'0,40'39] local-lis/les=43/44 n=2 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=8.807852745s) [3,5,1] r=1 lpr=47 pi=[43,47)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1196.739501953s@ mbc={}] state: transitioning to Stray Feb 20 02:59:39 localhost ceph-osd[32921]: osd.5 pg_epoch: 47 pg[7.2( v 40'39 (0'0,40'39] local-lis/les=43/44 n=2 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=8.806815147s) [3,5,1] r=1 lpr=47 pi=[43,47)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1196.738525391s@ mbc={}] state: transitioning to Stray Feb 20 02:59:39 localhost python3[58756]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmpr8ygst4y recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Feb 20 02:59:40 localhost python3[58786]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:59:41 localhost python3[58889]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ 
dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None Feb 20 02:59:41 localhost ceph-osd[31981]: osd.2 pg_epoch: 49 pg[7.3( v 40'39 (0'0,40'39] local-lis/les=45/46 n=2 ec=43/36 lis/c=45/45 les/c/f=46/46/0 sis=49 pruub=12.804322243s) [3,4,2] r=2 lpr=49 pi=[45,49)/1 luod=0'0 crt=40'39 mlcod 0'0 active pruub 1206.824951172s@ mbc={}] start_peering_interval up [4,2,3] -> [3,4,2], acting [4,2,3] -> [3,4,2], acting_primary 4 -> 3, up_primary 4 -> 3, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:59:41 localhost ceph-osd[31981]: osd.2 pg_epoch: 49 pg[7.7( v 40'39 (0'0,40'39] local-lis/les=45/46 n=1 ec=43/36 lis/c=45/45 les/c/f=46/46/0 sis=49 pruub=12.805153847s) [3,4,2] r=2 lpr=49 pi=[45,49)/1 luod=0'0 crt=40'39 mlcod 0'0 active pruub 1206.825927734s@ mbc={}] start_peering_interval up [4,2,3] -> [3,4,2], acting [4,2,3] -> [3,4,2], acting_primary 4 -> 3, up_primary 4 -> 3, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:59:41 localhost ceph-osd[31981]: osd.2 pg_epoch: 49 pg[7.f( v 40'39 (0'0,40'39] local-lis/les=45/46 n=1 ec=43/36 lis/c=45/45 les/c/f=46/46/0 sis=49 pruub=12.805092812s) [3,4,2] r=2 lpr=49 pi=[45,49)/1 luod=0'0 crt=40'39 mlcod 0'0 active pruub 1206.825561523s@ mbc={}] start_peering_interval up [4,2,3] -> [3,4,2], acting [4,2,3] -> [3,4,2], acting_primary 4 -> 3, up_primary 4 -> 3, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:59:41 localhost ceph-osd[31981]: osd.2 pg_epoch: 49 pg[7.7( 
v 40'39 (0'0,40'39] local-lis/les=45/46 n=1 ec=43/36 lis/c=45/45 les/c/f=46/46/0 sis=49 pruub=12.805083275s) [3,4,2] r=2 lpr=49 pi=[45,49)/1 crt=40'39 mlcod 0'0 unknown NOTIFY pruub 1206.825927734s@ mbc={}] state: transitioning to Stray Feb 20 02:59:41 localhost ceph-osd[31981]: osd.2 pg_epoch: 49 pg[7.f( v 40'39 (0'0,40'39] local-lis/les=45/46 n=1 ec=43/36 lis/c=45/45 les/c/f=46/46/0 sis=49 pruub=12.804471016s) [3,4,2] r=2 lpr=49 pi=[45,49)/1 crt=40'39 mlcod 0'0 unknown NOTIFY pruub 1206.825561523s@ mbc={}] state: transitioning to Stray Feb 20 02:59:41 localhost ceph-osd[31981]: osd.2 pg_epoch: 49 pg[7.3( v 40'39 (0'0,40'39] local-lis/les=45/46 n=2 ec=43/36 lis/c=45/45 les/c/f=46/46/0 sis=49 pruub=12.804249763s) [3,4,2] r=2 lpr=49 pi=[45,49)/1 crt=40'39 mlcod 0'0 unknown NOTIFY pruub 1206.824951172s@ mbc={}] state: transitioning to Stray Feb 20 02:59:41 localhost ceph-osd[31981]: osd.2 pg_epoch: 49 pg[7.b( v 40'39 (0'0,40'39] local-lis/les=45/46 n=1 ec=43/36 lis/c=45/45 les/c/f=46/46/0 sis=49 pruub=12.808148384s) [3,4,2] r=2 lpr=49 pi=[45,49)/1 luod=0'0 crt=40'39 mlcod 0'0 active pruub 1206.829345703s@ mbc={}] start_peering_interval up [4,2,3] -> [3,4,2], acting [4,2,3] -> [3,4,2], acting_primary 4 -> 3, up_primary 4 -> 3, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:59:41 localhost ceph-osd[31981]: osd.2 pg_epoch: 49 pg[7.b( v 40'39 (0'0,40'39] local-lis/les=45/46 n=1 ec=43/36 lis/c=45/45 les/c/f=46/46/0 sis=49 pruub=12.807715416s) [3,4,2] r=2 lpr=49 pi=[45,49)/1 crt=40'39 mlcod 0'0 unknown NOTIFY pruub 1206.829345703s@ mbc={}] state: transitioning to Stray Feb 20 02:59:42 localhost python3[58908]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None 
seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:59:42 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 6.1 scrub starts Feb 20 02:59:42 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 6.1 scrub ok Feb 20 02:59:42 localhost python3[58940]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 20 02:59:44 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 6.12 scrub starts Feb 20 02:59:44 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 6.12 scrub ok Feb 20 02:59:44 localhost python3[58990]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 02:59:44 localhost python3[59008]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:59:45 localhost python3[59070]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 02:59:45 localhost python3[59088]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:59:46 localhost python3[59150]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 02:59:46 localhost python3[59168]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:59:46 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 6.1c scrub starts Feb 20 02:59:46 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 6.1c scrub ok Feb 20 02:59:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. 
Feb 20 02:59:46 localhost podman[59231]: 2026-02-20 07:59:46.832953448 +0000 UTC m=+0.079823159 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:14Z, distribution-scope=public, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, container_name=metrics_qdr, io.buildah.version=1.41.5, config_id=tripleo_step1, vcs-type=git, build-date=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 20 02:59:46 localhost python3[59230]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 02:59:47 localhost podman[59231]: 2026-02-20 07:59:47.085817652 +0000 UTC m=+0.332687393 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, version=17.1.13, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, config_id=tripleo_step1, org.opencontainers.image.created=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.5, architecture=x86_64, build-date=2026-01-12T22:10:14Z, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, name=rhosp-rhel9/openstack-qdrouterd, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Feb 20 02:59:47 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. 
Feb 20 02:59:47 localhost python3[59275]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:59:47 localhost ceph-osd[31981]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 20 02:59:47 localhost ceph-osd[31981]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 4344 writes, 20K keys, 4344 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4344 writes, 364 syncs, 11.93 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1087 writes, 3859 keys, 1087 commit groups, 1.0 writes per commit group, ingest: 1.64 MB, 0.00 MB/s#012Interval WAL: 1087 writes, 220 syncs, 4.94 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.008 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.008 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.008 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d37d6522d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) 
CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d37d6522d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 
KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memta Feb 20 02:59:47 localhost python3[59309]: 
ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 02:59:47 localhost systemd[1]: Reloading. Feb 20 02:59:47 localhost systemd-rc-local-generator[59330]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 02:59:47 localhost systemd-sysv-generator[59334]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 02:59:47 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 02:59:48 localhost python3[59395]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 02:59:48 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 5.0 scrub starts Feb 20 02:59:48 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 5.0 scrub ok Feb 20 02:59:48 localhost python3[59413]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:59:49 localhost python3[59475]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 
get_md5=False get_mime=True get_attributes=True Feb 20 02:59:49 localhost python3[59493]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:59:49 localhost ceph-osd[32921]: osd.5 pg_epoch: 51 pg[7.c( v 40'39 (0'0,40'39] local-lis/les=43/44 n=1 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=14.583863258s) [0,1,2] r=-1 lpr=51 pi=[43,51)/1 luod=0'0 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1212.738281250s@ mbc={}] start_peering_interval up [1,5,3] -> [0,1,2], acting [1,5,3] -> [0,1,2], acting_primary 1 -> 0, up_primary 1 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:59:49 localhost ceph-osd[32921]: osd.5 pg_epoch: 51 pg[7.4( v 40'39 (0'0,40'39] local-lis/les=43/44 n=2 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=14.584519386s) [0,1,2] r=-1 lpr=51 pi=[43,51)/1 luod=0'0 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1212.739135742s@ mbc={}] start_peering_interval up [1,5,3] -> [0,1,2], acting [1,5,3] -> [0,1,2], acting_primary 1 -> 0, up_primary 1 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:59:49 localhost ceph-osd[32921]: osd.5 pg_epoch: 51 pg[7.4( v 40'39 (0'0,40'39] local-lis/les=43/44 n=2 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=14.584453583s) [0,1,2] r=-1 lpr=51 pi=[43,51)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1212.739135742s@ mbc={}] state: transitioning to Stray Feb 20 02:59:49 localhost ceph-osd[32921]: osd.5 pg_epoch: 51 pg[7.c( v 40'39 (0'0,40'39] local-lis/les=43/44 
n=1 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=14.583782196s) [0,1,2] r=-1 lpr=51 pi=[43,51)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1212.738281250s@ mbc={}] state: transitioning to Stray Feb 20 02:59:50 localhost python3[59523]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 02:59:50 localhost systemd[1]: Reloading. Feb 20 02:59:50 localhost systemd-rc-local-generator[59548]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 02:59:50 localhost systemd-sysv-generator[59553]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 02:59:50 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 02:59:50 localhost systemd[1]: Starting Create netns directory... Feb 20 02:59:50 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Feb 20 02:59:50 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Feb 20 02:59:50 localhost systemd[1]: Finished Create netns directory. 
Feb 20 02:59:50 localhost ceph-osd[31981]: osd.2 pg_epoch: 51 pg[7.4( empty local-lis/les=0/0 n=0 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=51) [0,1,2] r=2 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 20 02:59:50 localhost ceph-osd[31981]: osd.2 pg_epoch: 51 pg[7.c( empty local-lis/les=0/0 n=0 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=51) [0,1,2] r=2 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 20 02:59:50 localhost python3[59581]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6 Feb 20 02:59:51 localhost ceph-osd[32921]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 20 02:59:51 localhost ceph-osd[32921]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 4819 writes, 21K keys, 4819 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4819 writes, 472 syncs, 10.21 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1432 writes, 5300 keys, 1432 commit groups, 1.0 writes per commit group, ingest: 2.02 MB, 0.00 MB/s#012Interval WAL: 1432 writes, 274 syncs, 5.23 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 
0.00 1 0.006 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55bae83ca2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** 
Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55bae83ca2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 
last_copies: 8 last_secs: 4.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop 
for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 m Feb 20 02:59:52 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 4.3 deep-scrub starts Feb 20 02:59:52 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 4.3 deep-scrub ok Feb 20 02:59:52 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 6.1e scrub starts Feb 20 02:59:52 localhost python3[59640]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step2 config_dir=/var/lib/tripleo-config/container-startup-config/step_2 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False Feb 20 02:59:52 localhost podman[59720]: 2026-02-20 07:59:52.740396156 +0000 UTC m=+0.071020542 container create 3ab06d587bb78b05a3c02c4e16a48cd22640a4b091164703b0269af4a04bf429 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, container_name=nova_compute_init_log, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat 
OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.buildah.version=1.41.5, managed_by=tripleo_ansible, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step2, url=https://www.redhat.com, tcib_managed=true, build-date=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team) Feb 20 02:59:52 localhost podman[59719]: 2026-02-20 07:59:52.748601998 +0000 UTC m=+0.091009126 container create edd8318854f50231cb7ecf3a03b7f277eb362bf286a463e846fdeb5f56cfb6ec (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, batch=17.1_20260112.1, io.openshift.expose-services=, container_name=nova_virtqemud_init_logs, org.opencontainers.image.created=2026-01-12T23:31:49Z, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, build-date=2026-01-12T23:31:49Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, url=https://www.redhat.com, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, release=1766032510, name=rhosp-rhel9/openstack-nova-libvirt, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': 
{'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, io.buildah.version=1.41.5, config_id=tripleo_step2, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 20 02:59:52 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 6.1e scrub ok Feb 20 02:59:52 localhost systemd[1]: Started libpod-conmon-edd8318854f50231cb7ecf3a03b7f277eb362bf286a463e846fdeb5f56cfb6ec.scope. Feb 20 02:59:52 localhost podman[59719]: 2026-02-20 07:59:52.697959361 +0000 UTC m=+0.040366489 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Feb 20 02:59:52 localhost podman[59720]: 2026-02-20 07:59:52.699770711 +0000 UTC m=+0.030395157 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Feb 20 02:59:52 localhost systemd[1]: Started libcrun container. Feb 20 02:59:52 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf0641ad9d6432e8918b929f405ec6680a89f0297b971ef3a2ed74fd2cbbe9db/merged/var/log/swtpm supports timestamps until 2038 (0x7fffffff) Feb 20 02:59:52 localhost systemd[1]: Started libpod-conmon-3ab06d587bb78b05a3c02c4e16a48cd22640a4b091164703b0269af4a04bf429.scope. Feb 20 02:59:52 localhost systemd[1]: Started libcrun container. 
Feb 20 02:59:52 localhost podman[59719]: 2026-02-20 07:59:52.823300081 +0000 UTC m=+0.165707229 container init edd8318854f50231cb7ecf3a03b7f277eb362bf286a463e846fdeb5f56cfb6ec (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.openshift.expose-services=, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, url=https://www.redhat.com, container_name=nova_virtqemud_init_logs, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, build-date=2026-01-12T23:31:49Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:31:49Z, config_id=tripleo_step2, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, architecture=x86_64, com.redhat.component=openstack-nova-libvirt-container, 
name=rhosp-rhel9/openstack-nova-libvirt) Feb 20 02:59:52 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99cee2e144e706134d5237d4f7d6d09bb30e6b708f1102eb2faf72e33ddaf302/merged/var/log/nova supports timestamps until 2038 (0x7fffffff) Feb 20 02:59:52 localhost podman[59719]: 2026-02-20 07:59:52.834690212 +0000 UTC m=+0.177097340 container start edd8318854f50231cb7ecf3a03b7f277eb362bf286a463e846fdeb5f56cfb6ec (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, vendor=Red Hat, Inc., managed_by=tripleo_ansible, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:31:49Z, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp-rhel9/openstack-nova-libvirt, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, build-date=2026-01-12T23:31:49Z, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, container_name=nova_virtqemud_init_logs, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, 
tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-libvirt-container, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step2) Feb 20 02:59:52 localhost python3[59640]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtqemud_init_logs --conmon-pidfile /run/nova_virtqemud_init_logs.pid --detach=True --env TRIPLEO_DEPLOY_IDENTIFIER=1771573229 --label config_id=tripleo_step2 --label container_name=nova_virtqemud_init_logs --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtqemud_init_logs.log --network none --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --user root --volume /var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 /bin/bash -c chown -R tss:tss /var/log/swtpm Feb 20 02:59:52 localhost podman[59720]: 2026-02-20 07:59:52.842850902 +0000 UTC m=+0.173475318 container init 3ab06d587bb78b05a3c02c4e16a48cd22640a4b091164703b0269af4a04bf429 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, version=17.1.13, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, batch=17.1_20260112.1, tcib_managed=true, container_name=nova_compute_init_log, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-type=git, name=rhosp-rhel9/openstack-nova-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step2, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 20 02:59:52 localhost systemd[1]: libpod-edd8318854f50231cb7ecf3a03b7f277eb362bf286a463e846fdeb5f56cfb6ec.scope: Deactivated successfully. 
Feb 20 02:59:52 localhost podman[59720]: 2026-02-20 07:59:52.853694658 +0000 UTC m=+0.184319054 container start 3ab06d587bb78b05a3c02c4e16a48cd22640a4b091164703b0269af4a04bf429 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, container_name=nova_compute_init_log, architecture=x86_64, vendor=Red Hat, Inc., release=1766032510, vcs-type=git, build-date=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step2, batch=17.1_20260112.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team) Feb 20 02:59:52 localhost python3[59640]: ansible-tripleo_container_manage 
PODMAN-CONTAINER-DEBUG: podman run --name nova_compute_init_log --conmon-pidfile /run/nova_compute_init_log.pid --detach=True --env TRIPLEO_DEPLOY_IDENTIFIER=1771573229 --label config_id=tripleo_step2 --label container_name=nova_compute_init_log --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_compute_init_log.log --network none --privileged=False --user root --volume /var/log/containers/nova:/var/log/nova:z registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 /bin/bash -c chown -R nova:nova /var/log/nova Feb 20 02:59:52 localhost systemd[1]: libpod-3ab06d587bb78b05a3c02c4e16a48cd22640a4b091164703b0269af4a04bf429.scope: Deactivated successfully. 
Feb 20 02:59:52 localhost podman[59758]: 2026-02-20 07:59:52.902191625 +0000 UTC m=+0.042640123 container died edd8318854f50231cb7ecf3a03b7f277eb362bf286a463e846fdeb5f56cfb6ec (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, container_name=nova_virtqemud_init_logs, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step2, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, distribution-scope=public, managed_by=tripleo_ansible, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1766032510, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.13, tcib_managed=true, io.openshift.expose-services=, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-nova-libvirt, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:31:49Z, build-date=2026-01-12T23:31:49Z, konflux.additional-tags=17.1.13 
17.1_20260112.1) Feb 20 02:59:52 localhost podman[59758]: 2026-02-20 07:59:52.928475655 +0000 UTC m=+0.068924103 container cleanup edd8318854f50231cb7ecf3a03b7f277eb362bf286a463e846fdeb5f56cfb6ec (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, io.buildah.version=1.41.5, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-libvirt-container, config_id=tripleo_step2, batch=17.1_20260112.1, distribution-scope=public, managed_by=tripleo_ansible, build-date=2026-01-12T23:31:49Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:31:49Z, description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp-rhel9/openstack-nova-libvirt, maintainer=OpenStack TripleO Team, vcs-type=git, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=nova_virtqemud_init_logs, vendor=Red Hat, Inc., release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13) Feb 20 02:59:52 localhost systemd[1]: libpod-conmon-edd8318854f50231cb7ecf3a03b7f277eb362bf286a463e846fdeb5f56cfb6ec.scope: Deactivated successfully. Feb 20 02:59:53 localhost podman[59770]: 2026-02-20 07:59:53.070843216 +0000 UTC m=+0.203336940 container died 3ab06d587bb78b05a3c02c4e16a48cd22640a4b091164703b0269af4a04bf429 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, container_name=nova_compute_init_log, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step2, managed_by=tripleo_ansible, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, 
distribution-scope=public, name=rhosp-rhel9/openstack-nova-compute, build-date=2026-01-12T23:32:04Z, batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc.) Feb 20 02:59:53 localhost podman[59771]: 2026-02-20 07:59:53.141073025 +0000 UTC m=+0.268523397 container cleanup 3ab06d587bb78b05a3c02c4e16a48cd22640a4b091164703b0269af4a04bf429 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, release=1766032510, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, batch=17.1_20260112.1, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, url=https://www.redhat.com, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, container_name=nova_compute_init_log, io.openshift.expose-services=, 
vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step2, build-date=2026-01-12T23:32:04Z) Feb 20 02:59:53 localhost systemd[1]: libpod-conmon-3ab06d587bb78b05a3c02c4e16a48cd22640a4b091164703b0269af4a04bf429.scope: Deactivated successfully. Feb 20 02:59:53 localhost podman[59908]: 2026-02-20 07:59:53.47279821 +0000 UTC m=+0.076378183 container create 841df66ed3c80cc173958fcbf60607a4bd5423aa63d3feda8ff212e4e92beb50 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=create_haproxy_wrapper, config_id=tripleo_step2, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, tcib_managed=true, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, release=1766032510, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, vcs-type=git, batch=17.1_20260112.1, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 02:59:53 localhost podman[59909]: 2026-02-20 07:59:53.500860121 +0000 UTC m=+0.094592826 container create 1dd186e54d6dbff2c91eff6946d30a7d86f06233f0362ad3552323dfcd07bca4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, managed_by=tripleo_ansible, release=1766032510, org.opencontainers.image.created=2026-01-12T23:31:49Z, vendor=Red Hat, Inc., architecture=x86_64, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:31:49Z, distribution-scope=public, version=17.1.13, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-libvirt, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, 
com.redhat.component=openstack-nova-libvirt-container, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, container_name=create_virtlogd_wrapper, config_id=tripleo_step2, io.buildah.version=1.41.5) Feb 20 02:59:53 localhost systemd[1]: Started libpod-conmon-841df66ed3c80cc173958fcbf60607a4bd5423aa63d3feda8ff212e4e92beb50.scope. 
Feb 20 02:59:53 localhost podman[59908]: 2026-02-20 07:59:53.433348829 +0000 UTC m=+0.036928882 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 Feb 20 02:59:53 localhost systemd[1]: Started libpod-conmon-1dd186e54d6dbff2c91eff6946d30a7d86f06233f0362ad3552323dfcd07bca4.scope. Feb 20 02:59:53 localhost podman[59909]: 2026-02-20 07:59:53.442851327 +0000 UTC m=+0.036584052 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Feb 20 02:59:53 localhost systemd[1]: Started libcrun container. Feb 20 02:59:53 localhost systemd[1]: Started libcrun container. Feb 20 02:59:53 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d66b12f4ecdc1206f5d8681d2afba56bec38014828d09193ac04622a3a344a2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 20 02:59:53 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3598b6036fe4ea28e5aee05f8a7797bf8d93e6536b1978b930e459fcf2b8a58/merged/var/lib/container-config-scripts supports timestamps until 2038 (0x7fffffff) Feb 20 02:59:53 localhost podman[59908]: 2026-02-20 07:59:53.559280997 +0000 UTC m=+0.162861010 container init 841df66ed3c80cc173958fcbf60607a4bd5423aa63d3feda8ff212e4e92beb50 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, build-date=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, vendor=Red Hat, Inc., org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, 
konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step2, managed_by=tripleo_ansible, vcs-type=git, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, batch=17.1_20260112.1, io.openshift.expose-services=, version=17.1.13, distribution-scope=public, container_name=create_haproxy_wrapper, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z) Feb 20 02:59:53 localhost podman[59908]: 2026-02-20 07:59:53.568189248 +0000 UTC m=+0.171769251 container start 
841df66ed3c80cc173958fcbf60607a4bd5423aa63d3feda8ff212e4e92beb50 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, distribution-scope=public, architecture=x86_64, config_id=tripleo_step2, release=1766032510, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
managed_by=tripleo_ansible, build-date=2026-01-12T22:56:19Z, tcib_managed=true, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, container_name=create_haproxy_wrapper, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 02:59:53 localhost podman[59908]: 2026-02-20 07:59:53.568524677 +0000 UTC m=+0.172104740 container attach 841df66ed3c80cc173958fcbf60607a4bd5423aa63d3feda8ff212e4e92beb50 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, build-date=2026-01-12T22:56:19Z, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.13, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.expose-services=, config_id=tripleo_step2, release=1766032510, batch=17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=create_haproxy_wrapper, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 20 02:59:53 localhost podman[59909]: 2026-02-20 07:59:53.610771269 +0000 UTC m=+0.204503974 container init 1dd186e54d6dbff2c91eff6946d30a7d86f06233f0362ad3552323dfcd07bca4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=create_virtlogd_wrapper, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-libvirt, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step2, maintainer=OpenStack TripleO Team, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-libvirt, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:31:49Z, io.buildah.version=1.41.5, com.redhat.component=openstack-nova-libvirt-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-libvirt, 
distribution-scope=public, url=https://www.redhat.com, version=17.1.13, tcib_managed=true, vcs-type=git, build-date=2026-01-12T23:31:49Z) Feb 20 02:59:53 localhost podman[59909]: 2026-02-20 07:59:53.620211654 +0000 UTC m=+0.213944349 container start 1dd186e54d6dbff2c91eff6946d30a7d86f06233f0362ad3552323dfcd07bca4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, org.opencontainers.image.created=2026-01-12T23:31:49Z, version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, distribution-scope=public, vendor=Red Hat, Inc., build-date=2026-01-12T23:31:49Z, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, container_name=create_virtlogd_wrapper, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, release=1766032510, vcs-type=git, 
name=rhosp-rhel9/openstack-nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 02:59:53 localhost podman[59909]: 2026-02-20 07:59:53.620470321 +0000 UTC m=+0.214203016 container attach 1dd186e54d6dbff2c91eff6946d30a7d86f06233f0362ad3552323dfcd07bca4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, com.redhat.component=openstack-nova-libvirt-container, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20260112.1, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, release=1766032510, tcib_managed=true, name=rhosp-rhel9/openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, distribution-scope=public, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include 
::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=17.1.13, container_name=create_virtlogd_wrapper, url=https://www.redhat.com, build-date=2026-01-12T23:31:49Z, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step2) Feb 20 02:59:53 localhost systemd[1]: var-lib-containers-storage-overlay-cf0641ad9d6432e8918b929f405ec6680a89f0297b971ef3a2ed74fd2cbbe9db-merged.mount: Deactivated successfully. Feb 20 02:59:53 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-edd8318854f50231cb7ecf3a03b7f277eb362bf286a463e846fdeb5f56cfb6ec-userdata-shm.mount: Deactivated successfully. 
Feb 20 02:59:53 localhost systemd[1]: var-lib-containers-storage-overlay-99cee2e144e706134d5237d4f7d6d09bb30e6b708f1102eb2faf72e33ddaf302-merged.mount: Deactivated successfully. Feb 20 02:59:53 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3ab06d587bb78b05a3c02c4e16a48cd22640a4b091164703b0269af4a04bf429-userdata-shm.mount: Deactivated successfully. Feb 20 02:59:55 localhost ovs-vsctl[60034]: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory) Feb 20 02:59:55 localhost systemd[1]: libpod-1dd186e54d6dbff2c91eff6946d30a7d86f06233f0362ad3552323dfcd07bca4.scope: Deactivated successfully. Feb 20 02:59:55 localhost systemd[1]: libpod-1dd186e54d6dbff2c91eff6946d30a7d86f06233f0362ad3552323dfcd07bca4.scope: Consumed 2.028s CPU time. Feb 20 02:59:55 localhost podman[59909]: 2026-02-20 07:59:55.650947356 +0000 UTC m=+2.244680101 container died 1dd186e54d6dbff2c91eff6946d30a7d86f06233f0362ad3552323dfcd07bca4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, config_id=tripleo_step2, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, container_name=create_virtlogd_wrapper, batch=17.1_20260112.1, build-date=2026-01-12T23:31:49Z, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
nova-libvirt, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, version=17.1.13, managed_by=tripleo_ansible) Feb 20 02:59:55 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1dd186e54d6dbff2c91eff6946d30a7d86f06233f0362ad3552323dfcd07bca4-userdata-shm.mount: Deactivated successfully. Feb 20 02:59:55 localhost systemd[1]: var-lib-containers-storage-overlay-c3598b6036fe4ea28e5aee05f8a7797bf8d93e6536b1978b930e459fcf2b8a58-merged.mount: Deactivated successfully. 
Feb 20 02:59:55 localhost podman[60161]: 2026-02-20 07:59:55.741319681 +0000 UTC m=+0.081458205 container cleanup 1dd186e54d6dbff2c91eff6946d30a7d86f06233f0362ad3552323dfcd07bca4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, io.buildah.version=1.41.5, architecture=x86_64, vcs-type=git, container_name=create_virtlogd_wrapper, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, com.redhat.component=openstack-nova-libvirt-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, config_id=tripleo_step2, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:31:49Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, version=17.1.13, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 20 02:59:55 localhost systemd[1]: libpod-conmon-1dd186e54d6dbff2c91eff6946d30a7d86f06233f0362ad3552323dfcd07bca4.scope: Deactivated successfully. Feb 20 02:59:55 localhost python3[59640]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name create_virtlogd_wrapper --cgroupns=host --conmon-pidfile /run/create_virtlogd_wrapper.pid --detach=False --env TRIPLEO_DEPLOY_IDENTIFIER=1771573229 --label config_id=tripleo_step2 --label container_name=create_virtlogd_wrapper --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/create_virtlogd_wrapper.log --network host --pid host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 /container_puppet_apply.sh 4 file include ::tripleo::profile::base::nova::virtlogd_wrapper Feb 20 02:59:55 localhost ceph-osd[31981]: osd.2 pg_epoch: 53 pg[7.5( v 40'39 (0'0,40'39] local-lis/les=45/46 n=2 ec=43/36 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=14.474046707s) [2,0,4] r=0 lpr=53 pi=[45,53)/1 luod=0'0 crt=40'39 mlcod 0'0 active pruub 1222.825561523s@ mbc={}] start_peering_interval up [4,2,3] -> [2,0,4], acting [4,2,3] -> [2,0,4], acting_primary 4 -> 2, up_primary 4 -> 2, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:59:55 localhost ceph-osd[31981]: osd.2 pg_epoch: 53 pg[7.5( v 40'39 (0'0,40'39] 
local-lis/les=45/46 n=2 ec=43/36 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=14.474046707s) [2,0,4] r=0 lpr=53 pi=[45,53)/1 crt=40'39 mlcod 0'0 unknown pruub 1222.825561523s@ mbc={}] state: transitioning to Primary Feb 20 02:59:55 localhost ceph-osd[31981]: osd.2 pg_epoch: 53 pg[7.d( v 40'39 (0'0,40'39] local-lis/les=45/46 n=1 ec=43/36 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=14.473728180s) [2,0,4] r=0 lpr=53 pi=[45,53)/1 luod=0'0 crt=40'39 mlcod 0'0 active pruub 1222.826293945s@ mbc={}] start_peering_interval up [4,2,3] -> [2,0,4], acting [4,2,3] -> [2,0,4], acting_primary 4 -> 2, up_primary 4 -> 2, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:59:55 localhost ceph-osd[31981]: osd.2 pg_epoch: 53 pg[7.d( v 40'39 (0'0,40'39] local-lis/les=45/46 n=1 ec=43/36 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=14.473728180s) [2,0,4] r=0 lpr=53 pi=[45,53)/1 crt=40'39 mlcod 0'0 unknown pruub 1222.826293945s@ mbc={}] state: transitioning to Primary Feb 20 02:59:56 localhost systemd[1]: libpod-841df66ed3c80cc173958fcbf60607a4bd5423aa63d3feda8ff212e4e92beb50.scope: Deactivated successfully. Feb 20 02:59:56 localhost systemd[1]: libpod-841df66ed3c80cc173958fcbf60607a4bd5423aa63d3feda8ff212e4e92beb50.scope: Consumed 2.003s CPU time. 
Feb 20 02:59:56 localhost podman[59908]: 2026-02-20 07:59:56.783206654 +0000 UTC m=+3.386786637 container died 841df66ed3c80cc173958fcbf60607a4bd5423aa63d3feda8ff212e4e92beb50 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, architecture=x86_64, container_name=create_haproxy_wrapper, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20260112.1, config_id=tripleo_step2, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.openshift.expose-services=, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=17.1.13, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, build-date=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 02:59:56 localhost systemd[1]: tmp-crun.y5Nxh2.mount: Deactivated successfully. Feb 20 02:59:56 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-841df66ed3c80cc173958fcbf60607a4bd5423aa63d3feda8ff212e4e92beb50-userdata-shm.mount: Deactivated successfully. 
Feb 20 02:59:56 localhost podman[60202]: 2026-02-20 07:59:56.87680677 +0000 UTC m=+0.084687286 container cleanup 841df66ed3c80cc173958fcbf60607a4bd5423aa63d3feda8ff212e4e92beb50 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, container_name=create_haproxy_wrapper, release=1766032510, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, version=17.1.13, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, 
url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, build-date=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20260112.1, config_id=tripleo_step2, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 20 02:59:56 localhost systemd[1]: libpod-conmon-841df66ed3c80cc173958fcbf60607a4bd5423aa63d3feda8ff212e4e92beb50.scope: Deactivated successfully. Feb 20 02:59:56 localhost python3[59640]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name create_haproxy_wrapper --conmon-pidfile /run/create_haproxy_wrapper.pid --detach=False --label config_id=tripleo_step2 --label container_name=create_haproxy_wrapper --label managed_by=tripleo_ansible --label config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/create_haproxy_wrapper.log --network host --pid host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /var/lib/neutron:/var/lib/neutron:shared,z registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 /container_puppet_apply.sh 4 file include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers Feb 20 02:59:57 localhost ceph-osd[31981]: osd.2 pg_epoch: 54 pg[7.d( v 40'39 (0'0,40'39] local-lis/les=53/54 n=1 ec=43/36 lis/c=45/45 les/c/f=46/46/0 sis=53) [2,0,4] r=0 lpr=53 pi=[45,53)/1 crt=40'39 mlcod 0'0 active+degraded mbc={255={(2+1)=2}}] state: react AllReplicasActivated Activating complete Feb 20 02:59:57 localhost ceph-osd[31981]: osd.2 pg_epoch: 54 pg[7.5( v 40'39 (0'0,40'39] local-lis/les=53/54 n=2 ec=43/36 lis/c=45/45 les/c/f=46/46/0 sis=53) [2,0,4] r=0 lpr=53 pi=[45,53)/1 crt=40'39 mlcod 0'0 active+degraded mbc={255={(2+1)=2}}] state: react AllReplicasActivated 
Activating complete Feb 20 02:59:57 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 6.17 scrub starts Feb 20 02:59:57 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 6.17 scrub ok Feb 20 02:59:57 localhost python3[60259]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks2.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:59:57 localhost systemd[1]: var-lib-containers-storage-overlay-8d66b12f4ecdc1206f5d8681d2afba56bec38014828d09193ac04622a3a344a2-merged.mount: Deactivated successfully. Feb 20 02:59:58 localhost ceph-osd[32921]: osd.5 pg_epoch: 55 pg[7.6( v 40'39 (0'0,40'39] local-lis/les=47/48 n=2 ec=43/36 lis/c=47/47 les/c/f=48/48/0 sis=55 pruub=14.585224152s) [0,4,5] r=2 lpr=55 pi=[47,55)/1 luod=0'0 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1220.990966797s@ mbc={}] start_peering_interval up [3,5,1] -> [0,4,5], acting [3,5,1] -> [0,4,5], acting_primary 3 -> 0, up_primary 3 -> 0, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:59:58 localhost ceph-osd[32921]: osd.5 pg_epoch: 55 pg[7.6( v 40'39 (0'0,40'39] local-lis/les=47/48 n=2 ec=43/36 lis/c=47/47 les/c/f=48/48/0 sis=55 pruub=14.585143089s) [0,4,5] r=2 lpr=55 pi=[47,55)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1220.990966797s@ mbc={}] state: transitioning to Stray Feb 20 02:59:58 localhost ceph-osd[32921]: osd.5 pg_epoch: 55 pg[7.e( v 40'39 (0'0,40'39] local-lis/les=47/48 n=1 ec=43/36 lis/c=47/47 les/c/f=48/48/0 sis=55 pruub=14.584755898s) [0,4,5] r=2 lpr=55 pi=[47,55)/1 luod=0'0 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1220.990844727s@ mbc={}] start_peering_interval up [3,5,1] -> [0,4,5], acting [3,5,1] -> [0,4,5], 
acting_primary 3 -> 0, up_primary 3 -> 0, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 02:59:58 localhost ceph-osd[32921]: osd.5 pg_epoch: 55 pg[7.e( v 40'39 (0'0,40'39] local-lis/les=47/48 n=1 ec=43/36 lis/c=47/47 les/c/f=48/48/0 sis=55 pruub=14.584648132s) [0,4,5] r=2 lpr=55 pi=[47,55)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1220.990844727s@ mbc={}] state: transitioning to Stray Feb 20 02:59:58 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 7.d scrub starts Feb 20 02:59:58 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 6.1b scrub starts Feb 20 02:59:58 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 7.d scrub ok Feb 20 02:59:59 localhost python3[60380]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks2.json short_hostname=np0005625202 step=2 update_config_hash_only=False Feb 20 02:59:59 localhost python3[60396]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 02:59:59 localhost python3[60412]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_2 config_pattern=container-puppet-*.json config_overrides={} debug=True Feb 20 03:00:00 localhost ceph-osd[31981]: osd.2 pg_epoch: 57 pg[7.f( v 40'39 (0'0,40'39] local-lis/les=49/50 n=1 ec=43/36 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=14.600479126s) [1,5,3] r=-1 lpr=57 pi=[49,57)/1 luod=0'0 crt=40'39 mlcod 0'0 active pruub 1227.059936523s@ mbc={}] start_peering_interval up [3,4,2] 
-> [1,5,3], acting [3,4,2] -> [1,5,3], acting_primary 3 -> 1, up_primary 3 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 03:00:00 localhost ceph-osd[31981]: osd.2 pg_epoch: 57 pg[7.f( v 40'39 (0'0,40'39] local-lis/les=49/50 n=1 ec=43/36 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=14.600407600s) [1,5,3] r=-1 lpr=57 pi=[49,57)/1 crt=40'39 mlcod 0'0 unknown NOTIFY pruub 1227.059936523s@ mbc={}] state: transitioning to Stray Feb 20 03:00:00 localhost ceph-osd[31981]: osd.2 pg_epoch: 57 pg[7.7( v 40'39 (0'0,40'39] local-lis/les=49/50 n=1 ec=43/36 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=14.599832535s) [1,5,3] r=-1 lpr=57 pi=[49,57)/1 luod=0'0 crt=40'39 mlcod 0'0 active pruub 1227.059814453s@ mbc={}] start_peering_interval up [3,4,2] -> [1,5,3], acting [3,4,2] -> [1,5,3], acting_primary 3 -> 1, up_primary 3 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 03:00:00 localhost ceph-osd[31981]: osd.2 pg_epoch: 57 pg[7.7( v 40'39 (0'0,40'39] local-lis/les=49/50 n=1 ec=43/36 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=14.599744797s) [1,5,3] r=-1 lpr=57 pi=[49,57)/1 crt=40'39 mlcod 0'0 unknown NOTIFY pruub 1227.059814453s@ mbc={}] state: transitioning to Stray Feb 20 03:00:00 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 6.1b scrub starts Feb 20 03:00:00 localhost ceph-osd[32921]: log_channel(cluster) log [DBG] : 6.1b scrub ok Feb 20 03:00:01 localhost ceph-osd[32921]: osd.5 pg_epoch: 57 pg[7.f( empty local-lis/les=0/0 n=0 ec=43/36 lis/c=49/49 les/c/f=50/50/0 sis=57) [1,5,3] r=1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 20 03:00:01 localhost ceph-osd[32921]: osd.5 pg_epoch: 57 pg[7.7( empty local-lis/les=0/0 n=0 ec=43/36 lis/c=49/49 les/c/f=50/50/0 sis=57) [1,5,3] r=1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 20 03:00:02 localhost ceph-osd[32921]: osd.5 pg_epoch: 59 pg[7.8( v 40'39 
(0'0,40'39] local-lis/les=43/44 n=1 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=59 pruub=10.250371933s) [3,4,5] r=2 lpr=59 pi=[43,59)/1 luod=0'0 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1220.740112305s@ mbc={}] start_peering_interval up [1,5,3] -> [3,4,5], acting [1,5,3] -> [3,4,5], acting_primary 1 -> 3, up_primary 1 -> 3, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 03:00:02 localhost ceph-osd[32921]: osd.5 pg_epoch: 59 pg[7.8( v 40'39 (0'0,40'39] local-lis/les=43/44 n=1 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=59 pruub=10.250320435s) [3,4,5] r=2 lpr=59 pi=[43,59)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1220.740112305s@ mbc={}] state: transitioning to Stray Feb 20 03:00:02 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 7.5 scrub starts Feb 20 03:00:02 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 7.5 scrub ok Feb 20 03:00:03 localhost sshd[60413]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:00:04 localhost sshd[60415]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:00:09 localhost ceph-osd[31981]: osd.2 pg_epoch: 61 pg[7.9( v 40'39 (0'0,40'39] local-lis/les=45/46 n=1 ec=43/36 lis/c=45/45 les/c/f=46/46/0 sis=61 pruub=8.979257584s) [0,2,4] r=1 lpr=61 pi=[45,61)/1 luod=0'0 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1230.829223633s@ mbc={}] start_peering_interval up [4,2,3] -> [0,2,4], acting [4,2,3] -> [0,2,4], acting_primary 4 -> 0, up_primary 4 -> 0, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 03:00:09 localhost ceph-osd[31981]: osd.2 pg_epoch: 61 pg[7.9( v 40'39 (0'0,40'39] local-lis/les=45/46 n=1 ec=43/36 lis/c=45/45 les/c/f=46/46/0 sis=61 pruub=8.979157448s) [0,2,4] r=1 lpr=61 pi=[45,61)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1230.829223633s@ mbc={}] state: transitioning to Stray Feb 20 03:00:15 localhost ceph-osd[31981]: osd.2 pg_epoch: 63 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/36 lis/c=47/47 les/c/f=48/48/0 sis=63) 
[2,0,4] r=0 lpr=63 pi=[47,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Feb 20 03:00:15 localhost ceph-osd[32921]: osd.5 pg_epoch: 63 pg[7.a( v 40'39 (0'0,40'39] local-lis/les=47/48 n=1 ec=43/36 lis/c=47/47 les/c/f=48/48/0 sis=63 pruub=13.060770035s) [2,0,4] r=-1 lpr=63 pi=[47,63)/1 luod=0'0 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1236.991577148s@ mbc={}] start_peering_interval up [3,5,1] -> [2,0,4], acting [3,5,1] -> [2,0,4], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 03:00:15 localhost ceph-osd[32921]: osd.5 pg_epoch: 63 pg[7.a( v 40'39 (0'0,40'39] local-lis/les=47/48 n=1 ec=43/36 lis/c=47/47 les/c/f=48/48/0 sis=63 pruub=13.060679436s) [2,0,4] r=-1 lpr=63 pi=[47,63)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1236.991577148s@ mbc={}] state: transitioning to Stray Feb 20 03:00:16 localhost ceph-osd[31981]: osd.2 pg_epoch: 64 pg[7.a( v 40'39 (0'0,40'39] local-lis/les=63/64 n=1 ec=43/36 lis/c=47/47 les/c/f=48/48/0 sis=63) [2,0,4] r=0 lpr=63 pi=[47,63)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Feb 20 03:00:17 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 7.a scrub starts Feb 20 03:00:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:00:17 localhost ceph-osd[31981]: log_channel(cluster) log [DBG] : 7.a scrub ok Feb 20 03:00:17 localhost systemd[1]: tmp-crun.cBxd1Y.mount: Deactivated successfully. 
Feb 20 03:00:17 localhost podman[60417]: 2026-02-20 08:00:17.459507019 +0000 UTC m=+0.089710429 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1766032510, vcs-type=git, version=17.1.13, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, batch=17.1_20260112.1, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:00:17 localhost ceph-osd[31981]: osd.2 pg_epoch: 65 pg[7.b( v 40'39 (0'0,40'39] local-lis/les=49/50 n=1 ec=43/36 lis/c=49/49 les/c/f=50/50/0 sis=65 pruub=13.057390213s) [3,1,2] r=2 lpr=65 pi=[49,65)/1 luod=0'0 crt=40'39 mlcod 0'0 active pruub 1243.065307617s@ mbc={}] start_peering_interval up [3,4,2] -> [3,1,2], acting [3,4,2] -> [3,1,2], acting_primary 3 -> 3, up_primary 3 -> 3, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 03:00:17 localhost ceph-osd[31981]: osd.2 pg_epoch: 65 pg[7.b( v 40'39 (0'0,40'39] local-lis/les=49/50 n=1 ec=43/36 lis/c=49/49 les/c/f=50/50/0 sis=65 pruub=13.057217598s) [3,1,2] r=2 lpr=65 pi=[49,65)/1 crt=40'39 mlcod 0'0 unknown NOTIFY pruub 1243.065307617s@ mbc={}] state: transitioning to Stray Feb 20 03:00:17 localhost podman[60417]: 2026-02-20 08:00:17.653857774 +0000 UTC m=+0.284061214 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, 
cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:14Z, version=17.1.13, name=rhosp-rhel9/openstack-qdrouterd, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, tcib_managed=true, io.openshift.expose-services=, url=https://www.redhat.com, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., architecture=x86_64, 
managed_by=tripleo_ansible, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 20 03:00:17 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:00:19 localhost ceph-osd[31981]: osd.2 pg_epoch: 67 pg[7.c( v 40'39 (0'0,40'39] local-lis/les=51/52 n=1 ec=43/36 lis/c=51/51 les/c/f=52/52/0 sis=67 pruub=11.173359871s) [1,3,2] r=2 lpr=67 pi=[51,67)/1 luod=0'0 crt=40'39 mlcod 0'0 active pruub 1243.238159180s@ mbc={}] start_peering_interval up [0,1,2] -> [1,3,2], acting [0,1,2] -> [1,3,2], acting_primary 0 -> 1, up_primary 0 -> 1, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 03:00:19 localhost ceph-osd[31981]: osd.2 pg_epoch: 67 pg[7.c( v 40'39 (0'0,40'39] local-lis/les=51/52 n=1 ec=43/36 lis/c=51/51 les/c/f=52/52/0 sis=67 pruub=11.173268318s) [1,3,2] r=2 lpr=67 pi=[51,67)/1 crt=40'39 mlcod 0'0 unknown NOTIFY pruub 1243.238159180s@ mbc={}] state: transitioning to Stray Feb 20 03:00:21 localhost ceph-osd[31981]: osd.2 pg_epoch: 69 pg[7.d( v 40'39 (0'0,40'39] local-lis/les=53/54 n=2 ec=43/36 lis/c=53/53 les/c/f=54/54/0 sis=69 pruub=15.302712440s) [1,3,5] r=-1 lpr=69 pi=[53,69)/1 crt=40'39 mlcod 0'0 active pruub 1249.421508789s@ mbc={255={}}] start_peering_interval up [2,0,4] -> [1,3,5], acting [2,0,4] -> [1,3,5], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 03:00:21 localhost ceph-osd[31981]: osd.2 pg_epoch: 69 pg[7.d( v 40'39 (0'0,40'39] local-lis/les=53/54 n=2 ec=43/36 lis/c=53/53 les/c/f=54/54/0 sis=69 pruub=15.302552223s) [1,3,5] r=-1 lpr=69 pi=[53,69)/1 crt=40'39 mlcod 0'0 unknown NOTIFY pruub 1249.421508789s@ mbc={}] state: transitioning to Stray Feb 20 03:00:23 localhost ceph-osd[32921]: osd.5 pg_epoch: 69 pg[7.d( empty local-lis/les=0/0 n=0 ec=43/36 lis/c=53/53 les/c/f=54/54/0 sis=69) [1,3,5] r=2 lpr=69 
pi=[53,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Feb 20 03:00:29 localhost ceph-osd[32921]: osd.5 pg_epoch: 71 pg[7.e( v 40'39 (0'0,40'39] local-lis/les=55/56 n=1 ec=43/36 lis/c=55/55 les/c/f=56/56/0 sis=71 pruub=9.165117264s) [3,5,1] r=1 lpr=71 pi=[55,71)/1 luod=0'0 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1247.412109375s@ mbc={}] start_peering_interval up [0,4,5] -> [3,5,1], acting [0,4,5] -> [3,5,1], acting_primary 0 -> 3, up_primary 0 -> 3, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 03:00:29 localhost ceph-osd[32921]: osd.5 pg_epoch: 71 pg[7.e( v 40'39 (0'0,40'39] local-lis/les=55/56 n=1 ec=43/36 lis/c=55/55 les/c/f=56/56/0 sis=71 pruub=9.165009499s) [3,5,1] r=1 lpr=71 pi=[55,71)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1247.412109375s@ mbc={}] state: transitioning to Stray Feb 20 03:00:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 73 pg[7.f( v 40'39 (0'0,40'39] local-lis/les=57/58 n=1 ec=43/36 lis/c=57/57 les/c/f=58/58/0 sis=73 pruub=9.196537971s) [0,5,1] r=1 lpr=73 pi=[57,73)/1 luod=0'0 crt=40'39 mlcod 0'0 active pruub 1249.500732422s@ mbc={}] start_peering_interval up [1,5,3] -> [0,5,1], acting [1,5,3] -> [0,5,1], acting_primary 1 -> 0, up_primary 1 -> 0, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Feb 20 03:00:31 localhost ceph-osd[32921]: osd.5 pg_epoch: 73 pg[7.f( v 40'39 (0'0,40'39] local-lis/les=57/58 n=1 ec=43/36 lis/c=57/57 les/c/f=58/58/0 sis=73 pruub=9.196439743s) [0,5,1] r=1 lpr=73 pi=[57,73)/1 crt=40'39 mlcod 0'0 unknown NOTIFY pruub 1249.500732422s@ mbc={}] state: transitioning to Stray Feb 20 03:00:41 localhost sshd[60524]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:00:42 localhost sshd[60526]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:00:43 localhost sshd[60529]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:00:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:00:48 localhost podman[60531]: 2026-02-20 08:00:48.447156531 +0000 UTC m=+0.083546614 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, version=17.1.13, vcs-type=git, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step1, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, url=https://www.redhat.com, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-qdrouterd, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Feb 20 03:00:48 localhost podman[60531]: 2026-02-20 08:00:48.643136873 +0000 UTC m=+0.279526976 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, release=1766032510, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, batch=17.1_20260112.1, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-type=git, 
io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, config_id=tripleo_step1) Feb 20 03:00:48 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:01:18 localhost sshd[60571]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:01:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:01:19 localhost systemd[1]: tmp-crun.I8kOC8.mount: Deactivated successfully. 
Feb 20 03:01:19 localhost podman[60573]: 2026-02-20 08:01:19.438274123 +0000 UTC m=+0.081962850 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., batch=17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, container_name=metrics_qdr, release=1766032510, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, version=17.1.13, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Feb 20 03:01:19 localhost podman[60573]: 2026-02-20 08:01:19.659795592 +0000 UTC m=+0.303484269 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-qdrouterd, container_name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, version=17.1.13, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 
'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:14Z, batch=17.1_20260112.1, release=1766032510, architecture=x86_64, url=https://www.redhat.com) Feb 20 03:01:19 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. 
Feb 20 03:01:21 localhost sshd[60602]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:01:40 localhost podman[60707]: 2026-02-20 08:01:40.269910798 +0000 UTC m=+0.089041115 container exec 17bcd3d9a6436ea04160ed4c4e04d839e51c8678402dc4e38301f79a98b3dc30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202, name=rhceph, CEPH_POINT_RELEASE=, io.openshift.expose-services=, io.buildah.version=1.42.2, version=7, architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-09T10:25:24Z, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., GIT_CLEAN=True, RELEASE=main, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1770267347, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , ceph=True, com.redhat.component=rhceph-container, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Feb 20 03:01:40 localhost podman[60707]: 2026-02-20 08:01:40.375869755 +0000 UTC m=+0.195000022 container exec_died 17bcd3d9a6436ea04160ed4c4e04d839e51c8678402dc4e38301f79a98b3dc30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 
9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, version=7, io.openshift.expose-services=, GIT_BRANCH=main, RELEASE=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, distribution-scope=public, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, release=1770267347, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph) Feb 20 03:01:47 localhost sshd[60849]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:01:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. 
Feb 20 03:01:50 localhost podman[60850]: 2026-02-20 08:01:50.44702717 +0000 UTC m=+0.082321020 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1766032510, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, managed_by=tripleo_ansible, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, 
name=rhosp-rhel9/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 20 03:01:50 localhost podman[60850]: 2026-02-20 08:01:50.640729245 +0000 UTC m=+0.276023065 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, url=https://www.redhat.com, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:14Z, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, release=1766032510, name=rhosp-rhel9/openstack-qdrouterd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, distribution-scope=public) Feb 20 03:01:50 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:01:55 localhost sshd[60880]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:02:10 localhost sshd[60882]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:02:17 localhost sshd[60884]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:02:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:02:21 localhost systemd[1]: tmp-crun.dNuDea.mount: Deactivated successfully. 
Feb 20 03:02:21 localhost podman[60886]: 2026-02-20 08:02:21.43614262 +0000 UTC m=+0.076887931 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
managed_by=tripleo_ansible, build-date=2026-01-12T22:10:14Z, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., url=https://www.redhat.com, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git) Feb 20 03:02:21 localhost podman[60886]: 2026-02-20 08:02:21.61910551 +0000 UTC m=+0.259850781 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.13, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, batch=17.1_20260112.1, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z) Feb 20 03:02:21 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:02:23 localhost sshd[60914]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:02:37 localhost sshd[60916]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:02:45 localhost sshd[60994]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:02:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. 
Feb 20 03:02:52 localhost podman[60996]: 2026-02-20 08:02:52.44247541 +0000 UTC m=+0.082294728 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., vcs-type=git, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, release=1766032510, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, version=17.1.13, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 20 03:02:52 localhost podman[60996]: 2026-02-20 08:02:52.629559515 +0000 UTC m=+0.269378823 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2026-01-12T22:10:14Z, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, io.openshift.expose-services=, release=1766032510, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., container_name=metrics_qdr, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 20 03:02:52 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:03:10 localhost sshd[61027]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:03:20 localhost sshd[61029]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:03:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. 
Feb 20 03:03:23 localhost podman[61031]: 2026-02-20 08:03:23.433792491 +0000 UTC m=+0.077219508 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.13, build-date=2026-01-12T22:10:14Z, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-qdrouterd, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.5, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 20 03:03:23 localhost podman[61031]: 2026-02-20 08:03:23.642037147 +0000 UTC m=+0.285464224 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:14Z, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., container_name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, name=rhosp-rhel9/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_id=tripleo_step1, build-date=2026-01-12T22:10:14Z, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible) Feb 20 03:03:23 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:03:46 localhost sshd[61136]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:03:48 localhost sshd[61138]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:03:53 localhost sshd[61140]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:03:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:03:54 localhost systemd[1]: tmp-crun.b8Lus6.mount: Deactivated successfully. 
Feb 20 03:03:54 localhost podman[61142]: 2026-02-20 08:03:54.085602687 +0000 UTC m=+0.087028317 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, name=rhosp-rhel9/openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:14Z, release=1766032510, distribution-scope=public, architecture=x86_64, batch=17.1_20260112.1, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.5, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, version=17.1.13, build-date=2026-01-12T22:10:14Z, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Feb 20 03:03:54 localhost podman[61142]: 2026-02-20 08:03:54.297789021 +0000 UTC m=+0.299214671 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., version=17.1.13, release=1766032510, com.redhat.component=openstack-qdrouterd-container, name=rhosp-rhel9/openstack-qdrouterd, build-date=2026-01-12T22:10:14Z, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, 
container_name=metrics_qdr, managed_by=tripleo_ansible, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 20 03:03:54 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:03:55 localhost sshd[61171]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:04:23 localhost sshd[61173]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:04:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. 
Feb 20 03:04:24 localhost podman[61175]: 2026-02-20 08:04:24.421869744 +0000 UTC m=+0.080964860 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, io.openshift.expose-services=, release=1766032510, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20260112.1, config_id=tripleo_step1, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:14Z, architecture=x86_64, distribution-scope=public, name=rhosp-rhel9/openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, build-date=2026-01-12T22:10:14Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, container_name=metrics_qdr) Feb 20 03:04:24 localhost podman[61175]: 2026-02-20 08:04:24.64788171 +0000 UTC m=+0.306976846 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z, version=17.1.13, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, distribution-scope=public, url=https://www.redhat.com, build-date=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.5, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, vcs-type=git, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20260112.1, container_name=metrics_qdr, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:04:24 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. 
Feb 20 03:04:30 localhost python3[61251]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 03:04:30 localhost sshd[61252]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:04:31 localhost python3[61298]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771574670.3636532-99233-156549039193464/source _original_basename=tmpft0iqrz1 follow=False checksum=62439dd24dde40c90e7a39f6a1b31cc6061fe59b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:04:32 localhost python3[61328]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 20 03:04:33 localhost ansible-async_wrapper.py[61500]: Invoked with 493901332154 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1771574673.3852475-99452-161086309852680/AnsiballZ_command.py _ Feb 20 03:04:33 localhost ansible-async_wrapper.py[61503]: Starting module and watcher Feb 20 03:04:33 localhost ansible-async_wrapper.py[61503]: Start watching 61504 (3600) Feb 20 03:04:33 localhost ansible-async_wrapper.py[61504]: Start module (61504) Feb 20 03:04:33 localhost ansible-async_wrapper.py[61500]: Return async_wrapper task started. Feb 20 03:04:34 localhost python3[61524]: ansible-ansible.legacy.async_status Invoked with jid=493901332154.61500 mode=status _async_dir=/tmp/.ansible_async Feb 20 03:04:37 localhost puppet-user[61522]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. 
It should be converted to version 5 Feb 20 03:04:37 localhost puppet-user[61522]: (file: /etc/puppet/hiera.yaml) Feb 20 03:04:37 localhost puppet-user[61522]: Warning: Undefined variable '::deploy_config_name'; Feb 20 03:04:37 localhost puppet-user[61522]: (file & line not available) Feb 20 03:04:37 localhost puppet-user[61522]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Feb 20 03:04:37 localhost puppet-user[61522]: (file & line not available) Feb 20 03:04:37 localhost puppet-user[61522]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8) Feb 20 03:04:37 localhost puppet-user[61522]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69) Feb 20 03:04:37 localhost puppet-user[61522]: Notice: Compiled catalog for np0005625202.localdomain in environment production in 0.12 seconds Feb 20 03:04:37 localhost puppet-user[61522]: Notice: Applied catalog in 0.04 seconds Feb 20 03:04:37 localhost puppet-user[61522]: Application: Feb 20 03:04:37 localhost puppet-user[61522]: Initial environment: production Feb 20 03:04:37 localhost puppet-user[61522]: Converged environment: production Feb 20 03:04:37 localhost puppet-user[61522]: Run mode: user Feb 20 03:04:37 localhost puppet-user[61522]: Changes: Feb 20 03:04:37 localhost puppet-user[61522]: Events: Feb 20 03:04:37 localhost puppet-user[61522]: Resources: Feb 20 03:04:37 localhost puppet-user[61522]: Total: 10 Feb 20 03:04:37 localhost puppet-user[61522]: Time: Feb 20 03:04:37 localhost puppet-user[61522]: Schedule: 0.00 Feb 20 03:04:37 localhost puppet-user[61522]: File: 0.00 Feb 20 03:04:37 localhost puppet-user[61522]: Exec: 0.01 Feb 20 03:04:37 localhost puppet-user[61522]: Augeas: 0.01 Feb 20 03:04:37 localhost puppet-user[61522]: Transaction evaluation: 
0.04 Feb 20 03:04:37 localhost puppet-user[61522]: Catalog application: 0.04 Feb 20 03:04:37 localhost puppet-user[61522]: Config retrieval: 0.16 Feb 20 03:04:37 localhost puppet-user[61522]: Last run: 1771574677 Feb 20 03:04:37 localhost puppet-user[61522]: Filebucket: 0.00 Feb 20 03:04:37 localhost puppet-user[61522]: Total: 0.05 Feb 20 03:04:37 localhost puppet-user[61522]: Version: Feb 20 03:04:37 localhost puppet-user[61522]: Config: 1771574677 Feb 20 03:04:37 localhost puppet-user[61522]: Puppet: 7.10.0 Feb 20 03:04:37 localhost ansible-async_wrapper.py[61504]: Module complete (61504) Feb 20 03:04:38 localhost ansible-async_wrapper.py[61503]: Done in kid B. Feb 20 03:04:44 localhost python3[61651]: ansible-ansible.legacy.async_status Invoked with jid=493901332154.61500 mode=status _async_dir=/tmp/.ansible_async Feb 20 03:04:45 localhost python3[61697]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Feb 20 03:04:45 localhost python3[61733]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 20 03:04:46 localhost python3[61795]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 03:04:46 localhost python3[61828]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmpqqi4e0o4 recurse=False state=file 
path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Feb 20 03:04:47 localhost python3[61858]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:04:48 localhost python3[61961]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None Feb 20 03:04:49 localhost python3[61980]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:04:50 localhost python3[62012]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 
Feb 20 03:04:51 localhost python3[62062]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 03:04:51 localhost python3[62080]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 03:04:52 localhost python3[62142]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 03:04:52 localhost python3[62160]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 03:04:52 localhost python3[62222]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 03:04:53 localhost python3[62240]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 03:04:53 localhost python3[62302]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 03:04:54 localhost python3[62320]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 03:04:54 localhost python3[62350]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 03:04:54 localhost systemd[1]: Reloading.
Feb 20 03:04:54 localhost systemd-sysv-generator[62377]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 03:04:54 localhost systemd-rc-local-generator[62372]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 03:04:54 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 03:04:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.
Feb 20 03:04:54 localhost systemd[1]: tmp-crun.1zsVMc.mount: Deactivated successfully.
Feb 20 03:04:54 localhost podman[62387]: 2026-02-20 08:04:54.987984441 +0000 UTC m=+0.091907372 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2026-01-12T22:10:14Z, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, tcib_managed=true, managed_by=tripleo_ansible, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, url=https://www.redhat.com, name=rhosp-rhel9/openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=metrics_qdr, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, release=1766032510, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, config_id=tripleo_step1, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY':
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:04:55 localhost podman[62387]: 2026-02-20 08:04:55.182837358 +0000 UTC m=+0.286760329 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, url=https://www.redhat.com, build-date=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-qdrouterd, vcs-type=git, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_id=tripleo_step1) Feb 20 03:04:55 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. 
Feb 20 03:04:55 localhost python3[62463]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 03:04:55 localhost python3[62481]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 03:04:56 localhost python3[62543]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 03:04:56 localhost python3[62561]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 03:04:57 localhost python3[62591]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 03:04:57 localhost systemd[1]: Reloading.
Feb 20 03:04:57 localhost systemd-rc-local-generator[62613]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 03:04:57 localhost systemd-sysv-generator[62619]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 03:04:57 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 03:04:57 localhost systemd[1]: Starting Create netns directory...
Feb 20 03:04:57 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb 20 03:04:57 localhost systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb 20 03:04:57 localhost systemd[1]: Finished Create netns directory.
Feb 20 03:04:58 localhost python3[62648]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6
Feb 20 03:05:00 localhost python3[62706]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step3 config_dir=/var/lib/tripleo-config/container-startup-config/step_3 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False
Feb 20 03:05:00 localhost podman[62865]: 2026-02-20 08:05:00.592704201 +0000 UTC m=+0.062246686 container create 9ed6e63dfefafc84ffc96905c6aabac3c27fb1a354f81c2521ed930f00af6929 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_statedir_owner, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute,
architecture=x86_64, io.openshift.expose-services=, vcs-type=git, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, vendor=Red Hat, Inc., managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 20 03:05:00 localhost podman[62867]: 2026-02-20 08:05:00.627451968 +0000 UTC m=+0.092010195 container create e882aec8ac81713226433fb446c87955abd8068bc0df9ee27ea66e7dbdebfa2e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, 
org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, org.opencontainers.image.created=2026-01-12T23:31:49Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, com.redhat.component=openstack-nova-libvirt-container, container_name=nova_virtlogd_wrapper, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-libvirt, vcs-type=git, vendor=Red Hat, Inc., build-date=2026-01-12T23:31:49Z, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, url=https://www.redhat.com, batch=17.1_20260112.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt) Feb 20 03:05:00 localhost systemd[1]: Started libpod-conmon-9ed6e63dfefafc84ffc96905c6aabac3c27fb1a354f81c2521ed930f00af6929.scope. Feb 20 03:05:00 localhost systemd[1]: Started libcrun container. 
Feb 20 03:05:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cdfebf73fa27663ea59e76077217565890d2529b0a0ce1451baa620ea47f0d4/merged/container-config-scripts supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cdfebf73fa27663ea59e76077217565890d2529b0a0ce1451baa620ea47f0d4/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cdfebf73fa27663ea59e76077217565890d2529b0a0ce1451baa620ea47f0d4/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:00 localhost podman[62865]: 2026-02-20 08:05:00.651597473 +0000 UTC m=+0.121139968 container init 9ed6e63dfefafc84ffc96905c6aabac3c27fb1a354f81c2521ed930f00af6929 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, distribution-scope=public, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, container_name=nova_statedir_owner, version=17.1.13, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_id=tripleo_step3, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': 
'/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, name=rhosp-rhel9/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-type=git, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:05:00 localhost systemd[1]: Started libpod-conmon-e882aec8ac81713226433fb446c87955abd8068bc0df9ee27ea66e7dbdebfa2e.scope. 
Feb 20 03:05:00 localhost podman[62865]: 2026-02-20 08:05:00.660217221 +0000 UTC m=+0.129759716 container start 9ed6e63dfefafc84ffc96905c6aabac3c27fb1a354f81c2521ed930f00af6929 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_statedir_owner, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step3, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, vendor=Red Hat, Inc., config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, build-date=2026-01-12T23:32:04Z, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team) Feb 20 03:05:00 localhost podman[62865]: 2026-02-20 08:05:00.660460327 +0000 UTC m=+0.130002852 container attach 9ed6e63dfefafc84ffc96905c6aabac3c27fb1a354f81c2521ed930f00af6929 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, vendor=Red Hat, Inc., 
version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=nova_statedir_owner, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Feb 20 03:05:00 localhost podman[62865]: 2026-02-20 08:05:00.562240032 +0000 UTC m=+0.031782527 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Feb 20 03:05:00 localhost podman[62884]: 2026-02-20 08:05:00.677798235 +0000 UTC m=+0.131590016 container create 64603627f45c0b5ecc8d6ff4d79eef5250527e1edfa739afb073a3d30995aedf (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, io.openshift.expose-services=, batch=17.1_20260112.1, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-ipmi, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:30Z, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, config_id=tripleo_step3, managed_by=tripleo_ansible, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, io.buildah.version=1.41.5, build-date=2026-01-12T23:07:30Z, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ceilometer-ipmi, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ceilometer_init_log, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 03:05:00 localhost systemd[1]: Started libcrun container.
Feb 20 03:05:00 localhost podman[62884]: 2026-02-20 08:05:00.583030034 +0000 UTC m=+0.036821825 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1
Feb 20 03:05:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/341ab3ab9b7393cc993bae384b639c3b914f3b8ebf8717f3f9f4cc78c6224645/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff)
Feb 20 03:05:00 localhost podman[62867]: 2026-02-20 08:05:00.583677122 +0000 UTC m=+0.048235399 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Feb 20 03:05:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/341ab3ab9b7393cc993bae384b639c3b914f3b8ebf8717f3f9f4cc78c6224645/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb 20 03:05:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/341ab3ab9b7393cc993bae384b639c3b914f3b8ebf8717f3f9f4cc78c6224645/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff)
Feb 20 03:05:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/341ab3ab9b7393cc993bae384b639c3b914f3b8ebf8717f3f9f4cc78c6224645/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Feb 20 03:05:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/341ab3ab9b7393cc993bae384b639c3b914f3b8ebf8717f3f9f4cc78c6224645/merged/var/cache/libvirt supports timestamps
until 2038 (0x7fffffff) Feb 20 03:05:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/341ab3ab9b7393cc993bae384b639c3b914f3b8ebf8717f3f9f4cc78c6224645/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/341ab3ab9b7393cc993bae384b639c3b914f3b8ebf8717f3f9f4cc78c6224645/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:00 localhost podman[62867]: 2026-02-20 08:05:00.692798837 +0000 UTC m=+0.157357064 container init e882aec8ac81713226433fb446c87955abd8068bc0df9ee27ea66e7dbdebfa2e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, url=https://www.redhat.com, io.buildah.version=1.41.5, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-nova-libvirt, architecture=x86_64, com.redhat.component=openstack-nova-libvirt-container, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, maintainer=OpenStack TripleO Team, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.13, container_name=nova_virtlogd_wrapper, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:31:49Z, vendor=Red Hat, Inc., io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:31:49Z, tcib_managed=true, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt) Feb 20 03:05:00 localhost systemd[1]: 
libpod-9ed6e63dfefafc84ffc96905c6aabac3c27fb1a354f81c2521ed930f00af6929.scope: Deactivated successfully. Feb 20 03:05:00 localhost podman[62867]: 2026-02-20 08:05:00.702822114 +0000 UTC m=+0.167380341 container start e882aec8ac81713226433fb446c87955abd8068bc0df9ee27ea66e7dbdebfa2e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, tcib_managed=true, vendor=Red Hat, Inc., build-date=2026-01-12T23:31:49Z, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:31:49Z, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, architecture=x86_64, com.redhat.component=openstack-nova-libvirt-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, release=1766032510, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, container_name=nova_virtlogd_wrapper, name=rhosp-rhel9/openstack-nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 20 03:05:00 localhost podman[62865]: 2026-02-20 08:05:00.704119579 +0000 UTC m=+0.173662114 container died 9ed6e63dfefafc84ffc96905c6aabac3c27fb1a354f81c2521ed930f00af6929 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': 
{'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, vcs-type=git, config_id=tripleo_step3, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_statedir_owner, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, distribution-scope=public, version=17.1.13, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:32:04Z, batch=17.1_20260112.1) Feb 20 03:05:00 localhost podman[62900]: 2026-02-20 08:05:00.710998359 +0000 UTC m=+0.152052288 container create 7402e0a6ee328893eafa8b3ed7230f3b64758a668878ed2570df81e90972a451 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, 
build-date=2026-01-12T22:10:09Z, summary=Red Hat OpenStack Platform 17.1 rsyslog, url=https://www.redhat.com, name=rhosp-rhel9/openstack-rsyslog, managed_by=tripleo_ansible, architecture=x86_64, release=1766032510, io.buildah.version=1.41.5, container_name=rsyslog, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:09Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, config_id=tripleo_step3, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ded727e639ed8db75a0b90424d424624'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13) Feb 20 03:05:00 localhost systemd[1]: Started libpod-conmon-64603627f45c0b5ecc8d6ff4d79eef5250527e1edfa739afb073a3d30995aedf.scope. Feb 20 03:05:00 localhost podman[62900]: 2026-02-20 08:05:00.611052446 +0000 UTC m=+0.052106365 image pull registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 Feb 20 03:05:00 localhost python3[62706]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtlogd_wrapper --cgroupns=host --conmon-pidfile /run/nova_virtlogd_wrapper.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=ca9e756af36a4b8ed088db0b68d5c381 --label config_id=tripleo_step3 --label container_name=nova_virtlogd_wrapper --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtlogd_wrapper.log --network host --pid host --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux 
--volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Feb 20 03:05:00 localhost systemd[1]: Started libcrun container. Feb 20 03:05:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16d999c3809e8040a95b8f0394469110e9b759943810a053ce806766b4a40d89/merged/var/log/ceilometer supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:00 localhost systemd[1]: Started libpod-conmon-7402e0a6ee328893eafa8b3ed7230f3b64758a668878ed2570df81e90972a451.scope. 
Feb 20 03:05:00 localhost podman[62884]: 2026-02-20 08:05:00.735894155 +0000 UTC m=+0.189685966 container init 64603627f45c0b5ecc8d6ff4d79eef5250527e1edfa739afb073a3d30995aedf (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, tcib_managed=true, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, version=17.1.13, release=1766032510, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, container_name=ceilometer_init_log, build-date=2026-01-12T23:07:30Z) Feb 20 03:05:00 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, 
ignoring. Feb 20 03:05:00 localhost podman[62884]: 2026-02-20 08:05:00.74298841 +0000 UTC m=+0.196780231 container start 64603627f45c0b5ecc8d6ff4d79eef5250527e1edfa739afb073a3d30995aedf (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, config_id=tripleo_step3, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_init_log, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ceilometer-ipmi, architecture=x86_64, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1766032510, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.created=2026-01-12T23:07:30Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., build-date=2026-01-12T23:07:30Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, url=https://www.redhat.com, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5) Feb 20 03:05:00 localhost python3[62706]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: 
podman run --name ceilometer_init_log --conmon-pidfile /run/ceilometer_init_log.pid --detach=True --label config_id=tripleo_step3 --label container_name=ceilometer_init_log --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ceilometer_init_log.log --network none --user root --volume /var/log/containers/ceilometer:/var/log/ceilometer:z registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 /bin/bash -c chown -R ceilometer:ceilometer /var/log/ceilometer Feb 20 03:05:00 localhost systemd[1]: libpod-64603627f45c0b5ecc8d6ff4d79eef5250527e1edfa739afb073a3d30995aedf.scope: Deactivated successfully. Feb 20 03:05:00 localhost systemd[1]: Started libcrun container. Feb 20 03:05:00 localhost systemd[1]: Created slice User Slice of UID 0. Feb 20 03:05:00 localhost systemd[1]: Starting User Runtime Directory /run/user/0... 
Feb 20 03:05:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d40286894c52bfae77db2c2efc77cdbddb170a873804d5711706b242c8ade341/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d40286894c52bfae77db2c2efc77cdbddb170a873804d5711706b242c8ade341/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:00 localhost podman[62900]: 2026-02-20 08:05:00.781333696 +0000 UTC m=+0.222387615 container init 7402e0a6ee328893eafa8b3ed7230f3b64758a668878ed2570df81e90972a451 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, tcib_managed=true, com.redhat.component=openstack-rsyslog-container, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:09Z, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:09Z, container_name=rsyslog, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-rsyslog, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ded727e639ed8db75a0b90424d424624'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, vendor=Red Hat, Inc., url=https://www.redhat.com) Feb 20 03:05:00 localhost systemd[1]: Finished User Runtime Directory /run/user/0. Feb 20 03:05:00 localhost podman[62891]: 2026-02-20 08:05:00.69253315 +0000 UTC m=+0.139601646 image pull registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 Feb 20 03:05:00 localhost systemd[1]: Starting User Manager for UID 0... 
Feb 20 03:05:00 localhost podman[62980]: 2026-02-20 08:05:00.799660562 +0000 UTC m=+0.041718081 container died 64603627f45c0b5ecc8d6ff4d79eef5250527e1edfa739afb073a3d30995aedf (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, container_name=ceilometer_init_log, vcs-type=git, version=17.1.13, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, config_id=tripleo_step3, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.created=2026-01-12T23:07:30Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, distribution-scope=public, build-date=2026-01-12T23:07:30Z) Feb 20 03:05:00 localhost podman[62891]: 2026-02-20 08:05:00.823119647 +0000 UTC m=+0.270188123 container 
create ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://www.redhat.com, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=collectd, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, batch=17.1_20260112.1) Feb 20 03:05:00 localhost podman[62900]: 2026-02-20 08:05:00.849026821 +0000 UTC m=+0.290080740 container start 7402e0a6ee328893eafa8b3ed7230f3b64758a668878ed2570df81e90972a451 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-rsyslog-container, description=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, managed_by=tripleo_ansible, distribution-scope=public, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:09Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ded727e639ed8db75a0b90424d424624'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, build-date=2026-01-12T22:10:09Z, config_id=tripleo_step3, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=rsyslog, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-rsyslog, version=17.1.13) Feb 20 03:05:00 localhost python3[62706]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name rsyslog --conmon-pidfile /run/rsyslog.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=ded727e639ed8db75a0b90424d424624 --label config_id=tripleo_step3 --label container_name=rsyslog --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ded727e639ed8db75a0b90424d424624'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/rsyslog.log --network host --privileged=True --security-opt label=disable --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro --volume /var/log/containers:/var/log/containers:ro --volume /var/log/containers/rsyslog:/var/log/rsyslog:rw,z --volume /var/log:/var/log/host:ro --volume /var/lib/rsyslog.container:/var/lib/rsyslog:rw,z 
registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 Feb 20 03:05:00 localhost systemd[1]: libpod-7402e0a6ee328893eafa8b3ed7230f3b64758a668878ed2570df81e90972a451.scope: Deactivated successfully. Feb 20 03:05:00 localhost podman[62948]: 2026-02-20 08:05:00.899419569 +0000 UTC m=+0.185001537 container cleanup 9ed6e63dfefafc84ffc96905c6aabac3c27fb1a354f81c2521ed930f00af6929 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, name=rhosp-rhel9/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, container_name=nova_statedir_owner, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, build-date=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, 
konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.41.5, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Feb 20 03:05:00 localhost systemd[1]: Started libpod-conmon-ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.scope. Feb 20 03:05:00 localhost systemd[1]: libpod-conmon-9ed6e63dfefafc84ffc96905c6aabac3c27fb1a354f81c2521ed930f00af6929.scope: Deactivated successfully. Feb 20 03:05:00 localhost python3[62706]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_statedir_owner --conmon-pidfile /run/nova_statedir_owner.pid --detach=False --env NOVA_STATEDIR_OWNERSHIP_SKIP=triliovault-mounts --env TRIPLEO_DEPLOY_IDENTIFIER=1771573229 --env __OS_DEBUG=true --label config_id=tripleo_step3 --label container_name=nova_statedir_owner --label managed_by=tripleo_ansible --label config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_statedir_owner.log --network none --privileged=False --security-opt label=disable --user root --volume /var/lib/nova:/var/lib/nova:shared --volume 
/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/container-config-scripts:/container-config-scripts:z registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 /container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py Feb 20 03:05:00 localhost systemd[1]: Started libcrun container. Feb 20 03:05:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1cf061bfb4d6aba3459a5aa3c06a6d132a0f8d24850ac6b70fc836ee5031ed8/merged/scripts supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1cf061bfb4d6aba3459a5aa3c06a6d132a0f8d24850ac6b70fc836ee5031ed8/merged/var/log/collectd supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:00 localhost podman[63039]: 2026-02-20 08:05:00.927622215 +0000 UTC m=+0.054183513 container died 7402e0a6ee328893eafa8b3ed7230f3b64758a668878ed2570df81e90972a451 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.5, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2026-01-12T22:10:09Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step3, url=https://www.redhat.com, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-rsyslog, io.openshift.expose-services=, com.redhat.component=openstack-rsyslog-container, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, 
org.opencontainers.image.created=2026-01-12T22:10:09Z, container_name=rsyslog, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ded727e639ed8db75a0b90424d424624'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, tcib_managed=true, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog) Feb 20 03:05:00 localhost systemd[63001]: Queued start job for default target Main User Target. Feb 20 03:05:00 localhost systemd[63001]: Created slice User Application Slice. Feb 20 03:05:00 localhost systemd[63001]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system). Feb 20 03:05:00 localhost systemd[63001]: Started Daily Cleanup of User's Temporary Directories. Feb 20 03:05:00 localhost systemd[63001]: Reached target Paths. 
Feb 20 03:05:00 localhost systemd[63001]: Reached target Timers. Feb 20 03:05:00 localhost systemd[63001]: Starting D-Bus User Message Bus Socket... Feb 20 03:05:00 localhost systemd[63001]: Starting Create User's Volatile Files and Directories... Feb 20 03:05:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:05:00 localhost systemd[63001]: Listening on D-Bus User Message Bus Socket. Feb 20 03:05:00 localhost systemd[63001]: Reached target Sockets. Feb 20 03:05:00 localhost podman[62891]: 2026-02-20 08:05:00.952544692 +0000 UTC m=+0.399613258 container init ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20260112.1, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, version=17.1.13, build-date=2026-01-12T22:10:15Z, release=1766032510, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, name=rhosp-rhel9/openstack-collectd, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, io.buildah.version=1.41.5) Feb 20 03:05:00 localhost systemd[63001]: Finished Create User's Volatile Files and Directories. Feb 20 03:05:00 localhost systemd[63001]: Reached target Basic System. Feb 20 03:05:00 localhost systemd[63001]: Reached target Main User Target. Feb 20 03:05:00 localhost systemd[63001]: Startup finished in 127ms. Feb 20 03:05:00 localhost systemd[1]: Started User Manager for UID 0. Feb 20 03:05:00 localhost systemd[1]: Started Session c1 of User root. Feb 20 03:05:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. 
Feb 20 03:05:00 localhost podman[62891]: 2026-02-20 08:05:00.986251 +0000 UTC m=+0.433319476 container start ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, container_name=collectd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., tcib_managed=true, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-collectd, com.redhat.component=openstack-collectd-container, version=17.1.13, distribution-scope=public) Feb 20 03:05:00 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring. Feb 20 03:05:00 localhost python3[62706]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name collectd --cap-add IPC_LOCK --conmon-pidfile /run/collectd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=da9a0dc7b40588672419e3ce10063e21 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step3 --label container_name=collectd --label managed_by=tripleo_ansible --label config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/collectd.log --memory 512m --network host --pid host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro --volume /var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/collectd:/var/log/collectd:rw,z --volume /var/lib/container-config-scripts:/config-scripts:ro --volume /var/lib/container-user-scripts:/scripts:z --volume /run:/run:rw --volume /sys/fs/cgroup:/sys/fs/cgroup:ro registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 Feb 20 03:05:00 localhost 
systemd[1]: Started Session c2 of User root. Feb 20 03:05:00 localhost podman[63047]: 2026-02-20 08:05:00.995429273 +0000 UTC m=+0.118165076 container cleanup 7402e0a6ee328893eafa8b3ed7230f3b64758a668878ed2570df81e90972a451 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:09Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, url=https://www.redhat.com, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:09Z, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, batch=17.1_20260112.1, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ded727e639ed8db75a0b90424d424624'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp-rhel9/openstack-rsyslog, com.redhat.component=openstack-rsyslog-container) Feb 20 03:05:00 localhost systemd[1]: libpod-conmon-7402e0a6ee328893eafa8b3ed7230f3b64758a668878ed2570df81e90972a451.scope: Deactivated successfully. Feb 20 03:05:01 localhost podman[62981]: 2026-02-20 08:05:01.03778396 +0000 UTC m=+0.279002266 container cleanup 64603627f45c0b5ecc8d6ff4d79eef5250527e1edfa739afb073a3d30995aedf (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, container_name=ceilometer_init_log, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.13, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, 
config_id=tripleo_step3, tcib_managed=true, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, release=1766032510, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:07:30Z, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Feb 20 03:05:01 localhost systemd[1]: libpod-conmon-64603627f45c0b5ecc8d6ff4d79eef5250527e1edfa739afb073a3d30995aedf.scope: Deactivated successfully. Feb 20 03:05:01 localhost systemd[1]: session-c1.scope: Deactivated successfully. Feb 20 03:05:01 localhost systemd[1]: session-c2.scope: Deactivated successfully. 
Feb 20 03:05:01 localhost podman[63097]: 2026-02-20 08:05:01.090147822 +0000 UTC m=+0.091820771 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=starting, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, release=1766032510, container_name=collectd, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, name=rhosp-rhel9/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.41.5, batch=17.1_20260112.1) Feb 20 03:05:01 localhost podman[63097]: 2026-02-20 08:05:01.09916865 +0000 UTC m=+0.100841559 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, container_name=collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, architecture=x86_64, name=rhosp-rhel9/openstack-collectd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5) Feb 20 03:05:01 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. 
Feb 20 03:05:01 localhost podman[63243]: 2026-02-20 08:05:01.41284755 +0000 UTC m=+0.084160059 container create 7dd5f4be3e5e2569a3059350d7d863334f2cc9c53a21705eac4bd6b527e94424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, org.opencontainers.image.created=2026-01-12T23:31:49Z, name=rhosp-rhel9/openstack-nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1766032510, distribution-scope=public, vcs-type=git, tcib_managed=true, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:31:49Z, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt) Feb 20 03:05:01 localhost podman[63256]: 2026-02-20 08:05:01.450997671 +0000 UTC m=+0.086152514 container create b23b5eebadef0ba319e1cd31d2de5e8c48225ad3018982ae9de2c20fa5a72c24 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, 
release=1766032510, com.redhat.component=openstack-nova-libvirt-container, architecture=x86_64, version=17.1.13, config_id=tripleo_step3, io.openshift.expose-services=, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:31:49Z, name=rhosp-rhel9/openstack-nova-libvirt, container_name=nova_virtsecretd, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, build-date=2026-01-12T23:31:49Z, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, distribution-scope=public, io.buildah.version=1.41.5, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', 
'/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:05:01 localhost podman[63243]: 2026-02-20 08:05:01.374436512 +0000 UTC m=+0.045749101 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Feb 20 03:05:01 localhost systemd[1]: Started libpod-conmon-b23b5eebadef0ba319e1cd31d2de5e8c48225ad3018982ae9de2c20fa5a72c24.scope. Feb 20 03:05:01 localhost systemd[1]: Started libpod-conmon-7dd5f4be3e5e2569a3059350d7d863334f2cc9c53a21705eac4bd6b527e94424.scope. Feb 20 03:05:01 localhost podman[63256]: 2026-02-20 08:05:01.415984137 +0000 UTC m=+0.051138970 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Feb 20 03:05:01 localhost systemd[1]: Started libcrun container. Feb 20 03:05:01 localhost systemd[1]: Started libcrun container. 
Feb 20 03:05:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a6cde03305078609256ba73229e013215e0b6b3bab4afbfd99df235fae8cd56/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a6cde03305078609256ba73229e013215e0b6b3bab4afbfd99df235fae8cd56/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a6cde03305078609256ba73229e013215e0b6b3bab4afbfd99df235fae8cd56/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a6cde03305078609256ba73229e013215e0b6b3bab4afbfd99df235fae8cd56/merged/var/log/swtpm/libvirt supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdc0bde398443eb2e66321c13dbbf9e41b2982bc548dca05f2f16736d23fdffa/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdc0bde398443eb2e66321c13dbbf9e41b2982bc548dca05f2f16736d23fdffa/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdc0bde398443eb2e66321c13dbbf9e41b2982bc548dca05f2f16736d23fdffa/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdc0bde398443eb2e66321c13dbbf9e41b2982bc548dca05f2f16736d23fdffa/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:01 localhost kernel: xfs filesystem being remounted at 
/var/lib/containers/storage/overlay/cdc0bde398443eb2e66321c13dbbf9e41b2982bc548dca05f2f16736d23fdffa/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdc0bde398443eb2e66321c13dbbf9e41b2982bc548dca05f2f16736d23fdffa/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:01 localhost podman[63243]: 2026-02-20 08:05:01.529105472 +0000 UTC m=+0.200417951 container init 7dd5f4be3e5e2569a3059350d7d863334f2cc9c53a21705eac4bd6b527e94424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:31:49Z, vcs-type=git, name=rhosp-rhel9/openstack-nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.openshift.expose-services=, architecture=x86_64, build-date=2026-01-12T23:31:49Z, batch=17.1_20260112.1, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-libvirt-container, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-libvirt) Feb 20 03:05:01 localhost kernel: xfs filesystem being remounted at 
/var/lib/containers/storage/overlay/cdc0bde398443eb2e66321c13dbbf9e41b2982bc548dca05f2f16736d23fdffa/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:01 localhost podman[63256]: 2026-02-20 08:05:01.534197593 +0000 UTC m=+0.169352406 container init b23b5eebadef0ba319e1cd31d2de5e8c48225ad3018982ae9de2c20fa5a72c24 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, io.openshift.expose-services=, url=https://www.redhat.com, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.5, version=17.1.13, build-date=2026-01-12T23:31:49Z, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:31:49Z, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_virtsecretd, distribution-scope=public, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, com.redhat.component=openstack-nova-libvirt-container, architecture=x86_64, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:05:01 localhost podman[63243]: 2026-02-20 08:05:01.536112825 +0000 UTC m=+0.207425314 container start 7dd5f4be3e5e2569a3059350d7d863334f2cc9c53a21705eac4bd6b527e94424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, org.opencontainers.image.created=2026-01-12T23:31:49Z, batch=17.1_20260112.1, url=https://www.redhat.com, io.buildah.version=1.41.5, tcib_managed=true, 
com.redhat.component=openstack-nova-libvirt-container, vendor=Red Hat, Inc., version=17.1.13, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, name=rhosp-rhel9/openstack-nova-libvirt, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:31:49Z) Feb 20 03:05:01 localhost podman[63256]: 2026-02-20 08:05:01.546683916 +0000 UTC m=+0.181838749 container start b23b5eebadef0ba319e1cd31d2de5e8c48225ad3018982ae9de2c20fa5a72c24 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.5, version=17.1.13, distribution-scope=public, vcs-type=git, url=https://www.redhat.com, container_name=nova_virtsecretd, com.redhat.component=openstack-nova-libvirt-container, build-date=2026-01-12T23:31:49Z, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_id=tripleo_step3, name=rhosp-rhel9/openstack-nova-libvirt, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-nova-libvirt, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:31:49Z, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt) Feb 20 03:05:01 localhost python3[62706]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtsecretd --cgroupns=host --conmon-pidfile /run/nova_virtsecretd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=ca9e756af36a4b8ed088db0b68d5c381 --label config_id=tripleo_step3 --label container_name=nova_virtsecretd --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', 
'/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtsecretd.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume 
/var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Feb 20 03:05:01 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring. Feb 20 03:05:01 localhost systemd[1]: Started Session c3 of User root. Feb 20 03:05:01 localhost systemd[1]: var-lib-containers-storage-overlay-9cdfebf73fa27663ea59e76077217565890d2529b0a0ce1451baa620ea47f0d4-merged.mount: Deactivated successfully. Feb 20 03:05:01 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9ed6e63dfefafc84ffc96905c6aabac3c27fb1a354f81c2521ed930f00af6929-userdata-shm.mount: Deactivated successfully. Feb 20 03:05:01 localhost systemd[1]: session-c3.scope: Deactivated successfully. 
Feb 20 03:05:01 localhost podman[63403]: 2026-02-20 08:05:01.927636599 +0000 UTC m=+0.063710236 container create 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:34:43Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, io.openshift.expose-services=, version=17.1.13, name=rhosp-rhel9/openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:34:43Z, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', 
'/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, managed_by=tripleo_ansible, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid) Feb 20 03:05:01 localhost systemd[1]: Started libpod-conmon-47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.scope. Feb 20 03:05:01 localhost podman[63412]: 2026-02-20 08:05:01.965239465 +0000 UTC m=+0.086062402 container create 9763b9177aec190d0435bd854c04dfec35cfa0536c0811dafb312c6fb09b2e69 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, vcs-type=git, version=17.1.13, com.redhat.component=openstack-nova-libvirt-container, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:31:49Z, name=rhosp-rhel9/openstack-nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, managed_by=tripleo_ansible, config_id=tripleo_step3, container_name=nova_virtnodedevd, architecture=x86_64, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 
'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, vendor=Red Hat, Inc., tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:31:49Z, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510) Feb 20 03:05:01 localhost systemd[1]: Started libcrun container. Feb 20 03:05:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87199cb72dc7cf4a94d875d451b802efef1cfd046c063c517c6aeb21861ed940/merged/etc/target supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87199cb72dc7cf4a94d875d451b802efef1cfd046c063c517c6aeb21861ed940/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:01 localhost systemd[1]: Started libpod-conmon-9763b9177aec190d0435bd854c04dfec35cfa0536c0811dafb312c6fb09b2e69.scope. Feb 20 03:05:01 localhost podman[63403]: 2026-02-20 08:05:01.898977929 +0000 UTC m=+0.035051566 image pull registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 Feb 20 03:05:02 localhost systemd[1]: Started libcrun container. Feb 20 03:05:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. 
Feb 20 03:05:02 localhost podman[63403]: 2026-02-20 08:05:02.001062301 +0000 UTC m=+0.137135938 container init 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., tcib_managed=true, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-type=git, url=https://www.redhat.com, build-date=2026-01-12T22:34:43Z, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=iscsid, 
io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:05:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e6d93ebb3c8cd5a1b45cdafd934ae5970ce3fd4a24326de36bd3ca078a4ea52/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e6d93ebb3c8cd5a1b45cdafd934ae5970ce3fd4a24326de36bd3ca078a4ea52/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e6d93ebb3c8cd5a1b45cdafd934ae5970ce3fd4a24326de36bd3ca078a4ea52/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e6d93ebb3c8cd5a1b45cdafd934ae5970ce3fd4a24326de36bd3ca078a4ea52/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e6d93ebb3c8cd5a1b45cdafd934ae5970ce3fd4a24326de36bd3ca078a4ea52/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e6d93ebb3c8cd5a1b45cdafd934ae5970ce3fd4a24326de36bd3ca078a4ea52/merged/var/lib/libvirt supports 
timestamps until 2038 (0x7fffffff) Feb 20 03:05:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e6d93ebb3c8cd5a1b45cdafd934ae5970ce3fd4a24326de36bd3ca078a4ea52/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:02 localhost podman[63412]: 2026-02-20 08:05:02.010780059 +0000 UTC m=+0.131602996 container init 9763b9177aec190d0435bd854c04dfec35cfa0536c0811dafb312c6fb09b2e69 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, url=https://www.redhat.com, container_name=nova_virtnodedevd, config_id=tripleo_step3, version=17.1.13, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, build-date=2026-01-12T23:31:49Z, com.redhat.component=openstack-nova-libvirt-container, release=1766032510, batch=17.1_20260112.1, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 
2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team) Feb 20 03:05:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. 
Feb 20 03:05:02 localhost podman[63403]: 2026-02-20 08:05:02.016505057 +0000 UTC m=+0.152578694 container start 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20260112.1, url=https://www.redhat.com, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, version=17.1.13, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, io.openshift.expose-services=, 
org.opencontainers.image.created=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp-rhel9/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, vcs-type=git, tcib_managed=true, container_name=iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3) Feb 20 03:05:02 localhost python3[62706]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name iscsid --conmon-pidfile /run/iscsid.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=df79bec7915db2c2cb15f0a47bf8984d --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step3 --label container_name=iscsid --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/iscsid.log --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 Feb 20 03:05:02 localhost podman[63412]: 2026-02-20 08:05:01.920604445 +0000 UTC m=+0.041427392 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Feb 20 03:05:02 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring. Feb 20 03:05:02 localhost systemd[1]: Started Session c4 of User root. Feb 20 03:05:02 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring. Feb 20 03:05:02 localhost systemd[1]: Started Session c5 of User root. Feb 20 03:05:02 localhost systemd[1]: session-c4.scope: Deactivated successfully. Feb 20 03:05:02 localhost systemd[1]: session-c5.scope: Deactivated successfully. 
Feb 20 03:05:02 localhost podman[63412]: 2026-02-20 08:05:02.121083447 +0000 UTC m=+0.241906404 container start 9763b9177aec190d0435bd854c04dfec35cfa0536c0811dafb312c6fb09b2e69 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20260112.1, release=1766032510, vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, container_name=nova_virtnodedevd, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:31:49Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2026-01-12T23:31:49Z, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-libvirt-container, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, io.openshift.expose-services=, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp-rhel9/openstack-nova-libvirt, url=https://www.redhat.com, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc.) Feb 20 03:05:02 localhost kernel: Loading iSCSI transport class v2.0-870. 
Feb 20 03:05:02 localhost python3[62706]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtnodedevd --cgroupns=host --conmon-pidfile /run/nova_virtnodedevd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=ca9e756af36a4b8ed088db0b68d5c381 --label config_id=tripleo_step3 --label container_name=nova_virtnodedevd --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtnodedevd.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro 
registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Feb 20 03:05:02 localhost podman[63445]: 2026-02-20 08:05:02.306545845 +0000 UTC m=+0.281023731 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=starting, vendor=Red Hat, Inc., vcs-ref=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, architecture=x86_64, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=iscsid, config_id=tripleo_step3, release=1766032510, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:34:43Z) Feb 20 03:05:02 localhost podman[63445]: 2026-02-20 08:05:02.320695764 +0000 UTC m=+0.295173610 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, name=rhosp-rhel9/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, container_name=iscsid, io.openshift.expose-services=, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, vcs-ref=705339545363fec600102567c4e923938e0f43b3, release=1766032510, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible) Feb 20 03:05:02 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. 
Feb 20 03:05:02 localhost podman[63580]: 2026-02-20 08:05:02.723710685 +0000 UTC m=+0.086875024 container create 2f49240a2b9dab4d27f2910ba564a47977d8787509944c6c0e1de333d33dc4ad (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, tcib_managed=true, io.buildah.version=1.41.5, batch=17.1_20260112.1, com.redhat.component=openstack-nova-libvirt-container, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1766032510, architecture=x86_64, build-date=2026-01-12T23:31:49Z, org.opencontainers.image.created=2026-01-12T23:31:49Z, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', 
'/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, config_id=tripleo_step3, version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, container_name=nova_virtstoraged) Feb 20 03:05:02 localhost systemd[1]: Started libpod-conmon-2f49240a2b9dab4d27f2910ba564a47977d8787509944c6c0e1de333d33dc4ad.scope. Feb 20 03:05:02 localhost podman[63580]: 2026-02-20 08:05:02.679019914 +0000 UTC m=+0.042184293 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Feb 20 03:05:02 localhost systemd[1]: Started libcrun container. 
Feb 20 03:05:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/628013e23f23f7919f5bdb2ab66333f8efec1d8edc4c51b2f985707fbf7e0352/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/628013e23f23f7919f5bdb2ab66333f8efec1d8edc4c51b2f985707fbf7e0352/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/628013e23f23f7919f5bdb2ab66333f8efec1d8edc4c51b2f985707fbf7e0352/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/628013e23f23f7919f5bdb2ab66333f8efec1d8edc4c51b2f985707fbf7e0352/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/628013e23f23f7919f5bdb2ab66333f8efec1d8edc4c51b2f985707fbf7e0352/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/628013e23f23f7919f5bdb2ab66333f8efec1d8edc4c51b2f985707fbf7e0352/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/628013e23f23f7919f5bdb2ab66333f8efec1d8edc4c51b2f985707fbf7e0352/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:02 localhost podman[63580]: 2026-02-20 08:05:02.7979717 +0000 UTC m=+0.161136009 container init 2f49240a2b9dab4d27f2910ba564a47977d8787509944c6c0e1de333d33dc4ad (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, architecture=x86_64, release=1766032510, 
io.buildah.version=1.41.5, config_id=tripleo_step3, batch=17.1_20260112.1, vcs-type=git, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', 
'/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, container_name=nova_virtstoraged, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, build-date=2026-01-12T23:31:49Z, version=17.1.13, tcib_managed=true, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, com.redhat.component=openstack-nova-libvirt-container, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt) Feb 20 03:05:02 localhost podman[63580]: 2026-02-20 08:05:02.810627039 +0000 UTC m=+0.173791358 container start 2f49240a2b9dab4d27f2910ba564a47977d8787509944c6c0e1de333d33dc4ad (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-nova-libvirt, config_id=tripleo_step3, build-date=2026-01-12T23:31:49Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1766032510, summary=Red Hat OpenStack Platform 17.1 
nova-libvirt, container_name=nova_virtstoraged, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, vcs-type=git, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-nova-libvirt-container, architecture=x86_64) Feb 20 03:05:02 localhost python3[62706]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtstoraged --cgroupns=host --conmon-pidfile /run/nova_virtstoraged.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=ca9e756af36a4b8ed088db0b68d5c381 --label config_id=tripleo_step3 --label container_name=nova_virtstoraged --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtstoraged.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume 
/run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Feb 20 03:05:02 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring. Feb 20 03:05:02 localhost systemd[1]: Started Session c6 of User root. Feb 20 03:05:02 localhost systemd[1]: session-c6.scope: Deactivated successfully. Feb 20 03:05:03 localhost podman[63684]: 2026-02-20 08:05:03.210752909 +0000 UTC m=+0.057677459 container create 9dfd8c219b5ddf413e0605dc63956512085c88468de14739ea124927bda57eb3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, org.opencontainers.image.created=2026-01-12T23:31:49Z, distribution-scope=public, build-date=2026-01-12T23:31:49Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.13, tcib_managed=true, name=rhosp-rhel9/openstack-nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1766032510, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, io.buildah.version=1.41.5, architecture=x86_64, com.redhat.component=openstack-nova-libvirt-container, vcs-type=git, managed_by=tripleo_ansible, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_virtqemud, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team) Feb 20 03:05:03 localhost systemd[1]: Started libpod-conmon-9dfd8c219b5ddf413e0605dc63956512085c88468de14739ea124927bda57eb3.scope. Feb 20 03:05:03 localhost systemd[1]: Started libcrun container. Feb 20 03:05:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc92e44651e4d71ee3774f533944e42a9d6ec5c2ed40704387b0dcc01f749328/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc92e44651e4d71ee3774f533944e42a9d6ec5c2ed40704387b0dcc01f749328/merged/var/log/swtpm supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc92e44651e4d71ee3774f533944e42a9d6ec5c2ed40704387b0dcc01f749328/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc92e44651e4d71ee3774f533944e42a9d6ec5c2ed40704387b0dcc01f749328/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc92e44651e4d71ee3774f533944e42a9d6ec5c2ed40704387b0dcc01f749328/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc92e44651e4d71ee3774f533944e42a9d6ec5c2ed40704387b0dcc01f749328/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:03 localhost kernel: xfs filesystem being remounted at 
/var/lib/containers/storage/overlay/cc92e44651e4d71ee3774f533944e42a9d6ec5c2ed40704387b0dcc01f749328/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc92e44651e4d71ee3774f533944e42a9d6ec5c2ed40704387b0dcc01f749328/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:03 localhost podman[63684]: 2026-02-20 08:05:03.270750802 +0000 UTC m=+0.117675352 container init 9dfd8c219b5ddf413e0605dc63956512085c88468de14739ea124927bda57eb3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, distribution-scope=public, batch=17.1_20260112.1, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=nova_virtqemud, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:31:49Z, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2026-01-12T23:31:49Z, config_id=tripleo_step3, io.openshift.expose-services=, com.redhat.component=openstack-nova-libvirt-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp-rhel9/openstack-nova-libvirt, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., release=1766032510, tcib_managed=true) Feb 20 03:05:03 localhost podman[63684]: 2026-02-20 08:05:03.279203975 +0000 UTC m=+0.126128535 container start 
9dfd8c219b5ddf413e0605dc63956512085c88468de14739ea124927bda57eb3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1766032510, architecture=x86_64, build-date=2026-01-12T23:31:49Z, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step3, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:31:49Z, name=rhosp-rhel9/openstack-nova-libvirt, io.buildah.version=1.41.5, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_virtqemud) Feb 20 03:05:03 localhost podman[63684]: 2026-02-20 08:05:03.185322089 +0000 UTC m=+0.032246639 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Feb 20 03:05:03 localhost python3[62706]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtqemud --cgroupns=host --conmon-pidfile /run/nova_virtqemud.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=ca9e756af36a4b8ed088db0b68d5c381 --label config_id=tripleo_step3 --label container_name=nova_virtqemud --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtqemud.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit 
nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro --volume /var/log/containers/libvirt/swtpm:/var/log/swtpm:z registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Feb 20 03:05:03 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring. Feb 20 03:05:03 localhost systemd[1]: Started Session c7 of User root. Feb 20 03:05:03 localhost systemd[1]: session-c7.scope: Deactivated successfully. 
Feb 20 03:05:03 localhost podman[63788]: 2026-02-20 08:05:03.722888235 +0000 UTC m=+0.082377270 container create 2decee24d7d9fda277830ce28e4e7d013f11cfa722d110fbb5f7089f6a275edb (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, managed_by=tripleo_ansible, build-date=2026-01-12T23:31:49Z, release=1766032510, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-nova-libvirt-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp-rhel9/openstack-nova-libvirt, container_name=nova_virtproxyd, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:31:49Z, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 
'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}) Feb 20 03:05:03 localhost systemd[1]: Started libpod-conmon-2decee24d7d9fda277830ce28e4e7d013f11cfa722d110fbb5f7089f6a275edb.scope. Feb 20 03:05:03 localhost podman[63788]: 2026-02-20 08:05:03.680554449 +0000 UTC m=+0.040043524 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Feb 20 03:05:03 localhost systemd[1]: Started libcrun container. 
Feb 20 03:05:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cbf2ebd7b526307bfea390cad44d9007884856a39422ce430dc5caa1d4f7547/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cbf2ebd7b526307bfea390cad44d9007884856a39422ce430dc5caa1d4f7547/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cbf2ebd7b526307bfea390cad44d9007884856a39422ce430dc5caa1d4f7547/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cbf2ebd7b526307bfea390cad44d9007884856a39422ce430dc5caa1d4f7547/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cbf2ebd7b526307bfea390cad44d9007884856a39422ce430dc5caa1d4f7547/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cbf2ebd7b526307bfea390cad44d9007884856a39422ce430dc5caa1d4f7547/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cbf2ebd7b526307bfea390cad44d9007884856a39422ce430dc5caa1d4f7547/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:03 localhost podman[63788]: 2026-02-20 08:05:03.795815904 +0000 UTC m=+0.155304939 container init 2decee24d7d9fda277830ce28e4e7d013f11cfa722d110fbb5f7089f6a275edb (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, container_name=nova_virtproxyd, vendor=Red Hat, Inc., 
org.opencontainers.image.created=2026-01-12T23:31:49Z, config_id=tripleo_step3, managed_by=tripleo_ansible, release=1766032510, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp-rhel9/openstack-nova-libvirt, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, build-date=2026-01-12T23:31:49Z, url=https://www.redhat.com, vcs-type=git, io.buildah.version=1.41.5, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:05:03 localhost podman[63788]: 2026-02-20 08:05:03.805920002 +0000 UTC m=+0.165409037 container start 2decee24d7d9fda277830ce28e4e7d013f11cfa722d110fbb5f7089f6a275edb (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, managed_by=tripleo_ansible, batch=17.1_20260112.1, build-date=2026-01-12T23:31:49Z, summary=Red 
Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, distribution-scope=public, architecture=x86_64, com.redhat.component=openstack-nova-libvirt-container, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-libvirt, container_name=nova_virtproxyd) Feb 20 03:05:03 localhost python3[62706]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtproxyd --cgroupns=host --conmon-pidfile /run/nova_virtproxyd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=ca9e756af36a4b8ed088db0b68d5c381 --label config_id=tripleo_step3 --label container_name=nova_virtproxyd --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', 
'/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtproxyd.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume 
/var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Feb 20 03:05:03 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring. Feb 20 03:05:03 localhost systemd[1]: Started Session c8 of User root. Feb 20 03:05:03 localhost systemd[1]: session-c8.scope: Deactivated successfully. Feb 20 03:05:04 localhost python3[63870]: ansible-file Invoked with path=/etc/systemd/system/tripleo_collectd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:05:04 localhost python3[63886]: ansible-file Invoked with path=/etc/systemd/system/tripleo_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:05:04 localhost python3[63902]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None 
access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:05:05 localhost python3[63918]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:05:05 localhost python3[63934]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:05:05 localhost python3[63950]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:05:05 localhost python3[63966]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:05:06 localhost python3[63982]: ansible-file Invoked 
with path=/etc/systemd/system/tripleo_nova_virtstoraged.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:05:06 localhost python3[63998]: ansible-file Invoked with path=/etc/systemd/system/tripleo_rsyslog.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:05:06 localhost python3[64014]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_collectd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 20 03:05:06 localhost sshd[64031]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:05:06 localhost python3[64030]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_iscsid_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 20 03:05:07 localhost python3[64048]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 20 03:05:07 localhost python3[64064]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 20 03:05:07 localhost python3[64080]: ansible-stat Invoked with 
path=/etc/systemd/system/tripleo_nova_virtproxyd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 20 03:05:07 localhost sshd[64081]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:05:07 localhost python3[64097]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 20 03:05:08 localhost python3[64113]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 20 03:05:08 localhost python3[64130]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 20 03:05:08 localhost python3[64146]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_rsyslog_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 20 03:05:09 localhost python3[64207]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771574708.6502068-100710-53901332722535/source dest=/etc/systemd/system/tripleo_collectd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:05:09 localhost python3[64236]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771574708.6502068-100710-53901332722535/source dest=/etc/systemd/system/tripleo_iscsid.service mode=0644 owner=root group=root backup=False force=True follow=False 
unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:05:10 localhost python3[64265]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771574708.6502068-100710-53901332722535/source dest=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:05:10 localhost python3[64294]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771574708.6502068-100710-53901332722535/source dest=/etc/systemd/system/tripleo_nova_virtnodedevd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:05:11 localhost python3[64323]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771574708.6502068-100710-53901332722535/source dest=/etc/systemd/system/tripleo_nova_virtproxyd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:05:11 localhost python3[64352]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771574708.6502068-100710-53901332722535/source dest=/etc/systemd/system/tripleo_nova_virtqemud.service mode=0644 
owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:05:12 localhost python3[64381]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771574708.6502068-100710-53901332722535/source dest=/etc/systemd/system/tripleo_nova_virtsecretd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:05:12 localhost python3[64410]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771574708.6502068-100710-53901332722535/source dest=/etc/systemd/system/tripleo_nova_virtstoraged.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:05:13 localhost python3[64439]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771574708.6502068-100710-53901332722535/source dest=/etc/systemd/system/tripleo_rsyslog.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:05:13 localhost python3[64455]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None 
masked=None Feb 20 03:05:13 localhost systemd[1]: Reloading. Feb 20 03:05:13 localhost systemd-sysv-generator[64484]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 03:05:13 localhost systemd-rc-local-generator[64478]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 03:05:13 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 03:05:14 localhost systemd[1]: Stopping User Manager for UID 0... Feb 20 03:05:14 localhost systemd[63001]: Activating special unit Exit the Session... Feb 20 03:05:14 localhost systemd[63001]: Stopped target Main User Target. Feb 20 03:05:14 localhost systemd[63001]: Stopped target Basic System. Feb 20 03:05:14 localhost systemd[63001]: Stopped target Paths. Feb 20 03:05:14 localhost systemd[63001]: Stopped target Sockets. Feb 20 03:05:14 localhost systemd[63001]: Stopped target Timers. Feb 20 03:05:14 localhost systemd[63001]: Stopped Daily Cleanup of User's Temporary Directories. Feb 20 03:05:14 localhost systemd[63001]: Closed D-Bus User Message Bus Socket. Feb 20 03:05:14 localhost systemd[63001]: Stopped Create User's Volatile Files and Directories. Feb 20 03:05:14 localhost systemd[63001]: Removed slice User Application Slice. Feb 20 03:05:14 localhost systemd[63001]: Reached target Shutdown. Feb 20 03:05:14 localhost systemd[63001]: Finished Exit the Session. Feb 20 03:05:14 localhost systemd[63001]: Reached target Exit the Session. Feb 20 03:05:14 localhost systemd[1]: user@0.service: Deactivated successfully. Feb 20 03:05:14 localhost systemd[1]: Stopped User Manager for UID 0. Feb 20 03:05:14 localhost systemd[1]: Stopping User Runtime Directory /run/user/0... 
Feb 20 03:05:14 localhost systemd[1]: run-user-0.mount: Deactivated successfully. Feb 20 03:05:14 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully. Feb 20 03:05:14 localhost systemd[1]: Stopped User Runtime Directory /run/user/0. Feb 20 03:05:14 localhost systemd[1]: Removed slice User Slice of UID 0. Feb 20 03:05:14 localhost sshd[64492]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:05:14 localhost python3[64509]: ansible-systemd Invoked with state=restarted name=tripleo_collectd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 03:05:14 localhost systemd[1]: Reloading. Feb 20 03:05:14 localhost systemd-rc-local-generator[64536]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 03:05:14 localhost systemd-sysv-generator[64542]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 03:05:15 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 03:05:15 localhost systemd[1]: Starting collectd container... Feb 20 03:05:15 localhost systemd[1]: Started collectd container. Feb 20 03:05:15 localhost python3[64576]: ansible-systemd Invoked with state=restarted name=tripleo_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 03:05:15 localhost systemd[1]: Reloading. Feb 20 03:05:15 localhost systemd-sysv-generator[64603]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Feb 20 03:05:15 localhost systemd-rc-local-generator[64599]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 03:05:16 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 03:05:16 localhost systemd[1]: Starting iscsid container... Feb 20 03:05:16 localhost systemd[1]: Started iscsid container. Feb 20 03:05:16 localhost python3[64642]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtlogd_wrapper.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 03:05:17 localhost sshd[64644]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:05:17 localhost systemd[1]: Reloading. Feb 20 03:05:18 localhost systemd-rc-local-generator[64668]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 03:05:18 localhost systemd-sysv-generator[64674]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 03:05:18 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 03:05:18 localhost systemd[1]: Starting nova_virtlogd_wrapper container... Feb 20 03:05:18 localhost systemd[1]: Started nova_virtlogd_wrapper container. Feb 20 03:05:18 localhost python3[64710]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtnodedevd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 03:05:18 localhost systemd[1]: Reloading. Feb 20 03:05:18 localhost systemd-rc-local-generator[64735]: /etc/rc.d/rc.local is not marked executable, skipping. 
Feb 20 03:05:18 localhost systemd-sysv-generator[64740]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 03:05:19 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 03:05:19 localhost systemd[1]: Starting nova_virtnodedevd container... Feb 20 03:05:19 localhost tripleo-start-podman-container[64750]: Creating additional drop-in dependency for "nova_virtnodedevd" (9763b9177aec190d0435bd854c04dfec35cfa0536c0811dafb312c6fb09b2e69) Feb 20 03:05:19 localhost systemd[1]: Reloading. Feb 20 03:05:19 localhost systemd-rc-local-generator[64804]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 03:05:19 localhost systemd-sysv-generator[64808]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 03:05:19 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 03:05:19 localhost systemd[1]: Started nova_virtnodedevd container. Feb 20 03:05:19 localhost systemd[1]: Starting dnf makecache... Feb 20 03:05:19 localhost dnf[64817]: Updating Subscription Management repositories. Feb 20 03:05:20 localhost python3[64833]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtproxyd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 03:05:20 localhost systemd[1]: Reloading. 
Feb 20 03:05:20 localhost systemd-sysv-generator[64861]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 03:05:20 localhost systemd-rc-local-generator[64858]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 03:05:20 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 03:05:20 localhost systemd[1]: Starting nova_virtproxyd container... Feb 20 03:05:20 localhost tripleo-start-podman-container[64873]: Creating additional drop-in dependency for "nova_virtproxyd" (2decee24d7d9fda277830ce28e4e7d013f11cfa722d110fbb5f7089f6a275edb) Feb 20 03:05:20 localhost systemd[1]: Reloading. Feb 20 03:05:20 localhost systemd-sysv-generator[64932]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 03:05:20 localhost systemd-rc-local-generator[64929]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 03:05:20 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 03:05:21 localhost systemd[1]: Started nova_virtproxyd container. Feb 20 03:05:21 localhost dnf[64817]: Metadata cache refreshed recently. Feb 20 03:05:21 localhost python3[64957]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtqemud.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 03:05:21 localhost systemd[1]: Reloading. 
Feb 20 03:05:21 localhost systemd-rc-local-generator[64980]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 03:05:21 localhost systemd-sysv-generator[64983]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 03:05:21 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 03:05:22 localhost systemd[1]: dnf-makecache.service: Deactivated successfully. Feb 20 03:05:22 localhost systemd[1]: Finished dnf makecache. Feb 20 03:05:22 localhost systemd[1]: dnf-makecache.service: Consumed 2.187s CPU time. Feb 20 03:05:22 localhost systemd[1]: Starting nova_virtqemud container... Feb 20 03:05:22 localhost tripleo-start-podman-container[64997]: Creating additional drop-in dependency for "nova_virtqemud" (9dfd8c219b5ddf413e0605dc63956512085c88468de14739ea124927bda57eb3) Feb 20 03:05:22 localhost systemd[1]: Reloading. Feb 20 03:05:22 localhost systemd-rc-local-generator[65052]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 03:05:22 localhost systemd-sysv-generator[65055]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 03:05:22 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 03:05:22 localhost systemd[1]: Started nova_virtqemud container. 
Feb 20 03:05:23 localhost python3[65082]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtsecretd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 03:05:23 localhost systemd[1]: Reloading. Feb 20 03:05:23 localhost systemd-rc-local-generator[65109]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 03:05:23 localhost systemd-sysv-generator[65114]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 03:05:23 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 03:05:23 localhost systemd[1]: Starting nova_virtsecretd container... Feb 20 03:05:23 localhost tripleo-start-podman-container[65122]: Creating additional drop-in dependency for "nova_virtsecretd" (b23b5eebadef0ba319e1cd31d2de5e8c48225ad3018982ae9de2c20fa5a72c24) Feb 20 03:05:23 localhost systemd[1]: Reloading. Feb 20 03:05:23 localhost systemd-rc-local-generator[65174]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 03:05:23 localhost systemd-sysv-generator[65179]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 03:05:23 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 03:05:24 localhost systemd[1]: Started nova_virtsecretd container. 
Feb 20 03:05:24 localhost python3[65204]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtstoraged.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 03:05:24 localhost systemd[1]: Reloading. Feb 20 03:05:24 localhost systemd-rc-local-generator[65230]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 03:05:24 localhost systemd-sysv-generator[65234]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 03:05:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 03:05:24 localhost systemd[1]: Starting nova_virtstoraged container... Feb 20 03:05:25 localhost tripleo-start-podman-container[65244]: Creating additional drop-in dependency for "nova_virtstoraged" (2f49240a2b9dab4d27f2910ba564a47977d8787509944c6c0e1de333d33dc4ad) Feb 20 03:05:25 localhost systemd[1]: Reloading. Feb 20 03:05:25 localhost systemd-sysv-generator[65305]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 03:05:25 localhost systemd-rc-local-generator[65300]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 03:05:25 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 03:05:25 localhost systemd[1]: Started nova_virtstoraged container. 
Feb 20 03:05:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:05:25 localhost podman[65311]: 2026-02-20 08:05:25.589900394 +0000 UTC m=+0.090102957 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=17.1.13, io.buildah.version=1.41.5, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:14Z, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container) Feb 20 03:05:25 localhost podman[65311]: 2026-02-20 08:05:25.82588785 +0000 UTC m=+0.326090443 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:14Z, container_name=metrics_qdr, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, version=17.1.13, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container) Feb 20 03:05:25 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:05:26 localhost python3[65352]: ansible-systemd Invoked with state=restarted name=tripleo_rsyslog.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 03:05:26 localhost systemd[1]: Reloading. 
Feb 20 03:05:26 localhost systemd-rc-local-generator[65377]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 03:05:26 localhost systemd-sysv-generator[65380]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 03:05:26 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 03:05:26 localhost systemd[1]: Starting rsyslog container... Feb 20 03:05:26 localhost systemd[1]: Started libcrun container. Feb 20 03:05:26 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d40286894c52bfae77db2c2efc77cdbddb170a873804d5711706b242c8ade341/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:26 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d40286894c52bfae77db2c2efc77cdbddb170a873804d5711706b242c8ade341/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:26 localhost podman[65392]: 2026-02-20 08:05:26.53476134 +0000 UTC m=+0.108881717 container init 7402e0a6ee328893eafa8b3ed7230f3b64758a668878ed2570df81e90972a451 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., config_id=tripleo_step3, vcs-type=git, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, url=https://www.redhat.com, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ded727e639ed8db75a0b90424d424624'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, name=rhosp-rhel9/openstack-rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-rsyslog-container, build-date=2026-01-12T22:10:09Z, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.5, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:09Z, release=1766032510, io.openshift.expose-services=, tcib_managed=true) Feb 20 03:05:26 localhost podman[65392]: 2026-02-20 08:05:26.542671496 +0000 UTC m=+0.116791873 container start 
7402e0a6ee328893eafa8b3ed7230f3b64758a668878ed2570df81e90972a451 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-rsyslog, version=17.1.13, container_name=rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ded727e639ed8db75a0b90424d424624'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, vcs-type=git, tcib_managed=true, 
distribution-scope=public, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2026-01-12T22:10:09Z, io.buildah.version=1.41.5, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:09Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:05:26 localhost podman[65392]: rsyslog Feb 20 03:05:26 localhost systemd[1]: Started rsyslog container. Feb 20 03:05:26 localhost systemd[1]: libpod-7402e0a6ee328893eafa8b3ed7230f3b64758a668878ed2570df81e90972a451.scope: Deactivated successfully. Feb 20 03:05:26 localhost podman[65422]: 2026-02-20 08:05:26.682471306 +0000 UTC m=+0.048373820 container died 7402e0a6ee328893eafa8b3ed7230f3b64758a668878ed2570df81e90972a451 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, vendor=Red Hat, Inc., io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-rsyslog-container, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ded727e639ed8db75a0b90424d424624'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, org.opencontainers.image.created=2026-01-12T22:10:09Z, version=17.1.13, description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, io.openshift.expose-services=, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, build-date=2026-01-12T22:10:09Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, distribution-scope=public, name=rhosp-rhel9/openstack-rsyslog, container_name=rsyslog, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog) Feb 20 03:05:26 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7402e0a6ee328893eafa8b3ed7230f3b64758a668878ed2570df81e90972a451-userdata-shm.mount: Deactivated successfully. 
Feb 20 03:05:26 localhost podman[65422]: 2026-02-20 08:05:26.709282654 +0000 UTC m=+0.075185138 container cleanup 7402e0a6ee328893eafa8b3ed7230f3b64758a668878ed2570df81e90972a451 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, managed_by=tripleo_ansible, config_id=tripleo_step3, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:09Z, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ded727e639ed8db75a0b90424d424624'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:09Z, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.5, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, name=rhosp-rhel9/openstack-rsyslog, com.redhat.component=openstack-rsyslog-container, summary=Red Hat OpenStack Platform 17.1 rsyslog, 
description=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog) Feb 20 03:05:26 localhost systemd[1]: var-lib-containers-storage-overlay-d40286894c52bfae77db2c2efc77cdbddb170a873804d5711706b242c8ade341-merged.mount: Deactivated successfully. Feb 20 03:05:26 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:05:26 localhost podman[65441]: 2026-02-20 08:05:26.789166395 +0000 UTC m=+0.057368066 container cleanup 7402e0a6ee328893eafa8b3ed7230f3b64758a668878ed2570df81e90972a451 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ded727e639ed8db75a0b90424d424624'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.created=2026-01-12T22:10:09Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, config_id=tripleo_step3, vcs-type=git, architecture=x86_64, version=17.1.13, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:09Z, container_name=rsyslog, managed_by=tripleo_ansible, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog) Feb 20 03:05:26 localhost podman[65441]: rsyslog Feb 20 03:05:26 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'. 
Feb 20 03:05:26 localhost python3[65469]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks3.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:05:27 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 1. Feb 20 03:05:27 localhost systemd[1]: Stopped rsyslog container. Feb 20 03:05:27 localhost systemd[1]: Starting rsyslog container... Feb 20 03:05:27 localhost systemd[1]: Started libcrun container. Feb 20 03:05:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d40286894c52bfae77db2c2efc77cdbddb170a873804d5711706b242c8ade341/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d40286894c52bfae77db2c2efc77cdbddb170a873804d5711706b242c8ade341/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:27 localhost podman[65470]: 2026-02-20 08:05:27.238909206 +0000 UTC m=+0.130086498 container init 7402e0a6ee328893eafa8b3ed7230f3b64758a668878ed2570df81e90972a451 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ded727e639ed8db75a0b90424d424624'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-rsyslog, architecture=x86_64, config_id=tripleo_step3, version=17.1.13, release=1766032510, io.buildah.version=1.41.5, io.openshift.expose-services=, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, batch=17.1_20260112.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, container_name=rsyslog, build-date=2026-01-12T22:10:09Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-rsyslog-container, description=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:09Z, vcs-type=git, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 rsyslog) Feb 20 03:05:27 localhost podman[65470]: 2026-02-20 08:05:27.247809989 +0000 UTC m=+0.138987281 container 
start 7402e0a6ee328893eafa8b3ed7230f3b64758a668878ed2570df81e90972a451 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, release=1766032510, batch=17.1_20260112.1, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, name=rhosp-rhel9/openstack-rsyslog, build-date=2026-01-12T22:10:09Z, version=17.1.13, url=https://www.redhat.com, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ded727e639ed8db75a0b90424d424624'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, 
io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:09Z, com.redhat.component=openstack-rsyslog-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog) Feb 20 03:05:27 localhost podman[65470]: rsyslog Feb 20 03:05:27 localhost systemd[1]: Started rsyslog container. Feb 20 03:05:27 localhost systemd[1]: libpod-7402e0a6ee328893eafa8b3ed7230f3b64758a668878ed2570df81e90972a451.scope: Deactivated successfully. Feb 20 03:05:27 localhost podman[65493]: 2026-02-20 08:05:27.399816197 +0000 UTC m=+0.056821501 container died 7402e0a6ee328893eafa8b3ed7230f3b64758a668878ed2570df81e90972a451 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, config_id=tripleo_step3, version=17.1.13, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ded727e639ed8db75a0b90424d424624'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:09Z, vcs-type=git, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, io.buildah.version=1.41.5, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=rsyslog, name=rhosp-rhel9/openstack-rsyslog, build-date=2026-01-12T22:10:09Z, release=1766032510, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=openstack-rsyslog-container) Feb 20 03:05:27 localhost podman[65493]: 2026-02-20 08:05:27.422457406 +0000 UTC m=+0.079462730 container cleanup 7402e0a6ee328893eafa8b3ed7230f3b64758a668878ed2570df81e90972a451 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, vcs-type=git, distribution-scope=public, container_name=rsyslog, name=rhosp-rhel9/openstack-rsyslog, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ded727e639ed8db75a0b90424d424624'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:09Z, description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, version=17.1.13, io.buildah.version=1.41.5, com.redhat.component=openstack-rsyslog-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:10:09Z, 
io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:05:27 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:05:27 localhost podman[65507]: 2026-02-20 08:05:27.5120866 +0000 UTC m=+0.061709627 container cleanup 7402e0a6ee328893eafa8b3ed7230f3b64758a668878ed2570df81e90972a451 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, tcib_managed=true, vcs-type=git, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, config_id=tripleo_step3, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, container_name=rsyslog, name=rhosp-rhel9/openstack-rsyslog, release=1766032510, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ded727e639ed8db75a0b90424d424624'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:09Z, build-date=2026-01-12T22:10:09Z) Feb 20 03:05:27 localhost podman[65507]: rsyslog Feb 20 03:05:27 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'. Feb 20 03:05:27 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7402e0a6ee328893eafa8b3ed7230f3b64758a668878ed2570df81e90972a451-userdata-shm.mount: Deactivated successfully. Feb 20 03:05:27 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 2. Feb 20 03:05:27 localhost systemd[1]: Stopped rsyslog container. Feb 20 03:05:27 localhost systemd[1]: Starting rsyslog container... Feb 20 03:05:27 localhost systemd[1]: tmp-crun.jFVxwp.mount: Deactivated successfully. Feb 20 03:05:27 localhost systemd[1]: Started libcrun container. 
Feb 20 03:05:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d40286894c52bfae77db2c2efc77cdbddb170a873804d5711706b242c8ade341/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d40286894c52bfae77db2c2efc77cdbddb170a873804d5711706b242c8ade341/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:27 localhost podman[65565]: 2026-02-20 08:05:27.779600516 +0000 UTC m=+0.123765773 container init 7402e0a6ee328893eafa8b3ed7230f3b64758a668878ed2570df81e90972a451 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2026-01-12T22:10:09Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ded727e639ed8db75a0b90424d424624'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, tcib_managed=true, config_id=tripleo_step3, managed_by=tripleo_ansible, com.redhat.component=openstack-rsyslog-container, name=rhosp-rhel9/openstack-rsyslog, org.opencontainers.image.created=2026-01-12T22:10:09Z, container_name=rsyslog, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, distribution-scope=public, url=https://www.redhat.com, version=17.1.13, release=1766032510, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog) Feb 20 03:05:27 localhost podman[65565]: 2026-02-20 08:05:27.787054541 +0000 UTC m=+0.131219798 container start 7402e0a6ee328893eafa8b3ed7230f3b64758a668878ed2570df81e90972a451 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, architecture=x86_64, url=https://www.redhat.com, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ded727e639ed8db75a0b90424d424624'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, build-date=2026-01-12T22:10:09Z, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:09Z, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, distribution-scope=public, name=rhosp-rhel9/openstack-rsyslog, container_name=rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, config_id=tripleo_step3, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 20 03:05:27 localhost podman[65565]: rsyslog Feb 20 03:05:27 localhost systemd[1]: Started rsyslog container. 
Feb 20 03:05:27 localhost systemd[1]: libpod-7402e0a6ee328893eafa8b3ed7230f3b64758a668878ed2570df81e90972a451.scope: Deactivated successfully. Feb 20 03:05:27 localhost podman[65588]: 2026-02-20 08:05:27.910102455 +0000 UTC m=+0.041908612 container died 7402e0a6ee328893eafa8b3ed7230f3b64758a668878ed2570df81e90972a451 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ded727e639ed8db75a0b90424d424624'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, config_id=tripleo_step3, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:09Z, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.buildah.version=1.41.5, 
io.openshift.expose-services=, com.redhat.component=openstack-rsyslog-container, name=rhosp-rhel9/openstack-rsyslog, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, container_name=rsyslog, distribution-scope=public, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2026-01-12T22:10:09Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team) Feb 20 03:05:27 localhost podman[65588]: 2026-02-20 08:05:27.932798516 +0000 UTC m=+0.064604673 container cleanup 7402e0a6ee328893eafa8b3ed7230f3b64758a668878ed2570df81e90972a451 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-rsyslog-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ded727e639ed8db75a0b90424d424624'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 
'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:09Z, io.openshift.expose-services=, container_name=rsyslog, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:09Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, config_id=tripleo_step3, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-rsyslog, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 20 03:05:27 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:05:28 localhost podman[65615]: 2026-02-20 08:05:28.019566266 +0000 UTC m=+0.059674075 container cleanup 7402e0a6ee328893eafa8b3ed7230f3b64758a668878ed2570df81e90972a451 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-rsyslog, architecture=x86_64, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:09Z, batch=17.1_20260112.1, com.redhat.component=openstack-rsyslog-container, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, url=https://www.redhat.com, name=rhosp-rhel9/openstack-rsyslog, config_id=tripleo_step3, io.openshift.expose-services=, build-date=2026-01-12T22:10:09Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ded727e639ed8db75a0b90424d424624'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', 
'/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, container_name=rsyslog, version=17.1.13) Feb 20 03:05:28 localhost podman[65615]: rsyslog Feb 20 03:05:28 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'. Feb 20 03:05:28 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 3. Feb 20 03:05:28 localhost systemd[1]: Stopped rsyslog container. Feb 20 03:05:28 localhost systemd[1]: Starting rsyslog container... Feb 20 03:05:28 localhost systemd[1]: Started libcrun container. Feb 20 03:05:28 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d40286894c52bfae77db2c2efc77cdbddb170a873804d5711706b242c8ade341/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:28 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d40286894c52bfae77db2c2efc77cdbddb170a873804d5711706b242c8ade341/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:28 localhost podman[65670]: 2026-02-20 08:05:28.472253965 +0000 UTC m=+0.112154472 container init 7402e0a6ee328893eafa8b3ed7230f3b64758a668878ed2570df81e90972a451 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, container_name=rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, name=rhosp-rhel9/openstack-rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:09Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ded727e639ed8db75a0b90424d424624'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, version=17.1.13, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, com.redhat.component=openstack-rsyslog-container, build-date=2026-01-12T22:10:09Z, io.buildah.version=1.41.5, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog) Feb 20 03:05:28 localhost podman[65670]: 2026-02-20 08:05:28.482047799 +0000 UTC m=+0.121948306 container start 7402e0a6ee328893eafa8b3ed7230f3b64758a668878ed2570df81e90972a451 
(image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20260112.1, container_name=rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-rsyslog-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ded727e639ed8db75a0b90424d424624'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.13, konflux.additional-tags=17.1.13 
17.1_20260112.1, name=rhosp-rhel9/openstack-rsyslog, build-date=2026-01-12T22:10:09Z, io.openshift.expose-services=, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, org.opencontainers.image.created=2026-01-12T22:10:09Z, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 20 03:05:28 localhost podman[65670]: rsyslog Feb 20 03:05:28 localhost systemd[1]: Started rsyslog container. Feb 20 03:05:28 localhost systemd[1]: libpod-7402e0a6ee328893eafa8b3ed7230f3b64758a668878ed2570df81e90972a451.scope: Deactivated successfully. Feb 20 03:05:28 localhost podman[65709]: 2026-02-20 08:05:28.642049006 +0000 UTC m=+0.057870508 container died 7402e0a6ee328893eafa8b3ed7230f3b64758a668878ed2570df81e90972a451 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, com.redhat.component=openstack-rsyslog-container, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:09Z, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, managed_by=tripleo_ansible, url=https://www.redhat.com, name=rhosp-rhel9/openstack-rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:09Z, vcs-type=git, container_name=rsyslog, 
batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ded727e639ed8db75a0b90424d424624'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, vendor=Red Hat, Inc., io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:05:28 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7402e0a6ee328893eafa8b3ed7230f3b64758a668878ed2570df81e90972a451-userdata-shm.mount: Deactivated successfully. Feb 20 03:05:28 localhost systemd[1]: var-lib-containers-storage-overlay-d40286894c52bfae77db2c2efc77cdbddb170a873804d5711706b242c8ade341-merged.mount: Deactivated successfully. 
Feb 20 03:05:28 localhost python3[65704]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks3.json short_hostname=np0005625202 step=3 update_config_hash_only=False Feb 20 03:05:28 localhost podman[65709]: 2026-02-20 08:05:28.669705306 +0000 UTC m=+0.085526768 container cleanup 7402e0a6ee328893eafa8b3ed7230f3b64758a668878ed2570df81e90972a451 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:09Z, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-rsyslog-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ded727e639ed8db75a0b90424d424624'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, url=https://www.redhat.com, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, release=1766032510, batch=17.1_20260112.1, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:09Z, managed_by=tripleo_ansible, version=17.1.13, container_name=rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 rsyslog) Feb 20 03:05:28 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:05:28 localhost podman[65723]: 2026-02-20 08:05:28.750944032 +0000 UTC m=+0.053729100 container cleanup 7402e0a6ee328893eafa8b3ed7230f3b64758a668878ed2570df81e90972a451 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ded727e639ed8db75a0b90424d424624'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, name=rhosp-rhel9/openstack-rsyslog, distribution-scope=public, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:09Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, vcs-type=git, container_name=rsyslog, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:09Z, io.buildah.version=1.41.5, config_id=tripleo_step3, batch=17.1_20260112.1, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 rsyslog) Feb 20 03:05:28 localhost podman[65723]: rsyslog Feb 20 03:05:28 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'. Feb 20 03:05:29 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 4. 
Feb 20 03:05:29 localhost systemd[1]: Stopped rsyslog container. Feb 20 03:05:29 localhost systemd[1]: Starting rsyslog container... Feb 20 03:05:29 localhost systemd[1]: Started libcrun container. Feb 20 03:05:29 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d40286894c52bfae77db2c2efc77cdbddb170a873804d5711706b242c8ade341/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:29 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d40286894c52bfae77db2c2efc77cdbddb170a873804d5711706b242c8ade341/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff) Feb 20 03:05:29 localhost podman[65752]: 2026-02-20 08:05:29.169729248 +0000 UTC m=+0.121685491 container init 7402e0a6ee328893eafa8b3ed7230f3b64758a668878ed2570df81e90972a451 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.13, io.buildah.version=1.41.5, tcib_managed=true, managed_by=tripleo_ansible, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, com.redhat.component=openstack-rsyslog-container, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ded727e639ed8db75a0b90424d424624'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, container_name=rsyslog, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp-rhel9/openstack-rsyslog, build-date=2026-01-12T22:10:09Z, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:09Z) Feb 20 03:05:29 localhost podman[65752]: 2026-02-20 08:05:29.178318131 +0000 UTC m=+0.130274384 container start 7402e0a6ee328893eafa8b3ed7230f3b64758a668878ed2570df81e90972a451 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20260112.1, io.buildah.version=1.41.5, 
tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, release=1766032510, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-rsyslog, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, url=https://www.redhat.com, architecture=x86_64, build-date=2026-01-12T22:10:09Z, description=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, container_name=rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:09Z, com.redhat.component=openstack-rsyslog-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ded727e639ed8db75a0b90424d424624'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:05:29 localhost 
podman[65752]: rsyslog Feb 20 03:05:29 localhost systemd[1]: Started rsyslog container. Feb 20 03:05:29 localhost python3[65751]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:05:29 localhost systemd[1]: libpod-7402e0a6ee328893eafa8b3ed7230f3b64758a668878ed2570df81e90972a451.scope: Deactivated successfully. Feb 20 03:05:29 localhost podman[65774]: 2026-02-20 08:05:29.346145072 +0000 UTC m=+0.052653142 container died 7402e0a6ee328893eafa8b3ed7230f3b64758a668878ed2570df81e90972a451 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ded727e639ed8db75a0b90424d424624'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', 
'/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.created=2026-01-12T22:10:09Z, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, vcs-type=git, io.openshift.expose-services=, container_name=rsyslog, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_id=tripleo_step3, name=rhosp-rhel9/openstack-rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:09Z, com.redhat.component=openstack-rsyslog-container, version=17.1.13, distribution-scope=public) Feb 20 03:05:29 localhost podman[65774]: 2026-02-20 08:05:29.371995495 +0000 UTC m=+0.078503545 container cleanup 7402e0a6ee328893eafa8b3ed7230f3b64758a668878ed2570df81e90972a451 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, com.redhat.component=openstack-rsyslog-container, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp-rhel9/openstack-rsyslog, release=1766032510, managed_by=tripleo_ansible, container_name=rsyslog, org.opencontainers.image.created=2026-01-12T22:10:09Z, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., description=Red 
Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ded727e639ed8db75a0b90424d424624'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, version=17.1.13, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:09Z) Feb 20 03:05:29 localhost systemd[1]: 
tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:05:29 localhost podman[65801]: 2026-02-20 08:05:29.461772793 +0000 UTC m=+0.059302235 container cleanup 7402e0a6ee328893eafa8b3ed7230f3b64758a668878ed2570df81e90972a451 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ded727e639ed8db75a0b90424d424624'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, build-date=2026-01-12T22:10:09Z, org.opencontainers.image.created=2026-01-12T22:10:09Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20260112.1, version=17.1.13, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, 
name=rhosp-rhel9/openstack-rsyslog, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-rsyslog-container, io.buildah.version=1.41.5, tcib_managed=true, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:05:29 localhost podman[65801]: rsyslog Feb 20 03:05:29 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'. Feb 20 03:05:29 localhost python3[65802]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_3 config_pattern=container-puppet-*.json config_overrides={} debug=True Feb 20 03:05:29 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7402e0a6ee328893eafa8b3ed7230f3b64758a668878ed2570df81e90972a451-userdata-shm.mount: Deactivated successfully. Feb 20 03:05:29 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 5. Feb 20 03:05:29 localhost systemd[1]: Stopped rsyslog container. Feb 20 03:05:29 localhost systemd[1]: tripleo_rsyslog.service: Start request repeated too quickly. Feb 20 03:05:29 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'. Feb 20 03:05:29 localhost systemd[1]: Failed to start rsyslog container. Feb 20 03:05:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. 
Feb 20 03:05:31 localhost podman[65814]: 2026-02-20 08:05:31.493257165 +0000 UTC m=+0.084910802 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2026-01-12T22:10:15Z, vcs-type=git, release=1766032510, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-collectd, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', 
'/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64) Feb 20 03:05:31 localhost podman[65814]: 2026-02-20 08:05:31.502649889 +0000 UTC m=+0.094303516 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, release=1766032510, build-date=2026-01-12T22:10:15Z, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.5, config_id=tripleo_step3, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, distribution-scope=public, architecture=x86_64, container_name=collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:05:31 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. 
Feb 20 03:05:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:05:32 localhost podman[65835]: 2026-02-20 08:05:32.43965219 +0000 UTC m=+0.075731353 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, release=1766032510, build-date=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, name=rhosp-rhel9/openstack-iscsid, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, container_name=iscsid, version=17.1.13, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3) Feb 20 03:05:32 localhost podman[65835]: 2026-02-20 08:05:32.451705533 +0000 UTC m=+0.087784706 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, release=1766032510, url=https://www.redhat.com, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, batch=17.1_20260112.1, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, name=rhosp-rhel9/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid) Feb 20 03:05:32 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. Feb 20 03:05:38 localhost sshd[65854]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:05:44 localhost sshd[65856]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:05:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:05:56 localhost systemd[1]: tmp-crun.u3emat.mount: Deactivated successfully. 
Feb 20 03:05:56 localhost podman[65935]: 2026-02-20 08:05:56.499546906 +0000 UTC m=+0.138775926 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, config_id=tripleo_step1, vcs-type=git, name=rhosp-rhel9/openstack-qdrouterd, tcib_managed=true, managed_by=tripleo_ansible, architecture=x86_64, container_name=metrics_qdr, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, release=1766032510, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container) Feb 20 03:05:56 localhost podman[65935]: 2026-02-20 08:05:56.723924778 +0000 UTC m=+0.363153788 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:14Z, version=17.1.13, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, container_name=metrics_qdr, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, url=https://www.redhat.com, vcs-type=git, 
konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, config_id=tripleo_step1, io.openshift.expose-services=, name=rhosp-rhel9/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Feb 20 03:05:56 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:06:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:06:02 localhost systemd[1]: tmp-crun.1Jpmbh.mount: Deactivated successfully. 
Feb 20 03:06:02 localhost podman[65964]: 2026-02-20 08:06:02.462145798 +0000 UTC m=+0.101379631 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, container_name=collectd, managed_by=tripleo_ansible, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-collectd-container, name=rhosp-rhel9/openstack-collectd, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.13, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, build-date=2026-01-12T22:10:15Z, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:15Z) Feb 20 03:06:02 localhost podman[65964]: 2026-02-20 08:06:02.477694143 +0000 UTC m=+0.116927976 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 
'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, release=1766032510, io.buildah.version=1.41.5, tcib_managed=true, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:06:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. 
Feb 20 03:06:02 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:06:02 localhost podman[65984]: 2026-02-20 08:06:02.562208014 +0000 UTC m=+0.063756332 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, container_name=iscsid, distribution-scope=public, url=https://www.redhat.com, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-iscsid, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.buildah.version=1.41.5, com.redhat.component=openstack-iscsid-container, build-date=2026-01-12T22:34:43Z, vcs-ref=705339545363fec600102567c4e923938e0f43b3, release=1766032510, batch=17.1_20260112.1, vcs-type=git, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=) Feb 20 03:06:02 localhost podman[65984]: 2026-02-20 08:06:02.599792212 +0000 UTC m=+0.101340490 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-iscsid, distribution-scope=public, build-date=2026-01-12T22:34:43Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:34:43Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, release=1766032510, io.buildah.version=1.41.5, io.openshift.expose-services=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.13, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container) Feb 20 03:06:02 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. Feb 20 03:06:22 localhost sshd[66005]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:06:22 localhost sshd[66006]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:06:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:06:27 localhost systemd[1]: tmp-crun.t7J7mr.mount: Deactivated successfully. 
Feb 20 03:06:27 localhost podman[66009]: 2026-02-20 08:06:27.44897483 +0000 UTC m=+0.088134525 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1766032510, tcib_managed=true, build-date=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, config_id=tripleo_step1, version=17.1.13, io.buildah.version=1.41.5, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, name=rhosp-rhel9/openstack-qdrouterd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Feb 20 03:06:27 localhost podman[66009]: 2026-02-20 08:06:27.654823211 +0000 UTC m=+0.293982886 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1766032510, version=17.1.13, build-date=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, name=rhosp-rhel9/openstack-qdrouterd, io.buildah.version=1.41.5, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:06:27 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:06:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:06:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. 
Feb 20 03:06:33 localhost podman[66038]: 2026-02-20 08:06:33.437518247 +0000 UTC m=+0.076853822 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, version=17.1.13, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-iscsid, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, release=1766032510, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, container_name=iscsid, build-date=2026-01-12T22:34:43Z, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:06:33 localhost podman[66039]: 2026-02-20 08:06:33.493739541 +0000 UTC m=+0.130101259 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.buildah.version=1.41.5, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, build-date=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, tcib_managed=true, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 
'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc.) 
Feb 20 03:06:33 localhost podman[66039]: 2026-02-20 08:06:33.502783506 +0000 UTC m=+0.139145234 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20260112.1, 
build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, release=1766032510, name=rhosp-rhel9/openstack-collectd, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, architecture=x86_64) Feb 20 03:06:33 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. 
Feb 20 03:06:33 localhost podman[66038]: 2026-02-20 08:06:33.526865964 +0000 UTC m=+0.166201559 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, vcs-ref=705339545363fec600102567c4e923938e0f43b3, container_name=iscsid, build-date=2026-01-12T22:34:43Z, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, release=1766032510, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true) Feb 20 03:06:33 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. Feb 20 03:06:52 localhost sshd[66157]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:06:55 localhost sshd[66159]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:06:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. 
Feb 20 03:06:58 localhost podman[66161]: 2026-02-20 08:06:58.442547905 +0000 UTC m=+0.080404575 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, container_name=metrics_qdr, architecture=x86_64, io.buildah.version=1.41.5, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step1) Feb 20 03:06:58 localhost podman[66161]: 2026-02-20 08:06:58.640615334 +0000 UTC m=+0.278471944 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, container_name=metrics_qdr, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, io.openshift.expose-services=, vcs-type=git, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.created=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64) Feb 20 03:06:58 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:07:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:07:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. 
Feb 20 03:07:04 localhost podman[66191]: 2026-02-20 08:07:04.440436537 +0000 UTC m=+0.082756126 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2026-01-12T22:34:43Z, batch=17.1_20260112.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-iscsid, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., distribution-scope=public, container_name=iscsid, tcib_managed=true, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, version=17.1.13, config_id=tripleo_step3) Feb 20 03:07:04 localhost podman[66192]: 2026-02-20 08:07:04.41942999 +0000 UTC m=+0.062646493 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.13, container_name=collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-collectd, tcib_managed=true, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, 
config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, architecture=x86_64, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd) Feb 20 03:07:04 localhost podman[66191]: 2026-02-20 08:07:04.474956635 +0000 UTC m=+0.117276224 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack 
Platform 17.1 iscsid, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-iscsid, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, container_name=iscsid, release=1766032510, version=17.1.13, batch=17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, build-date=2026-01-12T22:34:43Z, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:07:04 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. Feb 20 03:07:04 localhost podman[66192]: 2026-02-20 08:07:04.499129285 +0000 UTC m=+0.142345838 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, distribution-scope=public, architecture=x86_64, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, vcs-type=git, name=rhosp-rhel9/openstack-collectd, batch=17.1_20260112.1, build-date=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 collectd) Feb 20 03:07:04 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:07:27 localhost sshd[66230]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:07:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. 
Feb 20 03:07:29 localhost podman[66232]: 2026-02-20 08:07:29.436598705 +0000 UTC m=+0.078802733 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, release=1766032510, vendor=Red Hat, Inc., container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step1, version=17.1.13, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, 
vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, batch=17.1_20260112.1, tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-qdrouterd) Feb 20 03:07:29 localhost podman[66232]: 2026-02-20 08:07:29.645855585 +0000 UTC m=+0.288059603 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, release=1766032510, tcib_managed=true, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, distribution-scope=public, config_id=tripleo_step1, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-type=git, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 20 03:07:29 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:07:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:07:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. 
Feb 20 03:07:35 localhost podman[66263]: 2026-02-20 08:07:35.427296059 +0000 UTC m=+0.063957805 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, io.buildah.version=1.41.5, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, version=17.1.13, architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, build-date=2026-01-12T22:10:15Z) Feb 20 03:07:35 localhost podman[66263]: 2026-02-20 08:07:35.432914915 +0000 UTC m=+0.069576671 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., url=https://www.redhat.com, name=rhosp-rhel9/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, version=17.1.13, config_id=tripleo_step3, release=1766032510, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:15Z, container_name=collectd, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:07:35 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. 
Feb 20 03:07:35 localhost podman[66262]: 2026-02-20 08:07:35.490098235 +0000 UTC m=+0.126781402 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, vcs-type=git, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:34:43Z, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, distribution-scope=public, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, 
config_id=tripleo_step3, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vendor=Red Hat, Inc., io.buildah.version=1.41.5, tcib_managed=true, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid) Feb 20 03:07:35 localhost podman[66262]: 2026-02-20 08:07:35.528836854 +0000 UTC m=+0.165520001 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.expose-services=, name=rhosp-rhel9/openstack-iscsid, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, 
io.buildah.version=1.41.5, version=17.1.13, managed_by=tripleo_ansible, container_name=iscsid, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3) Feb 20 03:07:35 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. Feb 20 03:07:35 localhost sshd[66299]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:08:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:08:00 localhost systemd[1]: tmp-crun.PUeQHb.mount: Deactivated successfully. 
Feb 20 03:08:00 localhost podman[66379]: 2026-02-20 08:08:00.446134822 +0000 UTC m=+0.087202556 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-type=git, io.buildah.version=1.41.5, managed_by=tripleo_ansible, tcib_managed=true, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, version=17.1.13, build-date=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, distribution-scope=public, name=rhosp-rhel9/openstack-qdrouterd, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Feb 20 03:08:00 localhost podman[66379]: 2026-02-20 08:08:00.640158111 +0000 UTC m=+0.281225885 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, maintainer=OpenStack TripleO Team, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, config_id=tripleo_step1, release=1766032510, url=https://www.redhat.com, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=) Feb 20 03:08:00 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:08:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:08:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:08:06 localhost systemd[1]: tmp-crun.opzXUz.mount: Deactivated successfully. 
Feb 20 03:08:06 localhost podman[66408]: 2026-02-20 08:08:06.438851958 +0000 UTC m=+0.080357769 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, vcs-type=git, managed_by=tripleo_ansible, container_name=iscsid, distribution-scope=public, url=https://www.redhat.com, name=rhosp-rhel9/openstack-iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:34:43Z, 
build-date=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, com.redhat.component=openstack-iscsid-container, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3) Feb 20 03:08:06 localhost podman[66408]: 2026-02-20 08:08:06.477728504 +0000 UTC m=+0.119234355 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:34:43Z, architecture=x86_64, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, container_name=iscsid, build-date=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, version=17.1.13, name=rhosp-rhel9/openstack-iscsid, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., release=1766032510, batch=17.1_20260112.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, 
url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Feb 20 03:08:06 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. 
Feb 20 03:08:06 localhost podman[66409]: 2026-02-20 08:08:06.496363737 +0000 UTC m=+0.134277459 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, url=https://www.redhat.com, io.buildah.version=1.41.5, com.redhat.component=openstack-collectd-container, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2026-01-12T22:10:15Z, io.openshift.expose-services=, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public) Feb 20 03:08:06 localhost podman[66409]: 2026-02-20 08:08:06.505756674 +0000 UTC m=+0.143670466 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:15Z, container_name=collectd, release=1766032510, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, architecture=x86_64, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., tcib_managed=true, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, name=rhosp-rhel9/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:08:06 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. 
Feb 20 03:08:07 localhost sshd[66447]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:08:10 localhost sshd[66449]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:08:28 localhost sshd[66451]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:08:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:08:31 localhost podman[66453]: 2026-02-20 08:08:31.441373429 +0000 UTC m=+0.080499062 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, batch=17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, version=17.1.13, build-date=2026-01-12T22:10:14Z, config_id=tripleo_step1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, name=rhosp-rhel9/openstack-qdrouterd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, tcib_managed=true, io.buildah.version=1.41.5, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Feb 20 03:08:31 localhost podman[66453]: 2026-02-20 08:08:31.621104145 +0000 UTC m=+0.260229728 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:14Z, container_name=metrics_qdr, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:14Z, tcib_managed=true, release=1766032510, vcs-type=git, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:08:31 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. 
Feb 20 03:08:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:08:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:08:37 localhost systemd[1]: tmp-crun.nwHaIX.mount: Deactivated successfully. Feb 20 03:08:37 localhost podman[66482]: 2026-02-20 08:08:37.43205838 +0000 UTC m=+0.075193897 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.5, managed_by=tripleo_ansible, vcs-ref=705339545363fec600102567c4e923938e0f43b3, container_name=iscsid, vendor=Red Hat, Inc., distribution-scope=public, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:34:43Z, release=1766032510, config_id=tripleo_step3, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, build-date=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Feb 20 03:08:37 localhost podman[66482]: 2026-02-20 08:08:37.444849792 +0000 UTC m=+0.087985369 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp-rhel9/openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.buildah.version=1.41.5, version=17.1.13, config_id=tripleo_step3, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
build-date=2026-01-12T22:34:43Z, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, container_name=iscsid, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com) Feb 20 03:08:37 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. 
Feb 20 03:08:37 localhost podman[66483]: 2026-02-20 08:08:37.493121967 +0000 UTC m=+0.132672335 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-type=git, build-date=2026-01-12T22:10:15Z, distribution-scope=public, io.buildah.version=1.41.5, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp-rhel9/openstack-collectd, summary=Red Hat OpenStack Platform 
17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-collectd-container, container_name=collectd, io.openshift.expose-services=, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, architecture=x86_64, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Feb 20 03:08:37 localhost podman[66483]: 2026-02-20 08:08:37.499089731 +0000 UTC m=+0.138640069 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, release=1766032510, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.5, version=17.1.13, com.redhat.component=openstack-collectd-container, container_name=collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd) Feb 20 03:08:37 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. 
Feb 20 03:08:48 localhost sshd[66519]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:08:49 localhost sshd[66520]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:08:52 localhost sshd[66523]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:09:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:09:02 localhost podman[66653]: 2026-02-20 08:09:02.449543304 +0000 UTC m=+0.090751703 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20260112.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, distribution-scope=public, version=17.1.13, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.5, release=1766032510, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vcs-type=git, container_name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Feb 20 03:09:02 localhost podman[66653]: 2026-02-20 08:09:02.670243156 +0000 UTC m=+0.311451515 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, url=https://www.redhat.com, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:14Z, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, vcs-type=git, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, com.redhat.component=openstack-qdrouterd-container, name=rhosp-rhel9/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, release=1766032510, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr) Feb 20 03:09:02 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. 
Feb 20 03:09:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:09:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:09:08 localhost podman[66682]: 2026-02-20 08:09:08.438498678 +0000 UTC m=+0.077590191 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, build-date=2026-01-12T22:34:43Z, config_id=tripleo_step3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-iscsid, url=https://www.redhat.com, distribution-scope=public, version=17.1.13, container_name=iscsid, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Feb 20 03:09:08 localhost podman[66682]: 2026-02-20 08:09:08.448631157 +0000 UTC m=+0.087722660 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, container_name=iscsid, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-iscsid, com.redhat.component=openstack-iscsid-container, batch=17.1_20260112.1, url=https://www.redhat.com, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, release=1766032510, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3) Feb 20 03:09:08 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. 
Feb 20 03:09:08 localhost podman[66683]: 2026-02-20 08:09:08.492344248 +0000 UTC m=+0.130779813 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, tcib_managed=true, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, managed_by=tripleo_ansible, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, name=rhosp-rhel9/openstack-collectd, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, version=17.1.13, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1766032510, io.openshift.expose-services=) Feb 20 03:09:08 localhost podman[66683]: 2026-02-20 08:09:08.501832978 +0000 UTC m=+0.140268523 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=1766032510, architecture=x86_64, vcs-type=git, config_id=tripleo_step3, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, maintainer=OpenStack TripleO Team, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, batch=17.1_20260112.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:15Z, build-date=2026-01-12T22:10:15Z, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Feb 20 03:09:08 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. 
Feb 20 03:09:13 localhost sshd[66724]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:09:18 localhost python3[66773]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 03:09:18 localhost python3[66818]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771574958.2347555-107657-55307293337043/source _original_basename=tmpurquenin follow=False checksum=ee48fb03297eb703b1954c8852d0f67fab51dac1 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:09:19 localhost python3[66880]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/recover_tripleo_nova_virtqemud.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 03:09:20 localhost python3[66923]: ansible-ansible.legacy.copy Invoked with dest=/usr/libexec/recover_tripleo_nova_virtqemud.sh mode=0755 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771574959.2929814-107718-40234539349147/source _original_basename=tmpfzr21y_g follow=False checksum=922b8aa8342176110bffc2e39abdccc2b39e53a9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:09:20 localhost python3[66985]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_recover.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 03:09:21 localhost python3[67028]: ansible-ansible.legacy.copy Invoked 
with dest=/etc/systemd/system/tripleo_nova_virtqemud_recover.service mode=0644 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771574960.2467377-107774-167086281488817/source _original_basename=tmpz49zk_q8 follow=False checksum=92f73544b703afc85885fa63ab07bdf8f8671554 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:09:21 localhost python3[67090]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_recover.timer follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 03:09:22 localhost python3[67133]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/tripleo_nova_virtqemud_recover.timer mode=0644 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771574961.2804725-107817-260992363425317/source _original_basename=tmprs9a1wtw follow=False checksum=c6e5f76a53c0d6ccaf46c4b48d813dc2891ad8e9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:09:22 localhost python3[67163]: ansible-systemd Invoked with daemon_reload=True enabled=True name=tripleo_nova_virtqemud_recover.service daemon_reexec=False scope=system no_block=False state=None force=None masked=None Feb 20 03:09:22 localhost systemd[1]: Reloading. Feb 20 03:09:22 localhost systemd-sysv-generator[67187]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Feb 20 03:09:22 localhost systemd-rc-local-generator[67181]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 03:09:22 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 03:09:22 localhost systemd[1]: Reloading. Feb 20 03:09:22 localhost systemd-rc-local-generator[67223]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 03:09:22 localhost systemd-sysv-generator[67227]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 03:09:23 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 03:09:23 localhost python3[67252]: ansible-systemd Invoked with daemon_reload=True enabled=True name=tripleo_nova_virtqemud_recover.timer state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 03:09:23 localhost systemd[1]: Reloading. Feb 20 03:09:23 localhost systemd-rc-local-generator[67277]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 03:09:23 localhost systemd-sysv-generator[67283]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 03:09:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 03:09:24 localhost systemd[1]: Reloading. 
Feb 20 03:09:24 localhost systemd-rc-local-generator[67312]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 03:09:24 localhost systemd-sysv-generator[67318]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 03:09:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 03:09:24 localhost systemd[1]: Started Check and recover tripleo_nova_virtqemud every 10m. Feb 20 03:09:24 localhost python3[67342]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl enable --now tripleo_nova_virtqemud_recover.timer _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 03:09:24 localhost systemd[1]: Reloading. Feb 20 03:09:25 localhost systemd-rc-local-generator[67366]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 03:09:25 localhost systemd-sysv-generator[67369]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 03:09:25 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 20 03:09:25 localhost python3[67425]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 03:09:26 localhost python3[67468]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/tripleo_nova_libvirt.target group=root mode=0644 owner=root src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771574965.393658-108162-277797383407415/source _original_basename=tmpscm_smam follow=False checksum=c064b4a8e7d3d1d7c62d1f80a09e350659996afd backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:09:26 localhost python3[67498]: ansible-systemd Invoked with daemon_reload=True enabled=True name=tripleo_nova_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 03:09:26 localhost systemd[1]: Reloading. Feb 20 03:09:26 localhost systemd-rc-local-generator[67525]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 03:09:26 localhost systemd-sysv-generator[67528]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 03:09:26 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 03:09:26 localhost systemd[1]: Reached target tripleo_nova_libvirt.target. 
Feb 20 03:09:27 localhost python3[67553]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 20 03:09:27 localhost sshd[67557]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:09:28 localhost ansible-async_wrapper.py[67727]: Invoked with 349055278898 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1771574968.3965032-108354-187615475839782/AnsiballZ_command.py _ Feb 20 03:09:28 localhost ansible-async_wrapper.py[67730]: Starting module and watcher Feb 20 03:09:28 localhost ansible-async_wrapper.py[67730]: Start watching 67731 (3600) Feb 20 03:09:28 localhost ansible-async_wrapper.py[67731]: Start module (67731) Feb 20 03:09:28 localhost ansible-async_wrapper.py[67727]: Return async_wrapper task started. Feb 20 03:09:29 localhost sshd[67752]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:09:29 localhost python3[67751]: ansible-ansible.legacy.async_status Invoked with jid=349055278898.67727 mode=status _async_dir=/tmp/.ansible_async Feb 20 03:09:32 localhost puppet-user[67738]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Feb 20 03:09:32 localhost puppet-user[67738]: (file: /etc/puppet/hiera.yaml) Feb 20 03:09:32 localhost puppet-user[67738]: Warning: Undefined variable '::deploy_config_name'; Feb 20 03:09:32 localhost puppet-user[67738]: (file & line not available) Feb 20 03:09:32 localhost puppet-user[67738]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Feb 20 03:09:32 localhost puppet-user[67738]: (file & line not available) Feb 20 03:09:32 localhost puppet-user[67738]: Warning: Unknown variable: '::deployment_type'. 
(file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8) Feb 20 03:09:32 localhost puppet-user[67738]: Warning: This method is deprecated, please use match expressions with Stdlib::Compat::String instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at ["/etc/puppet/modules/snmp/manifests/params.pp", 310]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Feb 20 03:09:32 localhost puppet-user[67738]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Feb 20 03:09:32 localhost puppet-user[67738]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Feb 20 03:09:32 localhost puppet-user[67738]: with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 358]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Feb 20 03:09:32 localhost puppet-user[67738]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Feb 20 03:09:32 localhost puppet-user[67738]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Feb 20 03:09:32 localhost puppet-user[67738]: with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 367]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Feb 20 03:09:32 localhost puppet-user[67738]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Feb 20 03:09:32 localhost puppet-user[67738]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Feb 20 03:09:32 localhost puppet-user[67738]: with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. 
at ["/etc/puppet/modules/snmp/manifests/init.pp", 382]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Feb 20 03:09:32 localhost puppet-user[67738]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Feb 20 03:09:32 localhost puppet-user[67738]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Feb 20 03:09:32 localhost puppet-user[67738]: with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 388]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Feb 20 03:09:32 localhost puppet-user[67738]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Feb 20 03:09:32 localhost puppet-user[67738]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Feb 20 03:09:32 localhost puppet-user[67738]: with Pattern[]. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 393]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Feb 20 03:09:32 localhost puppet-user[67738]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Feb 20 03:09:32 localhost puppet-user[67738]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69) Feb 20 03:09:32 localhost puppet-user[67738]: Notice: Compiled catalog for np0005625202.localdomain in environment production in 0.21 seconds Feb 20 03:09:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. 
Feb 20 03:09:33 localhost podman[67871]: 2026-02-20 08:09:33.45159522 +0000 UTC m=+0.085600631 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, vcs-type=git, vendor=Red Hat, Inc., version=17.1.13, container_name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, batch=17.1_20260112.1, url=https://www.redhat.com, release=1766032510, maintainer=OpenStack TripleO 
Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=rhosp-rhel9/openstack-qdrouterd, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, build-date=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:09:33 localhost podman[67871]: 2026-02-20 08:09:33.638696626 +0000 UTC m=+0.272701967 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-qdrouterd, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, build-date=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, distribution-scope=public, url=https://www.redhat.com, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 20 03:09:33 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:09:33 localhost ansible-async_wrapper.py[67730]: 67731 still running (3600) Feb 20 03:09:38 localhost ansible-async_wrapper.py[67730]: 67731 still running (3595) Feb 20 03:09:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:09:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:09:39 localhost systemd[1]: tmp-crun.CbXLMG.mount: Deactivated successfully. 
Feb 20 03:09:39 localhost podman[67983]: 2026-02-20 08:09:39.434557376 +0000 UTC m=+0.073410594 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.5, batch=17.1_20260112.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, 
summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z, container_name=collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, config_id=tripleo_step3, maintainer=OpenStack TripleO Team) Feb 20 03:09:39 localhost podman[67983]: 2026-02-20 08:09:39.444831834 +0000 UTC m=+0.083685072 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-collectd, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=collectd, release=1766032510, config_id=tripleo_step3, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5) Feb 20 03:09:39 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. 
Feb 20 03:09:39 localhost podman[67982]: 2026-02-20 08:09:39.49534522 +0000 UTC m=+0.132781182 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1766032510, url=https://www.redhat.com, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, 
vcs-ref=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:34:43Z, architecture=x86_64, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, container_name=iscsid, name=rhosp-rhel9/openstack-iscsid, tcib_managed=true) Feb 20 03:09:39 localhost podman[67982]: 2026-02-20 08:09:39.50416996 +0000 UTC m=+0.141605882 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp-rhel9/openstack-iscsid, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.buildah.version=1.41.5, release=1766032510, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, managed_by=tripleo_ansible, url=https://www.redhat.com, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, batch=17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, vendor=Red Hat, Inc., io.openshift.expose-services=) Feb 20 03:09:39 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. Feb 20 03:09:39 localhost python3[67994]: ansible-ansible.legacy.async_status Invoked with jid=349055278898.67727 mode=status _async_dir=/tmp/.ansible_async Feb 20 03:09:40 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Feb 20 03:09:40 localhost systemd[1]: Starting man-db-cache-update.service... Feb 20 03:09:40 localhost systemd[1]: Reloading. Feb 20 03:09:40 localhost systemd-sysv-generator[68118]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. 
Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 03:09:40 localhost systemd-rc-local-generator[68114]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 03:09:40 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 03:09:40 localhost systemd[1]: Queuing reload/restart jobs for marked units… Feb 20 03:09:41 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Feb 20 03:09:41 localhost systemd[1]: Finished man-db-cache-update.service. Feb 20 03:09:41 localhost systemd[1]: run-r8763da848661491ea5772138a55a02d1.service: Deactivated successfully. Feb 20 03:09:41 localhost puppet-user[67738]: Notice: /Stage[main]/Snmp/Package[snmpd]/ensure: created Feb 20 03:09:41 localhost puppet-user[67738]: Notice: /Stage[main]/Snmp/File[snmpd.conf]/content: content changed '{sha256}2b743f970e80e2150759bfc66f2d8d0fbd8b31624f79e2991248d1a5ac57494e' to '{sha256}a4260eae8c8daaecc8ae6ea2e59a1061cb4335c7c5fe84459f587214c8fd69f8' Feb 20 03:09:41 localhost puppet-user[67738]: Notice: /Stage[main]/Snmp/File[snmpd.sysconfig]/content: content changed '{sha256}b63afb2dee7419b6834471f88581d981c8ae5c8b27b9d329ba67a02f3ddd8221' to '{sha256}3917ee8bbc680ad50d77186ad4a1d2705c2025c32fc32f823abbda7f2328dfbd' Feb 20 03:09:41 localhost puppet-user[67738]: Notice: /Stage[main]/Snmp/File[snmptrapd.conf]/content: content changed '{sha256}2e1ca894d609ef337b6243909bf5623c87fd5df98ecbd00c7d4c12cf12f03c4e' to '{sha256}3ecf18da1ba84ea3932607f2b903ee6a038b6f9ac4e1e371e48f3ef61c5052ea' Feb 20 03:09:41 localhost puppet-user[67738]: Notice: /Stage[main]/Snmp/File[snmptrapd.sysconfig]/content: content changed '{sha256}86ee5797ad10cb1ea0f631e9dfa6ae278ecf4f4d16f4c80f831cdde45601b23c' to 
'{sha256}2244553364afcca151958f8e2003e4c182f5e2ecfbe55405cec73fd818581e97' Feb 20 03:09:41 localhost puppet-user[67738]: Notice: /Stage[main]/Snmp/Service[snmptrapd]: Triggered 'refresh' from 2 events Feb 20 03:09:43 localhost ansible-async_wrapper.py[67730]: 67731 still running (3590) Feb 20 03:09:46 localhost puppet-user[67738]: Notice: /Stage[main]/Tripleo::Profile::Base::Snmp/Snmp::Snmpv3_user[ro_snmp_user]/Exec[create-snmpv3-user-ro_snmp_user]/returns: executed successfully Feb 20 03:09:47 localhost systemd[1]: Reloading. Feb 20 03:09:47 localhost systemd-sysv-generator[69150]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 03:09:47 localhost systemd-rc-local-generator[69147]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 03:09:47 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 03:09:47 localhost systemd[1]: Starting Simple Network Management Protocol (SNMP) Daemon.... Feb 20 03:09:47 localhost snmpd[69161]: Can't find directory of RPM packages Feb 20 03:09:47 localhost snmpd[69161]: Duplicate IPv4 address detected, some interfaces may not be visible in IP-MIB Feb 20 03:09:47 localhost systemd[1]: Started Simple Network Management Protocol (SNMP) Daemon.. Feb 20 03:09:47 localhost systemd[1]: Reloading. Feb 20 03:09:47 localhost systemd-rc-local-generator[69188]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 03:09:47 localhost systemd-sysv-generator[69191]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Feb 20 03:09:47 localhost ceph-osd[31981]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 20 03:09:47 localhost ceph-osd[31981]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 4566 writes, 20K keys, 4566 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4566 writes, 473 syncs, 9.65 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 222 writes, 595 keys, 222 commit groups, 1.0 writes per commit group, ingest: 0.60 MB, 0.00 MB/s#012Interval WAL: 222 writes, 109 syncs, 2.04 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Feb 20 03:09:47 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 03:09:47 localhost systemd[1]: Reloading. Feb 20 03:09:47 localhost systemd-sysv-generator[69227]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 03:09:47 localhost systemd-rc-local-generator[69221]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 03:09:47 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 20 03:09:48 localhost puppet-user[67738]: Notice: /Stage[main]/Snmp/Service[snmpd]/ensure: ensure changed 'stopped' to 'running' Feb 20 03:09:48 localhost puppet-user[67738]: Notice: Applied catalog in 15.21 seconds Feb 20 03:09:48 localhost puppet-user[67738]: Application: Feb 20 03:09:48 localhost puppet-user[67738]: Initial environment: production Feb 20 03:09:48 localhost puppet-user[67738]: Converged environment: production Feb 20 03:09:48 localhost puppet-user[67738]: Run mode: user Feb 20 03:09:48 localhost puppet-user[67738]: Changes: Feb 20 03:09:48 localhost puppet-user[67738]: Total: 8 Feb 20 03:09:48 localhost puppet-user[67738]: Events: Feb 20 03:09:48 localhost puppet-user[67738]: Success: 8 Feb 20 03:09:48 localhost puppet-user[67738]: Total: 8 Feb 20 03:09:48 localhost puppet-user[67738]: Resources: Feb 20 03:09:48 localhost puppet-user[67738]: Restarted: 1 Feb 20 03:09:48 localhost puppet-user[67738]: Changed: 8 Feb 20 03:09:48 localhost puppet-user[67738]: Out of sync: 8 Feb 20 03:09:48 localhost puppet-user[67738]: Total: 19 Feb 20 03:09:48 localhost puppet-user[67738]: Time: Feb 20 03:09:48 localhost puppet-user[67738]: Filebucket: 0.00 Feb 20 03:09:48 localhost puppet-user[67738]: Schedule: 0.00 Feb 20 03:09:48 localhost puppet-user[67738]: Augeas: 0.01 Feb 20 03:09:48 localhost puppet-user[67738]: File: 0.08 Feb 20 03:09:48 localhost puppet-user[67738]: Config retrieval: 0.26 Feb 20 03:09:48 localhost puppet-user[67738]: Service: 1.16 Feb 20 03:09:48 localhost puppet-user[67738]: Transaction evaluation: 15.20 Feb 20 03:09:48 localhost puppet-user[67738]: Catalog application: 15.21 Feb 20 03:09:48 localhost puppet-user[67738]: Last run: 1771574988 Feb 20 03:09:48 localhost puppet-user[67738]: Exec: 5.08 Feb 20 03:09:48 localhost puppet-user[67738]: Package: 8.71 Feb 20 03:09:48 localhost puppet-user[67738]: Total: 15.21 Feb 20 03:09:48 localhost puppet-user[67738]: Version: Feb 20 03:09:48 localhost puppet-user[67738]: Config: 1771574972 
Feb 20 03:09:48 localhost puppet-user[67738]: Puppet: 7.10.0 Feb 20 03:09:48 localhost ansible-async_wrapper.py[67731]: Module complete (67731) Feb 20 03:09:48 localhost ansible-async_wrapper.py[67730]: Done in kid B. Feb 20 03:09:49 localhost python3[69251]: ansible-ansible.legacy.async_status Invoked with jid=349055278898.67727 mode=status _async_dir=/tmp/.ansible_async Feb 20 03:09:50 localhost python3[69267]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Feb 20 03:09:50 localhost python3[69283]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 20 03:09:51 localhost python3[69333]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 03:09:51 localhost python3[69351]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmpydfiisjo recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Feb 20 03:09:51 localhost ceph-osd[32921]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 20 03:09:51 localhost ceph-osd[32921]: rocksdb: 
[db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 5009 writes, 22K keys, 5009 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5009 writes, 566 syncs, 8.85 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 190 writes, 561 keys, 190 commit groups, 1.0 writes per commit group, ingest: 0.56 MB, 0.00 MB/s#012Interval WAL: 190 writes, 94 syncs, 2.02 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Feb 20 03:09:52 localhost python3[69381]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:09:53 localhost python3[69484]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None Feb 20 03:09:53 localhost python3[69503]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None 
modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:09:54 localhost python3[69535]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 20 03:09:55 localhost python3[69585]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 03:09:55 localhost python3[69603]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:09:56 localhost python3[69665]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 03:09:56 localhost python3[69683]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:09:57 localhost python3[69745]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service 
follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 03:09:57 localhost python3[69763]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:09:57 localhost python3[69825]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 03:09:58 localhost python3[69843]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:09:58 localhost python3[69873]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 03:09:58 localhost systemd[1]: Reloading. Feb 20 03:09:58 localhost systemd-rc-local-generator[69902]: /etc/rc.d/rc.local is not marked executable, skipping. 
Feb 20 03:09:58 localhost systemd-sysv-generator[69905]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 03:09:58 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 03:09:59 localhost python3[69960]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 03:09:59 localhost python3[69978]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:09:59 localhost sshd[69990]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:10:00 localhost python3[70042]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 03:10:00 localhost sshd[70061]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:10:00 localhost python3[70060]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset 
force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:10:01 localhost python3[70092]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 03:10:01 localhost systemd[1]: Reloading. Feb 20 03:10:01 localhost systemd-rc-local-generator[70146]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 03:10:01 localhost systemd-sysv-generator[70149]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 03:10:01 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 03:10:01 localhost systemd[1]: Starting Create netns directory... Feb 20 03:10:01 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Feb 20 03:10:01 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Feb 20 03:10:01 localhost systemd[1]: Finished Create netns directory. 
Feb 20 03:10:02 localhost python3[70196]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6 Feb 20 03:10:03 localhost podman[70377]: Feb 20 03:10:03 localhost podman[70377]: 2026-02-20 08:10:03.421462397 +0000 UTC m=+0.054470510 container create b5898e8c9829ad6361b57431f3624ce9ed4a7f414c95fa6e2495dc89a3103919 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_shtern, ceph=True, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, RELEASE=main, io.buildah.version=1.42.2, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, release=1770267347, GIT_CLEAN=True, com.redhat.component=rhceph-container) Feb 20 03:10:03 localhost systemd[1]: Started libpod-conmon-b5898e8c9829ad6361b57431f3624ce9ed4a7f414c95fa6e2495dc89a3103919.scope. Feb 20 03:10:03 localhost systemd[1]: Started libcrun container. 
Feb 20 03:10:03 localhost podman[70377]: 2026-02-20 08:10:03.485596108 +0000 UTC m=+0.118604211 container init b5898e8c9829ad6361b57431f3624ce9ed4a7f414c95fa6e2495dc89a3103919 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_shtern, version=7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, build-date=2026-02-09T10:25:24Z, release=1770267347, maintainer=Guillaume Abrioux , RELEASE=main, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.buildah.version=1.42.2) Feb 20 03:10:03 localhost podman[70377]: 2026-02-20 08:10:03.495986439 +0000 UTC m=+0.128994512 container start b5898e8c9829ad6361b57431f3624ce9ed4a7f414c95fa6e2495dc89a3103919 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_shtern, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, name=rhceph, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, 
build-date=2026-02-09T10:25:24Z, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.buildah.version=1.42.2, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., vcs-type=git, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, release=1770267347) Feb 20 03:10:03 localhost podman[70377]: 2026-02-20 08:10:03.496175994 +0000 UTC m=+0.129184077 container attach b5898e8c9829ad6361b57431f3624ce9ed4a7f414c95fa6e2495dc89a3103919 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_shtern, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, vendor=Red Hat, Inc., release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, name=rhceph, GIT_CLEAN=True, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, url=https://catalog.redhat.com/en/search?searchType=containers, 
architecture=x86_64, CEPH_POINT_RELEASE=, distribution-scope=public, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, vcs-type=git, version=7, io.k8s.description=Red Hat Ceph Storage 7) Feb 20 03:10:03 localhost podman[70377]: 2026-02-20 08:10:03.397615996 +0000 UTC m=+0.030624129 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 03:10:03 localhost brave_shtern[70391]: 167 167 Feb 20 03:10:03 localhost systemd[1]: libpod-b5898e8c9829ad6361b57431f3624ce9ed4a7f414c95fa6e2495dc89a3103919.scope: Deactivated successfully. Feb 20 03:10:03 localhost podman[70377]: 2026-02-20 08:10:03.499855229 +0000 UTC m=+0.132863352 container died b5898e8c9829ad6361b57431f3624ce9ed4a7f414c95fa6e2495dc89a3103919 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_shtern, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, io.buildah.version=1.42.2, vcs-type=git, build-date=2026-02-09T10:25:24Z, GIT_BRANCH=main, GIT_CLEAN=True, distribution-scope=public, version=7, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, RELEASE=main, vendor=Red Hat, Inc., name=rhceph, ceph=True) Feb 20 03:10:03 localhost podman[70396]: 2026-02-20 
08:10:03.588308574 +0000 UTC m=+0.079430350 container remove b5898e8c9829ad6361b57431f3624ce9ed4a7f414c95fa6e2495dc89a3103919 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_shtern, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, build-date=2026-02-09T10:25:24Z, architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., release=1770267347, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, RELEASE=main, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git) Feb 20 03:10:03 localhost systemd[1]: libpod-conmon-b5898e8c9829ad6361b57431f3624ce9ed4a7f414c95fa6e2495dc89a3103919.scope: Deactivated successfully. Feb 20 03:10:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. 
Feb 20 03:10:03 localhost podman[70428]: Feb 20 03:10:03 localhost podman[70428]: 2026-02-20 08:10:03.767921444 +0000 UTC m=+0.072781277 container create ea59110ef1db7d28138ef6a3dba4163a7a489e4144aa8d20cd4a44acd93f0653 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=competent_germain, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, com.redhat.component=rhceph-container, version=7, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.42.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, architecture=x86_64, ceph=True, release=1770267347) Feb 20 03:10:03 localhost systemd[1]: Started libpod-conmon-ea59110ef1db7d28138ef6a3dba4163a7a489e4144aa8d20cd4a44acd93f0653.scope. Feb 20 03:10:03 localhost systemd[1]: Started libcrun container. 
Feb 20 03:10:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c964e33ff3bc604745b37cd19e24a1e6d21589a689d0d8ada23ab4cf1edc754/merged/rootfs supports timestamps until 2038 (0x7fffffff) Feb 20 03:10:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c964e33ff3bc604745b37cd19e24a1e6d21589a689d0d8ada23ab4cf1edc754/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Feb 20 03:10:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c964e33ff3bc604745b37cd19e24a1e6d21589a689d0d8ada23ab4cf1edc754/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Feb 20 03:10:03 localhost podman[70428]: 2026-02-20 08:10:03.823532994 +0000 UTC m=+0.128392797 container init ea59110ef1db7d28138ef6a3dba4163a7a489e4144aa8d20cd4a44acd93f0653 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=competent_germain, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, vcs-type=git, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , RELEASE=main, ceph=True, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.42.2, GIT_BRANCH=main, GIT_CLEAN=True, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, release=1770267347, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, 
name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7) Feb 20 03:10:03 localhost podman[70428]: 2026-02-20 08:10:03.829476968 +0000 UTC m=+0.134336771 container start ea59110ef1db7d28138ef6a3dba4163a7a489e4144aa8d20cd4a44acd93f0653 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=competent_germain, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2026-02-09T10:25:24Z, name=rhceph, io.buildah.version=1.42.2, com.redhat.component=rhceph-container, architecture=x86_64, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, vendor=Red Hat, Inc., ceph=True, io.openshift.tags=rhceph ceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, io.openshift.expose-services=, version=7, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Feb 20 03:10:03 localhost podman[70428]: 2026-02-20 08:10:03.829587781 +0000 UTC m=+0.134447584 container attach ea59110ef1db7d28138ef6a3dba4163a7a489e4144aa8d20cd4a44acd93f0653 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=competent_germain, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, ceph=True, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , build-date=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, name=rhceph, io.buildah.version=1.42.2, GIT_BRANCH=main, RELEASE=main, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, version=7, io.openshift.tags=rhceph ceph) Feb 20 03:10:03 localhost podman[70445]: 2026-02-20 08:10:03.832424375 +0000 UTC m=+0.074546534 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, container_name=metrics_qdr, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, 
io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, 
architecture=x86_64, url=https://www.redhat.com, io.openshift.expose-services=) Feb 20 03:10:03 localhost podman[70428]: 2026-02-20 08:10:03.737410109 +0000 UTC m=+0.042269972 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 03:10:03 localhost python3[70446]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step4 config_dir=/var/lib/tripleo-config/container-startup-config/step_4 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False Feb 20 03:10:03 localhost podman[70445]: 2026-02-20 08:10:03.993752299 +0000 UTC m=+0.235874448 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 
'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step1, tcib_managed=true, architecture=x86_64, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.created=2026-01-12T22:10:14Z, container_name=metrics_qdr) Feb 20 03:10:04 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. 
Feb 20 03:10:04 localhost podman[70643]: 2026-02-20 08:10:04.210100986 +0000 UTC m=+0.060060056 container create e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:07:30Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, io.openshift.expose-services=, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., release=1766032510) Feb 20 03:10:04 localhost podman[70646]: 2026-02-20 08:10:04.234464151 +0000 UTC m=+0.080901999 container create 87de2261dbed224238838063fbdef22aabe2e5758eef148396a56ae28f95c9d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, version=17.1.13, url=https://www.redhat.com, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.created=2026-01-12T22:36:40Z, container_name=configure_cms_options, io.openshift.expose-services=, vcs-type=git, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1766032510, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:36:40Z, 
batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 20 03:10:04 localhost systemd[1]: Started libpod-conmon-e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.scope. Feb 20 03:10:04 localhost systemd[1]: Started libcrun container. Feb 20 03:10:04 localhost podman[70657]: 2026-02-20 08:10:04.257007949 +0000 UTC m=+0.097113412 container create df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, release=1766032510, vendor=Red Hat, Inc., version=17.1.13, build-date=2026-01-12T22:10:15Z, url=https://www.redhat.com, io.buildah.version=1.41.5, container_name=logrotate_crond, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, name=rhosp-rhel9/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 
'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, architecture=x86_64) Feb 20 03:10:04 localhost systemd[1]: Started libpod-conmon-87de2261dbed224238838063fbdef22aabe2e5758eef148396a56ae28f95c9d5.scope. Feb 20 03:10:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/635be08eaf6b0b44e0a79953079e91b6976bdd54582ce4f8e556143cd0e1a390/merged/var/log/ceilometer supports timestamps until 2038 (0x7fffffff) Feb 20 03:10:04 localhost systemd[1]: Started libcrun container. 
Feb 20 03:10:04 localhost podman[70676]: 2026-02-20 08:10:04.274116144 +0000 UTC m=+0.101849085 container create 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, batch=17.1_20260112.1, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, 
tcib_managed=true, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.5, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, url=https://www.redhat.com, build-date=2026-01-12T23:07:47Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team) Feb 20 03:10:04 localhost podman[70643]: 2026-02-20 08:10:04.177747273 +0000 UTC m=+0.027706373 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 Feb 20 03:10:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. 
Feb 20 03:10:04 localhost podman[70643]: 2026-02-20 08:10:04.279958166 +0000 UTC m=+0.129917256 container init e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:07:30Z, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, batch=17.1_20260112.1, config_id=tripleo_step4, url=https://www.redhat.com, build-date=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, architecture=x86_64, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, vcs-type=git) Feb 20 03:10:04 localhost podman[70646]: 2026-02-20 08:10:04.280333256 +0000 UTC m=+0.126771094 container init 87de2261dbed224238838063fbdef22aabe2e5758eef148396a56ae28f95c9d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:36:40Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, container_name=configure_cms_options, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, url=https://www.redhat.com, io.buildah.version=1.41.5, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ovn-controller, version=17.1.13) Feb 20 03:10:04 localhost podman[70691]: 2026-02-20 08:10:04.285399968 +0000 UTC m=+0.092026339 container create 9a4db6912ae4e51dd839860c18ddbd531a5d6935cee2e97b37faf648a522a106 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-libvirt, 
distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, tcib_managed=true, url=https://www.redhat.com, container_name=nova_libvirt_init_secret, release=1766032510, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:31:49Z, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:31:49Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-libvirt-container, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, config_id=tripleo_step4, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 20 03:10:04 localhost podman[70646]: 2026-02-20 08:10:04.18760023 +0000 UTC m=+0.034038078 image pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Feb 20 03:10:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:10:04 localhost systemd[1]: Started libpod-conmon-9a4db6912ae4e51dd839860c18ddbd531a5d6935cee2e97b37faf648a522a106.scope. Feb 20 03:10:04 localhost podman[70676]: 2026-02-20 08:10:04.206851462 +0000 UTC m=+0.034584403 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 Feb 20 03:10:04 localhost podman[70643]: 2026-02-20 08:10:04.310161674 +0000 UTC m=+0.160120734 container start e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, io.openshift.expose-services=, url=https://www.redhat.com, architecture=x86_64, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, version=17.1.13, release=1766032510, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, build-date=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5) Feb 20 03:10:04 localhost python3[70446]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ceilometer_agent_ipmi --conmon-pidfile /run/ceilometer_agent_ipmi.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=8cdce88e823976bbaa6aae3526d6d0ab --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=ceilometer_agent_ipmi --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ceilometer_agent_ipmi.log --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/ceilometer:/var/log/ceilometer:z registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 Feb 20 03:10:04 localhost systemd[1]: Started libcrun container. 
Feb 20 03:10:04 localhost podman[70657]: 2026-02-20 08:10:04.223198178 +0000 UTC m=+0.063303651 image pull registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 Feb 20 03:10:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8868cac218f491d9c5355b2652ed5c5592a7b8258366a462c07a0c48d3255bb/merged/etc/nova supports timestamps until 2038 (0x7fffffff) Feb 20 03:10:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8868cac218f491d9c5355b2652ed5c5592a7b8258366a462c07a0c48d3255bb/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Feb 20 03:10:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8868cac218f491d9c5355b2652ed5c5592a7b8258366a462c07a0c48d3255bb/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Feb 20 03:10:04 localhost podman[70691]: 2026-02-20 08:10:04.224084511 +0000 UTC m=+0.030710912 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Feb 20 03:10:04 localhost podman[70691]: 2026-02-20 08:10:04.334706322 +0000 UTC m=+0.141332703 container init 9a4db6912ae4e51dd839860c18ddbd531a5d6935cee2e97b37faf648a522a106 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, build-date=2026-01-12T23:31:49Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, url=https://www.redhat.com, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:31:49Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, managed_by=tripleo_ansible, container_name=nova_libvirt_init_secret, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., distribution-scope=public, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, vcs-type=git, com.redhat.component=openstack-nova-libvirt-container, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, version=17.1.13, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:10:04 localhost podman[70646]: 2026-02-20 08:10:04.339648061 +0000 UTC m=+0.186085919 
container start 87de2261dbed224238838063fbdef22aabe2e5758eef148396a56ae28f95c9d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, distribution-scope=public, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:36:40Z, maintainer=OpenStack TripleO Team, release=1766032510, url=https://www.redhat.com, container_name=configure_cms_options, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.5, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ovn-controller, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_id=tripleo_step4, build-date=2026-01-12T22:36:40Z, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 20 03:10:04 localhost systemd[1]: Started libpod-conmon-df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.scope. 
Feb 20 03:10:04 localhost podman[70646]: 2026-02-20 08:10:04.340770151 +0000 UTC m=+0.187208009 container attach 87de2261dbed224238838063fbdef22aabe2e5758eef148396a56ae28f95c9d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, io.openshift.expose-services=, version=17.1.13, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=configure_cms_options, build-date=2026-01-12T22:36:40Z, architecture=x86_64, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.5, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, vendor=Red Hat, Inc., release=1766032510, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp-rhel9/openstack-ovn-controller, config_id=tripleo_step4, tcib_managed=true) Feb 20 03:10:04 localhost podman[70691]: 2026-02-20 08:10:04.347464895 +0000 UTC m=+0.154091286 container start 9a4db6912ae4e51dd839860c18ddbd531a5d6935cee2e97b37faf648a522a106 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, container_name=nova_libvirt_init_secret, release=1766032510, distribution-scope=public, config_id=tripleo_step4, io.buildah.version=1.41.5, tcib_managed=true, version=17.1.13, architecture=x86_64, vcs-type=git, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': 
{'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, com.redhat.component=openstack-nova-libvirt-container, build-date=2026-01-12T23:31:49Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:10:04 localhost podman[70691]: 2026-02-20 08:10:04.348966774 +0000 UTC m=+0.155593185 container attach 
9a4db6912ae4e51dd839860c18ddbd531a5d6935cee2e97b37faf648a522a106 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-libvirt-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, container_name=nova_libvirt_init_secret, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, tcib_managed=true, 
distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-libvirt, version=17.1.13, architecture=x86_64, build-date=2026-01-12T23:31:49Z, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 20 03:10:04 localhost systemd[1]: Started libcrun container. Feb 20 03:10:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a37e8585e5e1c303b8917b31d26f97b6e36b93752231bf9ec3eb7d712b5a3738/merged/var/log/containers supports timestamps until 2038 (0x7fffffff) Feb 20 03:10:04 localhost ovs-vsctl[71243]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . 
external_ids ovn-cms-options Feb 20 03:10:04 localhost podman[71041]: 2026-02-20 08:10:04.377417646 +0000 UTC m=+0.064633645 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=starting, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, build-date=2026-01-12T23:07:30Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-ipmi, 
vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, container_name=ceilometer_agent_ipmi, version=17.1.13, architecture=x86_64, release=1766032510, config_id=tripleo_step4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc.) Feb 20 03:10:04 localhost systemd[1]: libpod-87de2261dbed224238838063fbdef22aabe2e5758eef148396a56ae28f95c9d5.scope: Deactivated successfully. Feb 20 03:10:04 localhost podman[70646]: 2026-02-20 08:10:04.38062799 +0000 UTC m=+0.227065848 container died 87de2261dbed224238838063fbdef22aabe2e5758eef148396a56ae28f95c9d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, release=1766032510, config_id=tripleo_step4, build-date=2026-01-12T22:36:40Z, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, batch=17.1_20260112.1, vendor=Red Hat, Inc., container_name=configure_cms_options, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, version=17.1.13, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, name=rhosp-rhel9/openstack-ovn-controller) Feb 20 03:10:04 localhost podman[71041]: 2026-02-20 08:10:04.392599751 +0000 UTC m=+0.079815740 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, org.opencontainers.image.created=2026-01-12T23:07:30Z, version=17.1.13, tcib_managed=true, build-date=2026-01-12T23:07:30Z, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 ceilometer-ipmi, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 20 03:10:04 localhost podman[71041]: unhealthy Feb 20 03:10:04 localhost systemd[1]: Started libpod-conmon-8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.scope. Feb 20 03:10:04 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:10:04 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Failed with result 'exit-code'. Feb 20 03:10:04 localhost systemd[1]: Started libcrun container. Feb 20 03:10:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f49cf732328da45c284af0adbec71fb49c656a125ef86cd2b520f315e6df9bd/merged/var/log/ceilometer supports timestamps until 2038 (0x7fffffff) Feb 20 03:10:04 localhost systemd[1]: var-lib-containers-storage-overlay-3f17ede6344bdea61843d05d5292eb161b20cc2c8cd9836bb6636179d5855b0a-merged.mount: Deactivated successfully. Feb 20 03:10:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. 
Feb 20 03:10:04 localhost podman[70657]: 2026-02-20 08:10:04.437526712 +0000 UTC m=+0.277632205 container init df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, com.redhat.component=openstack-cron-container, vcs-type=git, container_name=logrotate_crond, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, vendor=Red Hat, Inc., 
tcib_managed=true, config_id=tripleo_step4, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:10:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:10:04 localhost systemd[1]: var-lib-containers-storage-overlay-bf94b6a6e9522c9fc73100c79a12d1e689f4ed03546c3da10b653f25c168420d-merged.mount: Deactivated successfully. Feb 20 03:10:04 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-87de2261dbed224238838063fbdef22aabe2e5758eef148396a56ae28f95c9d5-userdata-shm.mount: Deactivated successfully. Feb 20 03:10:04 localhost systemd[1]: libpod-9a4db6912ae4e51dd839860c18ddbd531a5d6935cee2e97b37faf648a522a106.scope: Deactivated successfully. 
Feb 20 03:10:04 localhost podman[70657]: 2026-02-20 08:10:04.472825882 +0000 UTC m=+0.312931335 container start df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-cron, config_id=tripleo_step4, io.buildah.version=1.41.5, url=https://www.redhat.com, version=17.1.13) Feb 20 03:10:04 localhost python3[70446]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name logrotate_crond --conmon-pidfile /run/logrotate_crond.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=53ed83bb0cae779ff95edb2002262c6f --healthcheck-command /usr/share/openstack-tripleo-common/healthcheck/cron --label config_id=tripleo_step4 --label container_name=logrotate_crond --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/logrotate_crond.log --network none --pid host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro --volume /var/log/containers:/var/log/containers:z registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 Feb 20 03:10:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. 
Feb 20 03:10:04 localhost podman[71313]: 2026-02-20 08:10:04.495473912 +0000 UTC m=+0.102902932 container cleanup 87de2261dbed224238838063fbdef22aabe2e5758eef148396a56ae28f95c9d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, build-date=2026-01-12T22:36:40Z, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, container_name=configure_cms_options, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ovn-controller, version=17.1.13, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, architecture=x86_64, vendor=Red Hat, Inc.) Feb 20 03:10:04 localhost podman[70676]: 2026-02-20 08:10:04.49844846 +0000 UTC m=+0.326181411 container init 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, config_id=tripleo_step4, build-date=2026-01-12T23:07:47Z, url=https://www.redhat.com, container_name=ceilometer_agent_compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, io.buildah.version=1.41.5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.13, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git) Feb 20 03:10:04 localhost systemd[1]: libpod-conmon-87de2261dbed224238838063fbdef22aabe2e5758eef148396a56ae28f95c9d5.scope: Deactivated successfully. 
Feb 20 03:10:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:10:04 localhost podman[70691]: 2026-02-20 08:10:04.529639862 +0000 UTC m=+0.336266233 container died 9a4db6912ae4e51dd839860c18ddbd531a5d6935cee2e97b37faf648a522a106 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:31:49Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.13, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, url=https://www.redhat.com, config_id=tripleo_step4, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:31:49Z, io.buildah.version=1.41.5, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.component=openstack-nova-libvirt-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, container_name=nova_libvirt_init_secret, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': 
['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20260112.1) Feb 20 03:10:04 localhost python3[70446]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name configure_cms_options --conmon-pidfile /run/configure_cms_options.pid --detach=False --env TRIPLEO_DEPLOY_IDENTIFIER=1771573229 --label config_id=tripleo_step4 --label container_name=configure_cms_options --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/configure_cms_options.log --network host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 /bin/bash -c CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi Feb 20 03:10:04 localhost podman[70676]: 2026-02-20 08:10:04.573783313 +0000 UTC m=+0.401516244 container start 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:07:47Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1766032510, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., distribution-scope=public, build-date=2026-01-12T23:07:47Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, name=rhosp-rhel9/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container) Feb 20 03:10:04 localhost python3[70446]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=8cdce88e823976bbaa6aae3526d6d0ab --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=ceilometer_agent_compute --label managed_by=tripleo_ansible --label config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ceilometer_agent_compute.log --network host --privileged=False --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/log/containers/ceilometer:/var/log/ceilometer:z registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 Feb 20 03:10:04 localhost podman[71593]: 2026-02-20 08:10:04.601172025 +0000 UTC m=+0.130274564 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=starting, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, 
Inc., io.openshift.expose-services=, vcs-type=git, config_id=tripleo_step4, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, io.buildah.version=1.41.5, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.13) Feb 20 
03:10:04 localhost podman[71593]: 2026-02-20 08:10:04.604536564 +0000 UTC m=+0.133639093 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, vcs-type=git, container_name=logrotate_crond, config_id=tripleo_step4, release=1766032510, version=17.1.13, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:10:04 localhost podman[71675]: 2026-02-20 08:10:04.617287316 +0000 UTC m=+0.133971972 container cleanup 9a4db6912ae4e51dd839860c18ddbd531a5d6935cee2e97b37faf648a522a106 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2026-01-12T23:31:49Z, com.redhat.component=openstack-nova-libvirt-container, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=nova_libvirt_init_secret, config_id=tripleo_step4, url=https://www.redhat.com, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, version=17.1.13, vendor=Red Hat, Inc., batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-libvirt, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, maintainer=OpenStack TripleO 
Team, vcs-type=git, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:31:49Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt) Feb 20 03:10:04 localhost systemd[1]: libpod-conmon-9a4db6912ae4e51dd839860c18ddbd531a5d6935cee2e97b37faf648a522a106.scope: Deactivated successfully. 
Feb 20 03:10:04 localhost python3[70446]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_libvirt_init_secret --cgroupns=host --conmon-pidfile /run/nova_libvirt_init_secret.pid --detach=False --env LIBVIRT_DEFAULT_URI=qemu:///system --env TRIPLEO_CONFIG_HASH=ca9e756af36a4b8ed088db0b68d5c381 --label config_id=tripleo_step4 --label container_name=nova_libvirt_init_secret --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_libvirt_init_secret.log --network host --privileged=False --security-opt label=disable --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume 
/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova --volume /etc/libvirt:/etc/libvirt --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro --volume /var/lib/tripleo-config/ceph:/etc/ceph:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 /nova_libvirt_init_secret.sh ceph:openstack Feb 20 03:10:04 localhost podman[71784]: 2026-02-20 08:10:04.598835365 +0000 UTC m=+0.076503905 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=starting, org.opencontainers.image.created=2026-01-12T23:07:47Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2026-01-12T23:07:47Z, io.openshift.expose-services=, vcs-type=git, version=17.1.13, architecture=x86_64, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
io.buildah.version=1.41.5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 20 03:10:04 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. 
Feb 20 03:10:04 localhost podman[71784]: 2026-02-20 08:10:04.680596236 +0000 UTC m=+0.158264796 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, release=1766032510, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, batch=17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, build-date=2026-01-12T23:07:47Z, name=rhosp-rhel9/openstack-ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 20 03:10:04 localhost podman[71784]: unhealthy Feb 20 03:10:04 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:10:04 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Failed with result 'exit-code'. 
Feb 20 03:10:04 localhost competent_germain[70458]: [ Feb 20 03:10:04 localhost competent_germain[70458]: { Feb 20 03:10:04 localhost competent_germain[70458]: "available": false, Feb 20 03:10:04 localhost competent_germain[70458]: "ceph_device": false, Feb 20 03:10:04 localhost competent_germain[70458]: "device_id": "QEMU_DVD-ROM_QM00001", Feb 20 03:10:04 localhost competent_germain[70458]: "lsm_data": {}, Feb 20 03:10:04 localhost competent_germain[70458]: "lvs": [], Feb 20 03:10:04 localhost competent_germain[70458]: "path": "/dev/sr0", Feb 20 03:10:04 localhost competent_germain[70458]: "rejected_reasons": [ Feb 20 03:10:04 localhost competent_germain[70458]: "Insufficient space (<5GB)", Feb 20 03:10:04 localhost competent_germain[70458]: "Has a FileSystem" Feb 20 03:10:04 localhost competent_germain[70458]: ], Feb 20 03:10:04 localhost competent_germain[70458]: "sys_api": { Feb 20 03:10:04 localhost competent_germain[70458]: "actuators": null, Feb 20 03:10:04 localhost competent_germain[70458]: "device_nodes": "sr0", Feb 20 03:10:04 localhost competent_germain[70458]: "human_readable_size": "482.00 KB", Feb 20 03:10:04 localhost competent_germain[70458]: "id_bus": "ata", Feb 20 03:10:04 localhost competent_germain[70458]: "model": "QEMU DVD-ROM", Feb 20 03:10:04 localhost competent_germain[70458]: "nr_requests": "2", Feb 20 03:10:04 localhost competent_germain[70458]: "partitions": {}, Feb 20 03:10:04 localhost competent_germain[70458]: "path": "/dev/sr0", Feb 20 03:10:04 localhost competent_germain[70458]: "removable": "1", Feb 20 03:10:04 localhost competent_germain[70458]: "rev": "2.5+", Feb 20 03:10:04 localhost competent_germain[70458]: "ro": "0", Feb 20 03:10:04 localhost competent_germain[70458]: "rotational": "1", Feb 20 03:10:04 localhost competent_germain[70458]: "sas_address": "", Feb 20 03:10:04 localhost competent_germain[70458]: "sas_device_handle": "", Feb 20 03:10:04 localhost competent_germain[70458]: "scheduler_mode": "mq-deadline", Feb 20 
03:10:04 localhost competent_germain[70458]: "sectors": 0, Feb 20 03:10:04 localhost competent_germain[70458]: "sectorsize": "2048", Feb 20 03:10:04 localhost competent_germain[70458]: "size": 493568.0, Feb 20 03:10:04 localhost competent_germain[70458]: "support_discard": "0", Feb 20 03:10:04 localhost competent_germain[70458]: "type": "disk", Feb 20 03:10:04 localhost competent_germain[70458]: "vendor": "QEMU" Feb 20 03:10:04 localhost competent_germain[70458]: } Feb 20 03:10:04 localhost competent_germain[70458]: } Feb 20 03:10:04 localhost competent_germain[70458]: ] Feb 20 03:10:04 localhost systemd[1]: libpod-ea59110ef1db7d28138ef6a3dba4163a7a489e4144aa8d20cd4a44acd93f0653.scope: Deactivated successfully. Feb 20 03:10:04 localhost podman[70428]: 2026-02-20 08:10:04.751328799 +0000 UTC m=+1.056188602 container died ea59110ef1db7d28138ef6a3dba4163a7a489e4144aa8d20cd4a44acd93f0653 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=competent_germain, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, architecture=x86_64, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, build-date=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, release=1770267347, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, RELEASE=main, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public) Feb 20 03:10:04 localhost podman[72652]: 2026-02-20 08:10:04.84658181 +0000 UTC m=+0.056558964 container create cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_migration_target, tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20260112.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, maintainer=OpenStack TripleO Team) Feb 20 03:10:04 localhost podman[72641]: 2026-02-20 08:10:04.883868202 +0000 UTC m=+0.118163750 container remove ea59110ef1db7d28138ef6a3dba4163a7a489e4144aa8d20cd4a44acd93f0653 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=competent_germain, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, name=rhceph, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1770267347, 
io.buildah.version=1.42.2, ceph=True, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7) Feb 20 03:10:04 localhost systemd[1]: Started libpod-conmon-cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.scope. Feb 20 03:10:04 localhost systemd[1]: libpod-conmon-ea59110ef1db7d28138ef6a3dba4163a7a489e4144aa8d20cd4a44acd93f0653.scope: Deactivated successfully. Feb 20 03:10:04 localhost systemd[1]: Started libcrun container. Feb 20 03:10:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46380c3a391236151e6a3f7a86710e438476b0174e52a2400f0bf907fb1c5d80/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Feb 20 03:10:04 localhost podman[72652]: 2026-02-20 08:10:04.819715871 +0000 UTC m=+0.029693035 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Feb 20 03:10:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. 
Feb 20 03:10:04 localhost podman[72652]: 2026-02-20 08:10:04.93140274 +0000 UTC m=+0.141379904 container init cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, container_name=nova_migration_target, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, release=1766032510, version=17.1.13, config_id=tripleo_step4, vendor=Red Hat, Inc., batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, vcs-type=git, build-date=2026-01-12T23:32:04Z, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Feb 20 03:10:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:10:04 localhost podman[72652]: 2026-02-20 08:10:04.961957377 +0000 UTC m=+0.171934551 container start cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.13, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, container_name=nova_migration_target, org.opencontainers.image.created=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, build-date=2026-01-12T23:32:04Z, url=https://www.redhat.com, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-compute-container, release=1766032510) Feb 20 03:10:04 localhost python3[70446]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_migration_target --conmon-pidfile /run/nova_migration_target.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=ca9e756af36a4b8ed088db0b68d5c381 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=nova_migration_target --label managed_by=tripleo_ansible --label config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_migration_target.log --network host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /etc/ssh:/host-ssh:ro --volume /run/libvirt:/run/libvirt:shared,z --volume 
/var/lib/nova:/var/lib/nova:shared registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Feb 20 03:10:04 localhost podman[72683]: 2026-02-20 08:10:04.980947482 +0000 UTC m=+0.123625343 container create 38c2cd2302b50f8747f804fa985829bfc2b30d46ebadbfe33a8131daf6bb6233 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=setup_ovs_manager, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:56:19Z, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.buildah.version=1.41.5, distribution-scope=public, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, vendor=Red Hat, Inc., release=1766032510) Feb 20 03:10:05 localhost systemd[1]: Started libpod-conmon-38c2cd2302b50f8747f804fa985829bfc2b30d46ebadbfe33a8131daf6bb6233.scope. Feb 20 03:10:05 localhost systemd[1]: Started libcrun container. 
Feb 20 03:10:05 localhost podman[72683]: 2026-02-20 08:10:04.938969278 +0000 UTC m=+0.081647129 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 Feb 20 03:10:05 localhost podman[72708]: 2026-02-20 08:10:05.039922138 +0000 UTC m=+0.072045288 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=starting, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, version=17.1.13) Feb 20 03:10:05 localhost podman[72683]: 2026-02-20 08:10:05.051983672 +0000 UTC m=+0.194661503 container init 38c2cd2302b50f8747f804fa985829bfc2b30d46ebadbfe33a8131daf6bb6233 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, batch=17.1_20260112.1, build-date=2026-01-12T22:56:19Z, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, container_name=setup_ovs_manager, managed_by=tripleo_ansible, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, url=https://www.redhat.com, release=1766032510, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, maintainer=OpenStack TripleO Team) Feb 20 03:10:05 localhost podman[72683]: 2026-02-20 08:10:05.064038637 +0000 UTC m=+0.206716458 container start 38c2cd2302b50f8747f804fa985829bfc2b30d46ebadbfe33a8131daf6bb6233 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, maintainer=OpenStack TripleO Team, 
vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, config_id=tripleo_step4, build-date=2026-01-12T22:56:19Z, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=setup_ovs_manager, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.13 17.1_20260112.1, 
cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:56:19Z, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, release=1766032510, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.13, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public) Feb 20 03:10:05 localhost podman[72683]: 2026-02-20 08:10:05.06414797 +0000 UTC m=+0.206825801 container attach 38c2cd2302b50f8747f804fa985829bfc2b30d46ebadbfe33a8131daf6bb6233 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:56:19Z, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, release=1766032510, vendor=Red Hat, Inc., container_name=setup_ovs_manager, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, batch=17.1_20260112.1, version=17.1.13, tcib_managed=true, config_id=tripleo_step4, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:10:05 localhost podman[72708]: 2026-02-20 08:10:05.424810608 +0000 UTC m=+0.456933738 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 
17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step4, release=1766032510) Feb 20 03:10:05 localhost systemd[1]: 
var-lib-containers-storage-overlay-c8868cac218f491d9c5355b2652ed5c5592a7b8258366a462c07a0c48d3255bb-merged.mount: Deactivated successfully. Feb 20 03:10:05 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9a4db6912ae4e51dd839860c18ddbd531a5d6935cee2e97b37faf648a522a106-userdata-shm.mount: Deactivated successfully. Feb 20 03:10:05 localhost systemd[1]: var-lib-containers-storage-overlay-8c964e33ff3bc604745b37cd19e24a1e6d21589a689d0d8ada23ab4cf1edc754-merged.mount: Deactivated successfully. Feb 20 03:10:05 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:10:05 localhost kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure Feb 20 03:10:06 localhost sshd[72821]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:10:07 localhost sshd[72902]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:10:07 localhost ovs-vsctl[72911]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager Feb 20 03:10:08 localhost systemd[1]: libpod-38c2cd2302b50f8747f804fa985829bfc2b30d46ebadbfe33a8131daf6bb6233.scope: Deactivated successfully. Feb 20 03:10:08 localhost systemd[1]: libpod-38c2cd2302b50f8747f804fa985829bfc2b30d46ebadbfe33a8131daf6bb6233.scope: Consumed 2.865s CPU time. 
Feb 20 03:10:08 localhost podman[72912]: 2026-02-20 08:10:08.080769882 +0000 UTC m=+0.051148144 container died 38c2cd2302b50f8747f804fa985829bfc2b30d46ebadbfe33a8131daf6bb6233 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, release=1766032510, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, build-date=2026-01-12T22:56:19Z, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=setup_ovs_manager, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:56:19Z, batch=17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Feb 20 03:10:08 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-38c2cd2302b50f8747f804fa985829bfc2b30d46ebadbfe33a8131daf6bb6233-userdata-shm.mount: Deactivated successfully. Feb 20 03:10:08 localhost systemd[1]: var-lib-containers-storage-overlay-3a00b03637ce1c9d5994170186ddbc87eaa07f2bbff7f9cde79bca73a3a75a17-merged.mount: Deactivated successfully. 
Feb 20 03:10:08 localhost podman[72912]: 2026-02-20 08:10:08.123945347 +0000 UTC m=+0.094323559 container cleanup 38c2cd2302b50f8747f804fa985829bfc2b30d46ebadbfe33a8131daf6bb6233 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:56:19Z, release=1766032510, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
io.openshift.expose-services=, architecture=x86_64, batch=17.1_20260112.1, container_name=setup_ovs_manager, build-date=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, url=https://www.redhat.com, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, distribution-scope=public, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Feb 20 03:10:08 localhost systemd[1]: libpod-conmon-38c2cd2302b50f8747f804fa985829bfc2b30d46ebadbfe33a8131daf6bb6233.scope: Deactivated successfully. Feb 20 03:10:08 localhost python3[70446]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name setup_ovs_manager --conmon-pidfile /run/setup_ovs_manager.pid --detach=False --env TRIPLEO_DEPLOY_IDENTIFIER=1771573229 --label config_id=tripleo_step4 --label container_name=setup_ovs_manager --label managed_by=tripleo_ansible --label config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/setup_ovs_manager.log --network host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 /container_puppet_apply.sh 4 exec include tripleo::profile::base::neutron::ovn_metadata Feb 20 03:10:08 localhost podman[73028]: 2026-02-20 08:10:08.594781195 +0000 UTC m=+0.092050179 container create 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, summary=Red 
Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container) Feb 20 03:10:08 localhost podman[73034]: 2026-02-20 08:10:08.62757364 +0000 UTC m=+0.109210557 container create 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, 
name=ovn_metadata_agent, io.buildah.version=1.41.5, config_id=tripleo_step4, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1766032510, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.created=2026-01-12T22:56:19Z, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, tcib_managed=true) Feb 20 03:10:08 localhost podman[73028]: 2026-02-20 08:10:08.549111655 +0000 UTC m=+0.046380679 image pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Feb 20 03:10:08 localhost systemd[1]: Started libpod-conmon-5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.scope. Feb 20 03:10:08 localhost systemd[1]: Started libpod-conmon-34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.scope. Feb 20 03:10:08 localhost podman[73034]: 2026-02-20 08:10:08.562581587 +0000 UTC m=+0.044218574 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 Feb 20 03:10:08 localhost systemd[1]: Started libcrun container. 
Feb 20 03:10:08 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e301de541839455328f58c15918525438472e11d40dc3ec7722a3fc836c44350/merged/run/ovn supports timestamps until 2038 (0x7fffffff) Feb 20 03:10:08 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e301de541839455328f58c15918525438472e11d40dc3ec7722a3fc836c44350/merged/var/log/ovn supports timestamps until 2038 (0x7fffffff) Feb 20 03:10:08 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e301de541839455328f58c15918525438472e11d40dc3ec7722a3fc836c44350/merged/var/log/openvswitch supports timestamps until 2038 (0x7fffffff) Feb 20 03:10:08 localhost systemd[1]: Started libcrun container. Feb 20 03:10:08 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f3f94cec83e204d6816769a4a7f242e660c4b352d77cc80230032ff7b68a014/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 20 03:10:08 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f3f94cec83e204d6816769a4a7f242e660c4b352d77cc80230032ff7b68a014/merged/var/log/neutron supports timestamps until 2038 (0x7fffffff) Feb 20 03:10:08 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f3f94cec83e204d6816769a4a7f242e660c4b352d77cc80230032ff7b68a014/merged/etc/neutron/kill_scripts supports timestamps until 2038 (0x7fffffff) Feb 20 03:10:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. 
Feb 20 03:10:08 localhost podman[73028]: 2026-02-20 08:10:08.704184426 +0000 UTC m=+0.201453420 container init 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ovn-controller, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container) Feb 20 03:10:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:10:08 localhost podman[73034]: 2026-02-20 08:10:08.721452666 +0000 UTC m=+0.203089613 container init 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, io.openshift.expose-services=, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, managed_by=tripleo_ansible, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., architecture=x86_64, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.5, version=17.1.13, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510) Feb 20 03:10:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. 
Feb 20 03:10:08 localhost podman[73028]: 2026-02-20 08:10:08.739204699 +0000 UTC m=+0.236473693 container start 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, release=1766032510, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vcs-type=git, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, build-date=2026-01-12T22:36:40Z, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ovn-controller, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.5, io.openshift.expose-services=, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
org.opencontainers.image.created=2026-01-12T22:36:40Z, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, vendor=Red Hat, Inc.) Feb 20 03:10:08 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring. Feb 20 03:10:08 localhost systemd[1]: Created slice User Slice of UID 0. Feb 20 03:10:08 localhost systemd[1]: Starting User Runtime Directory /run/user/0... Feb 20 03:10:08 localhost python3[70446]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck 6642 --label config_id=tripleo_step4 --label container_name=ovn_controller --label managed_by=tripleo_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ovn_controller.log --network host --privileged=True --user root --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/log/containers/openvswitch:/var/log/openvswitch:z --volume /var/log/containers/openvswitch:/var/log/ovn:z 
registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Feb 20 03:10:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:10:08 localhost systemd[1]: Finished User Runtime Directory /run/user/0. Feb 20 03:10:08 localhost podman[73034]: 2026-02-20 08:10:08.786892861 +0000 UTC m=+0.268529808 container start 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, url=https://www.redhat.com, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 20 03:10:08 localhost systemd[1]: Starting User Manager for UID 0... 
Feb 20 03:10:08 localhost python3[70446]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=85da22c155c014a1a90b143a817b4401 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=ovn_metadata_agent --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ovn_metadata_agent.log --network host --pid host --privileged=True --volume 
/etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/neutron:/var/log/neutron:z --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /run/netns:/run/netns:shared --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 Feb 20 03:10:08 localhost podman[73070]: 2026-02-20 08:10:08.848099006 +0000 UTC m=+0.099164935 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=starting, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:36:40Z, version=17.1.13, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, managed_by=tripleo_ansible, container_name=ovn_controller, maintainer=OpenStack TripleO Team, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, url=https://www.redhat.com, architecture=x86_64, tcib_managed=true, name=rhosp-rhel9/openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.41.5, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., build-date=2026-01-12T22:36:40Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:10:08 localhost systemd[73091]: Queued start job for default target Main User Target. Feb 20 03:10:08 localhost systemd[73091]: Created slice User Application Slice. Feb 20 03:10:08 localhost systemd[73091]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system). Feb 20 03:10:08 localhost systemd[73091]: Started Daily Cleanup of User's Temporary Directories. Feb 20 03:10:08 localhost systemd[73091]: Reached target Paths. Feb 20 03:10:08 localhost systemd[73091]: Reached target Timers. 
Feb 20 03:10:08 localhost systemd[73091]: Starting D-Bus User Message Bus Socket... Feb 20 03:10:08 localhost systemd[73091]: Starting Create User's Volatile Files and Directories... Feb 20 03:10:08 localhost systemd[73091]: Listening on D-Bus User Message Bus Socket. Feb 20 03:10:08 localhost systemd[73091]: Reached target Sockets. Feb 20 03:10:08 localhost systemd[73091]: Finished Create User's Volatile Files and Directories. Feb 20 03:10:08 localhost systemd[73091]: Reached target Basic System. Feb 20 03:10:08 localhost systemd[73091]: Reached target Main User Target. Feb 20 03:10:08 localhost systemd[73091]: Startup finished in 126ms. Feb 20 03:10:08 localhost systemd[1]: Started User Manager for UID 0. Feb 20 03:10:08 localhost systemd[1]: Started Session c9 of User root. Feb 20 03:10:08 localhost podman[73086]: 2026-02-20 08:10:08.929777354 +0000 UTC m=+0.136358684 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=starting, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, url=https://www.redhat.com, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, version=17.1.13, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, build-date=2026-01-12T22:56:19Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:10:09 localhost podman[73086]: 2026-02-20 08:10:09.011594876 +0000 UTC m=+0.218176166 container 
exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, url=https://www.redhat.com, version=17.1.13, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., 
batch=17.1_20260112.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2026-01-12T22:56:19Z, architecture=x86_64, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:56:19Z, distribution-scope=public, release=1766032510, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 20 03:10:09 localhost systemd[1]: session-c9.scope: Deactivated successfully. Feb 20 03:10:09 localhost podman[73086]: unhealthy Feb 20 03:10:09 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:10:09 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Failed with result 'exit-code'. 
Feb 20 03:10:09 localhost podman[73070]: 2026-02-20 08:10:09.057616265 +0000 UTC m=+0.308682234 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2026-01-12T22:36:40Z, org.opencontainers.image.created=2026-01-12T22:36:40Z, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.5, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ovn-controller, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, container_name=ovn_controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1, vendor=Red Hat, Inc., release=1766032510) Feb 20 03:10:09 localhost podman[73070]: unhealthy Feb 20 03:10:09 localhost kernel: device br-int entered promiscuous mode Feb 20 03:10:09 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:10:09 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Failed with result 'exit-code'. Feb 20 03:10:09 localhost NetworkManager[5967]: [1771575009.0764] manager: (br-int): new Generic device (/org/freedesktop/NetworkManager/Devices/11) Feb 20 03:10:09 localhost systemd-udevd[73187]: Network interface NamePolicy= disabled on kernel command line. Feb 20 03:10:09 localhost python3[73207]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:10:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:10:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. 
Feb 20 03:10:09 localhost podman[73223]: 2026-02-20 08:10:09.778014446 +0000 UTC m=+0.070279822 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, version=17.1.13, architecture=x86_64, release=1766032510, tcib_managed=true, build-date=2026-01-12T22:34:43Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', 
'/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, managed_by=tripleo_ansible, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.5, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=iscsid) Feb 20 03:10:09 localhost podman[73223]: 2026-02-20 08:10:09.815750969 +0000 UTC m=+0.108016405 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-iscsid, config_id=tripleo_step3, build-date=2026-01-12T22:34:43Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://www.redhat.com, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, container_name=iscsid, version=17.1.13) Feb 20 03:10:09 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. 
Feb 20 03:10:09 localhost podman[73224]: 2026-02-20 08:10:09.837988289 +0000 UTC m=+0.129880165 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, version=17.1.13, build-date=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, io.openshift.expose-services=) Feb 20 03:10:09 localhost podman[73224]: 2026-02-20 08:10:09.875716102 +0000 UTC m=+0.167608018 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, version=17.1.13, architecture=x86_64, vcs-type=git, url=https://www.redhat.com, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, 
io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 20 03:10:09 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. 
Feb 20 03:10:09 localhost python3[73225]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:10:10 localhost python3[73277]: ansible-file Invoked with path=/etc/systemd/system/tripleo_logrotate_crond.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:10:10 localhost kernel: device genev_sys_6081 entered promiscuous mode Feb 20 03:10:10 localhost systemd-udevd[73189]: Network interface NamePolicy= disabled on kernel command line. 
Feb 20 03:10:10 localhost NetworkManager[5967]: [1771575010.2930] device (genev_sys_6081): carrier: link connected Feb 20 03:10:10 localhost NetworkManager[5967]: [1771575010.2936] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/12) Feb 20 03:10:10 localhost python3[73296]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:10:10 localhost python3[73315]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:10:11 localhost python3[73332]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:10:11 localhost python3[73349]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 20 03:10:11 localhost python3[73366]: ansible-stat Invoked with 
path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 20 03:10:11 localhost python3[73384]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_logrotate_crond_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 20 03:10:11 localhost python3[73400]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_migration_target_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 20 03:10:12 localhost python3[73416]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ovn_controller_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 20 03:10:12 localhost python3[73432]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ovn_metadata_agent_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 20 03:10:13 localhost python3[73493]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771575012.5996475-109677-113329523961075/source dest=/etc/systemd/system/tripleo_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:10:13 localhost python3[73522]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771575012.5996475-109677-113329523961075/source dest=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None 
content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:10:14 localhost python3[73551]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771575012.5996475-109677-113329523961075/source dest=/etc/systemd/system/tripleo_logrotate_crond.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:10:14 localhost python3[73580]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771575012.5996475-109677-113329523961075/source dest=/etc/systemd/system/tripleo_nova_migration_target.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:10:15 localhost python3[73610]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771575012.5996475-109677-113329523961075/source dest=/etc/systemd/system/tripleo_ovn_controller.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:10:15 localhost python3[73639]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771575012.5996475-109677-113329523961075/source dest=/etc/systemd/system/tripleo_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False 
force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:10:16 localhost python3[73655]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Feb 20 03:10:16 localhost systemd[1]: Reloading. Feb 20 03:10:16 localhost systemd-rc-local-generator[73676]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 03:10:16 localhost systemd-sysv-generator[73682]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 03:10:16 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 03:10:17 localhost python3[73706]: ansible-systemd Invoked with state=restarted name=tripleo_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 03:10:17 localhost systemd[1]: Reloading. Feb 20 03:10:17 localhost systemd-rc-local-generator[73730]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 03:10:17 localhost systemd-sysv-generator[73736]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 03:10:17 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 20 03:10:17 localhost systemd[1]: Starting ceilometer_agent_compute container... Feb 20 03:10:17 localhost tripleo-start-podman-container[73746]: Creating additional drop-in dependency for "ceilometer_agent_compute" (8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2) Feb 20 03:10:17 localhost systemd[1]: Reloading. Feb 20 03:10:17 localhost systemd-sysv-generator[73808]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 03:10:17 localhost systemd-rc-local-generator[73803]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 03:10:17 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 03:10:17 localhost systemd[1]: Started ceilometer_agent_compute container. Feb 20 03:10:18 localhost python3[73829]: ansible-systemd Invoked with state=restarted name=tripleo_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 03:10:18 localhost systemd[1]: Reloading. Feb 20 03:10:18 localhost systemd-rc-local-generator[73856]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 03:10:18 localhost systemd-sysv-generator[73861]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 03:10:18 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 03:10:18 localhost systemd[1]: Starting ceilometer_agent_ipmi container... 
Feb 20 03:10:19 localhost systemd[1]: Started ceilometer_agent_ipmi container. Feb 20 03:10:19 localhost systemd[1]: Stopping User Manager for UID 0... Feb 20 03:10:19 localhost systemd[73091]: Activating special unit Exit the Session... Feb 20 03:10:19 localhost systemd[73091]: Stopped target Main User Target. Feb 20 03:10:19 localhost systemd[73091]: Stopped target Basic System. Feb 20 03:10:19 localhost systemd[73091]: Stopped target Paths. Feb 20 03:10:19 localhost systemd[73091]: Stopped target Sockets. Feb 20 03:10:19 localhost systemd[73091]: Stopped target Timers. Feb 20 03:10:19 localhost systemd[73091]: Stopped Daily Cleanup of User's Temporary Directories. Feb 20 03:10:19 localhost systemd[73091]: Closed D-Bus User Message Bus Socket. Feb 20 03:10:19 localhost systemd[73091]: Stopped Create User's Volatile Files and Directories. Feb 20 03:10:19 localhost systemd[73091]: Removed slice User Application Slice. Feb 20 03:10:19 localhost systemd[73091]: Reached target Shutdown. Feb 20 03:10:19 localhost systemd[73091]: Finished Exit the Session. Feb 20 03:10:19 localhost systemd[73091]: Reached target Exit the Session. Feb 20 03:10:19 localhost systemd[1]: user@0.service: Deactivated successfully. Feb 20 03:10:19 localhost systemd[1]: Stopped User Manager for UID 0. Feb 20 03:10:19 localhost systemd[1]: Stopping User Runtime Directory /run/user/0... Feb 20 03:10:19 localhost systemd[1]: run-user-0.mount: Deactivated successfully. Feb 20 03:10:19 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully. Feb 20 03:10:19 localhost systemd[1]: Stopped User Runtime Directory /run/user/0. Feb 20 03:10:19 localhost systemd[1]: Removed slice User Slice of UID 0. Feb 20 03:10:19 localhost python3[73896]: ansible-systemd Invoked with state=restarted name=tripleo_logrotate_crond.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 03:10:19 localhost systemd[1]: Reloading. 
Feb 20 03:10:19 localhost systemd-sysv-generator[73926]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 03:10:19 localhost systemd-rc-local-generator[73922]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 03:10:19 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 03:10:19 localhost systemd[1]: Starting logrotate_crond container... Feb 20 03:10:20 localhost systemd[1]: Started logrotate_crond container. Feb 20 03:10:20 localhost python3[73962]: ansible-systemd Invoked with state=restarted name=tripleo_nova_migration_target.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 03:10:20 localhost systemd[1]: Reloading. Feb 20 03:10:20 localhost systemd-sysv-generator[73993]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 03:10:20 localhost systemd-rc-local-generator[73987]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 03:10:21 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 03:10:21 localhost systemd[1]: Starting nova_migration_target container... Feb 20 03:10:21 localhost systemd[1]: Started nova_migration_target container. 
Feb 20 03:10:21 localhost python3[74029]: ansible-systemd Invoked with state=restarted name=tripleo_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 03:10:21 localhost systemd[1]: Reloading. Feb 20 03:10:21 localhost systemd-rc-local-generator[74053]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 03:10:21 localhost systemd-sysv-generator[74058]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 03:10:21 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 03:10:22 localhost systemd[1]: Starting ovn_controller container... Feb 20 03:10:22 localhost sshd[74094]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:10:22 localhost tripleo-start-podman-container[74068]: Creating additional drop-in dependency for "ovn_controller" (5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367) Feb 20 03:10:22 localhost systemd[1]: Reloading. Feb 20 03:10:22 localhost systemd-sysv-generator[74129]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 03:10:22 localhost systemd-rc-local-generator[74125]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 03:10:22 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 03:10:22 localhost systemd[1]: Started ovn_controller container. 
Feb 20 03:10:23 localhost python3[74151]: ansible-systemd Invoked with state=restarted name=tripleo_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 03:10:23 localhost systemd[1]: Reloading. Feb 20 03:10:23 localhost systemd-rc-local-generator[74178]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 03:10:23 localhost systemd-sysv-generator[74181]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 03:10:23 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 03:10:23 localhost systemd[1]: Starting ovn_metadata_agent container... Feb 20 03:10:23 localhost systemd[1]: Started ovn_metadata_agent container. 
Feb 20 03:10:24 localhost python3[74232]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks4.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:10:25 localhost python3[74353]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks4.json short_hostname=np0005625202 step=4 update_config_hash_only=False Feb 20 03:10:26 localhost python3[74369]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:10:26 localhost python3[74385]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_4 config_pattern=container-puppet-*.json config_overrides={} debug=True Feb 20 03:10:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:10:34 localhost systemd[1]: tmp-crun.T8738O.mount: Deactivated successfully. 
Feb 20 03:10:34 localhost podman[74388]: 2026-02-20 08:10:34.458291737 +0000 UTC m=+0.095854129 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, build-date=2026-01-12T22:10:14Z, config_id=tripleo_step1, io.buildah.version=1.41.5, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 20 03:10:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:10:34 localhost systemd[1]: tmp-crun.VUqjEa.mount: Deactivated successfully. Feb 20 03:10:34 localhost podman[74410]: 2026-02-20 08:10:34.578656063 +0000 UTC m=+0.099406051 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=starting, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:07:30Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:07:30Z, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., release=1766032510, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, batch=17.1_20260112.1, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi) Feb 20 03:10:34 localhost podman[74410]: 2026-02-20 08:10:34.613745688 +0000 UTC m=+0.134495666 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.13, batch=17.1_20260112.1, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-type=git, release=1766032510, architecture=x86_64, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, build-date=2026-01-12T23:07:30Z, url=https://www.redhat.com, vendor=Red Hat, Inc., 
config_id=tripleo_step4, io.openshift.expose-services=, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:10:34 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. Feb 20 03:10:34 localhost podman[74388]: 2026-02-20 08:10:34.657650402 +0000 UTC m=+0.295212714 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:14Z, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-qdrouterd, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.13, release=1766032510, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible) Feb 20 03:10:34 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:10:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:10:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. 
Feb 20 03:10:34 localhost podman[74445]: 2026-02-20 08:10:34.776051637 +0000 UTC m=+0.082981054 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.expose-services=, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp-rhel9/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, architecture=x86_64, com.redhat.component=openstack-cron-container, tcib_managed=true, config_id=tripleo_step4, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, managed_by=tripleo_ansible, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:10:34 localhost podman[74445]: 2026-02-20 08:10:34.788769648 +0000 UTC m=+0.095699055 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=openstack-cron-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step4, name=rhosp-rhel9/openstack-cron, batch=17.1_20260112.1, version=17.1.13, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, container_name=logrotate_crond, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z) Feb 20 03:10:34 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. 
Feb 20 03:10:34 localhost podman[74462]: 2026-02-20 08:10:34.864715007 +0000 UTC m=+0.078961509 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=starting, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:07:47Z, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:47Z, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-type=git, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=) Feb 20 03:10:34 localhost podman[74462]: 2026-02-20 08:10:34.91664348 +0000 UTC m=+0.130889992 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, 
org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ceilometer-compute, tcib_managed=true, build-date=2026-01-12T23:07:47Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, container_name=ceilometer_agent_compute, io.openshift.expose-services=, release=1766032510, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 20 03:10:34 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. Feb 20 03:10:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. 
Feb 20 03:10:36 localhost podman[74492]: 2026-02-20 08:10:36.447868689 +0000 UTC m=+0.088670932 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, tcib_managed=true, release=1766032510, io.buildah.version=1.41.5, build-date=2026-01-12T23:32:04Z, container_name=nova_migration_target, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 
17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, config_id=tripleo_step4, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 20 03:10:36 localhost podman[74492]: 2026-02-20 08:10:36.828163967 +0000 UTC m=+0.468966150 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2026-01-12T23:32:04Z, vcs-type=git, tcib_managed=true, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, managed_by=tripleo_ansible) Feb 20 03:10:36 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:10:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:10:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:10:39 localhost systemd[1]: tmp-crun.6EVMKj.mount: Deactivated successfully. 
Feb 20 03:10:39 localhost podman[74516]: 2026-02-20 08:10:39.453850403 +0000 UTC m=+0.096386833 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=starting, container_name=ovn_metadata_agent, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, tcib_managed=true, batch=17.1_20260112.1, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, version=17.1.13, architecture=x86_64, io.openshift.expose-services=, build-date=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Feb 20 03:10:39 localhost systemd[1]: tmp-crun.nQh2GF.mount: Deactivated successfully. 
Feb 20 03:10:39 localhost podman[74517]: 2026-02-20 08:10:39.497463089 +0000 UTC m=+0.137347710 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=starting, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ovn-controller, container_name=ovn_controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, vcs-type=git, batch=17.1_20260112.1, build-date=2026-01-12T22:36:40Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, 
io.openshift.expose-services=, url=https://www.redhat.com, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 20 03:10:39 localhost podman[74517]: 2026-02-20 08:10:39.519109464 +0000 UTC m=+0.158994035 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2026-01-12T22:36:40Z, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.5, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, 
org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, container_name=ovn_controller, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, vcs-type=git, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 20 03:10:39 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Deactivated successfully. Feb 20 03:10:39 localhost podman[74516]: 2026-02-20 08:10:39.553095749 +0000 UTC m=+0.195632169 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, config_id=tripleo_step4, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, maintainer=OpenStack TripleO Team, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, batch=17.1_20260112.1, io.buildah.version=1.41.5, architecture=x86_64, container_name=ovn_metadata_agent, distribution-scope=public, build-date=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Feb 20 03:10:39 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. 
Feb 20 03:10:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:10:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:10:40 localhost podman[74565]: 2026-02-20 08:10:40.436429125 +0000 UTC m=+0.079700277 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.created=2026-01-12T22:34:43Z, name=rhosp-rhel9/openstack-iscsid, version=17.1.13, description=Red Hat 
OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, build-date=2026-01-12T22:34:43Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, release=1766032510, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, com.redhat.component=openstack-iscsid-container, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, config_id=tripleo_step3) Feb 20 03:10:40 localhost podman[74565]: 2026-02-20 08:10:40.474867767 +0000 UTC m=+0.118138949 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.5, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, distribution-scope=public, url=https://www.redhat.com, vcs-ref=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.openshift.expose-services=, release=1766032510, managed_by=tripleo_ansible, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
org.opencontainers.image.created=2026-01-12T22:34:43Z, batch=17.1_20260112.1, vcs-type=git, com.redhat.component=openstack-iscsid-container, name=rhosp-rhel9/openstack-iscsid, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid) Feb 20 03:10:40 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. 
Feb 20 03:10:40 localhost podman[74566]: 2026-02-20 08:10:40.492078505 +0000 UTC m=+0.130528941 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, architecture=x86_64, distribution-scope=public, release=1766032510, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-collectd, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, vcs-type=git) Feb 20 03:10:40 localhost podman[74566]: 2026-02-20 08:10:40.526824841 +0000 UTC m=+0.165275297 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://www.redhat.com, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=collectd, managed_by=tripleo_ansible, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp-rhel9/openstack-collectd, build-date=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_id=tripleo_step3, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true) Feb 20 03:10:40 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. 
Feb 20 03:10:45 localhost sshd[74605]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:10:47 localhost snmpd[69161]: empty variable list in _query Feb 20 03:10:47 localhost snmpd[69161]: empty variable list in _query Feb 20 03:10:54 localhost sshd[74607]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:11:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:11:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:11:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:11:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:11:05 localhost systemd[1]: tmp-crun.DVNK12.mount: Deactivated successfully. Feb 20 03:11:05 localhost podman[74611]: 2026-02-20 08:11:05.468204146 +0000 UTC m=+0.097319537 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, vendor=Red Hat, Inc., managed_by=tripleo_ansible, version=17.1.13, build-date=2026-01-12T22:10:15Z, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, release=1766032510, container_name=logrotate_crond, name=rhosp-rhel9/openstack-cron, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.created=2026-01-12T22:10:15Z) Feb 20 03:11:05 localhost systemd[1]: tmp-crun.R7Q9FG.mount: Deactivated successfully. 
Feb 20 03:11:05 localhost podman[74610]: 2026-02-20 08:11:05.516845773 +0000 UTC m=+0.147937765 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, 
com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, batch=17.1_20260112.1, release=1766032510, architecture=x86_64, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, build-date=2026-01-12T23:07:47Z, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:07:47Z, version=17.1.13) Feb 20 03:11:05 localhost podman[74611]: 2026-02-20 08:11:05.531059014 +0000 UTC m=+0.160174425 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., release=1766032510, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, url=https://www.redhat.com, managed_by=tripleo_ansible, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-cron-container, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 20 03:11:05 localhost podman[74609]: 2026-02-20 08:11:05.568416828 +0000 UTC m=+0.201793759 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, batch=17.1_20260112.1, config_id=tripleo_step1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., 
vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, build-date=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, vcs-type=git, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, url=https://www.redhat.com, architecture=x86_64, io.openshift.expose-services=, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:11:05 localhost podman[74610]: 2026-02-20 08:11:05.575774649 +0000 UTC m=+0.206866611 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, release=1766032510, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:07:47Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:07:47Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Feb 20 03:11:05 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. Feb 20 03:11:05 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. 
Feb 20 03:11:05 localhost podman[74612]: 2026-02-20 08:11:05.709511884 +0000 UTC m=+0.332933557 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
org.opencontainers.image.created=2026-01-12T23:07:30Z, config_id=tripleo_step4, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:07:30Z, io.buildah.version=1.41.5, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-ipmi, batch=17.1_20260112.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510) Feb 20 03:11:05 localhost podman[74612]: 2026-02-20 08:11:05.765833531 +0000 UTC m=+0.389255224 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, batch=17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:30Z, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:07:30Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 
'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.13, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp-rhel9/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, release=1766032510, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5) Feb 20 03:11:05 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. 
Feb 20 03:11:05 localhost podman[74609]: 2026-02-20 08:11:05.820374403 +0000 UTC m=+0.453751344 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-qdrouterd, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, vcs-type=git, container_name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, 
url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_id=tripleo_step1, architecture=x86_64, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=) Feb 20 03:11:05 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:11:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:11:07 localhost systemd[1]: tmp-crun.NDLG5P.mount: Deactivated successfully. 
Feb 20 03:11:07 localhost podman[74765]: 2026-02-20 08:11:07.428446724 +0000 UTC m=+0.070997642 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, url=https://www.redhat.com, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:32:04Z, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, build-date=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, distribution-scope=public) Feb 20 03:11:07 localhost podman[74765]: 2026-02-20 08:11:07.776750059 +0000 UTC m=+0.419300997 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 20 03:11:07 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:11:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:11:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. 
Feb 20 03:11:10 localhost podman[74809]: 2026-02-20 08:11:10.4379412 +0000 UTC m=+0.076329329 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, managed_by=tripleo_ansible, config_id=tripleo_step4, container_name=ovn_metadata_agent, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, build-date=2026-01-12T22:56:19Z, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1766032510, org.opencontainers.image.created=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, io.buildah.version=1.41.5, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 
'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64) Feb 20 03:11:10 localhost podman[74810]: 2026-02-20 08:11:10.49816696 +0000 UTC m=+0.136161289 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, name=rhosp-rhel9/openstack-ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, 
config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, batch=17.1_20260112.1, io.buildah.version=1.41.5, tcib_managed=true, version=17.1.13, architecture=x86_64, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:36:40Z, build-date=2026-01-12T22:36:40Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller) Feb 20 03:11:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. 
Feb 20 03:11:10 localhost podman[74809]: 2026-02-20 08:11:10.506842186 +0000 UTC m=+0.145230325 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.buildah.version=1.41.5, version=17.1.13, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.created=2026-01-12T22:56:19Z, batch=17.1_20260112.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, architecture=x86_64, build-date=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, release=1766032510, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step4) Feb 20 03:11:10 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. 
Feb 20 03:11:10 localhost podman[74810]: 2026-02-20 08:11:10.569721334 +0000 UTC m=+0.207715673 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20260112.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, tcib_managed=true, version=17.1.13, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.expose-services=, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, 
org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, org.opencontainers.image.created=2026-01-12T22:36:40Z, release=1766032510) Feb 20 03:11:10 localhost systemd[1]: tmp-crun.SZMPjq.mount: Deactivated successfully. Feb 20 03:11:10 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Deactivated successfully. Feb 20 03:11:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:11:10 localhost podman[74851]: 2026-02-20 08:11:10.574710704 +0000 UTC m=+0.061388571 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, version=17.1.13, config_id=tripleo_step3, distribution-scope=public, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, container_name=iscsid, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:11:10 localhost podman[74851]: 2026-02-20 08:11:10.654355199 +0000 UTC m=+0.141032996 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, managed_by=tripleo_ansible, build-date=2026-01-12T22:34:43Z, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp-rhel9/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, config_id=tripleo_step3, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, release=1766032510, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13) Feb 20 03:11:10 localhost systemd[1]: 
47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. Feb 20 03:11:10 localhost podman[74874]: 2026-02-20 08:11:10.739352534 +0000 UTC m=+0.133394647 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat 
OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, container_name=collectd, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, release=1766032510, distribution-scope=public, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., managed_by=tripleo_ansible) Feb 20 03:11:10 localhost podman[74874]: 2026-02-20 08:11:10.748083841 +0000 UTC m=+0.142125954 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, name=rhosp-rhel9/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, distribution-scope=public, io.buildah.version=1.41.5, container_name=collectd, batch=17.1_20260112.1, release=1766032510, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z) Feb 20 03:11:10 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. 
Feb 20 03:11:24 localhost sshd[74894]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:11:33 localhost sshd[74897]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:11:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:11:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:11:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:11:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:11:36 localhost podman[74900]: 2026-02-20 08:11:36.460976137 +0000 UTC m=+0.092996564 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, version=17.1.13, build-date=2026-01-12T23:07:47Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1766032510, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, distribution-scope=public, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:07:47Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute) Feb 20 03:11:36 localhost podman[74900]: 2026-02-20 08:11:36.491998786 +0000 UTC m=+0.124019213 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, url=https://www.redhat.com, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ceilometer-compute, tcib_managed=true, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 
17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, build-date=2026-01-12T23:07:47Z, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1766032510, config_id=tripleo_step4) Feb 20 03:11:36 localhost systemd[1]: tmp-crun.84pb8H.mount: Deactivated successfully. Feb 20 03:11:36 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. Feb 20 03:11:36 localhost podman[74901]: 2026-02-20 08:11:36.514004659 +0000 UTC m=+0.143500060 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, io.buildah.version=1.41.5, container_name=logrotate_crond, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 
'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-cron, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, vcs-type=git, release=1766032510) Feb 20 03:11:36 localhost podman[74901]: 2026-02-20 08:11:36.524812601 +0000 UTC m=+0.154307922 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, tcib_managed=true, com.redhat.component=openstack-cron-container, architecture=x86_64, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, 
konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z, distribution-scope=public, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=logrotate_crond, maintainer=OpenStack TripleO Team) Feb 20 03:11:36 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. 
Feb 20 03:11:36 localhost podman[74899]: 2026-02-20 08:11:36.609352833 +0000 UTC m=+0.247015697 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=metrics_qdr, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.expose-services=, vcs-type=git, tcib_managed=true, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step1, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, architecture=x86_64, build-date=2026-01-12T22:10:14Z) Feb 20 03:11:36 localhost podman[74902]: 2026-02-20 08:11:36.667569081 +0000 UTC m=+0.295387888 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, managed_by=tripleo_ansible, url=https://www.redhat.com, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, release=1766032510, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, vendor=Red Hat, Inc., config_id=tripleo_step4, io.buildah.version=1.41.5, build-date=2026-01-12T23:07:30Z, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, tcib_managed=true) Feb 20 03:11:36 localhost podman[74902]: 2026-02-20 08:11:36.700792576 +0000 UTC m=+0.328611403 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ceilometer_agent_ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, io.openshift.expose-services=, tcib_managed=true, version=17.1.13, build-date=2026-01-12T23:07:30Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, 
com.redhat.component=openstack-ceilometer-ipmi-container) Feb 20 03:11:36 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. Feb 20 03:11:36 localhost podman[74899]: 2026-02-20 08:11:36.836918924 +0000 UTC m=+0.474581788 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, vendor=Red Hat, Inc., batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, release=1766032510, build-date=2026-01-12T22:10:14Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true) Feb 20 03:11:36 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:11:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. 
Feb 20 03:11:38 localhost podman[74995]: 2026-02-20 08:11:38.448468585 +0000 UTC m=+0.084941394 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-type=git, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-nova-compute, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., distribution-scope=public, config_id=tripleo_step4, build-date=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com) Feb 20 03:11:38 localhost podman[74995]: 2026-02-20 08:11:38.814843301 +0000 UTC m=+0.451316110 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, release=1766032510) Feb 20 03:11:38 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:11:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:11:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:11:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. 
Feb 20 03:11:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:11:41 localhost systemd[1]: tmp-crun.kS1qtY.mount: Deactivated successfully. Feb 20 03:11:41 localhost podman[75022]: 2026-02-20 08:11:41.460819166 +0000 UTC m=+0.088910548 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.13, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-collectd, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, distribution-scope=public, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, container_name=collectd, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, release=1766032510, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com) Feb 20 03:11:41 localhost podman[75021]: 2026-02-20 08:11:41.511437394 +0000 UTC m=+0.142414572 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 
6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ovn-controller, release=1766032510, build-date=2026-01-12T22:36:40Z, io.buildah.version=1.41.5, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:36:40Z) Feb 20 03:11:41 localhost podman[75019]: 2026-02-20 08:11:41.555912693 +0000 UTC m=+0.192095425 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, 
build-date=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1766032510, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, tcib_managed=true) Feb 20 03:11:41 localhost podman[75022]: 2026-02-20 08:11:41.570059232 +0000 UTC m=+0.198150564 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, config_id=tripleo_step3, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, release=1766032510, name=rhosp-rhel9/openstack-collectd, container_name=collectd, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, batch=17.1_20260112.1, io.buildah.version=1.41.5, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.13) Feb 20 03:11:41 localhost podman[75021]: 2026-02-20 08:11:41.584238621 +0000 UTC m=+0.215215809 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, 
config_id=tripleo_step4, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.created=2026-01-12T22:36:40Z, batch=17.1_20260112.1, url=https://www.redhat.com, version=17.1.13, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, managed_by=tripleo_ansible, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.expose-services=, build-date=2026-01-12T22:36:40Z, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:11:41 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. 
Feb 20 03:11:41 localhost podman[75019]: 2026-02-20 08:11:41.598314458 +0000 UTC m=+0.234497150 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, config_id=tripleo_step4, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:56:19Z, batch=17.1_20260112.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team) Feb 20 03:11:41 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Deactivated successfully. Feb 20 03:11:41 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. 
Feb 20 03:11:41 localhost podman[75020]: 2026-02-20 08:11:41.648858175 +0000 UTC m=+0.281273570 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., distribution-scope=public, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, 
config_id=tripleo_step3, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, build-date=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, name=rhosp-rhel9/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, release=1766032510, batch=17.1_20260112.1) Feb 20 03:11:41 localhost podman[75020]: 2026-02-20 08:11:41.660733365 +0000 UTC m=+0.293148730 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, name=rhosp-rhel9/openstack-iscsid, config_id=tripleo_step3, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, release=1766032510, managed_by=tripleo_ansible, container_name=iscsid, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13) Feb 20 03:11:41 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. Feb 20 03:12:04 localhost sshd[75105]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:12:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:12:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:12:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. 
Feb 20 03:12:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:12:07 localhost systemd[1]: tmp-crun.LgT38d.mount: Deactivated successfully. Feb 20 03:12:07 localhost podman[75112]: 2026-02-20 08:12:07.436664902 +0000 UTC m=+0.067454258 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.5, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=logrotate_crond, name=rhosp-rhel9/openstack-cron) Feb 20 03:12:07 localhost podman[75108]: 2026-02-20 08:12:07.417432591 +0000 UTC m=+0.055455516 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.expose-services=, build-date=2026-01-12T23:07:47Z, org.opencontainers.image.created=2026-01-12T23:07:47Z, architecture=x86_64, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, version=17.1.13, name=rhosp-rhel9/openstack-ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.5, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 20 03:12:07 localhost podman[75108]: 2026-02-20 08:12:07.498793441 +0000 UTC m=+0.136816386 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, 
konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, tcib_managed=true, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-type=git, build-date=2026-01-12T23:07:47Z, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, version=17.1.13, container_name=ceilometer_agent_compute, config_id=tripleo_step4) Feb 20 03:12:07 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. Feb 20 03:12:07 localhost podman[75107]: 2026-02-20 08:12:07.465793041 +0000 UTC m=+0.106879486 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.13, container_name=metrics_qdr, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, batch=17.1_20260112.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 qdrouterd, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com) Feb 20 03:12:07 localhost podman[75118]: 2026-02-20 08:12:07.597039711 +0000 UTC m=+0.225395995 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, version=17.1.13, release=1766032510, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, build-date=2026-01-12T23:07:30Z, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://www.redhat.com, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, distribution-scope=public, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 ceilometer-ipmi) Feb 20 03:12:07 localhost podman[75112]: 2026-02-20 08:12:07.62080107 +0000 UTC m=+0.251590496 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, batch=17.1_20260112.1, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-cron, config_id=tripleo_step4, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, version=17.1.13, container_name=logrotate_crond, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z) Feb 20 03:12:07 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. Feb 20 03:12:07 localhost podman[75118]: 2026-02-20 08:12:07.651781757 +0000 UTC m=+0.280138061 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, tcib_managed=true, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_id=tripleo_step4, io.openshift.expose-services=, build-date=2026-01-12T23:07:30Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, io.buildah.version=1.41.5, version=17.1.13, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:30Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git) Feb 20 03:12:07 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. 
Feb 20 03:12:07 localhost podman[75107]: 2026-02-20 08:12:07.682158088 +0000 UTC m=+0.323244573 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, version=17.1.13, batch=17.1_20260112.1, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, release=1766032510, build-date=2026-01-12T22:10:14Z, tcib_managed=true, io.openshift.expose-services=, config_id=tripleo_step1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 20 03:12:07 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:12:09 localhost systemd[1]: tmp-crun.OwoyOH.mount: Deactivated successfully. Feb 20 03:12:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. 
Feb 20 03:12:09 localhost podman[75310]: 2026-02-20 08:12:09.077233399 +0000 UTC m=+0.086728141 container exec 17bcd3d9a6436ea04160ed4c4e04d839e51c8678402dc4e38301f79a98b3dc30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202, release=1770267347, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, ceph=True, GIT_CLEAN=True, architecture=x86_64, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, name=rhceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., version=7, CEPH_POINT_RELEASE=) Feb 20 03:12:09 localhost podman[75329]: 2026-02-20 08:12:09.183618261 +0000 UTC m=+0.096972407 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp-rhel9/openstack-nova-compute, url=https://www.redhat.com, release=1766032510, batch=17.1_20260112.1, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, 
maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., 
org.opencontainers.image.created=2026-01-12T23:32:04Z, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container) Feb 20 03:12:09 localhost podman[75310]: 2026-02-20 08:12:09.185585903 +0000 UTC m=+0.195080675 container exec_died 17bcd3d9a6436ea04160ed4c4e04d839e51c8678402dc4e38301f79a98b3dc30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, io.openshift.expose-services=, vendor=Red Hat, Inc., GIT_CLEAN=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, name=rhceph, CEPH_POINT_RELEASE=, build-date=2026-02-09T10:25:24Z, architecture=x86_64, com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , vcs-type=git, release=1770267347) Feb 20 03:12:09 localhost systemd[1]: tmp-crun.ylSWia.mount: Deactivated successfully. 
Feb 20 03:12:09 localhost podman[75329]: 2026-02-20 08:12:09.545457819 +0000 UTC m=+0.458811965 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-compute, url=https://www.redhat.com, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, 
version=17.1.13, container_name=nova_migration_target, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Feb 20 03:12:09 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:12:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:12:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:12:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:12:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. 
Feb 20 03:12:12 localhost podman[75477]: 2026-02-20 08:12:12.500007265 +0000 UTC m=+0.121177338 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, name=rhosp-rhel9/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, build-date=2026-01-12T22:36:40Z, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.created=2026-01-12T22:36:40Z, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, vcs-type=git, io.buildah.version=1.41.5, 
batch=17.1_20260112.1, io.openshift.expose-services=, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Feb 20 03:12:12 localhost podman[75476]: 2026-02-20 08:12:12.523901658 +0000 UTC m=+0.148010128 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=705339545363fec600102567c4e923938e0f43b3, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, build-date=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, io.buildah.version=1.41.5, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-iscsid, architecture=x86_64, tcib_managed=true, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, batch=17.1_20260112.1, container_name=iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, version=17.1.13, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3) Feb 20 03:12:12 localhost podman[75476]: 2026-02-20 08:12:12.532229055 +0000 UTC m=+0.156337485 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, release=1766032510, vcs-type=git, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.41.5, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=openstack-iscsid-container, name=rhosp-rhel9/openstack-iscsid, distribution-scope=public, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=705339545363fec600102567c4e923938e0f43b3, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3) Feb 20 03:12:12 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. 
Feb 20 03:12:12 localhost podman[75477]: 2026-02-20 08:12:12.586825267 +0000 UTC m=+0.207995290 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, container_name=ovn_controller, release=1766032510, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.5, io.openshift.expose-services=, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO 
Team, architecture=x86_64, name=rhosp-rhel9/openstack-ovn-controller, config_id=tripleo_step4, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.created=2026-01-12T22:36:40Z) Feb 20 03:12:12 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Deactivated successfully. Feb 20 03:12:12 localhost podman[75475]: 2026-02-20 08:12:12.636867921 +0000 UTC m=+0.262520982 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:56:19Z, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.buildah.version=1.41.5, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, vcs-type=git, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, maintainer=OpenStack TripleO Team, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13) Feb 20 03:12:12 localhost podman[75475]: 2026-02-20 08:12:12.680157059 +0000 UTC m=+0.305810110 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:56:19Z, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, vendor=Red Hat, Inc., io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, version=17.1.13, batch=17.1_20260112.1, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Feb 20 03:12:12 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. Feb 20 03:12:12 localhost podman[75478]: 2026-02-20 08:12:12.681770061 +0000 UTC m=+0.300410648 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-collectd, io.openshift.expose-services=, release=1766032510, url=https://www.redhat.com, batch=17.1_20260112.1, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, distribution-scope=public, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd) Feb 20 03:12:12 localhost podman[75478]: 2026-02-20 08:12:12.761288453 +0000 UTC m=+0.379929020 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., url=https://www.redhat.com, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:15Z, container_name=collectd, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_id=tripleo_step3, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, release=1766032510, distribution-scope=public, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 
description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd) Feb 20 03:12:12 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:12:18 localhost sshd[75561]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:12:19 localhost sshd[75563]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:12:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:12:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:12:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:12:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. 
Feb 20 03:12:38 localhost podman[75566]: 2026-02-20 08:12:38.45494982 +0000 UTC m=+0.092244215 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, 
org.opencontainers.image.created=2026-01-12T23:07:47Z, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, architecture=x86_64, build-date=2026-01-12T23:07:47Z, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, version=17.1.13) Feb 20 03:12:38 localhost podman[75566]: 2026-02-20 08:12:38.485132127 +0000 UTC m=+0.122426522 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:07:47Z, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, vendor=Red Hat, Inc., managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, tcib_managed=true, name=rhosp-rhel9/openstack-ceilometer-compute, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, config_id=tripleo_step4, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, architecture=x86_64) Feb 20 03:12:38 localhost podman[75567]: 2026-02-20 08:12:38.500155268 +0000 UTC m=+0.134397473 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, io.openshift.expose-services=, 
vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, container_name=logrotate_crond, architecture=x86_64, io.buildah.version=1.41.5, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-cron, tcib_managed=true, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, 
summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:12:38 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. Feb 20 03:12:38 localhost podman[75565]: 2026-02-20 08:12:38.565584283 +0000 UTC m=+0.205107706 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, name=rhosp-rhel9/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.openshift.expose-services=, build-date=2026-01-12T22:10:14Z, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, vendor=Red Hat, Inc.) 
Feb 20 03:12:38 localhost podman[75571]: 2026-02-20 08:12:38.611974121 +0000 UTC m=+0.239544022 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:07:30Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., release=1766032510, container_name=ceilometer_agent_ipmi, batch=17.1_20260112.1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, 
org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, io.buildah.version=1.41.5, architecture=x86_64, version=17.1.13, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true) Feb 20 03:12:38 localhost podman[75567]: 2026-02-20 08:12:38.639647563 +0000 UTC m=+0.273889748 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, name=rhosp-rhel9/openstack-cron, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, managed_by=tripleo_ansible, container_name=logrotate_crond, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, build-date=2026-01-12T22:10:15Z, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.openshift.expose-services=) Feb 20 03:12:38 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. 
Feb 20 03:12:38 localhost podman[75571]: 2026-02-20 08:12:38.665795034 +0000 UTC m=+0.293364865 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, io.buildah.version=1.41.5, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, release=1766032510, tcib_managed=true, build-date=2026-01-12T23:07:30Z, name=rhosp-rhel9/openstack-ceilometer-ipmi, 
org.opencontainers.image.created=2026-01-12T23:07:30Z, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, version=17.1.13, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 20 03:12:38 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. Feb 20 03:12:38 localhost podman[75565]: 2026-02-20 08:12:38.785047341 +0000 UTC m=+0.424570714 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, tcib_managed=true, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-qdrouterd, 
com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, release=1766032510, build-date=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 20 03:12:38 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:12:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. 
Feb 20 03:12:40 localhost podman[75669]: 2026-02-20 08:12:40.443916925 +0000 UTC m=+0.083111967 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.41.5, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:32:04Z, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-nova-compute-container, version=17.1.13, url=https://www.redhat.com, config_id=tripleo_step4, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, io.openshift.expose-services=) Feb 20 03:12:40 localhost podman[75669]: 2026-02-20 08:12:40.79372032 +0000 UTC m=+0.432915362 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, tcib_managed=true, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, release=1766032510, io.openshift.expose-services=, container_name=nova_migration_target, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, version=17.1.13, vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute) Feb 20 03:12:40 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:12:43 localhost sshd[75693]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:12:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:12:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. 
Feb 20 03:12:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:12:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:12:43 localhost systemd[1]: tmp-crun.7GlUon.mount: Deactivated successfully. Feb 20 03:12:43 localhost podman[75697]: 2026-02-20 08:12:43.453522985 +0000 UTC m=+0.086043012 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, version=17.1.13, vendor=Red Hat, Inc., vcs-type=git, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, io.buildah.version=1.41.5, distribution-scope=public, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, name=rhosp-rhel9/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:36:40Z) Feb 20 03:12:43 localhost podman[75696]: 2026-02-20 08:12:43.498662442 +0000 UTC m=+0.134003783 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., release=1766032510, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, version=17.1.13, build-date=2026-01-12T22:34:43Z, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-iscsid, config_id=tripleo_step3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, batch=17.1_20260112.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Feb 20 03:12:43 localhost podman[75697]: 2026-02-20 08:12:43.504893124 +0000 UTC m=+0.137413111 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.created=2026-01-12T22:36:40Z, release=1766032510, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, build-date=2026-01-12T22:36:40Z, managed_by=tripleo_ansible, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ovn-controller, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.component=openstack-ovn-controller-container, version=17.1.13, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64) Feb 20 03:12:43 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Deactivated successfully. 
Feb 20 03:12:43 localhost podman[75698]: 2026-02-20 08:12:43.552828753 +0000 UTC m=+0.179545869 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, batch=17.1_20260112.1, config_id=tripleo_step3, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, release=1766032510, build-date=2026-01-12T22:10:15Z, vcs-type=git, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:12:43 localhost podman[75696]: 2026-02-20 08:12:43.557220068 +0000 UTC m=+0.192561399 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, config_id=tripleo_step3, release=1766032510, vcs-type=git, url=https://www.redhat.com, build-date=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-iscsid, batch=17.1_20260112.1, 
vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vcs-ref=705339545363fec600102567c4e923938e0f43b3, container_name=iscsid, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13) Feb 20 03:12:43 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. 
Feb 20 03:12:43 localhost podman[75695]: 2026-02-20 08:12:43.605691911 +0000 UTC m=+0.242821199 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:56:19Z, batch=17.1_20260112.1, architecture=x86_64, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, version=17.1.13, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public) Feb 20 03:12:43 localhost podman[75698]: 2026-02-20 08:12:43.636011291 +0000 UTC m=+0.262728387 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, release=1766032510, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5) Feb 20 03:12:43 localhost systemd[1]: 
ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:12:43 localhost podman[75695]: 2026-02-20 08:12:43.656759401 +0000 UTC m=+0.293888679 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:56:19Z, tcib_managed=true, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, vendor=Red Hat, Inc., architecture=x86_64, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 20 03:12:43 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. Feb 20 03:12:44 localhost sshd[75783]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:12:57 localhost sshd[75785]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:13:06 localhost sshd[75787]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:13:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:13:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:13:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. 
Feb 20 03:13:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:13:09 localhost podman[75792]: 2026-02-20 08:13:09.435111865 +0000 UTC m=+0.064129513 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-type=git, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:07:30Z, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, io.buildah.version=1.41.5, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ceilometer_agent_ipmi, batch=17.1_20260112.1, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Feb 20 03:13:09 localhost systemd[1]: tmp-crun.edZubJ.mount: Deactivated successfully. Feb 20 03:13:09 localhost podman[75792]: 2026-02-20 08:13:09.486702788 +0000 UTC m=+0.115720446 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ceilometer-ipmi, distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:07:30Z, release=1766032510, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:07:30Z, version=17.1.13, io.openshift.expose-services=, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 20 03:13:09 localhost systemd[1]: tmp-crun.XmnpK7.mount: Deactivated successfully. 
Feb 20 03:13:09 localhost podman[75790]: 2026-02-20 08:13:09.500531979 +0000 UTC m=+0.129393562 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, distribution-scope=public, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:47Z, 
io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-compute, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, architecture=x86_64, container_name=ceilometer_agent_compute, url=https://www.redhat.com, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1) Feb 20 03:13:09 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. Feb 20 03:13:09 localhost podman[75789]: 2026-02-20 08:13:09.535402418 +0000 UTC m=+0.167449134 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, distribution-scope=public, vcs-type=git, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step1, url=https://www.redhat.com, architecture=x86_64, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, 
tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team) Feb 20 03:13:09 localhost podman[75791]: 2026-02-20 08:13:09.545886432 +0000 UTC m=+0.171077469 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, managed_by=tripleo_ansible, 
io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, url=https://www.redhat.com, batch=17.1_20260112.1, version=17.1.13) Feb 20 03:13:09 localhost podman[75791]: 2026-02-20 08:13:09.552043942 +0000 UTC m=+0.177234949 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, release=1766032510, batch=17.1_20260112.1, build-date=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=logrotate_crond, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, distribution-scope=public, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git) Feb 20 03:13:09 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. Feb 20 03:13:09 localhost podman[75790]: 2026-02-20 08:13:09.602726062 +0000 UTC m=+0.231587625 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:07:47Z, url=https://www.redhat.com, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
build-date=2026-01-12T23:07:47Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1766032510, container_name=ceilometer_agent_compute, io.openshift.expose-services=, distribution-scope=public, batch=17.1_20260112.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 20 03:13:09 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. 
Feb 20 03:13:09 localhost podman[75789]: 2026-02-20 08:13:09.717291518 +0000 UTC m=+0.349338294 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=metrics_qdr, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-qdrouterd, release=1766032510, version=17.1.13, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:13:09 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:13:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:13:11 localhost systemd[1]: tmp-crun.DBpWeF.mount: Deactivated successfully. 
Feb 20 03:13:11 localhost podman[75899]: 2026-02-20 08:13:11.327763881 +0000 UTC m=+0.084332298 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, config_id=tripleo_step4, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, build-date=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, batch=17.1_20260112.1) Feb 20 03:13:11 localhost podman[75899]: 2026-02-20 08:13:11.726074859 +0000 UTC m=+0.482643276 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, tcib_managed=true, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, container_name=nova_migration_target, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, com.redhat.component=openstack-nova-compute-container, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step4, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 20 03:13:11 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:13:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:13:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. 
Feb 20 03:13:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:13:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:13:14 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 20 03:13:14 localhost recover_tripleo_nova_virtqemud[76008]: 63703 Feb 20 03:13:14 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 20 03:13:14 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 20 03:13:14 localhost systemd[1]: tmp-crun.PJwvSK.mount: Deactivated successfully. Feb 20 03:13:14 localhost podman[75982]: 2026-02-20 08:13:14.455731094 +0000 UTC m=+0.094745980 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, url=https://www.redhat.com, version=17.1.13, io.buildah.version=1.41.5, io.openshift.expose-services=, vcs-type=git, tcib_managed=true, container_name=ovn_metadata_agent, 
name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2026-01-12T22:56:19Z, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, vendor=Red Hat, Inc., managed_by=tripleo_ansible) Feb 20 03:13:14 localhost podman[75983]: 2026-02-20 08:13:14.512188535 +0000 UTC m=+0.149663790 container health_status 
47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, vcs-type=git, batch=17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, name=rhosp-rhel9/openstack-iscsid, release=1766032510, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, io.openshift.expose-services=, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.created=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com) Feb 20 03:13:14 localhost podman[75982]: 2026-02-20 08:13:14.538703207 +0000 UTC m=+0.177718073 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.expose-services=, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, batch=17.1_20260112.1, vcs-type=git, managed_by=tripleo_ansible, release=1766032510, architecture=x86_64, version=17.1.13, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_id=tripleo_step4, io.buildah.version=1.41.5, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 20 03:13:14 localhost podman[75983]: 2026-02-20 08:13:14.546980022 +0000 UTC m=+0.184455277 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2026-01-12T22:34:43Z, tcib_managed=true, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:34:43Z, release=1766032510, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, 
org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid) Feb 20 03:13:14 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. Feb 20 03:13:14 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. Feb 20 03:13:14 localhost podman[75984]: 2026-02-20 08:13:14.598180846 +0000 UTC m=+0.231286577 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_id=tripleo_step4, build-date=2026-01-12T22:36:40Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, release=1766032510, container_name=ovn_controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, io.openshift.expose-services=, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:36:40Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, distribution-scope=public, batch=17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': 
True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ovn-controller, tcib_managed=true, managed_by=tripleo_ansible) Feb 20 03:13:14 localhost podman[75985]: 2026-02-20 08:13:14.551640053 +0000 UTC m=+0.181048128 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., container_name=collectd, distribution-scope=public, build-date=2026-01-12T22:10:15Z, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, vcs-type=git, batch=17.1_20260112.1, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 20 03:13:14 localhost podman[75985]: 2026-02-20 08:13:14.633828505 +0000 UTC m=+0.263236570 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, 
konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp-rhel9/openstack-collectd, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, container_name=collectd, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, architecture=x86_64, release=1766032510, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack 
Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, build-date=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, url=https://www.redhat.com) Feb 20 03:13:14 localhost podman[75984]: 2026-02-20 08:13:14.644317469 +0000 UTC m=+0.277423200 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, build-date=2026-01-12T22:36:40Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, name=rhosp-rhel9/openstack-ovn-controller, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, architecture=x86_64, url=https://www.redhat.com, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, container_name=ovn_controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, tcib_managed=true, version=17.1.13, io.buildah.version=1.41.5, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 20 03:13:14 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:13:14 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Deactivated successfully. 
Feb 20 03:13:16 localhost python3[76113]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 03:13:16 localhost python3[76158]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771575196.2073479-114043-103517509705381/source _original_basename=tmpy61wia50 follow=False checksum=039e0b234f00fbd1242930f0d5dc67e8b4c067fe backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:13:17 localhost python3[76188]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 20 03:13:18 localhost sshd[76257]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:13:19 localhost ansible-async_wrapper.py[76362]: Invoked with 485738470450 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1771575198.7238245-114207-13426678721137/AnsiballZ_command.py _ Feb 20 03:13:19 localhost ansible-async_wrapper.py[76365]: Starting module and watcher Feb 20 03:13:19 localhost ansible-async_wrapper.py[76365]: Start watching 76366 (3600) Feb 20 03:13:19 localhost ansible-async_wrapper.py[76366]: Start module (76366) Feb 20 03:13:19 localhost ansible-async_wrapper.py[76362]: Return async_wrapper task started. Feb 20 03:13:19 localhost python3[76383]: ansible-ansible.legacy.async_status Invoked with jid=485738470450.76362 mode=status _async_dir=/tmp/.ansible_async Feb 20 03:13:23 localhost puppet-user[76386]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. 
It should be converted to version 5 Feb 20 03:13:23 localhost puppet-user[76386]: (file: /etc/puppet/hiera.yaml) Feb 20 03:13:23 localhost puppet-user[76386]: Warning: Undefined variable '::deploy_config_name'; Feb 20 03:13:23 localhost puppet-user[76386]: (file & line not available) Feb 20 03:13:23 localhost puppet-user[76386]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Feb 20 03:13:23 localhost puppet-user[76386]: (file & line not available) Feb 20 03:13:23 localhost puppet-user[76386]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8) Feb 20 03:13:23 localhost puppet-user[76386]: Warning: This method is deprecated, please use match expressions with Stdlib::Compat::String instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at ["/etc/puppet/modules/snmp/manifests/params.pp", 310]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Feb 20 03:13:23 localhost puppet-user[76386]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Feb 20 03:13:23 localhost puppet-user[76386]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Feb 20 03:13:23 localhost puppet-user[76386]: with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 358]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Feb 20 03:13:23 localhost puppet-user[76386]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Feb 20 03:13:23 localhost puppet-user[76386]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Feb 20 03:13:23 localhost puppet-user[76386]: with Stdlib::Compat::Array. 
There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 367]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Feb 20 03:13:23 localhost puppet-user[76386]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Feb 20 03:13:23 localhost puppet-user[76386]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Feb 20 03:13:23 localhost puppet-user[76386]: with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 382]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Feb 20 03:13:23 localhost puppet-user[76386]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Feb 20 03:13:23 localhost puppet-user[76386]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Feb 20 03:13:23 localhost puppet-user[76386]: with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 388]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Feb 20 03:13:23 localhost puppet-user[76386]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Feb 20 03:13:23 localhost puppet-user[76386]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Feb 20 03:13:23 localhost puppet-user[76386]: with Pattern[]. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 393]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Feb 20 03:13:23 localhost puppet-user[76386]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Feb 20 03:13:23 localhost puppet-user[76386]: Warning: Unknown variable: '::deployment_type'. 
(file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69) Feb 20 03:13:23 localhost puppet-user[76386]: Notice: Compiled catalog for np0005625202.localdomain in environment production in 0.23 seconds Feb 20 03:13:23 localhost puppet-user[76386]: Notice: Applied catalog in 0.21 seconds Feb 20 03:13:23 localhost puppet-user[76386]: Application: Feb 20 03:13:23 localhost puppet-user[76386]: Initial environment: production Feb 20 03:13:23 localhost puppet-user[76386]: Converged environment: production Feb 20 03:13:23 localhost puppet-user[76386]: Run mode: user Feb 20 03:13:23 localhost puppet-user[76386]: Changes: Feb 20 03:13:23 localhost puppet-user[76386]: Events: Feb 20 03:13:23 localhost puppet-user[76386]: Resources: Feb 20 03:13:23 localhost puppet-user[76386]: Total: 19 Feb 20 03:13:23 localhost puppet-user[76386]: Time: Feb 20 03:13:23 localhost puppet-user[76386]: Package: 0.00 Feb 20 03:13:23 localhost puppet-user[76386]: Schedule: 0.00 Feb 20 03:13:23 localhost puppet-user[76386]: Exec: 0.01 Feb 20 03:13:23 localhost puppet-user[76386]: Augeas: 0.01 Feb 20 03:13:23 localhost puppet-user[76386]: File: 0.02 Feb 20 03:13:23 localhost puppet-user[76386]: Service: 0.05 Feb 20 03:13:23 localhost puppet-user[76386]: Transaction evaluation: 0.19 Feb 20 03:13:23 localhost puppet-user[76386]: Catalog application: 0.21 Feb 20 03:13:23 localhost puppet-user[76386]: Config retrieval: 0.29 Feb 20 03:13:23 localhost puppet-user[76386]: Last run: 1771575203 Feb 20 03:13:23 localhost puppet-user[76386]: Filebucket: 0.00 Feb 20 03:13:23 localhost puppet-user[76386]: Total: 0.21 Feb 20 03:13:23 localhost puppet-user[76386]: Version: Feb 20 03:13:23 localhost puppet-user[76386]: Config: 1771575203 Feb 20 03:13:23 localhost puppet-user[76386]: Puppet: 7.10.0 Feb 20 03:13:23 localhost ansible-async_wrapper.py[76366]: Module complete (76366) Feb 20 03:13:24 localhost ansible-async_wrapper.py[76365]: Done in kid B. 
Feb 20 03:13:26 localhost sshd[76509]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:13:29 localhost python3[76526]: ansible-ansible.legacy.async_status Invoked with jid=485738470450.76362 mode=status _async_dir=/tmp/.ansible_async Feb 20 03:13:30 localhost python3[76542]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Feb 20 03:13:30 localhost python3[76558]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 20 03:13:31 localhost python3[76608]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 03:13:31 localhost python3[76626]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmpzerul1kp recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Feb 20 03:13:32 localhost python3[76656]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None 
modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:13:33 localhost python3[76761]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None Feb 20 03:13:33 localhost python3[76780]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:13:34 localhost python3[76812]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 20 03:13:35 localhost python3[76862]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 03:13:35 localhost sshd[76865]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:13:35 localhost python3[76882]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown 
force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:13:36 localhost python3[76944]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 03:13:36 localhost python3[76962]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:13:36 localhost python3[77024]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 03:13:37 localhost python3[77042]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:13:37 localhost python3[77104]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset 
follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 03:13:37 localhost python3[77122]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:13:38 localhost python3[77152]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 03:13:38 localhost systemd[1]: Reloading. Feb 20 03:13:38 localhost systemd-rc-local-generator[77173]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 03:13:38 localhost systemd-sysv-generator[77178]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 03:13:38 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 20 03:13:39 localhost python3[77238]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 03:13:39 localhost python3[77256]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:13:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:13:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:13:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:13:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:13:40 localhost systemd[1]: tmp-crun.jYvejQ.mount: Deactivated successfully. 
Feb 20 03:13:40 localhost podman[77319]: 2026-02-20 08:13:40.139227835 +0000 UTC m=+0.094636446 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:10:14Z, version=17.1.13, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:14Z, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Feb 20 03:13:40 localhost podman[77322]: 2026-02-20 08:13:40.181630611 +0000 UTC m=+0.126812096 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, tcib_managed=true, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ceilometer-ipmi, release=1766032510, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, io.buildah.version=1.41.5, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, config_id=tripleo_step4, build-date=2026-01-12T23:07:30Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 
'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z) Feb 20 03:13:40 localhost python3[77318]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Feb 20 03:13:40 localhost podman[77320]: 2026-02-20 08:13:40.233947264 +0000 UTC m=+0.185531726 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2026-01-12T23:07:47Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, tcib_managed=true, io.openshift.expose-services=, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.buildah.version=1.41.5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, release=1766032510, version=17.1.13, vcs-type=git, name=rhosp-rhel9/openstack-ceilometer-compute, url=https://www.redhat.com) Feb 20 03:13:40 localhost podman[77322]: 2026-02-20 08:13:40.241953522 +0000 UTC m=+0.187134957 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, build-date=2026-01-12T23:07:30Z, maintainer=OpenStack TripleO Team, release=1766032510, version=17.1.13, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, vcs-type=git, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 20 03:13:40 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. 
Feb 20 03:13:40 localhost podman[77321]: 2026-02-20 08:13:40.289430449 +0000 UTC m=+0.238740951 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, build-date=2026-01-12T22:10:15Z, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, vendor=Red Hat, Inc., tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, release=1766032510, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-cron, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Feb 20 03:13:40 localhost podman[77320]: 2026-02-20 08:13:40.295503488 +0000 UTC m=+0.247087920 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, build-date=2026-01-12T23:07:47Z, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, vendor=Red Hat, Inc., org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.expose-services=, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13) Feb 20 03:13:40 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. 
Feb 20 03:13:40 localhost podman[77319]: 2026-02-20 08:13:40.331440174 +0000 UTC m=+0.286848775 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, name=rhosp-rhel9/openstack-qdrouterd, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_id=tripleo_step1, distribution-scope=public, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, build-date=2026-01-12T22:10:14Z, version=17.1.13, release=1766032510, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible) Feb 20 03:13:40 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:13:40 localhost podman[77321]: 2026-02-20 08:13:40.37273132 +0000 UTC m=+0.322041782 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, managed_by=tripleo_ansible, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, build-date=2026-01-12T22:10:15Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-cron, release=1766032510, container_name=logrotate_crond, config_id=tripleo_step4, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Feb 20 03:13:40 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. 
Feb 20 03:13:40 localhost python3[77431]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:13:41 localhost python3[77462]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 03:13:41 localhost systemd[1]: Reloading. Feb 20 03:13:41 localhost systemd-rc-local-generator[77485]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 03:13:41 localhost systemd-sysv-generator[77489]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 03:13:41 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 03:13:41 localhost systemd[1]: Starting Create netns directory... Feb 20 03:13:41 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Feb 20 03:13:41 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Feb 20 03:13:41 localhost systemd[1]: Finished Create netns directory. Feb 20 03:13:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. 
Feb 20 03:13:41 localhost podman[77519]: 2026-02-20 08:13:41.87638235 +0000 UTC m=+0.086303379 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, version=17.1.13, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, container_name=nova_migration_target, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, 
summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.41.5, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 20 03:13:41 localhost python3[77520]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6 Feb 20 03:13:42 localhost podman[77519]: 2026-02-20 08:13:42.182375153 +0000 UTC m=+0.392296152 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, distribution-scope=public, url=https://www.redhat.com, vendor=Red Hat, Inc., managed_by=tripleo_ansible, tcib_managed=true, container_name=nova_migration_target, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, release=1766032510, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 20 03:13:42 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. 
Feb 20 03:13:43 localhost python3[77601]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step5 config_dir=/var/lib/tripleo-config/container-startup-config/step_5 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False Feb 20 03:13:44 localhost podman[77640]: 2026-02-20 08:13:44.013518096 +0000 UTC m=+0.079437250 container create d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, io.buildah.version=1.41.5, container_name=nova_compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, config_id=tripleo_step5, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:32:04Z) Feb 20 03:13:44 localhost systemd[1]: Started libpod-conmon-d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.scope. Feb 20 03:13:44 localhost podman[77640]: 2026-02-20 08:13:43.969699185 +0000 UTC m=+0.035618399 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Feb 20 03:13:44 localhost systemd[1]: Started libcrun container. 
Feb 20 03:13:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4e52a870f1c03a40a0dd0950b6a82c92b30d867e7a1a54958bb9761473a110d/merged/var/log/nova supports timestamps until 2038 (0x7fffffff) Feb 20 03:13:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4e52a870f1c03a40a0dd0950b6a82c92b30d867e7a1a54958bb9761473a110d/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Feb 20 03:13:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4e52a870f1c03a40a0dd0950b6a82c92b30d867e7a1a54958bb9761473a110d/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Feb 20 03:13:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4e52a870f1c03a40a0dd0950b6a82c92b30d867e7a1a54958bb9761473a110d/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Feb 20 03:13:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4e52a870f1c03a40a0dd0950b6a82c92b30d867e7a1a54958bb9761473a110d/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Feb 20 03:13:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. 
Feb 20 03:13:44 localhost podman[77640]: 2026-02-20 08:13:44.139749295 +0000 UTC m=+0.205668449 container init d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', 
'/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, vcs-type=git, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step5, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, io.buildah.version=1.41.5, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2026-01-12T23:32:04Z) Feb 20 03:13:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. 
Feb 20 03:13:44 localhost podman[77640]: 2026-02-20 08:13:44.178808693 +0000 UTC m=+0.244727837 container start d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-compute, container_name=nova_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, vendor=Red Hat, Inc., version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 20 03:13:44 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring. 
Feb 20 03:13:44 localhost python3[77601]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_compute --conmon-pidfile /run/nova_compute.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env LIBGUESTFS_BACKEND=direct --env TRIPLEO_CONFIG_HASH=df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381 --healthcheck-command /openstack/healthcheck 5672 --ipc host --label config_id=tripleo_step5 --label container_name=nova_compute --label managed_by=tripleo_ansible --label config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_compute.log --network host --privileged=True --ulimit nofile=131072 --ulimit memlock=67108864 --user nova --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/nova:/var/log/nova --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /dev:/dev --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /run/nova:/run/nova:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /sys/class/net:/sys/class/net --volume /sys/bus/pci:/sys/bus/pci --volume /boot:/boot:ro --volume /var/lib/nova:/var/lib/nova:shared registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Feb 20 03:13:44 localhost systemd[1]: Created slice User Slice of UID 0. Feb 20 03:13:44 localhost systemd[1]: Starting User Runtime Directory /run/user/0... 
Feb 20 03:13:44 localhost systemd[1]: Finished User Runtime Directory /run/user/0. Feb 20 03:13:44 localhost systemd[1]: Starting User Manager for UID 0... Feb 20 03:13:44 localhost podman[77661]: 2026-02-20 08:13:44.261879018 +0000 UTC m=+0.075044127 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=starting, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, container_name=nova_compute, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5) Feb 20 03:13:44 localhost podman[77661]: 2026-02-20 08:13:44.30764666 +0000 UTC m=+0.120811799 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=nova_compute, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, tcib_managed=true, 
batch=17.1_20260112.1, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, io.openshift.expose-services=, version=17.1.13, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Feb 20 03:13:44 localhost podman[77661]: unhealthy Feb 20 03:13:44 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:13:44 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Failed with result 'exit-code'. Feb 20 03:13:44 localhost systemd[77679]: Queued start job for default target Main User Target. Feb 20 03:13:44 localhost systemd[77679]: Created slice User Application Slice. Feb 20 03:13:44 localhost systemd[77679]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system). Feb 20 03:13:44 localhost systemd[77679]: Started Daily Cleanup of User's Temporary Directories. Feb 20 03:13:44 localhost systemd[77679]: Reached target Paths. Feb 20 03:13:44 localhost systemd[77679]: Reached target Timers. Feb 20 03:13:44 localhost systemd[77679]: Starting D-Bus User Message Bus Socket... Feb 20 03:13:44 localhost systemd[77679]: Starting Create User's Volatile Files and Directories... 
Feb 20 03:13:44 localhost systemd[77679]: Listening on D-Bus User Message Bus Socket. Feb 20 03:13:44 localhost systemd[77679]: Reached target Sockets. Feb 20 03:13:44 localhost systemd[77679]: Finished Create User's Volatile Files and Directories. Feb 20 03:13:44 localhost systemd[77679]: Reached target Basic System. Feb 20 03:13:44 localhost systemd[77679]: Reached target Main User Target. Feb 20 03:13:44 localhost systemd[77679]: Startup finished in 142ms. Feb 20 03:13:44 localhost systemd[1]: Started User Manager for UID 0. Feb 20 03:13:44 localhost systemd[1]: Started Session c10 of User root. Feb 20 03:13:44 localhost systemd[1]: session-c10.scope: Deactivated successfully. Feb 20 03:13:44 localhost podman[77763]: 2026-02-20 08:13:44.685913796 +0000 UTC m=+0.085868507 container create 4896fce42af9dd7b893ebb1c64ae9ab3636bab632c1f87ad547969ba6cbff4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, io.openshift.expose-services=, build-date=2026-01-12T23:32:04Z, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., url=https://www.redhat.com, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-compute, container_name=nova_wait_for_compute_service, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step5, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 
'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team) Feb 20 03:13:44 localhost podman[77763]: 2026-02-20 08:13:44.638546632 +0000 UTC m=+0.038501383 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Feb 20 03:13:44 localhost systemd[1]: Started libpod-conmon-4896fce42af9dd7b893ebb1c64ae9ab3636bab632c1f87ad547969ba6cbff4f6.scope. Feb 20 03:13:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:13:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. 
Feb 20 03:13:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:13:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:13:44 localhost systemd[1]: Started libcrun container. Feb 20 03:13:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d8406dbd1199970186b7c17d6e01204ade90f840cc20a00e264f81a8db7e3fd/merged/container-config-scripts supports timestamps until 2038 (0x7fffffff) Feb 20 03:13:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d8406dbd1199970186b7c17d6e01204ade90f840cc20a00e264f81a8db7e3fd/merged/var/log/nova supports timestamps until 2038 (0x7fffffff) Feb 20 03:13:44 localhost podman[77763]: 2026-02-20 08:13:44.782001271 +0000 UTC m=+0.181955972 container init 4896fce42af9dd7b893ebb1c64ae9ab3636bab632c1f87ad547969ba6cbff4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, container_name=nova_wait_for_compute_service, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, tcib_managed=true, io.buildah.version=1.41.5, vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step5) Feb 20 03:13:44 localhost podman[77763]: 2026-02-20 08:13:44.795932774 +0000 UTC m=+0.195887495 container start 4896fce42af9dd7b893ebb1c64ae9ab3636bab632c1f87ad547969ba6cbff4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, 
container_name=nova_wait_for_compute_service, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., 
build-date=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13) Feb 20 03:13:44 localhost podman[77763]: 2026-02-20 08:13:44.797484924 +0000 UTC m=+0.197439695 container attach 4896fce42af9dd7b893ebb1c64ae9ab3636bab632c1f87ad547969ba6cbff4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, org.opencontainers.image.created=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step5, 
architecture=x86_64, io.buildah.version=1.41.5, release=1766032510, container_name=nova_wait_for_compute_service, version=17.1.13, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z) Feb 20 03:13:44 localhost podman[77783]: 2026-02-20 08:13:44.852479857 +0000 UTC m=+0.088247841 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.41.5, release=1766032510, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.expose-services=, config_id=tripleo_step3, vcs-type=git, name=rhosp-rhel9/openstack-collectd, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, container_name=collectd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Feb 20 03:13:44 localhost podman[77783]: 2026-02-20 08:13:44.859444258 +0000 UTC m=+0.095212262 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 
(image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, version=17.1.13, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, name=rhosp-rhel9/openstack-collectd, build-date=2026-01-12T22:10:15Z) Feb 20 03:13:44 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:13:44 localhost podman[77781]: 2026-02-20 08:13:44.896296179 +0000 UTC m=+0.135962504 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, architecture=x86_64, url=https://www.redhat.com, vcs-type=git, build-date=2026-01-12T22:36:40Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, release=1766032510, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, 
org.opencontainers.image.created=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, batch=17.1_20260112.1, distribution-scope=public, io.openshift.expose-services=) Feb 20 03:13:44 localhost podman[77781]: 2026-02-20 08:13:44.919628467 +0000 UTC m=+0.159294812 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, managed_by=tripleo_ansible, io.buildah.version=1.41.5, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=ovn_controller, 
batch=17.1_20260112.1, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:36:40Z, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp-rhel9/openstack-ovn-controller, release=1766032510) Feb 20 03:13:44 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Deactivated successfully. 
Feb 20 03:13:44 localhost podman[77780]: 2026-02-20 08:13:44.949850425 +0000 UTC m=+0.192226211 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, name=rhosp-rhel9/openstack-iscsid, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, 
cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, release=1766032510, managed_by=tripleo_ansible, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, io.openshift.expose-services=, version=17.1.13, build-date=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64) Feb 20 03:13:45 localhost podman[77778]: 2026-02-20 08:13:45.00648399 +0000 UTC m=+0.250664313 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, managed_by=tripleo_ansible, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:56:19Z, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, tcib_managed=true, batch=17.1_20260112.1, version=17.1.13, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, url=https://www.redhat.com, build-date=2026-01-12T22:56:19Z, distribution-scope=public, container_name=ovn_metadata_agent, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4) Feb 20 03:13:45 localhost podman[77778]: 2026-02-20 08:13:45.043653538 +0000 UTC m=+0.287833881 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 
(image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.5, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:56:19Z, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:56:19Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, version=17.1.13, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Feb 20 03:13:45 localhost podman[77780]: 2026-02-20 08:13:45.038622457 +0000 UTC m=+0.280998253 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, name=rhosp-rhel9/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, build-date=2026-01-12T22:34:43Z, architecture=x86_64, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:34:43Z, tcib_managed=true, io.buildah.version=1.41.5, version=17.1.13, io.openshift.expose-services=, 
vcs-ref=705339545363fec600102567c4e923938e0f43b3, container_name=iscsid, url=https://www.redhat.com, vcs-type=git, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team) Feb 20 03:13:45 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. Feb 20 03:13:45 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. 
Feb 20 03:13:46 localhost sshd[77869]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:13:54 localhost systemd[1]: Stopping User Manager for UID 0... Feb 20 03:13:54 localhost systemd[77679]: Activating special unit Exit the Session... Feb 20 03:13:54 localhost systemd[77679]: Stopped target Main User Target. Feb 20 03:13:54 localhost systemd[77679]: Stopped target Basic System. Feb 20 03:13:54 localhost systemd[77679]: Stopped target Paths. Feb 20 03:13:54 localhost systemd[77679]: Stopped target Sockets. Feb 20 03:13:54 localhost systemd[77679]: Stopped target Timers. Feb 20 03:13:54 localhost systemd[77679]: Stopped Daily Cleanup of User's Temporary Directories. Feb 20 03:13:54 localhost systemd[77679]: Closed D-Bus User Message Bus Socket. Feb 20 03:13:54 localhost systemd[77679]: Stopped Create User's Volatile Files and Directories. Feb 20 03:13:54 localhost systemd[77679]: Removed slice User Application Slice. Feb 20 03:13:54 localhost systemd[77679]: Reached target Shutdown. Feb 20 03:13:54 localhost systemd[77679]: Finished Exit the Session. Feb 20 03:13:54 localhost systemd[77679]: Reached target Exit the Session. Feb 20 03:13:54 localhost systemd[1]: user@0.service: Deactivated successfully. Feb 20 03:13:54 localhost systemd[1]: Stopped User Manager for UID 0. Feb 20 03:13:54 localhost systemd[1]: Stopping User Runtime Directory /run/user/0... Feb 20 03:13:54 localhost systemd[1]: run-user-0.mount: Deactivated successfully. Feb 20 03:13:54 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully. Feb 20 03:13:54 localhost systemd[1]: Stopped User Runtime Directory /run/user/0. Feb 20 03:13:54 localhost systemd[1]: Removed slice User Slice of UID 0. Feb 20 03:14:07 localhost sshd[77873]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:14:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. 
Feb 20 03:14:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:14:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:14:10 localhost systemd[1]: tmp-crun.0oJcfO.mount: Deactivated successfully. Feb 20 03:14:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:14:10 localhost podman[77875]: 2026-02-20 08:14:10.504210695 +0000 UTC m=+0.145221694 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, name=rhosp-rhel9/openstack-ceilometer-compute, tcib_managed=true, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1766032510, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, build-date=2026-01-12T23:07:47Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:07:47Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 20 03:14:10 localhost podman[77875]: 2026-02-20 08:14:10.542250988 +0000 UTC m=+0.183261947 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, batch=17.1_20260112.1, release=1766032510, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:07:47Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp-rhel9/openstack-ceilometer-compute, build-date=2026-01-12T23:07:47Z, tcib_managed=true, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.5) Feb 20 03:14:10 localhost podman[77877]: 2026-02-20 08:14:10.467817678 +0000 UTC m=+0.102529156 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., vcs-type=git, managed_by=tripleo_ansible, version=17.1.13, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, container_name=metrics_qdr, name=rhosp-rhel9/openstack-qdrouterd, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': 
False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:14:10 localhost podman[77919]: 2026-02-20 08:14:10.579815948 +0000 UTC m=+0.098691691 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., config_id=tripleo_step4, distribution-scope=public, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, release=1766032510, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, name=rhosp-rhel9/openstack-cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, url=https://www.redhat.com) Feb 20 03:14:10 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. 
Feb 20 03:14:10 localhost podman[77919]: 2026-02-20 08:14:10.613302877 +0000 UTC m=+0.132178630 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20260112.1, build-date=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:15Z, architecture=x86_64, name=rhosp-rhel9/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Feb 20 03:14:10 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. Feb 20 03:14:10 localhost podman[77876]: 2026-02-20 08:14:10.547547841 +0000 UTC m=+0.186369170 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, config_id=tripleo_step4, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:07:30Z, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, release=1766032510, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:07:30Z, io.openshift.expose-services=, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ceilometer-ipmi, distribution-scope=public, maintainer=OpenStack TripleO Team) Feb 20 03:14:10 localhost podman[77876]: 2026-02-20 08:14:10.681004075 +0000 UTC m=+0.319825424 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, 
io.buildah.version=1.41.5, build-date=2026-01-12T23:07:30Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, version=17.1.13, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, architecture=x86_64, name=rhosp-rhel9/openstack-ceilometer-ipmi, 
maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:30Z) Feb 20 03:14:10 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. Feb 20 03:14:10 localhost podman[77877]: 2026-02-20 08:14:10.718698758 +0000 UTC m=+0.353410176 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, container_name=metrics_qdr, release=1766032510, architecture=x86_64, tcib_managed=true, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., 
org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-qdrouterd, build-date=2026-01-12T22:10:14Z, io.buildah.version=1.41.5) Feb 20 03:14:10 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:14:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. 
Feb 20 03:14:12 localhost podman[77976]: 2026-02-20 08:14:12.431110217 +0000 UTC m=+0.074770441 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, url=https://www.redhat.com, config_id=tripleo_step4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 20 03:14:12 localhost podman[77976]: 2026-02-20 08:14:12.820819877 +0000 UTC m=+0.464480051 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:32:04Z, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 
nova-compute, io.buildah.version=1.41.5, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., build-date=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Feb 20 03:14:12 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:14:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:14:14 localhost systemd[1]: tmp-crun.KAVPvu.mount: Deactivated successfully. 
Feb 20 03:14:14 localhost podman[78076]: 2026-02-20 08:14:14.450372595 +0000 UTC m=+0.088412531 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=starting, vcs-type=git, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step5, name=rhosp-rhel9/openstack-nova-compute, distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, build-date=2026-01-12T23:32:04Z, architecture=x86_64, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute) Feb 20 03:14:14 localhost podman[78076]: 2026-02-20 08:14:14.539833804 +0000 UTC m=+0.177873760 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-compute, 
architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.13, managed_by=tripleo_ansible, 
org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, config_id=tripleo_step5, distribution-scope=public, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container) Feb 20 03:14:14 localhost podman[78076]: unhealthy Feb 20 03:14:14 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:14:14 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Failed with result 'exit-code'. Feb 20 03:14:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:14:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:14:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:14:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:14:15 localhost systemd[1]: tmp-crun.rzBANy.mount: Deactivated successfully. 
Feb 20 03:14:15 localhost podman[78107]: 2026-02-20 08:14:15.429667911 +0000 UTC m=+0.060111393 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-type=git, url=https://www.redhat.com, container_name=collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step3, io.openshift.expose-services=, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, name=rhosp-rhel9/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Feb 20 03:14:15 localhost podman[78107]: 2026-02-20 08:14:15.440610678 +0000 UTC m=+0.071054110 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 collectd, batch=17.1_20260112.1, version=17.1.13, managed_by=tripleo_ansible, vcs-type=git, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, container_name=collectd, config_id=tripleo_step3, architecture=x86_64, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 collectd) Feb 20 03:14:15 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. 
Feb 20 03:14:15 localhost podman[78100]: 2026-02-20 08:14:15.490735859 +0000 UTC m=+0.127087742 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, architecture=x86_64, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, distribution-scope=public, name=rhosp-rhel9/openstack-iscsid, tcib_managed=true, build-date=2026-01-12T22:34:43Z, managed_by=tripleo_ansible, vcs-ref=705339545363fec600102567c4e923938e0f43b3, url=https://www.redhat.com, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, vcs-type=git, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:14:15 localhost podman[78100]: 2026-02-20 08:14:15.500991597 +0000 UTC m=+0.137343480 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, name=rhosp-rhel9/openstack-iscsid, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, batch=17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container, release=1766032510, io.buildah.version=1.41.5, config_id=tripleo_step3, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, vendor=Red Hat, Inc., 
version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:34:43Z) Feb 20 03:14:15 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. 
Feb 20 03:14:15 localhost podman[78099]: 2026-02-20 08:14:15.474656042 +0000 UTC m=+0.117471330 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.buildah.version=1.41.5, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step4, release=1766032510, batch=17.1_20260112.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:56:19Z, architecture=x86_64, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public) Feb 20 03:14:15 localhost podman[78101]: 2026-02-20 08:14:15.582828659 +0000 UTC m=+0.219373837 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, architecture=x86_64, container_name=ovn_controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:36:40Z, org.opencontainers.image.created=2026-01-12T22:36:40Z, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, batch=17.1_20260112.1, tcib_managed=true, io.buildah.version=1.41.5, vcs-type=git, vendor=Red Hat, Inc.) 
Feb 20 03:14:15 localhost podman[78099]: 2026-02-20 08:14:15.606536413 +0000 UTC m=+0.249351741 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, release=1766032510, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.buildah.version=1.41.5, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:56:19Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, container_name=ovn_metadata_agent, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public) Feb 20 03:14:15 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. 
Feb 20 03:14:15 localhost podman[78101]: 2026-02-20 08:14:15.628884449 +0000 UTC m=+0.265429557 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.13, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, tcib_managed=true, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, name=rhosp-rhel9/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, container_name=ovn_controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.buildah.version=1.41.5, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20260112.1) Feb 20 03:14:15 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Deactivated successfully. Feb 20 03:14:31 localhost sshd[78185]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:14:32 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 20 03:14:32 localhost recover_tripleo_nova_virtqemud[78188]: 63703 Feb 20 03:14:32 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 20 03:14:32 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 20 03:14:40 localhost sshd[78189]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:14:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:14:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:14:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:14:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. 
Feb 20 03:14:40 localhost podman[78191]: 2026-02-20 08:14:40.87119758 +0000 UTC m=+0.094324512 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, release=1766032510, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, io.buildah.version=1.41.5, version=17.1.13, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, 
vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 20 03:14:40 localhost podman[78193]: 2026-02-20 08:14:40.917946628 +0000 UTC m=+0.132033955 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., architecture=x86_64, container_name=logrotate_crond, url=https://www.redhat.com, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': 
'/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:15Z, config_id=tripleo_step4, batch=17.1_20260112.1, io.openshift.expose-services=, com.redhat.component=openstack-cron-container) Feb 20 03:14:40 localhost podman[78193]: 2026-02-20 08:14:40.926841641 +0000 UTC m=+0.140928958 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1766032510, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, io.openshift.expose-services=, build-date=2026-01-12T22:10:15Z, url=https://www.redhat.com, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Feb 20 03:14:40 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. 
Feb 20 03:14:40 localhost systemd[1]: tmp-crun.Mjl5hX.mount: Deactivated successfully. Feb 20 03:14:40 localhost podman[78192]: 2026-02-20 08:14:40.995628107 +0000 UTC m=+0.210494245 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://www.redhat.com, container_name=ceilometer_agent_compute, name=rhosp-rhel9/openstack-ceilometer-compute, release=1766032510, architecture=x86_64, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, io.buildah.version=1.41.5, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:07:47Z, maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, version=17.1.13, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4) Feb 20 03:14:41 localhost podman[78191]: 2026-02-20 08:14:41.072653829 +0000 UTC m=+0.295780761 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, config_id=tripleo_step1, batch=17.1_20260112.1, tcib_managed=true, build-date=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-qdrouterd) Feb 20 03:14:41 localhost podman[78199]: 2026-02-20 08:14:41.081234962 +0000 UTC m=+0.292288267 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., 
org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, build-date=2026-01-12T23:07:30Z, container_name=ceilometer_agent_ipmi, release=1766032510, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.13, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, distribution-scope=public, architecture=x86_64, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, tcib_managed=true, url=https://www.redhat.com, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi) Feb 20 03:14:41 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:14:41 localhost podman[78192]: 2026-02-20 08:14:41.09959533 +0000 UTC m=+0.314461408 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1766032510, tcib_managed=true, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ceilometer-compute, vendor=Red Hat, Inc., batch=17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, architecture=x86_64, version=17.1.13, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, config_id=tripleo_step4, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 20 03:14:41 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. 
Feb 20 03:14:41 localhost podman[78199]: 2026-02-20 08:14:41.136691337 +0000 UTC m=+0.347744602 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp-rhel9/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, architecture=x86_64, config_id=tripleo_step4, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, io.buildah.version=1.41.5, build-date=2026-01-12T23:07:30Z, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, release=1766032510, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.created=2026-01-12T23:07:30Z, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com) Feb 20 03:14:41 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. Feb 20 03:14:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:14:43 localhost podman[78290]: 2026-02-20 08:14:43.441496548 +0000 UTC m=+0.080188847 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, architecture=x86_64, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step4, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, 
batch=17.1_20260112.1, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Feb 20 03:14:43 localhost podman[78290]: 2026-02-20 08:14:43.8077 +0000 UTC m=+0.446392289 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20260112.1, version=17.1.13, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, 
distribution-scope=public, build-date=2026-01-12T23:32:04Z, vcs-type=git, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, container_name=nova_migration_target, io.buildah.version=1.41.5, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 20 03:14:43 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:14:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:14:45 localhost systemd[1]: tmp-crun.qso0oq.mount: Deactivated successfully. Feb 20 03:14:45 localhost podman[78313]: 2026-02-20 08:14:45.442096161 +0000 UTC m=+0.082234484 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=starting, release=1766032510, container_name=nova_compute, io.buildah.version=1.41.5, config_id=tripleo_step5, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, 
config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, build-date=2026-01-12T23:32:04Z, version=17.1.13, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:14:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:14:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:14:45 localhost podman[78313]: 2026-02-20 08:14:45.52969773 +0000 UTC m=+0.169835973 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, managed_by=tripleo_ansible, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, batch=17.1_20260112.1, url=https://www.redhat.com, distribution-scope=public, build-date=2026-01-12T23:32:04Z, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 20 03:14:45 localhost podman[78313]: unhealthy Feb 20 03:14:45 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:14:45 
localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Failed with result 'exit-code'. Feb 20 03:14:45 localhost podman[78335]: 2026-02-20 08:14:45.613405481 +0000 UTC m=+0.085302416 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z, config_id=tripleo_step3, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20260112.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', 
'/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, managed_by=tripleo_ansible, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, vcs-type=git, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z) Feb 20 03:14:45 localhost podman[78335]: 2026-02-20 08:14:45.625840939 +0000 UTC m=+0.097737874 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, name=rhosp-rhel9/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, 
io.openshift.expose-services=, io.buildah.version=1.41.5, managed_by=tripleo_ansible, tcib_managed=true, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, vendor=Red Hat, Inc., release=1766032510, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 collectd) Feb 20 03:14:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:14:45 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:14:45 localhost podman[78336]: 2026-02-20 08:14:45.662601677 +0000 UTC m=+0.131700156 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, version=17.1.13, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, container_name=iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, release=1766032510, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, build-date=2026-01-12T22:34:43Z, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-type=git) Feb 20 03:14:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. 
Feb 20 03:14:45 localhost podman[78336]: 2026-02-20 08:14:45.696306382 +0000 UTC m=+0.165404891 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:34:43Z, tcib_managed=true, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.5, build-date=2026-01-12T22:34:43Z, name=rhosp-rhel9/openstack-iscsid, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, batch=17.1_20260112.1, release=1766032510, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git) Feb 20 03:14:45 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. Feb 20 03:14:45 localhost podman[78377]: 2026-02-20 08:14:45.781627079 +0000 UTC m=+0.108793925 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, architecture=x86_64, vcs-type=git, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 
'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, container_name=ovn_controller, version=17.1.13, name=rhosp-rhel9/openstack-ovn-controller, build-date=2026-01-12T22:36:40Z, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1) Feb 20 03:14:45 localhost podman[78377]: 2026-02-20 08:14:45.829151138 +0000 UTC m=+0.156317934 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp-rhel9/openstack-ovn-controller, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, vcs-type=git, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 
'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, container_name=ovn_controller, build-date=2026-01-12T22:36:40Z, batch=17.1_20260112.1, architecture=x86_64, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:36:40Z, url=https://www.redhat.com, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, version=17.1.13) Feb 20 03:14:45 localhost podman[78366]: 2026-02-20 08:14:45.835867861 +0000 UTC m=+0.182348051 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:56:19Z, vendor=Red Hat, Inc., architecture=x86_64, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, tcib_managed=true, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, config_id=tripleo_step4, release=1766032510) Feb 20 03:14:45 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Deactivated successfully. Feb 20 03:14:45 localhost podman[78366]: 2026-02-20 08:14:45.906039126 +0000 UTC m=+0.252519386 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, managed_by=tripleo_ansible, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., batch=17.1_20260112.1, distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, url=https://www.redhat.com) Feb 20 03:14:45 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. Feb 20 03:14:46 localhost systemd[1]: tmp-crun.COipwD.mount: Deactivated successfully. 
Feb 20 03:14:49 localhost sshd[78425]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:14:55 localhost systemd[1]: session-27.scope: Deactivated successfully. Feb 20 03:14:55 localhost systemd[1]: session-27.scope: Consumed 3.013s CPU time. Feb 20 03:14:55 localhost systemd-logind[760]: Session 27 logged out. Waiting for processes to exit. Feb 20 03:14:55 localhost systemd-logind[760]: Removed session 27. Feb 20 03:15:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:15:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:15:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:15:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:15:11 localhost podman[78428]: 2026-02-20 08:15:11.438766027 +0000 UTC m=+0.076262782 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-compute, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, version=17.1.13, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, 
url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:07:47Z, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:07:47Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container) Feb 20 03:15:11 localhost podman[78427]: 2026-02-20 08:15:11.496953946 +0000 UTC m=+0.137736830 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb 
(image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, url=https://www.redhat.com, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, 
release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, batch=17.1_20260112.1, io.openshift.expose-services=, name=rhosp-rhel9/openstack-qdrouterd) Feb 20 03:15:11 localhost podman[78428]: 2026-02-20 08:15:11.518685657 +0000 UTC m=+0.156182362 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:47Z, name=rhosp-rhel9/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_compute, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, 
org.opencontainers.image.created=2026-01-12T23:07:47Z, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 20 03:15:11 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. 
Feb 20 03:15:11 localhost podman[78435]: 2026-02-20 08:15:11.596116908 +0000 UTC m=+0.225834291 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:30Z, com.redhat.component=openstack-ceilometer-ipmi-container, release=1766032510, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:07:30Z, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 20 03:15:11 localhost podman[78435]: 2026-02-20 08:15:11.652747646 +0000 UTC m=+0.282465049 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, version=17.1.13, distribution-scope=public, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:07:30Z) Feb 20 03:15:11 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. 
Feb 20 03:15:11 localhost podman[78429]: 2026-02-20 08:15:11.653916318 +0000 UTC m=+0.285946014 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, container_name=logrotate_crond, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-cron, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, 
vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, io.buildah.version=1.41.5, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13) Feb 20 03:15:11 localhost podman[78427]: 2026-02-20 08:15:11.708771477 +0000 UTC m=+0.349554391 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
com.redhat.component=openstack-qdrouterd-container, build-date=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step1, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, batch=17.1_20260112.1, io.buildah.version=1.41.5, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, container_name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, url=https://www.redhat.com, version=17.1.13, vendor=Red Hat, Inc., architecture=x86_64, release=1766032510, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public) Feb 20 03:15:11 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. 
Feb 20 03:15:11 localhost podman[78429]: 2026-02-20 08:15:11.736684625 +0000 UTC m=+0.368714301 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, tcib_managed=true, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, vendor=Red Hat, Inc., version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-cron) Feb 20 03:15:11 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. Feb 20 03:15:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:15:14 localhost podman[78541]: 2026-02-20 08:15:14.406735031 +0000 UTC m=+0.048106186 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, architecture=x86_64, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, 
version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, config_id=tripleo_step4, release=1766032510, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:15:14 localhost podman[78541]: 2026-02-20 08:15:14.745727784 +0000 UTC m=+0.387098919 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp-rhel9/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
build-date=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, url=https://www.redhat.com, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, io.buildah.version=1.41.5, batch=17.1_20260112.1, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, distribution-scope=public) Feb 20 03:15:14 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:15:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:15:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:15:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:15:15 localhost systemd[1]: tmp-crun.HdQqgb.mount: Deactivated successfully. Feb 20 03:15:15 localhost podman[78625]: 2026-02-20 08:15:15.794184028 +0000 UTC m=+0.101305021 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-compute, build-date=2026-01-12T23:32:04Z, 
version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_id=tripleo_step5, 
release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc.) Feb 20 03:15:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:15:15 localhost systemd[1]: tmp-crun.hgduVs.mount: Deactivated successfully. Feb 20 03:15:15 localhost podman[78624]: 2026-02-20 08:15:15.873904232 +0000 UTC m=+0.180874241 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, name=rhosp-rhel9/openstack-collectd, release=1766032510, url=https://www.redhat.com, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=collectd, io.buildah.version=1.41.5, version=17.1.13, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, 
io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Feb 20 03:15:15 localhost podman[78625]: 2026-02-20 08:15:15.904694368 +0000 UTC m=+0.211815331 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1766032510, build-date=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, name=rhosp-rhel9/openstack-nova-compute, 
konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, tcib_managed=true, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, container_name=nova_compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 20 03:15:15 localhost podman[78625]: unhealthy Feb 20 03:15:15 localhost podman[78624]: 2026-02-20 08:15:15.916648513 +0000 UTC m=+0.223618492 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, version=17.1.13, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:15Z, config_id=tripleo_step3, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, 
io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, vcs-type=git, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd) Feb 20 03:15:15 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:15:15 localhost systemd[1]: 
d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Failed with result 'exit-code'. Feb 20 03:15:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:15:15 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:15:15 localhost podman[78651]: 2026-02-20 08:15:15.886046152 +0000 UTC m=+0.086233582 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, container_name=iscsid, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, managed_by=tripleo_ansible, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, batch=17.1_20260112.1, release=1766032510, distribution-scope=public, name=rhosp-rhel9/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid) Feb 20 03:15:15 localhost podman[78651]: 2026-02-20 08:15:15.968714066 +0000 UTC m=+0.168901466 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-iscsid, io.openshift.expose-services=, build-date=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, container_name=iscsid, version=17.1.13, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.41.5, com.redhat.component=openstack-iscsid-container, architecture=x86_64, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-ref=705339545363fec600102567c4e923938e0f43b3) Feb 20 03:15:15 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. 
Feb 20 03:15:16 localhost podman[78695]: 2026-02-20 08:15:16.011346844 +0000 UTC m=+0.072714995 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:56:19Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, vcs-type=git, maintainer=OpenStack TripleO Team, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, io.buildah.version=1.41.5, container_name=ovn_metadata_agent, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., distribution-scope=public, batch=17.1_20260112.1, managed_by=tripleo_ansible, build-date=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:15:16 localhost podman[78673]: 2026-02-20 08:15:16.056270003 +0000 UTC m=+0.176337178 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, 
io.openshift.expose-services=, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, build-date=2026-01-12T22:36:40Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, container_name=ovn_controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, release=1766032510, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Feb 20 03:15:16 localhost podman[78673]: 2026-02-20 08:15:16.078333732 +0000 UTC m=+0.198400857 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:36:40Z, batch=17.1_20260112.1, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ovn-controller, version=17.1.13, io.openshift.expose-services=, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1766032510, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, distribution-scope=public, build-date=2026-01-12T22:36:40Z, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:15:16 localhost podman[78695]: 2026-02-20 08:15:16.085793174 +0000 UTC m=+0.147161405 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, 
managed_by=tripleo_ansible, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ovn_metadata_agent, architecture=x86_64, release=1766032510, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, version=17.1.13, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-type=git) Feb 20 03:15:16 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Deactivated successfully. Feb 20 03:15:16 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. Feb 20 03:15:31 localhost sshd[78731]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:15:41 localhost sshd[78733]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:15:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:15:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:15:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:15:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:15:42 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 20 03:15:42 localhost recover_tripleo_nova_virtqemud[78755]: 63703 Feb 20 03:15:42 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. 
Feb 20 03:15:42 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 20 03:15:42 localhost podman[78735]: 2026-02-20 08:15:42.45593689 +0000 UTC m=+0.088658537 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, config_id=tripleo_step1, release=1766032510, io.openshift.expose-services=, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 20 03:15:42 localhost podman[78737]: 2026-02-20 08:15:42.500665035 +0000 UTC m=+0.129809905 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, io.openshift.expose-services=, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible) Feb 20 03:15:42 localhost podman[78736]: 2026-02-20 08:15:42.565505835 +0000 UTC m=+0.197578145 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:07:47Z, maintainer=OpenStack TripleO Team, version=17.1.13, name=rhosp-rhel9/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.buildah.version=1.41.5, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:07:47Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute) Feb 20 03:15:42 localhost podman[78736]: 2026-02-20 08:15:42.624849377 +0000 UTC m=+0.256921727 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, distribution-scope=public, architecture=x86_64, release=1766032510, batch=17.1_20260112.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2026-01-12T23:07:47Z) Feb 20 03:15:42 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. 
Feb 20 03:15:42 localhost podman[78737]: 2026-02-20 08:15:42.639564816 +0000 UTC m=+0.268709706 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, vcs-type=git, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:15:42 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. Feb 20 03:15:42 localhost podman[78735]: 2026-02-20 08:15:42.6787606 +0000 UTC m=+0.311482227 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=metrics_qdr, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git) Feb 20 03:15:42 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. 
Feb 20 03:15:42 localhost podman[78738]: 2026-02-20 08:15:42.629081491 +0000 UTC m=+0.252311891 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, version=17.1.13, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, config_id=tripleo_step4, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1766032510, 
vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, build-date=2026-01-12T23:07:30Z, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:15:42 localhost podman[78738]: 2026-02-20 08:15:42.763826309 +0000 UTC m=+0.387056749 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, architecture=x86_64, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-ipmi, batch=17.1_20260112.1, release=1766032510, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, io.openshift.expose-services=, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, build-date=2026-01-12T23:07:30Z, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 
'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:30Z, container_name=ceilometer_agent_ipmi, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:15:42 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. Feb 20 03:15:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. 
Feb 20 03:15:45 localhost podman[78842]: 2026-02-20 08:15:45.446274323 +0000 UTC m=+0.084667979 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, batch=17.1_20260112.1, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vendor=Red Hat, Inc., tcib_managed=true, container_name=nova_migration_target) Feb 20 03:15:45 localhost podman[78842]: 2026-02-20 08:15:45.83882304 +0000 UTC m=+0.477216746 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, url=https://www.redhat.com, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., container_name=nova_migration_target, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, distribution-scope=public, vcs-type=git, version=17.1.13, io.openshift.expose-services=, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4) Feb 20 03:15:45 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:15:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:15:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:15:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. 
Feb 20 03:15:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:15:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:15:46 localhost podman[78867]: 2026-02-20 08:15:46.44365511 +0000 UTC m=+0.076445266 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, batch=17.1_20260112.1, release=1766032510, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.5, vcs-type=git, tcib_managed=true, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, url=https://www.redhat.com, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.expose-services=, build-date=2026-01-12T22:56:19Z, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Feb 20 03:15:46 localhost podman[78867]: 2026-02-20 08:15:46.471571598 +0000 UTC m=+0.104361834 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1766032510, architecture=x86_64, config_id=tripleo_step4, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, distribution-scope=public, 
org.opencontainers.image.created=2026-01-12T22:56:19Z, build-date=2026-01-12T22:56:19Z, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ovn_metadata_agent, vcs-type=git, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Feb 20 03:15:46 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. Feb 20 03:15:46 localhost podman[78868]: 2026-02-20 08:15:46.514454522 +0000 UTC m=+0.142637353 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20260112.1, distribution-scope=public, name=rhosp-rhel9/openstack-iscsid, io.buildah.version=1.41.5, release=1766032510, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', 
'/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2026-01-12T22:34:43Z, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, config_id=tripleo_step3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, maintainer=OpenStack TripleO Team, container_name=iscsid, managed_by=tripleo_ansible, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:15:46 localhost podman[78875]: 2026-02-20 08:15:46.551476507 +0000 UTC m=+0.172727600 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, vcs-type=git, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.13, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.buildah.version=1.41.5, distribution-scope=public, io.openshift.expose-services=, release=1766032510, url=https://www.redhat.com, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Feb 20 03:15:46 localhost podman[78868]: 2026-02-20 08:15:46.575956722 +0000 UTC m=+0.204139473 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-iscsid-container, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:34:43Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, container_name=iscsid, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://www.redhat.com, build-date=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, architecture=x86_64, config_id=tripleo_step3, vcs-ref=705339545363fec600102567c4e923938e0f43b3) Feb 20 03:15:46 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. 
Feb 20 03:15:46 localhost podman[78875]: 2026-02-20 08:15:46.586965471 +0000 UTC m=+0.208216574 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.13 
17.1_20260112.1, container_name=collectd, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, version=17.1.13, io.buildah.version=1.41.5, batch=17.1_20260112.1, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, distribution-scope=public, url=https://www.redhat.com, architecture=x86_64) Feb 20 03:15:46 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:15:46 localhost podman[78869]: 2026-02-20 08:15:46.659523091 +0000 UTC m=+0.284398403 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, name=rhosp-rhel9/openstack-ovn-controller, vendor=Red Hat, Inc., build-date=2026-01-12T22:36:40Z, distribution-scope=public, batch=17.1_20260112.1, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, managed_by=tripleo_ansible, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, vcs-type=git, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:15:46 localhost podman[78880]: 2026-02-20 08:15:46.714512213 +0000 UTC m=+0.333296629 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, release=1766032510, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.41.5, build-date=2026-01-12T23:32:04Z, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, 
org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, maintainer=OpenStack TripleO Team, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, version=17.1.13, name=rhosp-rhel9/openstack-nova-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step5, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64) Feb 20 03:15:46 localhost podman[78869]: 2026-02-20 08:15:46.734205038 +0000 UTC m=+0.359080370 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, build-date=2026-01-12T22:36:40Z, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ovn-controller, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, 
release=1766032510, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, version=17.1.13, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true) Feb 20 03:15:46 localhost podman[78880]: 2026-02-20 08:15:46.772856177 +0000 UTC m=+0.391640573 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, release=1766032510, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, container_name=nova_compute, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, 
vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, distribution-scope=public, name=rhosp-rhel9/openstack-nova-compute) Feb 20 03:15:46 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Deactivated successfully. Feb 20 03:15:46 localhost podman[78880]: unhealthy Feb 20 03:15:46 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:15:46 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Failed with result 'exit-code'. Feb 20 03:15:56 localhost sshd[78974]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:16:12 localhost sshd[78976]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:16:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:16:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:16:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:16:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. 
Feb 20 03:16:13 localhost podman[78980]: 2026-02-20 08:16:13.085179132 +0000 UTC m=+0.067678808 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-cron, distribution-scope=public, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step4, architecture=x86_64, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, container_name=logrotate_crond, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1766032510, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:16:13 localhost podman[78980]: 2026-02-20 08:16:13.117600733 +0000 UTC m=+0.100100459 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, io.openshift.expose-services=, io.buildah.version=1.41.5, batch=17.1_20260112.1, vendor=Red Hat, Inc., version=17.1.13, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 
'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Feb 20 03:16:13 localhost podman[78978]: 2026-02-20 08:16:13.1410716 +0000 UTC m=+0.124939944 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:14Z, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr, version=17.1.13, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:14Z, 
name=rhosp-rhel9/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.buildah.version=1.41.5, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step1, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20260112.1, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 20 03:16:13 localhost podman[78979]: 2026-02-20 08:16:13.203606627 +0000 UTC m=+0.186447422 container health_status 
8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-compute, batch=17.1_20260112.1, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:07:47Z, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, 
build-date=2026-01-12T23:07:47Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510) Feb 20 03:16:13 localhost podman[78981]: 2026-02-20 08:16:13.257321695 +0000 UTC m=+0.234946328 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_id=tripleo_step4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, build-date=2026-01-12T23:07:30Z, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, version=17.1.13, release=1766032510, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=) Feb 20 03:16:13 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. 
Feb 20 03:16:13 localhost podman[78979]: 2026-02-20 08:16:13.283312152 +0000 UTC m=+0.266152927 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ceilometer-compute, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, config_id=tripleo_step4, io.buildah.version=1.41.5, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:47Z, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:07:47Z, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, vendor=Red Hat, Inc., release=1766032510, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 20 03:16:13 localhost podman[78981]: 2026-02-20 08:16:13.31123246 +0000 UTC m=+0.288857063 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, build-date=2026-01-12T23:07:30Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:07:30Z) Feb 20 03:16:13 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. Feb 20 03:16:13 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. 
Feb 20 03:16:13 localhost podman[78978]: 2026-02-20 08:16:13.433367515 +0000 UTC m=+0.417235859 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.41.5, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=metrics_qdr, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, vcs-type=git, batch=17.1_20260112.1, config_id=tripleo_step1, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 qdrouterd, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:14Z, url=https://www.redhat.com, build-date=2026-01-12T22:10:14Z) Feb 20 03:16:13 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:16:15 localhost sshd[79078]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:16:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:16:16 localhost podman[79094]: 2026-02-20 08:16:16.057375702 +0000 UTC m=+0.086204502 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-type=git, batch=17.1_20260112.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:32:04Z, architecture=x86_64, 
name=rhosp-rhel9/openstack-nova-compute, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, container_name=nova_migration_target, config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:16:16 localhost podman[79094]: 2026-02-20 08:16:16.432914247 +0000 UTC m=+0.461743007 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, 
managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp-rhel9/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, vcs-type=git, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, build-date=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Feb 20 03:16:16 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:16:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:16:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:16:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:16:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:16:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:16:17 localhost systemd[1]: tmp-crun.AXXbrS.mount: Deactivated successfully. 
Feb 20 03:16:17 localhost podman[79188]: 2026-02-20 08:16:17.36753894 +0000 UTC m=+0.076177860 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-collectd, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., container_name=collectd, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:16:17 localhost systemd[1]: tmp-crun.OKMwBx.mount: Deactivated successfully. Feb 20 03:16:17 localhost podman[79188]: 2026-02-20 08:16:17.381740936 +0000 UTC m=+0.090379816 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-collectd, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, container_name=collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, 
cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-type=git, tcib_managed=true, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd) Feb 20 03:16:17 localhost podman[79194]: 2026-02-20 08:16:17.380961624 +0000 UTC m=+0.080702231 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step5, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, release=1766032510, batch=17.1_20260112.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=nova_compute, managed_by=tripleo_ansible) Feb 20 03:16:17 localhost podman[79181]: 2026-02-20 08:16:17.41727439 +0000 UTC m=+0.131739048 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, batch=17.1_20260112.1, version=17.1.13, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vendor=Red Hat, Inc., url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=705339545363fec600102567c4e923938e0f43b3, 
org.opencontainers.image.created=2026-01-12T22:34:43Z, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, vcs-type=git, build-date=2026-01-12T22:34:43Z, container_name=iscsid, managed_by=tripleo_ansible, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:16:17 localhost podman[79194]: 2026-02-20 08:16:17.426765428 +0000 UTC m=+0.126506085 container exec_died 
d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:32:04Z, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, container_name=nova_compute, vendor=Red Hat, Inc.) Feb 20 03:16:17 localhost podman[79194]: unhealthy Feb 20 03:16:17 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:16:17 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Failed with result 'exit-code'. Feb 20 03:16:17 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. 
Feb 20 03:16:17 localhost podman[79181]: 2026-02-20 08:16:17.451781587 +0000 UTC m=+0.166246185 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp-rhel9/openstack-iscsid, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, managed_by=tripleo_ansible, release=1766032510, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, vcs-type=git, batch=17.1_20260112.1, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.5, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, build-date=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.expose-services=, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc.) Feb 20 03:16:17 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. Feb 20 03:16:17 localhost podman[79182]: 2026-02-20 08:16:17.428734681 +0000 UTC m=+0.139531098 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, url=https://www.redhat.com, distribution-scope=public, batch=17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:36:40Z, tcib_managed=true, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:36:40Z, container_name=ovn_controller, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.buildah.version=1.41.5, vcs-type=git, release=1766032510, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.13, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, config_id=tripleo_step4) Feb 20 03:16:17 localhost podman[79182]: 2026-02-20 08:16:17.508480216 +0000 UTC m=+0.219276603 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1766032510, architecture=x86_64, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, name=rhosp-rhel9/openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, container_name=ovn_controller, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, build-date=2026-01-12T22:36:40Z, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, vendor=Red Hat, Inc., version=17.1.13, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, config_id=tripleo_step4, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:36:40Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, io.openshift.expose-services=) Feb 20 03:16:17 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Deactivated successfully. 
Feb 20 03:16:17 localhost podman[79180]: 2026-02-20 08:16:17.52482141 +0000 UTC m=+0.241963740 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, batch=17.1_20260112.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, build-date=2026-01-12T22:56:19Z, url=https://www.redhat.com, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.expose-services=) Feb 20 03:16:17 localhost podman[79180]: 2026-02-20 08:16:17.565843163 +0000 UTC m=+0.282985493 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.13, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:56:19Z, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, build-date=2026-01-12T22:56:19Z, managed_by=tripleo_ansible) Feb 20 03:16:17 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. Feb 20 03:16:19 localhost sshd[79290]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:16:21 localhost sshd[79292]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:16:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:16:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:16:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:16:43 localhost systemd[1]: tmp-crun.GFz9wj.mount: Deactivated successfully. Feb 20 03:16:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. 
Feb 20 03:16:43 localhost podman[79297]: 2026-02-20 08:16:43.513512639 +0000 UTC m=+0.145474631 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:07:30Z, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2026-01-12T23:07:30Z, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, release=1766032510, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:16:43 localhost podman[79296]: 2026-02-20 08:16:43.561733778 +0000 UTC m=+0.196389953 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:10:15Z, release=1766032510, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., batch=17.1_20260112.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, tcib_managed=true, container_name=logrotate_crond, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:16:43 localhost podman[79296]: 2026-02-20 08:16:43.571594066 +0000 UTC m=+0.206250241 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-cron, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, 
cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, build-date=2026-01-12T22:10:15Z, tcib_managed=true, batch=17.1_20260112.1, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-cron-container, distribution-scope=public, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:16:43 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. 
Feb 20 03:16:43 localhost podman[79295]: 2026-02-20 08:16:43.486166186 +0000 UTC m=+0.123030051 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, version=17.1.13, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
architecture=x86_64, release=1766032510, build-date=2026-01-12T23:07:47Z, io.buildah.version=1.41.5, vcs-type=git, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team) Feb 20 03:16:43 localhost podman[79295]: 2026-02-20 08:16:43.620775371 +0000 UTC m=+0.257639226 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:07:47Z, release=1766032510, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:47Z, version=17.1.13, managed_by=tripleo_ansible, batch=17.1_20260112.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-type=git) Feb 20 03:16:43 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. 
Feb 20 03:16:43 localhost podman[79297]: 2026-02-20 08:16:43.644357421 +0000 UTC m=+0.276319463 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:07:30Z, tcib_managed=true, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:30Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, container_name=ceilometer_agent_ipmi, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, version=17.1.13) Feb 20 03:16:43 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. Feb 20 03:16:43 localhost podman[79334]: 2026-02-20 08:16:43.62704277 +0000 UTC m=+0.134103491 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2026-01-12T22:10:14Z, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, tcib_managed=true, version=17.1.13, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-qdrouterd, container_name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 
'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:16:43 localhost podman[79334]: 2026-02-20 08:16:43.801739124 +0000 UTC m=+0.308799855 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, architecture=x86_64, 
konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, tcib_managed=true, build-date=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, version=17.1.13, 
batch=17.1_20260112.1, url=https://www.redhat.com, vendor=Red Hat, Inc., io.buildah.version=1.41.5, release=1766032510) Feb 20 03:16:43 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:16:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:16:47 localhost podman[79397]: 2026-02-20 08:16:47.434641439 +0000 UTC m=+0.074168465 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1766032510, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20260112.1, version=17.1.13, com.redhat.component=openstack-nova-compute-container) Feb 20 03:16:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:16:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:16:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:16:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. 
Feb 20 03:16:47 localhost podman[79440]: 2026-02-20 08:16:47.620945587 +0000 UTC m=+0.090266702 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-iscsid, release=1766032510, com.redhat.component=openstack-iscsid-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:34:43Z, io.openshift.expose-services=, vcs-ref=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, config_id=tripleo_step3, vendor=Red Hat, Inc.) Feb 20 03:16:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:16:47 localhost podman[79440]: 2026-02-20 08:16:47.714711253 +0000 UTC m=+0.184032368 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, architecture=x86_64, config_id=tripleo_step3, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., release=1766032510, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, 
url=https://www.redhat.com, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, batch=17.1_20260112.1, com.redhat.component=openstack-iscsid-container, vcs-type=git, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team) Feb 20 03:16:47 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. 
Feb 20 03:16:47 localhost podman[79460]: 2026-02-20 08:16:47.68077303 +0000 UTC m=+0.133175516 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, build-date=2026-01-12T22:36:40Z, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, tcib_managed=true, release=1766032510, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, container_name=ovn_controller, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, architecture=x86_64, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, 
cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 20 03:16:47 localhost podman[79500]: 2026-02-20 08:16:47.703811727 +0000 UTC m=+0.062853798 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2026-01-12T22:56:19Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.expose-services=, architecture=x86_64, url=https://www.redhat.com, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, batch=17.1_20260112.1, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, container_name=ovn_metadata_agent) Feb 20 03:16:47 localhost podman[79460]: 2026-02-20 08:16:47.763677001 +0000 UTC m=+0.216079487 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp-rhel9/openstack-ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, container_name=ovn_controller, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:36:40Z, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, version=17.1.13, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, distribution-scope=public, io.buildah.version=1.41.5, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c) Feb 20 03:16:47 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Deactivated successfully. 
Feb 20 03:16:47 localhost podman[79447]: 2026-02-20 08:16:47.773678853 +0000 UTC m=+0.232265216 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, build-date=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, distribution-scope=public, io.buildah.version=1.41.5, url=https://www.redhat.com, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, container_name=nova_compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 
'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:32:04Z, batch=17.1_20260112.1, release=1766032510) Feb 20 03:16:47 localhost podman[79397]: 2026-02-20 08:16:47.78387689 +0000 UTC m=+0.423403856 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 
'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, container_name=nova_migration_target, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, architecture=x86_64, batch=17.1_20260112.1, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 
03:16:47 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:16:47 localhost podman[79447]: 2026-02-20 08:16:47.821755448 +0000 UTC m=+0.280341761 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, name=rhosp-rhel9/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, container_name=nova_compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Feb 20 03:16:47 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. 
Feb 20 03:16:47 localhost podman[79500]: 2026-02-20 08:16:47.833830656 +0000 UTC m=+0.192872727 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, build-date=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:56:19Z, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, version=17.1.13, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=) Feb 20 03:16:47 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. 
Feb 20 03:16:47 localhost podman[79442]: 2026-02-20 08:16:47.733244255 +0000 UTC m=+0.198728516 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.expose-services=, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, maintainer=OpenStack TripleO Team, tcib_managed=true, name=rhosp-rhel9/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:15Z, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Feb 20 03:16:47 localhost podman[79442]: 2026-02-20 08:16:47.914810314 +0000 UTC m=+0.380294605 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:15Z, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, batch=17.1_20260112.1, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.13, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z) Feb 20 03:16:47 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:16:48 localhost systemd[1]: tmp-crun.luBZCj.mount: Deactivated successfully. 
Feb 20 03:16:52 localhost sshd[79625]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:16:54 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 20 03:16:54 localhost recover_tripleo_nova_virtqemud[79628]: 63703 Feb 20 03:16:54 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 20 03:16:54 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 20 03:16:57 localhost systemd[1]: libpod-4896fce42af9dd7b893ebb1c64ae9ab3636bab632c1f87ad547969ba6cbff4f6.scope: Deactivated successfully. Feb 20 03:16:57 localhost podman[79629]: 2026-02-20 08:16:57.606397852 +0000 UTC m=+0.047513641 container died 4896fce42af9dd7b893ebb1c64ae9ab3636bab632c1f87ad547969ba6cbff4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.5, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, config_id=tripleo_step5, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_wait_for_compute_service, build-date=2026-01-12T23:32:04Z, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 20 03:16:57 localhost systemd[1]: tmp-crun.zSQO09.mount: Deactivated successfully. Feb 20 03:16:57 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4896fce42af9dd7b893ebb1c64ae9ab3636bab632c1f87ad547969ba6cbff4f6-userdata-shm.mount: Deactivated successfully. Feb 20 03:16:57 localhost systemd[1]: var-lib-containers-storage-overlay-7d8406dbd1199970186b7c17d6e01204ade90f840cc20a00e264f81a8db7e3fd-merged.mount: Deactivated successfully. 
Feb 20 03:16:57 localhost podman[79629]: 2026-02-20 08:16:57.646736586 +0000 UTC m=+0.087852395 container cleanup 4896fce42af9dd7b893ebb1c64ae9ab3636bab632c1f87ad547969ba6cbff4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, com.redhat.component=openstack-nova-compute-container, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_wait_for_compute_service, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, tcib_managed=true, version=17.1.13, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z) Feb 20 03:16:57 localhost systemd[1]: libpod-conmon-4896fce42af9dd7b893ebb1c64ae9ab3636bab632c1f87ad547969ba6cbff4f6.scope: Deactivated successfully. Feb 20 03:16:57 localhost python3[77601]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_wait_for_compute_service --conmon-pidfile /run/nova_wait_for_compute_service.pid --detach=False --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env __OS_DEBUG=true --env TRIPLEO_CONFIG_HASH=ca9e756af36a4b8ed088db0b68d5c381 --label config_id=tripleo_step5 --label container_name=nova_wait_for_compute_service --label managed_by=tripleo_ansible --label config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_wait_for_compute_service.log --network host --user nova --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/nova:/var/log/nova --volume /var/lib/container-config-scripts:/container-config-scripts registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Feb 20 03:16:58 localhost python3[79684]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:16:58 
localhost python3[79700]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_compute_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 20 03:16:59 localhost python3[79761]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771575418.5806837-118841-171276904614476/source dest=/etc/systemd/system/tripleo_nova_compute.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:16:59 localhost python3[79777]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Feb 20 03:16:59 localhost systemd[1]: Reloading. Feb 20 03:16:59 localhost systemd-sysv-generator[79808]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 03:16:59 localhost systemd-rc-local-generator[79805]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 03:16:59 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 03:17:00 localhost python3[79829]: ansible-systemd Invoked with state=restarted name=tripleo_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 03:17:00 localhost systemd[1]: Reloading. Feb 20 03:17:00 localhost systemd-rc-local-generator[79855]: /etc/rc.d/rc.local is not marked executable, skipping. 
Feb 20 03:17:00 localhost systemd-sysv-generator[79858]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 03:17:00 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 03:17:00 localhost systemd[1]: Starting nova_compute container... Feb 20 03:17:01 localhost tripleo-start-podman-container[79869]: Creating additional drop-in dependency for "nova_compute" (d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628) Feb 20 03:17:01 localhost systemd[1]: Reloading. Feb 20 03:17:01 localhost systemd-rc-local-generator[79925]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 03:17:01 localhost systemd-sysv-generator[79929]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 03:17:01 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 03:17:01 localhost systemd[1]: Started nova_compute container. 
Feb 20 03:17:01 localhost python3[79968]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks5.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:17:03 localhost python3[80089]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks5.json short_hostname=np0005625202 step=5 update_config_hash_only=False Feb 20 03:17:03 localhost python3[80105]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 03:17:04 localhost python3[80121]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_5 config_pattern=container-puppet-*.json config_overrides={} debug=True Feb 20 03:17:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:17:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:17:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. 
Feb 20 03:17:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:17:14 localhost systemd[1]: tmp-crun.Ko6cm7.mount: Deactivated successfully. Feb 20 03:17:14 localhost podman[80123]: 2026-02-20 08:17:14.450331561 +0000 UTC m=+0.086307005 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, release=1766032510, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:07:47Z, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, distribution-scope=public, batch=17.1_20260112.1, version=17.1.13, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, architecture=x86_64) Feb 20 03:17:14 localhost podman[80122]: 2026-02-20 08:17:14.504562694 +0000 UTC m=+0.140946318 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp-rhel9/openstack-qdrouterd, url=https://www.redhat.com, managed_by=tripleo_ansible, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510) Feb 20 03:17:14 localhost podman[80123]: 2026-02-20 08:17:14.509864067 +0000 UTC m=+0.145839501 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, url=https://www.redhat.com, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, managed_by=tripleo_ansible, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, vendor=Red Hat, Inc., build-date=2026-01-12T23:07:47Z, org.opencontainers.image.created=2026-01-12T23:07:47Z, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-compute, io.openshift.expose-services=, version=17.1.13) Feb 20 03:17:14 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. Feb 20 03:17:14 localhost podman[80125]: 2026-02-20 08:17:14.553014349 +0000 UTC m=+0.184376787 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.13, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, vcs-type=git, tcib_managed=true, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:07:30Z, distribution-scope=public, architecture=x86_64) Feb 20 03:17:14 localhost podman[80124]: 2026-02-20 08:17:14.6094231 +0000 UTC m=+0.242932316 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, name=rhosp-rhel9/openstack-cron, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, architecture=x86_64, io.buildah.version=1.41.5, com.redhat.component=openstack-cron-container, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., distribution-scope=public, build-date=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 20 03:17:14 localhost podman[80125]: 2026-02-20 08:17:14.609328838 +0000 UTC m=+0.240691316 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, release=1766032510, name=rhosp-rhel9/openstack-ceilometer-ipmi, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., version=17.1.13, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:30Z, vcs-type=git, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat 
OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:30Z, tcib_managed=true, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20260112.1) Feb 20 03:17:14 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. Feb 20 03:17:14 localhost podman[80124]: 2026-02-20 08:17:14.657405352 +0000 UTC m=+0.290914538 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.13, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, batch=17.1_20260112.1, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, release=1766032510, name=rhosp-rhel9/openstack-cron, config_id=tripleo_step4, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, 
distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:17:14 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. 
Feb 20 03:17:14 localhost podman[80122]: 2026-02-20 08:17:14.750200872 +0000 UTC m=+0.386584436 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, build-date=2026-01-12T22:10:14Z, distribution-scope=public, container_name=metrics_qdr, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, release=1766032510, name=rhosp-rhel9/openstack-qdrouterd, url=https://www.redhat.com, config_id=tripleo_step1, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:17:14 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:17:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:17:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:17:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:17:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:17:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:17:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:17:18 localhost systemd[1]: tmp-crun.zw2ame.mount: Deactivated successfully. 
Feb 20 03:17:18 localhost podman[80277]: 2026-02-20 08:17:18.46089189 +0000 UTC m=+0.089881421 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:56:19Z, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, version=17.1.13, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1766032510, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ovn_metadata_agent, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, architecture=x86_64, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, distribution-scope=public, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:17:18 localhost podman[80279]: 2026-02-20 08:17:18.473597275 +0000 UTC m=+0.094909567 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:36:40Z, release=1766032510, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, url=https://www.redhat.com, container_name=ovn_controller, tcib_managed=true, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, konflux.additional-tags=17.1.13 17.1_20260112.1, 
architecture=x86_64, managed_by=tripleo_ansible, io.buildah.version=1.41.5, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=) Feb 20 03:17:18 localhost podman[80277]: 2026-02-20 08:17:18.498706476 +0000 UTC m=+0.127696017 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, build-date=2026-01-12T22:56:19Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., version=17.1.13, 
org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, url=https://www.redhat.com, tcib_managed=true, architecture=x86_64, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_id=tripleo_step4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, batch=17.1_20260112.1) Feb 20 03:17:18 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. Feb 20 03:17:18 localhost podman[80279]: 2026-02-20 08:17:18.515559204 +0000 UTC m=+0.136871486 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://www.redhat.com, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, release=1766032510, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, container_name=ovn_controller, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-type=git, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
name=rhosp-rhel9/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team) Feb 20 03:17:18 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Deactivated successfully. Feb 20 03:17:18 localhost podman[80278]: 2026-02-20 08:17:18.526923512 +0000 UTC m=+0.154326170 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., build-date=2026-01-12T22:34:43Z, release=1766032510, container_name=iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, name=rhosp-rhel9/openstack-iscsid, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container) Feb 20 03:17:18 localhost podman[80293]: 2026-02-20 08:17:18.558913521 +0000 UTC m=+0.175238859 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, tcib_managed=true, architecture=x86_64, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, config_id=tripleo_step5, build-date=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, container_name=nova_compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, url=https://www.redhat.com, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 20 03:17:18 localhost podman[80278]: 2026-02-20 08:17:18.563941108 +0000 UTC m=+0.191343826 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, url=https://www.redhat.com, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, vendor=Red Hat, Inc., build-date=2026-01-12T22:34:43Z, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.buildah.version=1.41.5, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-iscsid, tcib_managed=true, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3) Feb 20 03:17:18 localhost podman[80280]: 2026-02-20 08:17:18.570807204 +0000 UTC m=+0.191172451 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp-rhel9/openstack-collectd, maintainer=OpenStack TripleO Team, 
vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, batch=17.1_20260112.1, version=17.1.13, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, container_name=collectd, 
managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Feb 20 03:17:18 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. Feb 20 03:17:18 localhost podman[80280]: 2026-02-20 08:17:18.58759384 +0000 UTC m=+0.207959087 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, container_name=collectd, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, version=17.1.13, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, build-date=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, tcib_managed=true, release=1766032510, batch=17.1_20260112.1, managed_by=tripleo_ansible) Feb 20 03:17:18 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. 
Feb 20 03:17:18 localhost podman[80286]: 2026-02-20 08:17:18.636660542 +0000 UTC m=+0.252529497 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, tcib_managed=true, vcs-type=git, name=rhosp-rhel9/openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, container_name=nova_migration_target, managed_by=tripleo_ansible, release=1766032510, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Feb 20 03:17:18 localhost podman[80293]: 2026-02-20 08:17:18.663218633 +0000 UTC m=+0.279543961 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=tripleo_step5, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, vendor=Red Hat, Inc., tcib_managed=true, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 
'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, container_name=nova_compute, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, 
com.redhat.component=openstack-nova-compute-container) Feb 20 03:17:18 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. Feb 20 03:17:19 localhost podman[80286]: 2026-02-20 08:17:19.025835888 +0000 UTC m=+0.641704883 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.13, config_id=tripleo_step4, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, container_name=nova_migration_target, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container) Feb 20 03:17:19 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:17:19 localhost systemd[1]: tmp-crun.lKEc3a.mount: Deactivated successfully. Feb 20 03:17:29 localhost sshd[80429]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:17:30 localhost systemd-logind[760]: New session 33 of user zuul. Feb 20 03:17:30 localhost systemd[1]: Started Session 33 of User zuul. 
Feb 20 03:17:31 localhost python3[80538]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 20 03:17:35 localhost sshd[80725]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:17:38 localhost python3[80803]: ansible-ansible.legacy.dnf Invoked with name=['iptables'] allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None state=None Feb 20 03:17:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:17:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:17:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:17:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. 
Feb 20 03:17:45 localhost podman[80821]: 2026-02-20 08:17:45.445867377 +0000 UTC m=+0.077771942 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, vendor=Red Hat, Inc., build-date=2026-01-12T23:07:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, tcib_managed=true, maintainer=OpenStack 
TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.13, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, architecture=x86_64, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, url=https://www.redhat.com, vcs-type=git, release=1766032510, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:17:45 localhost podman[80822]: 2026-02-20 08:17:45.459776875 +0000 UTC m=+0.086149990 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, release=1766032510, vcs-type=git, name=rhosp-rhel9/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, distribution-scope=public, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, tcib_managed=true, batch=17.1_20260112.1, io.buildah.version=1.41.5, managed_by=tripleo_ansible) Feb 20 03:17:45 localhost podman[80823]: 2026-02-20 08:17:45.505553427 +0000 UTC m=+0.128586852 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, version=17.1.13, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.description=Red Hat 
OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2026-01-12T23:07:30Z, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, distribution-scope=public, managed_by=tripleo_ansible, io.buildah.version=1.41.5, release=1766032510, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, architecture=x86_64, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, tcib_managed=true, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat 
OpenStack Platform 17.1 ceilometer-ipmi) Feb 20 03:17:45 localhost podman[80820]: 2026-02-20 08:17:45.548276237 +0000 UTC m=+0.178239479 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1766032510, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, container_name=metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, name=rhosp-rhel9/openstack-qdrouterd, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, url=https://www.redhat.com, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git) Feb 20 03:17:45 localhost podman[80821]: 2026-02-20 08:17:45.558036942 +0000 UTC m=+0.189941487 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, build-date=2026-01-12T23:07:47Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, container_name=ceilometer_agent_compute, architecture=x86_64, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, url=https://www.redhat.com, vcs-type=git, config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.41.5, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=) Feb 20 03:17:45 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. 
Feb 20 03:17:45 localhost podman[80822]: 2026-02-20 08:17:45.573946344 +0000 UTC m=+0.200319519 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, version=17.1.13, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, architecture=x86_64, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, container_name=logrotate_crond, name=rhosp-rhel9/openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron) Feb 20 03:17:45 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. Feb 20 03:17:45 localhost podman[80823]: 2026-02-20 08:17:45.612159481 +0000 UTC m=+0.235192896 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, version=17.1.13, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2026-01-12T23:07:30Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, url=https://www.redhat.com, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, batch=17.1_20260112.1) Feb 20 03:17:45 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. 
Feb 20 03:17:45 localhost podman[80820]: 2026-02-20 08:17:45.797882453 +0000 UTC m=+0.427845715 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.5, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=metrics_qdr, version=17.1.13, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, distribution-scope=public, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, architecture=x86_64) Feb 20 03:17:45 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:17:46 localhost python3[80993]: ansible-ansible.builtin.iptables Invoked with action=insert chain=INPUT comment=allow ssh access for zuul executor in_interface=eth0 jump=ACCEPT protocol=tcp source=38.102.83.114 table=filter state=present ip_version=ipv4 match=[] destination_ports=[] ctstate=[] syn=ignore flush=False chain_management=False numeric=False rule_num=None wait=None to_source=None destination=None to_destination=None tcp_flags=None gateway=None log_prefix=None log_level=None goto=None out_interface=None fragment=None set_counters=None source_port=None destination_port=None to_ports=None set_dscp_mark=None set_dscp_mark_class=None src_range=None dst_range=None match_set=None match_set_flags=None limit=None limit_burst=None uid_owner=None gid_owner=None reject_with=None icmp_type=None policy=None Feb 20 03:17:46 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled Feb 20 03:17:46 localhost systemd-journald[48906]: Field hash table of /run/log/journal/01f46965e72fd8a157841feaa66c8d52/system.journal has a fill level at 81.1 (270 of 333 
items), suggesting rotation. Feb 20 03:17:46 localhost systemd-journald[48906]: /run/log/journal/01f46965e72fd8a157841feaa66c8d52/system.journal: Journal header limits reached or header out-of-date, rotating. Feb 20 03:17:46 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Feb 20 03:17:46 localhost systemd[1]: tmp-crun.OEfFsU.mount: Deactivated successfully. Feb 20 03:17:46 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Feb 20 03:17:48 localhost sshd[81061]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:17:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:17:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:17:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:17:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:17:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:17:48 localhost systemd[1]: tmp-crun.zT4ix6.mount: Deactivated successfully. 
Feb 20 03:17:48 localhost podman[81065]: 2026-02-20 08:17:48.784327309 +0000 UTC m=+0.103339297 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, url=https://www.redhat.com, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, 
batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, vendor=Red Hat, Inc., io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:17:48 localhost podman[81064]: 2026-02-20 08:17:48.828903039 +0000 UTC m=+0.148296827 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, url=https://www.redhat.com, vcs-ref=705339545363fec600102567c4e923938e0f43b3, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, release=1766032510, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20260112.1, io.openshift.expose-services=, vcs-type=git, name=rhosp-rhel9/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:17:48 localhost podman[81065]: 2026-02-20 08:17:48.832224979 +0000 UTC m=+0.151236947 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
maintainer=OpenStack TripleO Team, distribution-scope=public, url=https://www.redhat.com, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., container_name=ovn_controller, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:17:48 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Deactivated successfully. 
Feb 20 03:17:48 localhost podman[81066]: 2026-02-20 08:17:48.9269242 +0000 UTC m=+0.241370794 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-collectd, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, config_id=tripleo_step3, distribution-scope=public, vcs-type=git, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, tcib_managed=true, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, batch=17.1_20260112.1, build-date=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, release=1766032510, vendor=Red Hat, Inc.) Feb 20 03:17:48 localhost podman[81066]: 2026-02-20 08:17:48.939615925 +0000 UTC m=+0.254062469 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, release=1766032510, batch=17.1_20260112.1, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, architecture=x86_64, version=17.1.13, build-date=2026-01-12T22:10:15Z, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible) Feb 20 03:17:48 localhost podman[81082]: 2026-02-20 08:17:48.890997675 +0000 UTC m=+0.198556001 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, batch=17.1_20260112.1, container_name=nova_compute, config_id=tripleo_step5, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, version=17.1.13, build-date=2026-01-12T23:32:04Z) Feb 20 03:17:48 localhost podman[81082]: 2026-02-20 08:17:48.972363183 +0000 UTC m=+0.279921569 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, version=17.1.13, container_name=nova_compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 
'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1766032510, config_id=tripleo_step5, io.buildah.version=1.41.5, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:32:04Z, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, 
org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=) Feb 20 03:17:48 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. Feb 20 03:17:48 localhost podman[81064]: 2026-02-20 08:17:48.992268735 +0000 UTC m=+0.311662563 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, version=17.1.13, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.created=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, vcs-type=git, name=rhosp-rhel9/openstack-iscsid, url=https://www.redhat.com, io.openshift.expose-services=, vendor=Red Hat, Inc., tcib_managed=true, build-date=2026-01-12T22:34:43Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, distribution-scope=public, config_id=tripleo_step3) Feb 20 03:17:49 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. 
Feb 20 03:17:49 localhost podman[81063]: 2026-02-20 08:17:48.975716765 +0000 UTC m=+0.298805573 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, vendor=Red Hat, Inc., config_id=tripleo_step4, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.created=2026-01-12T22:56:19Z, version=17.1.13, tcib_managed=true, release=1766032510, vcs-type=git, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 20 03:17:49 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. 
Feb 20 03:17:49 localhost podman[81063]: 2026-02-20 08:17:49.058960274 +0000 UTC m=+0.382049032 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, release=1766032510, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:56:19Z) Feb 20 03:17:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:17:49 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. 
Feb 20 03:17:49 localhost podman[81178]: 2026-02-20 08:17:49.167905372 +0000 UTC m=+0.094008372 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, container_name=nova_migration_target, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, io.buildah.version=1.41.5, version=17.1.13, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-compute, build-date=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, architecture=x86_64) Feb 20 03:17:49 localhost podman[81178]: 2026-02-20 08:17:49.553747637 +0000 UTC m=+0.479850647 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, tcib_managed=true, version=17.1.13, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, vcs-type=git, config_id=tripleo_step4, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:17:49 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:17:49 localhost systemd[1]: tmp-crun.kn0OJj.mount: Deactivated successfully. Feb 20 03:18:05 localhost sshd[81201]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:18:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. 
Feb 20 03:18:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:18:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:18:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:18:16 localhost podman[81203]: 2026-02-20 08:18:16.451525263 +0000 UTC m=+0.080118397 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, config_id=tripleo_step1, tcib_managed=true, io.buildah.version=1.41.5, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, build-date=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, release=1766032510, vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, distribution-scope=public, architecture=x86_64) Feb 20 03:18:16 localhost podman[81204]: 2026-02-20 08:18:16.513670951 +0000 UTC m=+0.138567905 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.13, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, vendor=Red Hat, Inc., architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, managed_by=tripleo_ansible, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, build-date=2026-01-12T23:07:47Z, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:18:16 localhost podman[81204]: 2026-02-20 08:18:16.547774657 +0000 UTC m=+0.172671611 container exec_died 
8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, url=https://www.redhat.com, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ceilometer-compute, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, batch=17.1_20260112.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:07:47Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510) Feb 20 03:18:16 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. Feb 20 03:18:16 localhost systemd[1]: tmp-crun.qhaVZt.mount: Deactivated successfully. Feb 20 03:18:16 localhost podman[81206]: 2026-02-20 08:18:16.609109823 +0000 UTC m=+0.231856468 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://www.redhat.com, config_id=tripleo_step4, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.created=2026-01-12T23:07:30Z, version=17.1.13, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, build-date=2026-01-12T23:07:30Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 20 03:18:16 localhost podman[81205]: 2026-02-20 08:18:16.588261156 +0000 UTC m=+0.212352098 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, release=1766032510, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, batch=17.1_20260112.1, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, com.redhat.component=openstack-cron-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-cron, tcib_managed=true, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:10:15Z, architecture=x86_64, distribution-scope=public) Feb 20 03:18:16 localhost podman[81206]: 2026-02-20 08:18:16.659859292 +0000 UTC m=+0.282605937 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, version=17.1.13, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, io.openshift.expose-services=, build-date=2026-01-12T23:07:30Z, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, name=rhosp-rhel9/openstack-ceilometer-ipmi) Feb 20 03:18:16 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. 
Feb 20 03:18:16 localhost podman[81205]: 2026-02-20 08:18:16.679709011 +0000 UTC m=+0.303799853 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, release=1766032510, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-cron, build-date=2026-01-12T22:10:15Z, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, io.openshift.expose-services=, container_name=logrotate_crond, 
com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:18:16 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. Feb 20 03:18:16 localhost podman[81203]: 2026-02-20 08:18:16.699287592 +0000 UTC m=+0.327880676 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, distribution-scope=public, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:14Z, io.openshift.expose-services=, vcs-type=git, config_id=tripleo_step1, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr) Feb 20 03:18:16 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:18:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:18:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. 
Feb 20 03:18:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:18:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:18:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:18:19 localhost podman[81314]: 2026-02-20 08:18:19.209436211 +0000 UTC m=+0.080358044 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step5, architecture=x86_64, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, tcib_managed=true, version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2026-01-12T23:32:04Z, distribution-scope=public) Feb 20 03:18:19 localhost systemd[1]: tmp-crun.p0soxh.mount: Deactivated successfully. 
Feb 20 03:18:19 localhost podman[81314]: 2026-02-20 08:18:19.263728695 +0000 UTC m=+0.134650568 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.expose-services=, release=1766032510, version=17.1.13, com.redhat.component=openstack-nova-compute-container, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, container_name=nova_compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:18:19 localhost systemd[1]: tmp-crun.hNXGH5.mount: Deactivated successfully. Feb 20 03:18:19 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. 
Feb 20 03:18:19 localhost podman[81326]: 2026-02-20 08:18:19.28120256 +0000 UTC m=+0.143158199 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step3, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, name=rhosp-rhel9/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Feb 20 03:18:19 localhost podman[81311]: 2026-02-20 08:18:19.254536485 +0000 UTC m=+0.135855280 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, release=1766032510, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., batch=17.1_20260112.1, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vcs-type=git, build-date=2026-01-12T22:34:43Z, version=17.1.13, container_name=iscsid, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:34:43Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:18:19 localhost podman[81312]: 2026-02-20 08:18:19.316748055 +0000 UTC m=+0.190737672 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, build-date=2026-01-12T22:36:40Z, tcib_managed=true, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ovn-controller, description=Red 
Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., io.buildah.version=1.41.5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ovn_controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, version=17.1.13, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:36:40Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team) Feb 20 03:18:19 localhost podman[81326]: 2026-02-20 08:18:19.346553585 +0000 UTC m=+0.208509274 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 
(image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, managed_by=tripleo_ansible, batch=17.1_20260112.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, name=rhosp-rhel9/openstack-collectd, 
com.redhat.component=openstack-collectd-container, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:15Z, architecture=x86_64, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, version=17.1.13, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Feb 20 03:18:19 localhost podman[81320]: 2026-02-20 08:18:19.37950553 +0000 UTC m=+0.239605809 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.5, version=17.1.13, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, managed_by=tripleo_ansible, batch=17.1_20260112.1, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, container_name=ovn_metadata_agent, tcib_managed=true) Feb 20 03:18:19 localhost podman[81311]: 2026-02-20 08:18:19.391183957 +0000 UTC m=+0.272502792 container exec_died 
47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vcs-type=git, batch=17.1_20260112.1, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-iscsid-container, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, 
build-date=2026-01-12T22:34:43Z, maintainer=OpenStack TripleO Team, vcs-ref=705339545363fec600102567c4e923938e0f43b3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:34:43Z, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public) Feb 20 03:18:19 localhost podman[81312]: 2026-02-20 08:18:19.404583491 +0000 UTC m=+0.278573118 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp-rhel9/openstack-ovn-controller, tcib_managed=true, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, 
vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, container_name=ovn_controller, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, build-date=2026-01-12T22:36:40Z, architecture=x86_64) Feb 20 03:18:19 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. Feb 20 03:18:19 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Deactivated successfully. 
Feb 20 03:18:19 localhost podman[81320]: 2026-02-20 08:18:19.447795415 +0000 UTC m=+0.307895714 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2026-01-12T22:56:19Z, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, 
maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510, tcib_managed=true, vcs-type=git, version=17.1.13, vendor=Red Hat, Inc., distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z) Feb 20 03:18:19 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. Feb 20 03:18:19 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:18:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. 
Feb 20 03:18:20 localhost podman[81466]: 2026-02-20 08:18:20.184872914 +0000 UTC m=+0.074370720 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, container_name=nova_migration_target, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:18:20 localhost podman[81466]: 2026-02-20 08:18:20.557993728 +0000 UTC m=+0.447491564 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, container_name=nova_migration_target, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4) Feb 20 03:18:20 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:18:42 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 20 03:18:42 localhost recover_tripleo_nova_virtqemud[81505]: 63703 Feb 20 03:18:42 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 20 03:18:42 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 20 03:18:46 localhost systemd[1]: session-33.scope: Deactivated successfully. 
Feb 20 03:18:46 localhost systemd[1]: session-33.scope: Consumed 5.630s CPU time. Feb 20 03:18:46 localhost systemd-logind[760]: Session 33 logged out. Waiting for processes to exit. Feb 20 03:18:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:18:46 localhost systemd-logind[760]: Removed session 33. Feb 20 03:18:46 localhost podman[81506]: 2026-02-20 08:18:46.682428398 +0000 UTC m=+0.088804854 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:07:47Z, config_id=tripleo_step4, release=1766032510, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, version=17.1.13, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-compute, io.buildah.version=1.41.5, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20260112.1) Feb 20 03:18:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:18:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. 
Feb 20 03:18:46 localhost podman[81506]: 2026-02-20 08:18:46.737800112 +0000 UTC m=+0.144176518 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.13, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp-rhel9/openstack-ceilometer-compute, summary=Red 
Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step4, io.openshift.expose-services=, io.buildah.version=1.41.5, release=1766032510, batch=17.1_20260112.1, distribution-scope=public, container_name=ceilometer_agent_compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:47Z, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:18:46 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. Feb 20 03:18:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. 
Feb 20 03:18:46 localhost podman[81527]: 2026-02-20 08:18:46.795588761 +0000 UTC m=+0.088951176 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_id=tripleo_step4, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:07:30Z, batch=17.1_20260112.1, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, architecture=x86_64, build-date=2026-01-12T23:07:30Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container) Feb 20 03:18:46 localhost podman[81527]: 2026-02-20 08:18:46.830759667 +0000 UTC m=+0.124122002 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, tcib_managed=true, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., vcs-type=git, container_name=ceilometer_agent_ipmi, build-date=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, config_id=tripleo_step4, io.buildah.version=1.41.5, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, 
distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi) Feb 20 03:18:46 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. 
Feb 20 03:18:46 localhost podman[81557]: 2026-02-20 08:18:46.851967522 +0000 UTC m=+0.074096073 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, vendor=Red Hat, Inc., tcib_managed=true, name=rhosp-rhel9/openstack-qdrouterd, version=17.1.13, build-date=2026-01-12T22:10:14Z, url=https://www.redhat.com, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:18:46 localhost podman[81539]: 2026-02-20 08:18:46.900706276 +0000 UTC m=+0.182444716 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, managed_by=tripleo_ansible, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, config_id=tripleo_step4, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, name=rhosp-rhel9/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2026-01-12T22:10:15Z, architecture=x86_64, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:15Z) Feb 20 03:18:46 localhost podman[81539]: 2026-02-20 08:18:46.908641662 +0000 UTC m=+0.190380102 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, architecture=x86_64, container_name=logrotate_crond, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step4, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron, 
tcib_managed=true, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-cron-container, name=rhosp-rhel9/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:18:46 localhost systemd[1]: 
df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. Feb 20 03:18:47 localhost podman[81557]: 2026-02-20 08:18:47.023703377 +0000 UTC m=+0.245831948 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20260112.1, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 
qdrouterd, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, build-date=2026-01-12T22:10:14Z, managed_by=tripleo_ansible) Feb 20 03:18:47 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:18:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. 
Feb 20 03:18:49 localhost podman[81629]: 2026-02-20 08:18:49.453174335 +0000 UTC m=+0.085632278 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, release=1766032510, version=17.1.13, io.buildah.version=1.41.5, container_name=nova_compute, architecture=x86_64, config_id=tripleo_step5, batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc.) Feb 20 03:18:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:18:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:18:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. 
Feb 20 03:18:49 localhost podman[81629]: 2026-02-20 08:18:49.516808223 +0000 UTC m=+0.149266156 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, distribution-scope=public, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, vcs-type=git, name=rhosp-rhel9/openstack-nova-compute, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, release=1766032510, io.openshift.expose-services=, build-date=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step5, io.buildah.version=1.41.5) Feb 20 03:18:49 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. 
Feb 20 03:18:49 localhost podman[81653]: 2026-02-20 08:18:49.570631805 +0000 UTC m=+0.088672630 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, name=rhosp-rhel9/openstack-ovn-controller, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, tcib_managed=true, config_id=tripleo_step4, distribution-scope=public, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, release=1766032510, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, build-date=2026-01-12T22:36:40Z, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 20 03:18:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:18:49 localhost podman[81652]: 2026-02-20 08:18:49.635597219 +0000 UTC m=+0.157678363 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, version=17.1.13, io.buildah.version=1.41.5, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-type=git) Feb 20 03:18:49 localhost podman[81653]: 2026-02-20 08:18:49.646048343 +0000 UTC m=+0.164089118 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:36:40Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.buildah.version=1.41.5, batch=17.1_20260112.1, 
io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, release=1766032510, version=17.1.13, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container) Feb 20 03:18:49 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Deactivated successfully. 
Feb 20 03:18:49 localhost podman[81652]: 2026-02-20 08:18:49.677674282 +0000 UTC m=+0.199755436 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2026-01-12T22:34:43Z, org.opencontainers.image.created=2026-01-12T22:34:43Z, tcib_managed=true, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, vcs-type=git, managed_by=tripleo_ansible, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-iscsid, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Feb 20 03:18:49 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. Feb 20 03:18:49 localhost podman[81691]: 2026-02-20 08:18:49.650186055 +0000 UTC m=+0.066173727 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_id=tripleo_step3, container_name=collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, name=rhosp-rhel9/openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git) Feb 20 03:18:49 localhost podman[81691]: 2026-02-20 08:18:49.729579322 +0000 UTC m=+0.145566984 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, 
cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, version=17.1.13, vendor=Red Hat, Inc., url=https://www.redhat.com, name=rhosp-rhel9/openstack-collectd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, distribution-scope=public, io.buildah.version=1.41.5, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, release=1766032510, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:18:49 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:18:49 localhost podman[81655]: 2026-02-20 08:18:49.817642584 +0000 UTC m=+0.331343440 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, architecture=x86_64, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, config_id=tripleo_step4, vendor=Red Hat, Inc., release=1766032510, vcs-type=git, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:56:19Z) Feb 20 03:18:49 localhost podman[81655]: 2026-02-20 08:18:49.892874827 +0000 UTC m=+0.406575633 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 
(image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.41.5, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, url=https://www.redhat.com, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, version=17.1.13, maintainer=OpenStack TripleO Team, distribution-scope=public) Feb 20 03:18:49 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. Feb 20 03:18:50 localhost systemd[1]: tmp-crun.Ez7BWP.mount: Deactivated successfully. Feb 20 03:18:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. 
Feb 20 03:18:51 localhost podman[81765]: 2026-02-20 08:18:51.443102393 +0000 UTC m=+0.080231960 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, io.buildah.version=1.41.5, build-date=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step4, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 20 03:18:51 localhost podman[81765]: 2026-02-20 08:18:51.808623071 +0000 UTC m=+0.445752608 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, release=1766032510, container_name=nova_migration_target) Feb 20 03:18:51 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:18:54 localhost sshd[81789]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:18:54 localhost systemd-logind[760]: New session 34 of user zuul. Feb 20 03:18:54 localhost systemd[1]: Started Session 34 of User zuul. 
Feb 20 03:18:55 localhost python3[81808]: ansible-ansible.legacy.dnf Invoked with name=['systemd-container'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None Feb 20 03:18:57 localhost sshd[81810]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:19:16 localhost sshd[81812]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:19:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:19:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:19:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:19:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:19:17 localhost systemd[1]: tmp-crun.wGEXAJ.mount: Deactivated successfully. 
Feb 20 03:19:17 localhost podman[81815]: 2026-02-20 08:19:17.305290416 +0000 UTC m=+0.091695441 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:07:47Z, managed_by=tripleo_ansible, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, config_id=tripleo_step4, io.buildah.version=1.41.5, release=1766032510, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:07:47Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': 
False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Feb 20 03:19:17 localhost systemd[1]: tmp-crun.ZvGJPC.mount: Deactivated successfully. Feb 20 03:19:17 localhost podman[81814]: 2026-02-20 08:19:17.348803308 +0000 UTC m=+0.135300726 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.created=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, io.buildah.version=1.41.5, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 20 03:19:17 localhost podman[81815]: 2026-02-20 08:19:17.360696441 +0000 UTC m=+0.147101456 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.buildah.version=1.41.5, batch=17.1_20260112.1, version=17.1.13, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, 
build-date=2026-01-12T23:07:47Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ceilometer_agent_compute, distribution-scope=public, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., config_id=tripleo_step4, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1766032510, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Feb 20 03:19:17 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. Feb 20 03:19:17 localhost podman[81817]: 2026-02-20 08:19:17.455224649 +0000 UTC m=+0.240170255 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, container_name=ceilometer_agent_ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:30Z, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.13, architecture=x86_64, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true) Feb 20 03:19:17 localhost podman[81817]: 2026-02-20 08:19:17.492808529 +0000 UTC m=+0.277754175 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, release=1766032510, org.opencontainers.image.created=2026-01-12T23:07:30Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:30Z, distribution-scope=public, architecture=x86_64, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-type=git, version=17.1.13) Feb 20 03:19:17 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. 
Feb 20 03:19:17 localhost podman[81816]: 2026-02-20 08:19:17.499549543 +0000 UTC m=+0.285423354 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., vcs-type=git, config_id=tripleo_step4, name=rhosp-rhel9/openstack-cron, io.buildah.version=1.41.5, release=1766032510, batch=17.1_20260112.1, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Feb 20 03:19:17 localhost podman[81814]: 2026-02-20 08:19:17.572994887 +0000 UTC m=+0.359492335 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, release=1766032510, io.openshift.expose-services=, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, version=17.1.13, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
qdrouterd, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:19:17 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. 
Feb 20 03:19:17 localhost podman[81816]: 2026-02-20 08:19:17.62832245 +0000 UTC m=+0.414196261 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, build-date=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-cron, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, 
konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1766032510, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron) Feb 20 03:19:17 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. Feb 20 03:19:18 localhost systemd[1]: tmp-crun.7kkae6.mount: Deactivated successfully. Feb 20 03:19:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:19:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:19:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:19:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:19:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. 
Feb 20 03:19:20 localhost podman[81918]: 2026-02-20 08:19:20.468654467 +0000 UTC m=+0.092230296 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, version=17.1.13, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://www.redhat.com, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, release=1766032510, build-date=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible) Feb 20 03:19:20 localhost systemd[1]: tmp-crun.bGZRTD.mount: Deactivated successfully. Feb 20 03:19:20 localhost podman[81917]: 2026-02-20 08:19:20.509031884 +0000 UTC m=+0.137904197 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:36:40Z, managed_by=tripleo_ansible, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ovn-controller, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, 
io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1766032510, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container) Feb 20 03:19:20 localhost podman[81920]: 2026-02-20 08:19:20.515344655 +0000 UTC m=+0.136693504 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, container_name=nova_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, batch=17.1_20260112.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', 
'/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.13, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.buildah.version=1.41.5) Feb 20 03:19:20 localhost podman[81918]: 2026-02-20 08:19:20.527951138 +0000 UTC m=+0.151527017 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-collectd, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-collectd-container, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:15Z, config_data={'cap_add': 
['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, architecture=x86_64, url=https://www.redhat.com, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 20 03:19:20 localhost podman[81920]: 2026-02-20 08:19:20.536828589 +0000 UTC m=+0.158177458 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_id=tripleo_step5, org.opencontainers.image.created=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, 
architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., url=https://www.redhat.com, tcib_managed=true, distribution-scope=public, release=1766032510, version=17.1.13, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:19:20 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:19:20 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. 
Feb 20 03:19:20 localhost podman[81915]: 2026-02-20 08:19:20.444632074 +0000 UTC m=+0.076934351 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, version=17.1.13, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.expose-services=, batch=17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ovn_metadata_agent, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, io.buildah.version=1.41.5, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:56:19Z, maintainer=OpenStack TripleO Team) Feb 20 03:19:20 localhost podman[81917]: 2026-02-20 08:19:20.554267242 +0000 UTC m=+0.183139555 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, container_name=ovn_controller, distribution-scope=public, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1766032510, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:36:40Z, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, url=https://www.redhat.com, build-date=2026-01-12T22:36:40Z, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:19:20 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Deactivated successfully. 
Feb 20 03:19:20 localhost podman[81916]: 2026-02-20 08:19:20.607861448 +0000 UTC m=+0.240625777 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, container_name=iscsid, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, distribution-scope=public, tcib_managed=true, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-iscsid, vendor=Red Hat, 
Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, url=https://www.redhat.com, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:34:43Z, vcs-type=git) Feb 20 03:19:20 localhost podman[81915]: 2026-02-20 08:19:20.631433058 +0000 UTC m=+0.263735355 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.41.5, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, release=1766032510, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, config_id=tripleo_step4, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.created=2026-01-12T22:56:19Z, architecture=x86_64, build-date=2026-01-12T22:56:19Z) Feb 20 03:19:20 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. 
Feb 20 03:19:20 localhost podman[81916]: 2026-02-20 08:19:20.643182197 +0000 UTC m=+0.275946566 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, url=https://www.redhat.com, config_id=tripleo_step3, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, managed_by=tripleo_ansible, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, name=rhosp-rhel9/openstack-iscsid, version=17.1.13, build-date=2026-01-12T22:34:43Z, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:19:20 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. Feb 20 03:19:20 localhost python3[82046]: ansible-ansible.legacy.dnf Invoked with name=['sos'] state=latest allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None Feb 20 03:19:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:19:22 localhost sshd[82101]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:19:22 localhost systemd[1]: tmp-crun.cPTQJx.mount: Deactivated successfully. 
Feb 20 03:19:22 localhost podman[82102]: 2026-02-20 08:19:22.47312135 +0000 UTC m=+0.107948683 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.5, distribution-scope=public, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:32:04Z, 
konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, version=17.1.13, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, container_name=nova_migration_target, release=1766032510, batch=17.1_20260112.1, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2026-01-12T23:32:04Z, tcib_managed=true, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 20 03:19:22 localhost podman[82102]: 2026-02-20 08:19:22.877992547 +0000 UTC m=+0.512819890 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, name=rhosp-rhel9/openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, vendor=Red Hat, Inc., url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
org.opencontainers.image.created=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, build-date=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, release=1766032510, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, vcs-type=git) Feb 20 03:19:22 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:19:24 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Feb 20 03:19:24 localhost systemd[1]: Starting man-db-cache-update.service... Feb 20 03:19:24 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Feb 20 03:19:24 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Feb 20 03:19:24 localhost systemd[1]: Finished man-db-cache-update.service. 
Feb 20 03:19:24 localhost systemd[1]: run-rf2092b2223c14d679fe56e07631b1b12.service: Deactivated successfully. Feb 20 03:19:24 localhost systemd[1]: run-rab76fb12cdd246659fecab19ca20663c.service: Deactivated successfully. Feb 20 03:19:27 localhost sshd[82288]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:19:38 localhost sshd[82290]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:19:45 localhost sshd[82292]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:19:47 localhost ceph-osd[31981]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 20 03:19:47 localhost ceph-osd[31981]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 2400.1 total, 600.0 interval
Cumulative writes: 4566 writes, 20K keys, 4566 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 4566 writes, 473 syncs, 9.65 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 20 03:19:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:19:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:19:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:19:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. 
Feb 20 03:19:48 localhost podman[82296]: 2026-02-20 08:19:48.464593233 +0000 UTC m=+0.091133196 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, name=rhosp-rhel9/openstack-cron, config_id=tripleo_step4, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, 
build-date=2026-01-12T22:10:15Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, release=1766032510, vcs-type=git) Feb 20 03:19:48 localhost podman[82296]: 2026-02-20 08:19:48.476753973 +0000 UTC m=+0.103293946 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=logrotate_crond, name=rhosp-rhel9/openstack-cron, release=1766032510, vendor=Red Hat, Inc., tcib_managed=true, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Feb 20 03:19:48 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. 
Feb 20 03:19:48 localhost podman[82297]: 2026-02-20 08:19:48.562310167 +0000 UTC m=+0.182733914 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, release=1766032510, config_id=tripleo_step4, build-date=2026-01-12T23:07:30Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, maintainer=OpenStack TripleO 
Team, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.buildah.version=1.41.5, io.openshift.expose-services=, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:07:30Z, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 20 03:19:48 localhost podman[82295]: 2026-02-20 08:19:48.612184322 +0000 UTC m=+0.238724415 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, version=17.1.13, batch=17.1_20260112.1, config_id=tripleo_step4, managed_by=tripleo_ansible, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:07:47Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ceilometer-compute, architecture=x86_64, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., url=https://www.redhat.com, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:19:48 localhost podman[82297]: 2026-02-20 08:19:48.638857636 +0000 UTC m=+0.259281403 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, 
io.openshift.expose-services=, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:30Z, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_ipmi, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, name=rhosp-rhel9/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, build-date=2026-01-12T23:07:30Z, 
vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 20 03:19:48 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. Feb 20 03:19:48 localhost podman[82295]: 2026-02-20 08:19:48.68797101 +0000 UTC m=+0.314511153 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, 
vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ceilometer-compute, distribution-scope=public, build-date=2026-01-12T23:07:47Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, batch=17.1_20260112.1, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ceilometer_agent_compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, vcs-type=git, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.5, release=1766032510, config_id=tripleo_step4, maintainer=OpenStack TripleO Team) Feb 20 03:19:48 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. 
Feb 20 03:19:48 localhost podman[82294]: 2026-02-20 08:19:48.768462397 +0000 UTC m=+0.395959647 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:14Z, release=1766032510, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, container_name=metrics_qdr, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:19:48 localhost podman[82294]: 2026-02-20 08:19:48.995991126 +0000 UTC m=+0.623488376 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, release=1766032510, build-date=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, version=17.1.13, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, architecture=x86_64, name=rhosp-rhel9/openstack-qdrouterd, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 20 03:19:49 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:19:49 localhost systemd[1]: tmp-crun.hrzG26.mount: Deactivated successfully. Feb 20 03:19:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:19:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. 
Feb 20 03:19:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:19:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:19:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:19:51 localhost podman[82440]: 2026-02-20 08:19:51.444689495 +0000 UTC m=+0.082249794 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, build-date=2026-01-12T22:56:19Z, release=1766032510, vendor=Red Hat, Inc., batch=17.1_20260112.1, distribution-scope=public, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, vcs-type=git, version=17.1.13, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 
'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team) Feb 20 03:19:51 localhost podman[82443]: 2026-02-20 08:19:51.459524738 +0000 UTC m=+0.084864046 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-collectd, 
konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, release=1766032510, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, vendor=Red Hat, Inc., batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, architecture=x86_64, 
summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=collectd, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, vcs-type=git) Feb 20 03:19:51 localhost podman[82440]: 2026-02-20 08:19:51.480767165 +0000 UTC m=+0.118327494 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:56:19Z, tcib_managed=true, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, io.openshift.expose-services=, io.buildah.version=1.41.5, release=1766032510, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., architecture=x86_64) Feb 20 03:19:51 localhost podman[82455]: 2026-02-20 08:19:51.48833791 +0000 UTC m=+0.115586590 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, release=1766032510, url=https://www.redhat.com, architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.13, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, config_id=tripleo_step5, build-date=2026-01-12T23:32:04Z) Feb 20 03:19:51 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. 
Feb 20 03:19:51 localhost podman[82455]: 2026-02-20 08:19:51.538513393 +0000 UTC m=+0.165762083 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, 
io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1766032510, build-date=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step5, container_name=nova_compute, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:32:04Z, distribution-scope=public, url=https://www.redhat.com) Feb 20 03:19:51 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. 
Feb 20 03:19:51 localhost podman[82442]: 2026-02-20 08:19:51.555847445 +0000 UTC m=+0.189762956 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, url=https://www.redhat.com, vendor=Red Hat, Inc., build-date=2026-01-12T22:36:40Z, name=rhosp-rhel9/openstack-ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:36:40Z, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, 
vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.buildah.version=1.41.5, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, architecture=x86_64) Feb 20 03:19:51 localhost podman[82443]: 2026-02-20 08:19:51.572453846 +0000 UTC m=+0.197793144 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, managed_by=tripleo_ansible, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, architecture=x86_64, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_id=tripleo_step3, release=1766032510, version=17.1.13) Feb 20 03:19:51 localhost podman[82442]: 2026-02-20 08:19:51.581997225 +0000 UTC m=+0.215912776 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vcs-type=git, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z, container_name=ovn_controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, url=https://www.redhat.com, version=17.1.13, 
config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.created=2026-01-12T22:36:40Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, distribution-scope=public, name=rhosp-rhel9/openstack-ovn-controller) Feb 20 03:19:51 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:19:51 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Deactivated successfully. 
Feb 20 03:19:51 localhost podman[82441]: 2026-02-20 08:19:51.662626005 +0000 UTC m=+0.298708625 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, io.buildah.version=1.41.5, build-date=2026-01-12T22:34:43Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, vcs-ref=705339545363fec600102567c4e923938e0f43b3, container_name=iscsid, vendor=Red Hat, Inc., tcib_managed=true, distribution-scope=public, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:34:43Z, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp-rhel9/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git)
Feb 20 03:19:51 localhost ceph-osd[32921]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 20 03:19:51 localhost ceph-osd[32921]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 2400.1 total, 600.0 interval
Cumulative writes: 5009 writes, 22K keys, 5009 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 5009 writes, 566 syncs, 8.85 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 20 03:19:51 localhost podman[82441]: 2026-02-20 08:19:51.669778958 +0000 UTC m=+0.305861598 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-type=git, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'},
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, distribution-scope=public, config_id=tripleo_step3, io.openshift.expose-services=, name=rhosp-rhel9/openstack-iscsid, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, build-date=2026-01-12T22:34:43Z, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, vcs-ref=705339545363fec600102567c4e923938e0f43b3, 
maintainer=OpenStack TripleO Team, release=1766032510) Feb 20 03:19:51 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. Feb 20 03:19:52 localhost systemd[1]: tmp-crun.oy2WDI.mount: Deactivated successfully. Feb 20 03:19:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:19:53 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 20 03:19:53 localhost recover_tripleo_nova_virtqemud[82552]: 63703 Feb 20 03:19:53 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 20 03:19:53 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 20 03:19:53 localhost systemd[1]: tmp-crun.boYoQN.mount: Deactivated successfully. Feb 20 03:19:53 localhost podman[82547]: 2026-02-20 08:19:53.452500059 +0000 UTC m=+0.089830851 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, release=1766032510, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, tcib_managed=true, build-date=2026-01-12T23:32:04Z, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-compute, batch=17.1_20260112.1, architecture=x86_64, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5) Feb 20 03:19:53 localhost podman[82547]: 2026-02-20 08:19:53.855939837 +0000 UTC m=+0.493270689 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:32:04Z, 
com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, architecture=x86_64, distribution-scope=public, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
batch=17.1_20260112.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, container_name=nova_migration_target, version=17.1.13, config_id=tripleo_step4, build-date=2026-01-12T23:32:04Z, url=https://www.redhat.com, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute) Feb 20 03:19:53 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:19:58 localhost python3[82585]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager repos --disable rhel-9-for-x86_64-baseos-eus-rpms --disable rhel-9-for-x86_64-appstream-eus-rpms --disable rhel-9-for-x86_64-highavailability-eus-rpms --disable openstack-17.1-for-rhel-9-x86_64-rpms --disable fast-datapath-for-rhel-9-x86_64-rpms _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 03:20:01 localhost rhsm-service[6614]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Feb 20 03:20:01 localhost rhsm-service[6614]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Feb 20 03:20:18 localhost sshd[82775]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:20:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:20:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:20:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:20:18 localhost systemd[1]: tmp-crun.WW6ZTH.mount: Deactivated successfully. 
Feb 20 03:20:18 localhost podman[82777]: 2026-02-20 08:20:18.975493458 +0000 UTC m=+0.090045677 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:47Z, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1766032510, build-date=2026-01-12T23:07:47Z, 
org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, url=https://www.redhat.com, batch=17.1_20260112.1, config_id=tripleo_step4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, io.buildah.version=1.41.5) Feb 20 03:20:19 localhost podman[82777]: 2026-02-20 08:20:19.018531707 +0000 UTC m=+0.133083926 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:07:47Z, architecture=x86_64, io.openshift.expose-services=, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, build-date=2026-01-12T23:07:47Z, name=rhosp-rhel9/openstack-ceilometer-compute, maintainer=OpenStack 
TripleO Team, release=1766032510, vcs-type=git, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20260112.1) Feb 20 03:20:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:20:19 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. 
Feb 20 03:20:19 localhost podman[82778]: 2026-02-20 08:20:19.027646834 +0000 UTC m=+0.141816553 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1766032510, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, container_name=logrotate_crond, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, io.openshift.expose-services=, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container) Feb 20 03:20:19 localhost podman[82779]: 2026-02-20 08:20:18.951165107 +0000 UTC m=+0.066193139 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_ipmi, 
build-date=2026-01-12T23:07:30Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, managed_by=tripleo_ansible, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, tcib_managed=true) Feb 20 03:20:19 localhost podman[82778]: 2026-02-20 08:20:19.06174733 +0000 UTC m=+0.175917009 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, version=17.1.13, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., tcib_managed=true, 
name=rhosp-rhel9/openstack-cron, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1766032510, io.buildah.version=1.41.5, architecture=x86_64, config_id=tripleo_step4, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git) Feb 20 03:20:19 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. 
Feb 20 03:20:19 localhost podman[82779]: 2026-02-20 08:20:19.07973297 +0000 UTC m=+0.194761052 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:07:30Z, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, distribution-scope=public, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., build-date=2026-01-12T23:07:30Z, version=17.1.13, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5) Feb 20 03:20:19 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. Feb 20 03:20:19 localhost podman[82844]: 2026-02-20 08:20:19.169404825 +0000 UTC m=+0.132832549 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, io.buildah.version=1.41.5, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, url=https://www.redhat.com, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, batch=17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.created=2026-01-12T22:10:14Z) Feb 20 03:20:19 localhost podman[82844]: 2026-02-20 08:20:19.341771916 +0000 UTC m=+0.305199630 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, version=17.1.13, architecture=x86_64, url=https://www.redhat.com, tcib_managed=true, 
managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-qdrouterd, vcs-type=git, distribution-scope=public, container_name=metrics_qdr, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, 
org.opencontainers.image.created=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1) Feb 20 03:20:19 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:20:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:20:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:20:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:20:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:20:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:20:22 localhost systemd[1]: tmp-crun.Qryi24.mount: Deactivated successfully. 
Feb 20 03:20:22 localhost podman[82877]: 2026-02-20 08:20:22.465599642 +0000 UTC m=+0.096978334 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, release=1766032510, url=https://www.redhat.com, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, build-date=2026-01-12T22:34:43Z, distribution-scope=public, tcib_managed=true, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, batch=17.1_20260112.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., container_name=iscsid) Feb 20 03:20:22 localhost podman[82877]: 2026-02-20 08:20:22.475311857 +0000 UTC m=+0.106690569 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, build-date=2026-01-12T22:34:43Z, container_name=iscsid, name=rhosp-rhel9/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_id=tripleo_step3, url=https://www.redhat.com, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, managed_by=tripleo_ansible, version=17.1.13, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vcs-type=git) Feb 20 03:20:22 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. 
Feb 20 03:20:22 localhost podman[82876]: 2026-02-20 08:20:22.518355256 +0000 UTC m=+0.155189406 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, config_id=tripleo_step4, architecture=x86_64, release=1766032510, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, 
vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, version=17.1.13, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:56:19Z, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=) Feb 20 03:20:22 localhost podman[82882]: 2026-02-20 08:20:22.57485043 +0000 UTC m=+0.202253784 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-collectd, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, 
url=https://www.redhat.com) Feb 20 03:20:22 localhost podman[82876]: 2026-02-20 08:20:22.589691364 +0000 UTC m=+0.226525524 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, url=https://www.redhat.com, vcs-type=git, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510, container_name=ovn_metadata_agent, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.13, architecture=x86_64, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=) Feb 20 03:20:22 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. 
Feb 20 03:20:22 localhost podman[82882]: 2026-02-20 08:20:22.610662153 +0000 UTC m=+0.238065537 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, distribution-scope=public, name=rhosp-rhel9/openstack-collectd, release=1766032510, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, container_name=collectd, config_id=tripleo_step3, io.buildah.version=1.41.5, url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible) Feb 20 03:20:22 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. 
Feb 20 03:20:22 localhost podman[82878]: 2026-02-20 08:20:22.620777708 +0000 UTC m=+0.248931783 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.5, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, maintainer=OpenStack TripleO Team, 
org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, config_id=tripleo_step4, tcib_managed=true, build-date=2026-01-12T22:36:40Z, name=rhosp-rhel9/openstack-ovn-controller) Feb 20 03:20:22 localhost podman[82884]: 2026-02-20 08:20:22.673283693 +0000 UTC m=+0.294675614 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, batch=17.1_20260112.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, container_name=nova_compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, config_id=tripleo_step5, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': 
'/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Feb 20 03:20:22 localhost podman[82878]: 2026-02-20 08:20:22.69745992 +0000 UTC m=+0.325614005 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, url=https://www.redhat.com, 
release=1766032510, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, architecture=x86_64, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ovn-controller, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 20 03:20:22 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Deactivated 
successfully. Feb 20 03:20:22 localhost podman[82884]: 2026-02-20 08:20:22.754133529 +0000 UTC m=+0.375525500 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, distribution-scope=public, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=nova_compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, version=17.1.13) Feb 20 03:20:22 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. Feb 20 03:20:23 localhost systemd[1]: tmp-crun.aegX7g.mount: Deactivated successfully. Feb 20 03:20:23 localhost sshd[82985]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:20:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. 
Feb 20 03:20:24 localhost podman[82987]: 2026-02-20 08:20:24.4429852 +0000 UTC m=+0.082526332 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, release=1766032510, 
org.opencontainers.image.created=2026-01-12T23:32:04Z, config_id=tripleo_step4, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, tcib_managed=true, url=https://www.redhat.com, vendor=Red Hat, Inc., version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z) Feb 20 03:20:24 localhost podman[82987]: 2026-02-20 08:20:24.838928335 +0000 UTC m=+0.478469467 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, managed_by=tripleo_ansible, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., build-date=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 20 03:20:24 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. 
Feb 20 03:20:43 localhost sshd[83138]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:20:48 localhost python3[83155]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager repos --disable rhceph-7-tools-for-rhel-9-x86_64-rpms _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 03:20:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:20:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:20:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:20:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. 
Feb 20 03:20:49 localhost podman[83158]: 2026-02-20 08:20:49.450508947 +0000 UTC m=+0.088740542 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:07:47Z, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, 
summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, distribution-scope=public, build-date=2026-01-12T23:07:47Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.13, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-type=git) Feb 20 03:20:49 localhost podman[83159]: 2026-02-20 08:20:49.504120553 +0000 UTC m=+0.140750774 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, url=https://www.redhat.com, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:15Z, build-date=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true) Feb 20 03:20:49 localhost podman[83158]: 2026-02-20 08:20:49.508700718 +0000 UTC m=+0.146932303 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp-rhel9/openstack-ceilometer-compute, distribution-scope=public, build-date=2026-01-12T23:07:47Z, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:07:47Z, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5) Feb 20 03:20:49 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. Feb 20 03:20:49 localhost podman[83159]: 2026-02-20 08:20:49.541646333 +0000 UTC m=+0.178276504 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.openshift.expose-services=, batch=17.1_20260112.1, release=1766032510, distribution-scope=public, build-date=2026-01-12T22:10:15Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, vendor=Red Hat, Inc., version=17.1.13, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z) Feb 20 03:20:49 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. 
Feb 20 03:20:49 localhost podman[83160]: 2026-02-20 08:20:49.555227241 +0000 UTC m=+0.190158835 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1766032510, name=rhosp-rhel9/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., managed_by=tripleo_ansible, version=17.1.13, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:07:30Z, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:07:30Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:20:49 localhost podman[83160]: 2026-02-20 08:20:49.608359844 +0000 UTC m=+0.243291488 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_id=tripleo_step4, build-date=2026-01-12T23:07:30Z, release=1766032510, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, version=17.1.13, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64) Feb 20 03:20:49 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. 
Feb 20 03:20:49 localhost podman[83161]: 2026-02-20 08:20:49.610564664 +0000 UTC m=+0.238206041 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, io.openshift.expose-services=, vcs-type=git, build-date=2026-01-12T22:10:14Z, release=1766032510, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.5, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:20:49 localhost podman[83161]: 2026-02-20 08:20:49.784800436 +0000 UTC m=+0.412441853 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=metrics_qdr, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, build-date=2026-01-12T22:10:14Z, config_id=tripleo_step1, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 20 03:20:49 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:20:51 localhost rhsm-service[6614]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Feb 20 03:20:52 localhost rhsm-service[6614]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Feb 20 03:20:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. 
Feb 20 03:20:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:20:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:20:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:20:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:20:53 localhost podman[83441]: 2026-02-20 08:20:53.451777685 +0000 UTC m=+0.079870700 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, release=1766032510, batch=17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': 
True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, io.openshift.expose-services=) Feb 20 03:20:53 localhost podman[83432]: 2026-02-20 08:20:53.430462717 +0000 UTC m=+0.073939420 container 
health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, tcib_managed=true, container_name=ovn_metadata_agent, batch=17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:56:19Z, org.opencontainers.image.created=2026-01-12T22:56:19Z, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 20 03:20:53 localhost podman[83434]: 2026-02-20 08:20:53.495194645 +0000 UTC m=+0.129455217 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, version=17.1.13, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ovn-controller, release=1766032510, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1, config_id=tripleo_step4, io.buildah.version=1.41.5, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:36:40Z, url=https://www.redhat.com, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:36:40Z) Feb 20 03:20:53 localhost podman[83432]: 2026-02-20 08:20:53.509409231 +0000 UTC m=+0.152885874 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:56:19Z, org.opencontainers.image.created=2026-01-12T22:56:19Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step4, batch=17.1_20260112.1, io.openshift.expose-services=, url=https://www.redhat.com, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.5, release=1766032510, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, architecture=x86_64, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., version=17.1.13) Feb 20 03:20:53 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. Feb 20 03:20:53 localhost podman[83433]: 2026-02-20 08:20:53.546957381 +0000 UTC m=+0.183436853 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.13, release=1766032510, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, io.openshift.expose-services=, build-date=2026-01-12T22:34:43Z, name=rhosp-rhel9/openstack-iscsid, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, container_name=iscsid, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.buildah.version=1.41.5) Feb 20 03:20:53 localhost podman[83433]: 2026-02-20 08:20:53.555340238 +0000 UTC m=+0.191819740 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.13, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, container_name=iscsid, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, io.openshift.expose-services=, io.buildah.version=1.41.5, batch=17.1_20260112.1, url=https://www.redhat.com, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:34:43Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git) Feb 20 03:20:53 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: 
Deactivated successfully. Feb 20 03:20:53 localhost podman[83434]: 2026-02-20 08:20:53.568956298 +0000 UTC m=+0.203216870 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, release=1766032510, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, io.buildah.version=1.41.5, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.created=2026-01-12T22:36:40Z, version=17.1.13, io.openshift.expose-services=, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, batch=17.1_20260112.1, 
konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:36:40Z, architecture=x86_64) Feb 20 03:20:53 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Deactivated successfully. Feb 20 03:20:53 localhost podman[83441]: 2026-02-20 08:20:53.626941593 +0000 UTC m=+0.255034608 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.13, io.buildah.version=1.41.5, batch=17.1_20260112.1, container_name=nova_compute, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.created=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., config_id=tripleo_step5) Feb 20 03:20:53 localhost podman[83440]: 2026-02-20 08:20:53.658314495 +0000 UTC m=+0.290864351 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, 
cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:15Z, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-collectd, io.buildah.version=1.41.5, 
vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, build-date=2026-01-12T22:10:15Z, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, com.redhat.component=openstack-collectd-container, architecture=x86_64) Feb 20 03:20:53 localhost podman[83440]: 2026-02-20 08:20:53.667444803 +0000 UTC m=+0.299994659 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, container_name=collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, io.openshift.expose-services=, name=rhosp-rhel9/openstack-collectd, io.buildah.version=1.41.5) Feb 20 03:20:53 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:20:53 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. Feb 20 03:20:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:20:55 localhost systemd[1]: tmp-crun.JJ0cvq.mount: Deactivated successfully. 
Feb 20 03:20:55 localhost podman[83601]: 2026-02-20 08:20:55.454174863 +0000 UTC m=+0.090459498 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, container_name=nova_migration_target, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, tcib_managed=true, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:20:55 localhost podman[83601]: 2026-02-20 08:20:55.820531263 +0000 UTC m=+0.456815918 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, container_name=nova_migration_target, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, batch=17.1_20260112.1, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, release=1766032510, config_id=tripleo_step4, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Feb 20 03:20:55 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:20:56 localhost sshd[83624]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:21:00 localhost sshd[83626]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:21:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. 
Feb 20 03:21:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:21:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:21:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:21:20 localhost systemd[1]: tmp-crun.Zp3emp.mount: Deactivated successfully. Feb 20 03:21:20 localhost podman[83628]: 2026-02-20 08:21:20.457946939 +0000 UTC m=+0.093996234 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, distribution-scope=public, build-date=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, config_id=tripleo_step1, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:14Z, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 
'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:21:20 localhost podman[83630]: 2026-02-20 08:21:20.493921175 +0000 UTC m=+0.124511272 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.buildah.version=1.41.5, managed_by=tripleo_ansible, release=1766032510, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, 
org.opencontainers.image.created=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, vendor=Red Hat, Inc.) 
Feb 20 03:21:20 localhost podman[83633]: 2026-02-20 08:21:20.549846925 +0000 UTC m=+0.174812199 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, version=17.1.13, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2026-01-12T23:07:30Z, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, release=1766032510, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi) Feb 20 03:21:20 localhost podman[83630]: 2026-02-20 08:21:20.577207868 +0000 UTC m=+0.207798005 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.13, batch=17.1_20260112.1, config_id=tripleo_step4, release=1766032510, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, com.redhat.component=openstack-cron-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, io.buildah.version=1.41.5, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 20 03:21:20 localhost podman[83629]: 2026-02-20 08:21:20.61705676 +0000 UTC m=+0.249372914 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, 
config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, version=17.1.13, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2026-01-12T23:07:47Z, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., io.buildah.version=1.41.5, release=1766032510, 
com.redhat.component=openstack-ceilometer-compute-container) Feb 20 03:21:20 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. Feb 20 03:21:20 localhost podman[83629]: 2026-02-20 08:21:20.644709941 +0000 UTC m=+0.277026095 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1766032510, url=https://www.redhat.com, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_compute) Feb 20 03:21:20 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. 
Feb 20 03:21:20 localhost podman[83633]: 2026-02-20 08:21:20.678961302 +0000 UTC m=+0.303926566 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, release=1766032510, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.created=2026-01-12T23:07:30Z, batch=17.1_20260112.1, build-date=2026-01-12T23:07:30Z, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=) Feb 20 03:21:20 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. Feb 20 03:21:20 localhost podman[83628]: 2026-02-20 08:21:20.693901628 +0000 UTC m=+0.329950913 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, release=1766032510, vendor=Red Hat, Inc., tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, config_id=tripleo_step1, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 
'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container) Feb 20 03:21:20 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:21:22 localhost python3[83745]: ansible-ansible.builtin.slurp Invoked with path=/home/zuul/ansible_hostname src=/home/zuul/ansible_hostname Feb 20 03:21:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. 
Feb 20 03:21:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:21:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:21:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:21:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:21:24 localhost podman[83746]: 2026-02-20 08:21:24.451532198 +0000 UTC m=+0.090743195 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, tcib_managed=true, distribution-scope=public, release=1766032510, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, config_id=tripleo_step4, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git) Feb 20 03:21:24 localhost podman[83746]: 2026-02-20 08:21:24.487142635 +0000 UTC m=+0.126353642 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, description=Red Hat OpenStack Platform 
17.1 neutron-metadata-agent-ovn, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1766032510, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step4, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, vendor=Red Hat, Inc., build-date=2026-01-12T22:56:19Z, architecture=x86_64, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Feb 20 03:21:24 localhost systemd[1]: tmp-crun.9MIeoR.mount: Deactivated successfully. Feb 20 03:21:24 localhost podman[83760]: 2026-02-20 08:21:24.507867388 +0000 UTC m=+0.127437082 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': 
['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, version=17.1.13, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, config_id=tripleo_step5, container_name=nova_compute) Feb 20 03:21:24 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. Feb 20 03:21:24 localhost podman[83760]: 2026-02-20 08:21:24.560333713 +0000 UTC m=+0.179903437 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, distribution-scope=public, batch=17.1_20260112.1, version=17.1.13, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, url=https://www.redhat.com, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510) Feb 20 03:21:24 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. 
Feb 20 03:21:24 localhost podman[83748]: 2026-02-20 08:21:24.563067378 +0000 UTC m=+0.193851246 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, architecture=x86_64, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.buildah.version=1.41.5, config_id=tripleo_step4, url=https://www.redhat.com, version=17.1.13, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ovn-controller, build-date=2026-01-12T22:36:40Z, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:36:40Z, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, 
cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510) Feb 20 03:21:24 localhost podman[83747]: 2026-02-20 08:21:24.617961569 +0000 UTC m=+0.253044115 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, io.openshift.expose-services=, managed_by=tripleo_ansible, 
batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., build-date=2026-01-12T22:34:43Z, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, distribution-scope=public, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, architecture=x86_64, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team) Feb 20 03:21:24 localhost podman[83747]: 2026-02-20 08:21:24.630714145 +0000 UTC m=+0.265796731 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, vendor=Red Hat, Inc., io.openshift.expose-services=, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, build-date=2026-01-12T22:34:43Z, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true) Feb 20 03:21:24 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. 
Feb 20 03:21:24 localhost podman[83748]: 2026-02-20 08:21:24.646270268 +0000 UTC m=+0.277054186 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp-rhel9/openstack-ovn-controller, distribution-scope=public, io.openshift.expose-services=, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2026-01-12T22:36:40Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, managed_by=tripleo_ansible, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, 
batch=17.1_20260112.1, config_id=tripleo_step4, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, container_name=ovn_controller, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 20 03:21:24 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Deactivated successfully. Feb 20 03:21:24 localhost podman[83754]: 2026-02-20 08:21:24.714148241 +0000 UTC m=+0.337925980 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, architecture=x86_64, io.buildah.version=1.41.5, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Feb 20 03:21:24 localhost podman[83754]: 2026-02-20 08:21:24.730625659 +0000 UTC m=+0.354403398 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, managed_by=tripleo_ansible, release=1766032510, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.buildah.version=1.41.5, vendor=Red Hat, Inc., version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, batch=17.1_20260112.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, name=rhosp-rhel9/openstack-collectd, config_id=tripleo_step3, 
org.opencontainers.image.created=2026-01-12T22:10:15Z, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:21:24 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:21:25 localhost systemd[1]: tmp-crun.l3MX7a.mount: Deactivated successfully. Feb 20 03:21:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:21:26 localhost podman[83857]: 2026-02-20 08:21:26.440908931 +0000 UTC m=+0.081976417 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.buildah.version=1.41.5, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20260112.1, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute) Feb 20 03:21:26 localhost podman[83857]: 2026-02-20 08:21:26.809488622 +0000 UTC m=+0.450556068 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.buildah.version=1.41.5, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, managed_by=tripleo_ansible, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, config_id=tripleo_step4, build-date=2026-01-12T23:32:04Z, version=17.1.13, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 
name=rhosp-rhel9/openstack-nova-compute) Feb 20 03:21:26 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:21:42 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 20 03:21:42 localhost recover_tripleo_nova_virtqemud[83957]: 63703 Feb 20 03:21:42 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 20 03:21:42 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 20 03:21:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:21:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:21:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:21:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:21:51 localhost systemd[1]: tmp-crun.7Os7vx.mount: Deactivated successfully. 
Feb 20 03:21:51 localhost podman[84003]: 2026-02-20 08:21:51.462727197 +0000 UTC m=+0.093929622 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.5, io.openshift.expose-services=, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, config_id=tripleo_step1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, 
vcs-type=git, build-date=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, architecture=x86_64, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com) Feb 20 03:21:51 localhost systemd[1]: tmp-crun.sRseYn.mount: Deactivated successfully. Feb 20 03:21:51 localhost podman[84004]: 2026-02-20 08:21:51.511185534 +0000 UTC m=+0.141114084 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_compute, build-date=2026-01-12T23:07:47Z, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, batch=17.1_20260112.1, vendor=Red Hat, Inc., version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, io.buildah.version=1.41.5, vcs-type=git, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:07:47Z, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=) Feb 20 03:21:51 localhost podman[84006]: 2026-02-20 08:21:51.547769957 +0000 UTC m=+0.172595649 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, managed_by=tripleo_ansible, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, architecture=x86_64, build-date=2026-01-12T23:07:30Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.41.5, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:07:30Z, 
cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vcs-type=git, config_id=tripleo_step4, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team) Feb 20 03:21:51 localhost podman[84004]: 2026-02-20 08:21:51.564350258 +0000 UTC m=+0.194278808 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1766032510, url=https://www.redhat.com, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:07:47Z, io.openshift.expose-services=, version=17.1.13, name=rhosp-rhel9/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.buildah.version=1.41.5, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container) Feb 20 03:21:51 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. 
Feb 20 03:21:51 localhost podman[84006]: 2026-02-20 08:21:51.578485822 +0000 UTC m=+0.203311504 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, version=17.1.13, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, vcs-type=git, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:30Z, name=rhosp-rhel9/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., build-date=2026-01-12T23:07:30Z, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, url=https://www.redhat.com) Feb 20 03:21:51 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. Feb 20 03:21:51 localhost podman[84003]: 2026-02-20 08:21:51.643323242 +0000 UTC m=+0.274525657 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:14Z, architecture=x86_64, config_id=tripleo_step1, container_name=metrics_qdr, distribution-scope=public, tcib_managed=true, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., version=17.1.13, io.openshift.expose-services=, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:21:51 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. 
Feb 20 03:21:51 localhost podman[84005]: 2026-02-20 08:21:51.654753433 +0000 UTC m=+0.282675349 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, version=17.1.13, managed_by=tripleo_ansible, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, tcib_managed=true, 
container_name=logrotate_crond, distribution-scope=public, io.openshift.expose-services=, name=rhosp-rhel9/openstack-cron, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron) Feb 20 03:21:51 localhost podman[84005]: 2026-02-20 08:21:51.661353232 +0000 UTC m=+0.289275128 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, url=https://www.redhat.com, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-cron, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': 
'/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Feb 20 03:21:51 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. Feb 20 03:21:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:21:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:21:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:21:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. 
Feb 20 03:21:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:21:55 localhost podman[84105]: 2026-02-20 08:21:55.439941011 +0000 UTC m=+0.069131319 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, container_name=iscsid, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, url=https://www.redhat.com, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Feb 20 03:21:55 localhost podman[84105]: 2026-02-20 08:21:55.477727048 +0000 UTC m=+0.106917286 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, name=rhosp-rhel9/openstack-iscsid, release=1766032510, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.created=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, config_id=tripleo_step3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.buildah.version=1.41.5, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=iscsid, managed_by=tripleo_ansible, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:34:43Z, io.openshift.expose-services=, distribution-scope=public, url=https://www.redhat.com) Feb 20 03:21:55 localhost systemd[1]: tmp-crun.gjbKMG.mount: Deactivated successfully. Feb 20 03:21:55 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. 
Feb 20 03:21:55 localhost podman[84104]: 2026-02-20 08:21:55.495994644 +0000 UTC m=+0.129640432 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_id=tripleo_step4, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:56:19Z, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, io.buildah.version=1.41.5, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510, container_name=ovn_metadata_agent, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 20 03:21:55 localhost podman[84113]: 2026-02-20 08:21:55.552480827 +0000 UTC m=+0.169954326 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, maintainer=OpenStack TripleO Team, release=1766032510, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, version=17.1.13, name=rhosp-rhel9/openstack-nova-compute, batch=17.1_20260112.1, description=Red Hat 
OpenStack Platform 17.1 nova-compute, container_name=nova_compute, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:21:55 localhost podman[84106]: 2026-02-20 08:21:55.526363578 +0000 UTC m=+0.152389760 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-type=git, batch=17.1_20260112.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:36:40Z, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 
'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.13, release=1766032510, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc.) Feb 20 03:21:55 localhost podman[84113]: 2026-02-20 08:21:55.598832167 +0000 UTC m=+0.216305636 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, io.buildah.version=1.41.5, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, tcib_managed=true, managed_by=tripleo_ansible, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, 
config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, 
container_name=nova_compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 20 03:21:55 localhost podman[84106]: 2026-02-20 08:21:55.608856459 +0000 UTC m=+0.234882641 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, vcs-type=git, container_name=ovn_controller, tcib_managed=true, build-date=2026-01-12T22:36:40Z, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:36:40Z, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1766032510, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, config_id=tripleo_step4) Feb 20 03:21:55 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. Feb 20 03:21:55 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Deactivated successfully. Feb 20 03:21:55 localhost podman[84104]: 2026-02-20 08:21:55.629157121 +0000 UTC m=+0.262803009 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, config_id=tripleo_step4, release=1766032510, io.buildah.version=1.41.5, batch=17.1_20260112.1, build-date=2026-01-12T22:56:19Z, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn) Feb 20 03:21:55 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. 
Feb 20 03:21:55 localhost podman[84107]: 2026-02-20 08:21:55.693043465 +0000 UTC m=+0.315217982 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.expose-services=, version=17.1.13, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, batch=17.1_20260112.1, distribution-scope=public, name=rhosp-rhel9/openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1766032510, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, build-date=2026-01-12T22:10:15Z) Feb 20 03:21:55 localhost podman[84107]: 2026-02-20 08:21:55.70275195 +0000 UTC m=+0.324926477 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20260112.1, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=collectd, config_id=tripleo_step3, io.buildah.version=1.41.5, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, release=1766032510, architecture=x86_64, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public) Feb 20 03:21:55 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:21:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. 
Feb 20 03:21:57 localhost podman[84212]: 2026-02-20 08:21:57.444347143 +0000 UTC m=+0.084880247 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1766032510, io.openshift.expose-services=, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Feb 20 03:21:57 localhost podman[84212]: 2026-02-20 08:21:57.804234717 +0000 UTC m=+0.444767841 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.buildah.version=1.41.5, version=17.1.13, vcs-type=git, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, config_id=tripleo_step4, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., release=1766032510, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 20 03:21:57 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:22:10 localhost sshd[84236]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:22:10 localhost sshd[84238]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:22:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:22:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. 
Feb 20 03:22:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:22:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:22:22 localhost systemd[1]: tmp-crun.JV4xb2.mount: Deactivated successfully. Feb 20 03:22:22 localhost systemd[1]: session-34.scope: Deactivated successfully. Feb 20 03:22:22 localhost systemd[1]: session-34.scope: Consumed 19.323s CPU time. Feb 20 03:22:22 localhost systemd-logind[760]: Session 34 logged out. Waiting for processes to exit. Feb 20 03:22:22 localhost systemd-logind[760]: Removed session 34. Feb 20 03:22:22 localhost podman[84241]: 2026-02-20 08:22:22.45954646 +0000 UTC m=+0.090891719 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, build-date=2026-01-12T23:07:47Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.created=2026-01-12T23:07:47Z, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, config_id=tripleo_step4, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1766032510, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., version=17.1.13, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:22:22 localhost podman[84242]: 2026-02-20 08:22:22.436006521 +0000 UTC m=+0.069803547 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20260112.1, io.buildah.version=1.41.5, 
io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, architecture=x86_64, distribution-scope=public, version=17.1.13, release=1766032510, name=rhosp-rhel9/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Feb 20 03:22:22 localhost podman[84240]: 2026-02-20 08:22:22.501776747 +0000 UTC m=+0.136957431 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, 
vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-qdrouterd, container_name=metrics_qdr, release=1766032510, io.openshift.expose-services=, config_id=tripleo_step1, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, io.buildah.version=1.41.5, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 20 03:22:22 localhost podman[84241]: 2026-02-20 08:22:22.508850659 +0000 UTC m=+0.140195918 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., build-date=2026-01-12T23:07:47Z, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1766032510, architecture=x86_64, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:47Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5) Feb 20 03:22:22 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. 
Feb 20 03:22:22 localhost podman[84242]: 2026-02-20 08:22:22.519762786 +0000 UTC m=+0.153559812 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, architecture=x86_64, tcib_managed=true, build-date=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:22:22 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. Feb 20 03:22:22 localhost podman[84246]: 2026-02-20 08:22:22.566410122 +0000 UTC m=+0.189909089 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, release=1766032510, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:07:30Z, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:30Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4) Feb 20 03:22:22 localhost podman[84246]: 2026-02-20 08:22:22.59761283 +0000 UTC m=+0.221111817 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, version=17.1.13, batch=17.1_20260112.1, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, tcib_managed=true, release=1766032510, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, name=rhosp-rhel9/openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.5, build-date=2026-01-12T23:07:30Z, summary=Red Hat OpenStack Platform 
17.1 ceilometer-ipmi) Feb 20 03:22:22 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. Feb 20 03:22:22 localhost podman[84240]: 2026-02-20 08:22:22.740306095 +0000 UTC m=+0.375486819 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, container_name=metrics_qdr, architecture=x86_64, release=1766032510, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step1, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, url=https://www.redhat.com, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:14Z, version=17.1.13, name=rhosp-rhel9/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1) Feb 20 03:22:22 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:22:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:22:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:22:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:22:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:22:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. 
Feb 20 03:22:26 localhost podman[84335]: 2026-02-20 08:22:26.447913529 +0000 UTC m=+0.082253106 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, container_name=iscsid, managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, config_id=tripleo_step3, 
konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.41.5) Feb 20 03:22:26 localhost podman[84335]: 2026-02-20 08:22:26.461778465 +0000 UTC m=+0.096118042 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, architecture=x86_64, io.openshift.expose-services=, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, build-date=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=iscsid, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20260112.1, distribution-scope=public, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true) Feb 20 03:22:26 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. Feb 20 03:22:26 localhost systemd[1]: tmp-crun.RLONBX.mount: Deactivated successfully. 
Feb 20 03:22:26 localhost podman[84348]: 2026-02-20 08:22:26.513260533 +0000 UTC m=+0.136103177 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, build-date=2026-01-12T23:32:04Z, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step5, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, distribution-scope=public, name=rhosp-rhel9/openstack-nova-compute, tcib_managed=true, managed_by=tripleo_ansible) Feb 20 03:22:26 localhost podman[84342]: 2026-02-20 08:22:26.570107407 +0000 UTC m=+0.197112614 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, version=17.1.13, release=1766032510, distribution-scope=public, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 collectd, 
config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.openshift.expose-services=, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1) Feb 20 03:22:26 localhost podman[84348]: 2026-02-20 08:22:26.597916232 +0000 UTC m=+0.220758916 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, vcs-type=git, release=1766032510, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 20 03:22:26 localhost podman[84336]: 2026-02-20 08:22:26.608156231 +0000 UTC m=+0.237667337 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 
ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., batch=17.1_20260112.1, build-date=2026-01-12T22:36:40Z, release=1766032510, url=https://www.redhat.com, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.created=2026-01-12T22:36:40Z, tcib_managed=true, container_name=ovn_controller, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 20 03:22:26 localhost systemd[1]: 
d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. Feb 20 03:22:26 localhost podman[84342]: 2026-02-20 08:22:26.635597396 +0000 UTC m=+0.262602613 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.13, config_id=tripleo_step3, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, container_name=collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=openstack-collectd-container, build-date=2026-01-12T22:10:15Z, url=https://www.redhat.com, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1) Feb 20 03:22:26 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. 
Feb 20 03:22:26 localhost podman[84334]: 2026-02-20 08:22:26.550109804 +0000 UTC m=+0.188592953 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.buildah.version=1.41.5, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, build-date=2026-01-12T22:56:19Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, config_id=tripleo_step4, distribution-scope=public, release=1766032510, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com) Feb 20 03:22:26 localhost podman[84336]: 2026-02-20 08:22:26.658646312 +0000 UTC m=+0.288157358 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, container_name=ovn_controller, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2026-01-12T22:36:40Z, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, vendor=Red Hat, Inc., architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.13, url=https://www.redhat.com, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container) Feb 20 03:22:26 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Deactivated successfully. 
Feb 20 03:22:26 localhost podman[84334]: 2026-02-20 08:22:26.68284884 +0000 UTC m=+0.321332009 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, url=https://www.redhat.com, tcib_managed=true, distribution-scope=public, build-date=2026-01-12T22:56:19Z, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, batch=17.1_20260112.1, io.buildah.version=1.41.5, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:56:19Z, release=1766032510, vendor=Red Hat, Inc.) Feb 20 03:22:26 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. Feb 20 03:22:27 localhost systemd[1]: tmp-crun.o1PoIA.mount: Deactivated successfully. Feb 20 03:22:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. 
Feb 20 03:22:28 localhost podman[84446]: 2026-02-20 08:22:28.440619182 +0000 UTC m=+0.079146620 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_migration_target, url=https://www.redhat.com, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, managed_by=tripleo_ansible, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.created=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 20 03:22:28 localhost podman[84446]: 2026-02-20 08:22:28.83299781 +0000 UTC m=+0.471525238 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, managed_by=tripleo_ansible, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:32:04Z, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git) Feb 20 03:22:28 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:22:29 localhost systemd[1]: tmp-crun.sKcgrv.mount: Deactivated successfully. 
Feb 20 03:22:29 localhost podman[84569]: 2026-02-20 08:22:29.911965325 +0000 UTC m=+0.103944324 container exec 17bcd3d9a6436ea04160ed4c4e04d839e51c8678402dc4e38301f79a98b3dc30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202, io.buildah.version=1.42.2, version=7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., ceph=True, maintainer=Guillaume Abrioux , architecture=x86_64, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, release=1770267347, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, build-date=2026-02-09T10:25:24Z, io.openshift.expose-services=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, vcs-type=git, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph) Feb 20 03:22:30 localhost podman[84569]: 2026-02-20 08:22:30.036027036 +0000 UTC m=+0.228006065 container exec_died 17bcd3d9a6436ea04160ed4c4e04d839e51c8678402dc4e38301f79a98b3dc30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.buildah.version=1.42.2, 
architecture=x86_64, release=1770267347, org.opencontainers.image.created=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, io.openshift.tags=rhceph ceph, ceph=True, build-date=2026-02-09T10:25:24Z, io.openshift.expose-services=, version=7, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, vcs-type=git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:22:31 localhost sshd[84681]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:22:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:22:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:22:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:22:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:22:53 localhost systemd[1]: tmp-crun.lBFJBo.mount: Deactivated successfully. 
Feb 20 03:22:53 localhost podman[84760]: 2026-02-20 08:22:53.526448648 +0000 UTC m=+0.155037282 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.13, build-date=2026-01-12T23:07:30Z, batch=17.1_20260112.1, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5) Feb 20 03:22:53 localhost podman[84757]: 2026-02-20 08:22:53.490517152 +0000 UTC m=+0.121053769 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20260112.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-qdrouterd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, container_name=metrics_qdr, vcs-type=git, release=1766032510, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, url=https://www.redhat.com, build-date=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, architecture=x86_64, config_id=tripleo_step1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 20 03:22:53 localhost podman[84760]: 2026-02-20 08:22:53.543707167 +0000 UTC m=+0.172295831 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:07:30Z, 
name=rhosp-rhel9/openstack-ceilometer-ipmi, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:30Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, tcib_managed=true, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 20 
03:22:53 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. Feb 20 03:22:53 localhost podman[84758]: 2026-02-20 08:22:53.505754386 +0000 UTC m=+0.137050113 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.5, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, build-date=2026-01-12T23:07:47Z, name=rhosp-rhel9/openstack-ceilometer-compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1766032510, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:47Z, batch=17.1_20260112.1, version=17.1.13) Feb 20 03:22:53 localhost podman[84758]: 2026-02-20 08:22:53.587683421 +0000 UTC m=+0.218979158 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.buildah.version=1.41.5, url=https://www.redhat.com, config_id=tripleo_step4, release=1766032510, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, vcs-type=git, container_name=ceilometer_agent_compute, build-date=2026-01-12T23:07:47Z, name=rhosp-rhel9/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.expose-services=, version=17.1.13, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 20 03:22:53 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. 
Feb 20 03:22:53 localhost podman[84759]: 2026-02-20 08:22:53.457667149 +0000 UTC m=+0.088231717 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2026-01-12T22:10:15Z, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, io.buildah.version=1.41.5, release=1766032510, container_name=logrotate_crond, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., vcs-type=git, 
architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-cron, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-cron-container) Feb 20 03:22:53 localhost podman[84759]: 2026-02-20 08:22:53.640774033 +0000 UTC m=+0.271338651 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, container_name=logrotate_crond, name=rhosp-rhel9/openstack-cron, release=1766032510, architecture=x86_64, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, version=17.1.13, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com) Feb 20 03:22:53 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. 
Feb 20 03:22:53 localhost podman[84757]: 2026-02-20 08:22:53.670635834 +0000 UTC m=+0.301172451 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, version=17.1.13, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1766032510, build-date=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20260112.1, config_id=tripleo_step1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:22:53 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:22:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:22:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:22:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:22:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:22:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:22:57 localhost systemd[1]: tmp-crun.SrDB7x.mount: Deactivated successfully. 
Feb 20 03:22:57 localhost podman[84856]: 2026-02-20 08:22:57.41592813 +0000 UTC m=+0.058774248 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, batch=17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, build-date=2026-01-12T22:36:40Z, vendor=Red Hat, Inc., config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, url=https://www.redhat.com, architecture=x86_64, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9, 
org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1766032510, container_name=ovn_controller, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:22:57 localhost podman[84857]: 2026-02-20 08:22:57.466961087 +0000 UTC m=+0.107346868 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, distribution-scope=public, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, release=1766032510, build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, url=https://www.redhat.com, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-collectd, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Feb 20 03:22:57 localhost podman[84856]: 2026-02-20 08:22:57.489996582 +0000 UTC m=+0.132842720 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, build-date=2026-01-12T22:36:40Z, container_name=ovn_controller, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, config_id=tripleo_step4, url=https://www.redhat.com, release=1766032510, name=rhosp-rhel9/openstack-ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 20 03:22:57 localhost podman[84863]: 2026-02-20 08:22:57.497116645 +0000 UTC m=+0.131306558 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, 
vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, name=rhosp-rhel9/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, vcs-type=git, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:22:57 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Deactivated successfully. 
Feb 20 03:22:57 localhost podman[84863]: 2026-02-20 08:22:57.525681901 +0000 UTC m=+0.159871794 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, io.buildah.version=1.41.5, io.openshift.expose-services=, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20260112.1, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, name=rhosp-rhel9/openstack-nova-compute, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true) Feb 20 03:22:57 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. 
Feb 20 03:22:57 localhost podman[84854]: 2026-02-20 08:22:57.579180044 +0000 UTC m=+0.223829680 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, batch=17.1_20260112.1, build-date=2026-01-12T22:56:19Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, version=17.1.13, distribution-scope=public, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f) Feb 20 03:22:57 localhost podman[84857]: 2026-02-20 08:22:57.598915761 +0000 UTC m=+0.239301572 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., architecture=x86_64, container_name=collectd, name=rhosp-rhel9/openstack-collectd, batch=17.1_20260112.1, com.redhat.component=openstack-collectd-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.13, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:15Z, config_id=tripleo_step3, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.buildah.version=1.41.5, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', 
'/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:22:57 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:22:57 localhost podman[84855]: 2026-02-20 08:22:57.602744154 +0000 UTC m=+0.242935540 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat 
OpenStack Platform 17.1 iscsid, release=1766032510, container_name=iscsid, io.openshift.expose-services=, vcs-ref=705339545363fec600102567c4e923938e0f43b3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, url=https://www.redhat.com, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-iscsid, build-date=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20260112.1, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:34:43Z) Feb 20 03:22:57 localhost podman[84854]: 2026-02-20 08:22:57.66263428 +0000 UTC m=+0.307283906 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, vcs-type=git, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20260112.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., url=https://www.redhat.com, version=17.1.13, build-date=2026-01-12T22:56:19Z, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.5, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent) Feb 20 03:22:57 
localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. Feb 20 03:22:57 localhost podman[84855]: 2026-02-20 08:22:57.684734081 +0000 UTC m=+0.324925467 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-iscsid, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, managed_by=tripleo_ansible, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20260112.1, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-type=git, version=17.1.13) Feb 20 03:22:57 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. Feb 20 03:22:58 localhost systemd[1]: tmp-crun.5qCLJy.mount: Deactivated successfully. Feb 20 03:22:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:22:59 localhost systemd[1]: tmp-crun.KhFYSz.mount: Deactivated successfully. 
Feb 20 03:22:59 localhost podman[84964]: 2026-02-20 08:22:59.448299602 +0000 UTC m=+0.084866646 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp-rhel9/openstack-nova-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, batch=17.1_20260112.1, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, container_name=nova_migration_target, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Feb 20 03:22:59 localhost podman[84964]: 2026-02-20 08:22:59.854053672 +0000 UTC m=+0.490620756 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.buildah.version=1.41.5, com.redhat.component=openstack-nova-compute-container, build-date=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, container_name=nova_migration_target, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 20 03:22:59 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:23:14 localhost sshd[84987]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:23:15 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 20 03:23:15 localhost recover_tripleo_nova_virtqemud[84990]: 63703 Feb 20 03:23:15 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. 
Feb 20 03:23:15 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 20 03:23:21 localhost sshd[84991]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:23:23 localhost sshd[84993]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:23:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:23:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:23:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:23:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:23:24 localhost podman[84995]: 2026-02-20 08:23:24.441760366 +0000 UTC m=+0.072585623 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.13, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, tcib_managed=true, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, build-date=2026-01-12T23:07:47Z, release=1766032510, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 20 03:23:24 localhost podman[84994]: 2026-02-20 08:23:24.491232509 +0000 UTC m=+0.121767728 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, 
description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, name=rhosp-rhel9/openstack-qdrouterd, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.5, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, build-date=2026-01-12T22:10:14Z, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-qdrouterd, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 20 03:23:24 localhost podman[84995]: 2026-02-20 08:23:24.494977291 +0000 UTC m=+0.125802568 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, architecture=x86_64, batch=17.1_20260112.1, config_id=tripleo_step4, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., tcib_managed=true, build-date=2026-01-12T23:07:47Z, url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, container_name=ceilometer_agent_compute, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:07:47Z, distribution-scope=public, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 20 03:23:24 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. 
Feb 20 03:23:24 localhost podman[84996]: 2026-02-20 08:23:24.463016472 +0000 UTC m=+0.088111793 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20260112.1, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, release=1766032510, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, config_id=tripleo_step4, maintainer=OpenStack TripleO Team) Feb 20 03:23:24 localhost podman[84996]: 2026-02-20 08:23:24.542252955 +0000 UTC m=+0.167348236 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, 
io.openshift.expose-services=, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, distribution-scope=public, vendor=Red Hat, Inc., url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1) Feb 20 03:23:24 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. 
Feb 20 03:23:24 localhost podman[84997]: 2026-02-20 08:23:24.565728712 +0000 UTC m=+0.191549003 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, url=https://www.redhat.com, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.5, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, container_name=ceilometer_agent_ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 20 03:23:24 localhost podman[84997]: 2026-02-20 08:23:24.616489961 +0000 UTC m=+0.242310292 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.13, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1766032510, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:07:30Z, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, batch=17.1_20260112.1, distribution-scope=public, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 20 03:23:24 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. 
Feb 20 03:23:24 localhost podman[84994]: 2026-02-20 08:23:24.661929865 +0000 UTC m=+0.292465034 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-qdrouterd, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, vcs-type=git, container_name=metrics_qdr, io.openshift.expose-services=, architecture=x86_64, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible) Feb 20 03:23:24 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:23:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:23:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:23:27 localhost systemd[1]: tmp-crun.aLLpbK.mount: Deactivated successfully. 
Feb 20 03:23:27 localhost podman[85095]: 2026-02-20 08:23:27.630455433 +0000 UTC m=+0.084636820 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, container_name=ovn_controller, release=1766032510, build-date=2026-01-12T22:36:40Z, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, url=https://www.redhat.com, distribution-scope=public, architecture=x86_64, io.buildah.version=1.41.5, batch=17.1_20260112.1, version=17.1.13, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-type=git, name=rhosp-rhel9/openstack-ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:23:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:23:27 localhost podman[85095]: 2026-02-20 08:23:27.659105571 +0000 UTC m=+0.113286958 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, container_name=ovn_controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, build-date=2026-01-12T22:36:40Z, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., vcs-type=git) Feb 20 03:23:27 localhost systemd[1]: tmp-crun.Vuk6hs.mount: Deactivated successfully. Feb 20 03:23:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:23:27 localhost podman[85096]: 2026-02-20 08:23:27.700434313 +0000 UTC m=+0.150026146 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., io.buildah.version=1.41.5, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, io.openshift.expose-services=, version=17.1.13, batch=17.1_20260112.1, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step5, 
org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat 
OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 20 03:23:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:23:27 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Deactivated successfully. Feb 20 03:23:27 localhost podman[85096]: 2026-02-20 08:23:27.730370887 +0000 UTC m=+0.179962690 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=nova_compute, vcs-type=git, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, config_id=tripleo_step5, batch=17.1_20260112.1, distribution-scope=public, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': 
{'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, build-date=2026-01-12T23:32:04Z, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Feb 20 03:23:27 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. 
Feb 20 03:23:27 localhost podman[85146]: 2026-02-20 08:23:27.791219279 +0000 UTC m=+0.077417223 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:34:43Z, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:34:43Z, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.5, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, release=1766032510, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.openshift.expose-services=, container_name=iscsid) Feb 20 03:23:27 localhost podman[85146]: 2026-02-20 08:23:27.801376405 +0000 UTC m=+0.087574339 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, release=1766032510, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, architecture=x86_64, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, io.buildah.version=1.41.5, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, build-date=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, url=https://www.redhat.com) Feb 20 03:23:27 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. 
Feb 20 03:23:27 localhost podman[85145]: 2026-02-20 08:23:27.853666496 +0000 UTC m=+0.140227050 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, config_id=tripleo_step4, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, build-date=2026-01-12T22:56:19Z, architecture=x86_64, io.buildah.version=1.41.5, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Feb 20 03:23:27 localhost podman[85127]: 2026-02-20 08:23:27.903372355 +0000 UTC m=+0.244780809 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-collectd-container, name=rhosp-rhel9/openstack-collectd, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, build-date=2026-01-12T22:10:15Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, io.openshift.expose-services=, release=1766032510, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20260112.1, config_id=tripleo_step3, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 
collectd, maintainer=OpenStack TripleO Team, tcib_managed=true) Feb 20 03:23:27 localhost podman[85127]: 2026-02-20 08:23:27.913056219 +0000 UTC m=+0.254464643 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20260112.1, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, version=17.1.13, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, release=1766032510, io.buildah.version=1.41.5, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Feb 20 03:23:27 localhost podman[85145]: 2026-02-20 08:23:27.925316651 +0000 UTC m=+0.211877225 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20260112.1, tcib_managed=true, vendor=Red Hat, Inc., 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1766032510, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, 
konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:23:27 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:23:27 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. Feb 20 03:23:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:23:30 localhost podman[85207]: 2026-02-20 08:23:30.41854494 +0000 UTC m=+0.061882642 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, tcib_managed=true, vcs-type=git, release=1766032510, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, container_name=nova_migration_target, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc.) 
Feb 20 03:23:30 localhost podman[85207]: 2026-02-20 08:23:30.787423839 +0000 UTC m=+0.430761571 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, org.opencontainers.image.created=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, url=https://www.redhat.com, managed_by=tripleo_ansible, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, container_name=nova_migration_target, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:23:30 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:23:44 localhost sshd[85307]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:23:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:23:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:23:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:23:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. 
Feb 20 03:23:55 localhost podman[85357]: 2026-02-20 08:23:55.462070393 +0000 UTC m=+0.092655557 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:07:47Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, 
container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, release=1766032510, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:07:47Z, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64) Feb 20 03:23:55 localhost systemd[1]: tmp-crun.LQaRny.mount: Deactivated successfully. Feb 20 03:23:55 localhost podman[85357]: 2026-02-20 08:23:55.522925606 +0000 UTC m=+0.153510730 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, version=17.1.13, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, distribution-scope=public, tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, build-date=2026-01-12T23:07:47Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 20 03:23:55 localhost podman[85358]: 2026-02-20 08:23:55.560770184 +0000 UTC m=+0.184665507 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-cron-container, 
batch=17.1_20260112.1, version=17.1.13, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron, architecture=x86_64, release=1766032510, io.openshift.expose-services=, io.buildah.version=1.41.5) Feb 20 03:23:55 localhost podman[85356]: 2026-02-20 08:23:55.52379249 +0000 UTC m=+0.155888476 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, name=rhosp-rhel9/openstack-qdrouterd, url=https://www.redhat.com, config_id=tripleo_step1, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, batch=17.1_20260112.1, container_name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, vcs-type=git) Feb 20 03:23:55 localhost podman[85358]: 2026-02-20 08:23:55.62505221 +0000 UTC m=+0.248947563 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.openshift.expose-services=, release=1766032510, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.created=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, config_id=tripleo_step4, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, io.buildah.version=1.41.5, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:23:55 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. Feb 20 03:23:55 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. 
Feb 20 03:23:55 localhost podman[85359]: 2026-02-20 08:23:55.663837843 +0000 UTC m=+0.286228265 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., version=17.1.13, url=https://www.redhat.com, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:07:30Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, name=rhosp-rhel9/openstack-ceilometer-ipmi, distribution-scope=public, tcib_managed=true, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.k8s.description=Red Hat 
OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 20 03:23:55 localhost podman[85359]: 2026-02-20 08:23:55.692772239 +0000 UTC m=+0.315162681 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, name=rhosp-rhel9/openstack-ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, vendor=Red Hat, Inc., url=https://www.redhat.com, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:07:30Z, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi) Feb 20 03:23:55 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. 
Feb 20 03:23:55 localhost podman[85356]: 2026-02-20 08:23:55.716688929 +0000 UTC m=+0.348784905 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:14Z, release=1766032510, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., url=https://www.redhat.com, architecture=x86_64, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat 
OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, container_name=metrics_qdr, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:23:55 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:23:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:23:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:23:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:23:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:23:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. 
Feb 20 03:23:58 localhost podman[85456]: 2026-02-20 08:23:58.461815329 +0000 UTC m=+0.096338859 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-type=git, release=1766032510, io.buildah.version=1.41.5, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, 
description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, vendor=Red Hat, Inc., build-date=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, distribution-scope=public, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, config_id=tripleo_step4, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:23:58 localhost podman[85456]: 2026-02-20 08:23:58.504755145 +0000 UTC m=+0.139278595 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2026-01-12T22:56:19Z, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, url=https://www.redhat.com, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, managed_by=tripleo_ansible, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:56:19Z, distribution-scope=public, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, version=17.1.13, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, io.openshift.expose-services=, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, release=1766032510) Feb 20 03:23:58 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. Feb 20 03:23:58 localhost podman[85464]: 2026-02-20 08:23:58.516437552 +0000 UTC m=+0.139072538 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, vcs-type=git, version=17.1.13, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp-rhel9/openstack-collectd, tcib_managed=true, config_id=tripleo_step3, io.buildah.version=1.41.5) Feb 20 03:23:58 localhost podman[85464]: 2026-02-20 08:23:58.529636311 +0000 UTC m=+0.152271317 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, batch=17.1_20260112.1, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, url=https://www.redhat.com, 
description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, architecture=x86_64, maintainer=OpenStack TripleO Team, container_name=collectd, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, 
org.opencontainers.image.created=2026-01-12T22:10:15Z, vendor=Red Hat, Inc.) Feb 20 03:23:58 localhost podman[85470]: 2026-02-20 08:23:58.4854345 +0000 UTC m=+0.103142242 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_compute, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., version=17.1.13, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, build-date=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, release=1766032510, vcs-type=git, managed_by=tripleo_ansible) Feb 20 03:23:58 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. 
Feb 20 03:23:58 localhost podman[85470]: 2026-02-20 08:23:58.567820028 +0000 UTC m=+0.185527820 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, release=1766032510, name=rhosp-rhel9/openstack-nova-compute, config_id=tripleo_step5, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, com.redhat.component=openstack-nova-compute-container, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:23:58 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. 
Feb 20 03:23:58 localhost podman[85457]: 2026-02-20 08:23:58.621759813 +0000 UTC m=+0.252386416 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, url=https://www.redhat.com, io.openshift.expose-services=, name=rhosp-rhel9/openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, config_id=tripleo_step3, build-date=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:34:43Z, release=1766032510, tcib_managed=true) Feb 20 03:23:58 localhost podman[85457]: 2026-02-20 08:23:58.631050805 +0000 UTC m=+0.261677358 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20260112.1, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:34:43Z, release=1766032510, build-date=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, container_name=iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, vcs-type=git, architecture=x86_64, io.buildah.version=1.41.5, com.redhat.component=openstack-iscsid-container, tcib_managed=true) Feb 20 03:23:58 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. 
Feb 20 03:23:58 localhost podman[85458]: 2026-02-20 08:23:58.723979909 +0000 UTC m=+0.349323508 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, version=17.1.13, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:36:40Z, release=1766032510, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp-rhel9/openstack-ovn-controller, vcs-type=git, io.buildah.version=1.41.5, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 
ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:23:58 localhost podman[85458]: 2026-02-20 08:23:58.752906565 +0000 UTC m=+0.378250164 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, name=rhosp-rhel9/openstack-ovn-controller, batch=17.1_20260112.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, build-date=2026-01-12T22:36:40Z, vendor=Red Hat, Inc., io.buildah.version=1.41.5, managed_by=tripleo_ansible, version=17.1.13, tcib_managed=true, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Feb 20 03:23:58 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Deactivated successfully. Feb 20 03:24:00 localhost sshd[85568]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:24:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:24:01 localhost systemd[1]: tmp-crun.vhC0ib.mount: Deactivated successfully. Feb 20 03:24:01 localhost podman[85570]: 2026-02-20 08:24:01.431657483 +0000 UTC m=+0.080340324 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, build-date=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_migration_target, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, architecture=x86_64, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.13, release=1766032510, vcs-type=git, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z) Feb 20 03:24:01 localhost podman[85570]: 2026-02-20 08:24:01.842768989 +0000 UTC m=+0.491451870 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, konflux.additional-tags=17.1.13 
17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, version=17.1.13, name=rhosp-rhel9/openstack-nova-compute, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, build-date=2026-01-12T23:32:04Z, vcs-type=git, config_id=tripleo_step4, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_migration_target, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 20 03:24:01 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:24:22 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 20 03:24:22 localhost recover_tripleo_nova_virtqemud[85592]: 63703 Feb 20 03:24:22 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 20 03:24:22 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 20 03:24:22 localhost sshd[85593]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:24:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:24:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:24:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:24:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. 
Feb 20 03:24:26 localhost podman[85595]: 2026-02-20 08:24:26.457437335 +0000 UTC m=+0.093667274 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, build-date=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, 
org.opencontainers.image.created=2026-01-12T22:10:14Z, url=https://www.redhat.com, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr, distribution-scope=public, name=rhosp-rhel9/openstack-qdrouterd, batch=17.1_20260112.1, release=1766032510, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, managed_by=tripleo_ansible) Feb 20 03:24:26 localhost podman[85596]: 2026-02-20 08:24:26.501652696 +0000 UTC m=+0.133937539 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2026-01-12T23:07:47Z, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_id=tripleo_step4, release=1766032510, org.opencontainers.image.created=2026-01-12T23:07:47Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, architecture=x86_64) Feb 20 03:24:26 localhost podman[85601]: 2026-02-20 08:24:26.557981196 +0000 UTC m=+0.183758992 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, 
version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:07:30Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:07:30Z, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-type=git, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, release=1766032510) Feb 20 03:24:26 localhost podman[85596]: 2026-02-20 08:24:26.609945698 +0000 UTC m=+0.242230471 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, 
io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, url=https://www.redhat.com, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:07:47Z, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, io.openshift.expose-services=, batch=17.1_20260112.1, vendor=Red Hat, Inc.) Feb 20 03:24:26 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. 
Feb 20 03:24:26 localhost podman[85601]: 2026-02-20 08:24:26.640113927 +0000 UTC m=+0.265891663 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:07:30Z, batch=17.1_20260112.1, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, vcs-type=git, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, managed_by=tripleo_ansible, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:07:30Z, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, release=1766032510, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, architecture=x86_64) Feb 20 03:24:26 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. Feb 20 03:24:26 localhost podman[85595]: 2026-02-20 08:24:26.66268539 +0000 UTC m=+0.298915259 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:14Z, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, release=1766032510, name=rhosp-rhel9/openstack-qdrouterd, architecture=x86_64, url=https://www.redhat.com, version=17.1.13, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 20 03:24:26 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. 
Feb 20 03:24:26 localhost podman[85597]: 2026-02-20 08:24:26.613051862 +0000 UTC m=+0.243128994 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, release=1766032510, batch=17.1_20260112.1, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.openshift.expose-services=, config_id=tripleo_step4, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, architecture=x86_64, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team) Feb 20 03:24:26 localhost podman[85597]: 2026-02-20 08:24:26.745965242 +0000 UTC m=+0.376042354 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, vendor=Red Hat, Inc., version=17.1.13, release=1766032510, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container) Feb 20 03:24:26 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. Feb 20 03:24:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:24:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:24:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:24:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. 
Feb 20 03:24:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:24:29 localhost systemd[1]: tmp-crun.V1aioz.mount: Deactivated successfully. Feb 20 03:24:29 localhost podman[85700]: 2026-02-20 08:24:29.459471614 +0000 UTC m=+0.089306437 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, container_name=ovn_controller, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, distribution-scope=public, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, 
version=17.1.13, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:36:40Z, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20260112.1, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:24:29 localhost systemd[1]: tmp-crun.GIM0WQ.mount: Deactivated successfully. Feb 20 03:24:29 localhost podman[85700]: 2026-02-20 08:24:29.50573267 +0000 UTC m=+0.135567523 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:36:40Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp-rhel9/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.created=2026-01-12T22:36:40Z, container_name=ovn_controller, batch=17.1_20260112.1, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 20 03:24:29 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Deactivated successfully. Feb 20 03:24:29 localhost podman[85699]: 2026-02-20 08:24:29.553951571 +0000 UTC m=+0.186309282 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-iscsid, com.redhat.component=openstack-iscsid-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, managed_by=tripleo_ansible, vcs-type=git, 
vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, url=https://www.redhat.com, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, config_id=tripleo_step3, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=) Feb 20 03:24:29 localhost podman[85699]: 2026-02-20 08:24:29.565609267 +0000 UTC m=+0.197966978 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-iscsid, io.k8s.display-name=Red 
Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, container_name=iscsid, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, url=https://www.redhat.com, vcs-ref=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., version=17.1.13, io.openshift.expose-services=, 
io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, distribution-scope=public, build-date=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:34:43Z) Feb 20 03:24:29 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. Feb 20 03:24:29 localhost podman[85698]: 2026-02-20 08:24:29.505616917 +0000 UTC m=+0.141220916 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_id=tripleo_step4, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2026-01-12T22:56:19Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.5, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 
'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, version=17.1.13, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., distribution-scope=public) Feb 20 03:24:29 localhost podman[85701]: 2026-02-20 08:24:29.618222035 +0000 UTC m=+0.243413361 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, name=rhosp-rhel9/openstack-collectd, batch=17.1_20260112.1, 
cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:15Z, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, container_name=collectd, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, version=17.1.13, release=1766032510, 
vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., config_id=tripleo_step3, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:24:29 localhost podman[85701]: 2026-02-20 08:24:29.629720669 +0000 UTC m=+0.254911965 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, release=1766032510, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., container_name=collectd, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:15Z, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, version=17.1.13, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:15Z) Feb 20 03:24:29 localhost podman[85698]: 2026-02-20 08:24:29.639944906 +0000 UTC m=+0.275548975 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20260112.1, build-date=2026-01-12T22:56:19Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, vendor=Red Hat, Inc., 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.41.5, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f) Feb 20 03:24:29 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:24:29 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. Feb 20 03:24:29 localhost podman[85707]: 2026-02-20 08:24:29.718614872 +0000 UTC m=+0.339999335 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, build-date=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=nova_compute, name=rhosp-rhel9/openstack-nova-compute, distribution-scope=public, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, io.openshift.expose-services=, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': 
{'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step5, version=17.1.13) Feb 20 03:24:29 localhost 
podman[85707]: 2026-02-20 08:24:29.776792453 +0000 UTC m=+0.398176906 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20260112.1, config_id=tripleo_step5, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container) Feb 20 03:24:29 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. Feb 20 03:24:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:24:32 localhost systemd[1]: tmp-crun.cCye8U.mount: Deactivated successfully. 
Feb 20 03:24:32 localhost podman[85805]: 2026-02-20 08:24:32.439605747 +0000 UTC m=+0.079948263 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, container_name=nova_migration_target, io.openshift.expose-services=, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, batch=17.1_20260112.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, url=https://www.redhat.com, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public) Feb 20 03:24:32 localhost podman[85805]: 2026-02-20 08:24:32.843420185 +0000 UTC m=+0.483762661 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, architecture=x86_64, version=17.1.13, name=rhosp-rhel9/openstack-nova-compute, container_name=nova_migration_target, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 20 03:24:32 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:24:34 localhost sshd[85889]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:24:38 localhost sshd[85906]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:24:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:24:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. 
Feb 20 03:24:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:24:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:24:57 localhost systemd[1]: tmp-crun.Fb1PTy.mount: Deactivated successfully. Feb 20 03:24:57 localhost podman[85953]: 2026-02-20 08:24:57.463609621 +0000 UTC m=+0.100462149 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, 
version=17.1.13, build-date=2026-01-12T22:10:14Z, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, vcs-type=git, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=metrics_qdr, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com) Feb 20 03:24:57 localhost podman[85954]: 2026-02-20 08:24:57.512198671 +0000 UTC m=+0.146281804 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, name=rhosp-rhel9/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:07:47Z, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:07:47Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, tcib_managed=true, vcs-type=git, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20260112.1, distribution-scope=public, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, url=https://www.redhat.com) Feb 20 03:24:57 localhost podman[85955]: 2026-02-20 08:24:57.559102495 +0000 UTC m=+0.189551909 container health_status 
df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:15Z, distribution-scope=public, managed_by=tripleo_ansible, release=1766032510, version=17.1.13, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron, io.openshift.tags=rhosp 
osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., url=https://www.redhat.com, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, tcib_managed=true) Feb 20 03:24:57 localhost podman[85954]: 2026-02-20 08:24:57.573268859 +0000 UTC m=+0.207351972 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, version=17.1.13, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1766032510, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, distribution-scope=public, build-date=2026-01-12T23:07:47Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:07:47Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:24:57 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. 
Feb 20 03:24:57 localhost podman[85955]: 2026-02-20 08:24:57.597820226 +0000 UTC m=+0.228269600 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, version=17.1.13, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, release=1766032510, distribution-scope=public, container_name=logrotate_crond, tcib_managed=true, name=rhosp-rhel9/openstack-cron, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64) Feb 20 03:24:57 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. Feb 20 03:24:57 localhost podman[85956]: 2026-02-20 08:24:57.656203442 +0000 UTC m=+0.284004704 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, batch=17.1_20260112.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, release=1766032510, version=17.1.13, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:24:57 localhost podman[85953]: 2026-02-20 08:24:57.668936339 +0000 UTC m=+0.305788887 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, container_name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.expose-services=, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step1, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
distribution-scope=public, url=https://www.redhat.com, tcib_managed=true) Feb 20 03:24:57 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:24:57 localhost podman[85956]: 2026-02-20 08:24:57.711731011 +0000 UTC m=+0.339532243 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, release=1766032510, container_name=ceilometer_agent_ipmi, build-date=2026-01-12T23:07:30Z, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.41.5, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:30Z, tcib_managed=true, io.openshift.expose-services=, config_id=tripleo_step4, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 20 03:24:57 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. Feb 20 03:25:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:25:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:25:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:25:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:25:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. 
Feb 20 03:25:00 localhost podman[86058]: 2026-02-20 08:25:00.462375701 +0000 UTC m=+0.087466517 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., container_name=nova_compute, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, config_id=tripleo_step5, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, com.redhat.component=openstack-nova-compute-container, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, batch=17.1_20260112.1, tcib_managed=true, name=rhosp-rhel9/openstack-nova-compute, distribution-scope=public, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team) Feb 20 03:25:00 localhost systemd[1]: tmp-crun.tnncoQ.mount: Deactivated successfully. 
Feb 20 03:25:00 localhost podman[86054]: 2026-02-20 08:25:00.512090051 +0000 UTC m=+0.140374975 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, release=1766032510, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, batch=17.1_20260112.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.created=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc.) Feb 20 03:25:00 localhost podman[86053]: 2026-02-20 08:25:00.518540216 +0000 UTC m=+0.150222052 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 
name=rhosp-rhel9/openstack-ovn-controller, io.openshift.expose-services=, config_id=tripleo_step4, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:36:40Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, vcs-type=git, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, batch=17.1_20260112.1, managed_by=tripleo_ansible) Feb 20 03:25:00 localhost podman[86052]: 2026-02-20 08:25:00.560408413 +0000 UTC m=+0.193942238 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-iscsid, config_id=tripleo_step3, distribution-scope=public, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, io.openshift.expose-services=, batch=17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, release=1766032510, build-date=2026-01-12T22:34:43Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git) Feb 20 03:25:00 localhost podman[86058]: 2026-02-20 08:25:00.564620117 +0000 UTC m=+0.189710953 container exec_died 
d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1766032510, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, config_id=tripleo_step5, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, build-date=2026-01-12T23:32:04Z, tcib_managed=true) Feb 20 03:25:00 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. 
Feb 20 03:25:00 localhost podman[86054]: 2026-02-20 08:25:00.575686058 +0000 UTC m=+0.203970942 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1766032510, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-collectd, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.5, architecture=x86_64, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, vcs-type=git, batch=17.1_20260112.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team) Feb 20 03:25:00 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:25:00 localhost podman[86053]: 2026-02-20 08:25:00.616275431 +0000 UTC m=+0.247957267 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_id=tripleo_step4, container_name=ovn_controller, batch=17.1_20260112.1, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-type=git, build-date=2026-01-12T22:36:40Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true) Feb 20 03:25:00 localhost podman[86051]: 2026-02-20 08:25:00.626103537 +0000 UTC m=+0.260068375 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 
'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1766032510, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-type=git, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
distribution-scope=public, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.13) Feb 20 03:25:00 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Deactivated successfully. Feb 20 03:25:00 localhost podman[86052]: 2026-02-20 08:25:00.646771109 +0000 UTC m=+0.280304924 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, name=rhosp-rhel9/openstack-iscsid, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, release=1766032510, io.buildah.version=1.41.5, com.redhat.component=openstack-iscsid-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, build-date=2026-01-12T22:34:43Z, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, architecture=x86_64) Feb 20 03:25:00 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. 
Feb 20 03:25:00 localhost podman[86051]: 2026-02-20 08:25:00.670875533 +0000 UTC m=+0.304840321 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, batch=17.1_20260112.1, url=https://www.redhat.com, io.openshift.expose-services=, version=17.1.13, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.buildah.version=1.41.5, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, build-date=2026-01-12T22:56:19Z, release=1766032510, org.opencontainers.image.created=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public) Feb 20 03:25:00 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. Feb 20 03:25:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:25:03 localhost systemd[1]: tmp-crun.C2LLbM.mount: Deactivated successfully. 
Feb 20 03:25:03 localhost podman[86161]: 2026-02-20 08:25:03.448508247 +0000 UTC m=+0.086743497 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510, name=rhosp-rhel9/openstack-nova-compute, vendor=Red Hat, Inc., version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, vcs-type=git, build-date=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, batch=17.1_20260112.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, distribution-scope=public) Feb 20 03:25:03 localhost podman[86161]: 2026-02-20 08:25:03.814919429 +0000 UTC m=+0.453154669 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, tcib_managed=true, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, release=1766032510, name=rhosp-rhel9/openstack-nova-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:25:03 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:25:09 localhost sshd[86183]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:25:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:25:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. 
Feb 20 03:25:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:25:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:25:28 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 20 03:25:28 localhost recover_tripleo_nova_virtqemud[86205]: 63703 Feb 20 03:25:28 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 20 03:25:28 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 20 03:25:28 localhost podman[86185]: 2026-02-20 08:25:28.45141289 +0000 UTC m=+0.090183591 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, distribution-scope=public, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, name=rhosp-rhel9/openstack-qdrouterd, managed_by=tripleo_ansible) Feb 20 03:25:28 localhost podman[86193]: 2026-02-20 08:25:28.504925553 +0000 UTC m=+0.133055114 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-type=git, architecture=x86_64, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, build-date=2026-01-12T23:07:30Z, config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., url=https://www.redhat.com, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.13, container_name=ceilometer_agent_ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, tcib_managed=true, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20260112.1) Feb 
20 03:25:28 localhost podman[86187]: 2026-02-20 08:25:28.557359227 +0000 UTC m=+0.188421399 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_id=tripleo_step4, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., version=17.1.13, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, name=rhosp-rhel9/openstack-cron, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510) Feb 20 03:25:28 localhost podman[86193]: 2026-02-20 08:25:28.562856117 +0000 UTC m=+0.190985728 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20260112.1, distribution-scope=public, build-date=2026-01-12T23:07:30Z, org.opencontainers.image.created=2026-01-12T23:07:30Z, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.buildah.version=1.41.5, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.expose-services=, version=17.1.13) Feb 20 03:25:28 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. 
Feb 20 03:25:28 localhost podman[86186]: 2026-02-20 08:25:28.618969211 +0000 UTC m=+0.254407501 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ceilometer-compute, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 
ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, config_id=tripleo_step4, container_name=ceilometer_agent_compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, build-date=2026-01-12T23:07:47Z, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:25:28 localhost podman[86185]: 2026-02-20 08:25:28.639684403 +0000 UTC m=+0.278455084 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, release=1766032510, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, container_name=metrics_qdr, version=17.1.13, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, name=rhosp-rhel9/openstack-qdrouterd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, distribution-scope=public, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 20 03:25:28 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. 
Feb 20 03:25:28 localhost podman[86186]: 2026-02-20 08:25:28.690688148 +0000 UTC m=+0.326126368 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, release=1766032510, tcib_managed=true, build-date=2026-01-12T23:07:47Z, org.opencontainers.image.created=2026-01-12T23:07:47Z, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-type=git) Feb 20 03:25:28 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. Feb 20 03:25:28 localhost podman[86187]: 2026-02-20 08:25:28.744031947 +0000 UTC m=+0.375094079 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1766032510, vcs-type=git, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, architecture=x86_64, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, version=17.1.13, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, name=rhosp-rhel9/openstack-cron, batch=17.1_20260112.1, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, managed_by=tripleo_ansible) Feb 20 03:25:28 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. Feb 20 03:25:29 localhost systemd[1]: tmp-crun.RzSeGO.mount: Deactivated successfully. Feb 20 03:25:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:25:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. 
Feb 20 03:25:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:25:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:25:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:25:31 localhost systemd[1]: tmp-crun.4qaMaE.mount: Deactivated successfully. Feb 20 03:25:31 localhost podman[86281]: 2026-02-20 08:25:31.458195496 +0000 UTC m=+0.094135337 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:56:19Z, distribution-scope=public, architecture=x86_64, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, 
config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, container_name=ovn_metadata_agent, config_id=tripleo_step4, release=1766032510) Feb 20 03:25:31 localhost systemd[1]: tmp-crun.8EnkpP.mount: Deactivated successfully. 
Feb 20 03:25:31 localhost podman[86283]: 2026-02-20 08:25:31.509079488 +0000 UTC m=+0.138576464 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., container_name=ovn_controller, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:36:40Z, release=1766032510, config_id=tripleo_step4, io.buildah.version=1.41.5, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team, 
vcs-type=git, version=17.1.13, architecture=x86_64, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ovn-controller-container) Feb 20 03:25:31 localhost podman[86284]: 2026-02-20 08:25:31.555331585 +0000 UTC m=+0.180924805 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, architecture=x86_64, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.13, release=1766032510, name=rhosp-rhel9/openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, distribution-scope=public, io.buildah.version=1.41.5, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd) Feb 20 03:25:31 localhost podman[86283]: 2026-02-20 08:25:31.560071054 +0000 UTC m=+0.189568050 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp-rhel9/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, architecture=x86_64, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.buildah.version=1.41.5, container_name=ovn_controller, 
batch=17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, build-date=2026-01-12T22:36:40Z, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, url=https://www.redhat.com, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:36:40Z, tcib_managed=true, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:25:31 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Deactivated successfully. 
Feb 20 03:25:31 localhost podman[86282]: 2026-02-20 08:25:31.600398149 +0000 UTC m=+0.235098357 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, url=https://www.redhat.com, name=rhosp-rhel9/openstack-iscsid, container_name=iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, com.redhat.component=openstack-iscsid-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', 
'/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, vcs-type=git, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, architecture=x86_64) Feb 20 03:25:31 localhost podman[86282]: 2026-02-20 08:25:31.608644563 +0000 UTC m=+0.243344791 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://www.redhat.com, container_name=iscsid, build-date=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, name=rhosp-rhel9/openstack-iscsid, version=17.1.13, vcs-type=git, vcs-ref=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, release=1766032510, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Feb 20 03:25:31 localhost podman[86284]: 2026-02-20 08:25:31.614884042 +0000 UTC m=+0.240477302 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://www.redhat.com, io.buildah.version=1.41.5, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, build-date=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., 
release=1766032510, container_name=collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git) Feb 20 03:25:31 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. Feb 20 03:25:31 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:25:31 localhost podman[86295]: 2026-02-20 08:25:31.660978955 +0000 UTC m=+0.282316409 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.13, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, managed_by=tripleo_ansible, distribution-scope=public, vcs-type=git, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:32:04Z, url=https://www.redhat.com, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 
'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:25:31 localhost podman[86281]: 2026-02-20 08:25:31.681254195 +0000 UTC m=+0.317194026 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, 
name=ovn_metadata_agent, container_name=ovn_metadata_agent, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, architecture=x86_64, io.openshift.expose-services=, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, io.buildah.version=1.41.5, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, release=1766032510, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc.) Feb 20 03:25:31 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. Feb 20 03:25:31 localhost podman[86295]: 2026-02-20 08:25:31.713516421 +0000 UTC m=+0.334853815 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1766032510, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, 
io.buildah.version=1.41.5, vcs-type=git, version=17.1.13, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, container_name=nova_compute, batch=17.1_20260112.1, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 20 03:25:31 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. Feb 20 03:25:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:25:34 localhost systemd[1]: tmp-crun.w3aGdq.mount: Deactivated successfully. Feb 20 03:25:34 localhost podman[86392]: 2026-02-20 08:25:34.438351681 +0000 UTC m=+0.080487087 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, release=1766032510, config_id=tripleo_step4, managed_by=tripleo_ansible, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2026-01-12T23:32:04Z, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13) Feb 20 03:25:34 localhost podman[86392]: 2026-02-20 08:25:34.769545907 +0000 UTC m=+0.411681283 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, url=https://www.redhat.com, release=1766032510, org.opencontainers.image.created=2026-01-12T23:32:04Z, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, container_name=nova_migration_target, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-nova-compute) Feb 20 03:25:34 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:25:41 localhost sshd[86493]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:25:46 localhost sshd[86495]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:25:55 localhost sshd[86542]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:25:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:25:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:25:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:25:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:25:59 localhost systemd[1]: tmp-crun.GIULHq.mount: Deactivated successfully. 
Feb 20 03:25:59 localhost podman[86545]: 2026-02-20 08:25:59.197092511 +0000 UTC m=+0.082767699 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-type=git, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ceilometer_agent_compute, distribution-scope=public, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:07:47Z, managed_by=tripleo_ansible, io.openshift.expose-services=, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:07:47Z, tcib_managed=true) Feb 20 03:25:59 localhost systemd[1]: tmp-crun.Xj7tOr.mount: Deactivated successfully. Feb 20 03:25:59 localhost podman[86546]: 2026-02-20 08:25:59.205821627 +0000 UTC m=+0.085310847 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=logrotate_crond, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, name=rhosp-rhel9/openstack-cron, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, release=1766032510, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Feb 20 03:25:59 localhost podman[86546]: 2026-02-20 08:25:59.242882404 +0000 UTC m=+0.122371614 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, release=1766032510, vendor=Red Hat, Inc., io.openshift.expose-services=, config_id=tripleo_step4, build-date=2026-01-12T22:10:15Z, 
batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, tcib_managed=true, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, 
architecture=x86_64, io.buildah.version=1.41.5, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Feb 20 03:25:59 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. Feb 20 03:25:59 localhost podman[86547]: 2026-02-20 08:25:59.295863083 +0000 UTC m=+0.175677952 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:30Z, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, build-date=2026-01-12T23:07:30Z, release=1766032510, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, io.buildah.version=1.41.5, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-ipmi, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, managed_by=tripleo_ansible) Feb 20 03:25:59 localhost podman[86545]: 2026-02-20 08:25:59.299305707 +0000 UTC m=+0.184980895 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vcs-type=git, release=1766032510, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:07:47Z, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:07:47Z, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-compute, batch=17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 20 03:25:59 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. 
Feb 20 03:25:59 localhost podman[86547]: 2026-02-20 08:25:59.377725767 +0000 UTC m=+0.257540596 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, config_id=tripleo_step4, tcib_managed=true, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-type=git, io.openshift.expose-services=, version=17.1.13, container_name=ceilometer_agent_ipmi, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Feb 20 03:25:59 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. Feb 20 03:25:59 localhost podman[86544]: 2026-02-20 08:25:59.392431636 +0000 UTC m=+0.277437726 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, io.openshift.expose-services=, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, tcib_managed=true, build-date=2026-01-12T22:10:14Z, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-qdrouterd-container, release=1766032510, name=rhosp-rhel9/openstack-qdrouterd) Feb 20 03:25:59 localhost podman[86544]: 2026-02-20 08:25:59.577862723 +0000 UTC m=+0.462868813 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-qdrouterd, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.expose-services=, batch=17.1_20260112.1, build-date=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, release=1766032510, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, tcib_managed=true) Feb 20 03:25:59 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:26:00 localhost systemd[1]: tmp-crun.6rR3wN.mount: Deactivated successfully. Feb 20 03:26:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:26:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:26:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:26:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:26:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. 
Feb 20 03:26:02 localhost podman[86643]: 2026-02-20 08:26:02.460699234 +0000 UTC m=+0.091889607 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:56:19Z, config_id=tripleo_step4, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20260112.1, tcib_managed=true, url=https://www.redhat.com, container_name=ovn_metadata_agent, architecture=x86_64, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 
'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 20 03:26:02 localhost systemd[1]: tmp-crun.TPGIdQ.mount: Deactivated successfully. 
Feb 20 03:26:02 localhost podman[86643]: 2026-02-20 08:26:02.526846801 +0000 UTC m=+0.158037214 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, io.buildah.version=1.41.5, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:56:19Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:56:19Z, release=1766032510, io.openshift.expose-services=, version=17.1.13, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:26:02 localhost podman[86653]: 2026-02-20 08:26:02.48265353 +0000 UTC m=+0.097986812 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, io.buildah.version=1.41.5, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:32:04Z, batch=17.1_20260112.1, io.k8s.description=Red Hat 
OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, 
com.redhat.component=openstack-nova-compute-container, version=17.1.13, release=1766032510, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., container_name=nova_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-compute) Feb 20 03:26:02 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. Feb 20 03:26:02 localhost podman[86653]: 2026-02-20 08:26:02.566808956 +0000 UTC m=+0.182142188 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, distribution-scope=public, release=1766032510, vendor=Red Hat, Inc., version=17.1.13, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, architecture=x86_64, config_id=tripleo_step5, build-date=2026-01-12T23:32:04Z, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:26:02 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. 
Feb 20 03:26:02 localhost podman[86652]: 2026-02-20 08:26:02.531081515 +0000 UTC m=+0.148848884 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, container_name=collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, release=1766032510, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp-rhel9/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, url=https://www.redhat.com, io.openshift.expose-services=, vcs-type=git, batch=17.1_20260112.1, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Feb 20 03:26:02 localhost podman[86652]: 2026-02-20 08:26:02.613783232 +0000 UTC m=+0.231550631 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, build-date=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, name=rhosp-rhel9/openstack-collectd, url=https://www.redhat.com, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, 
io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_id=tripleo_step3) Feb 20 03:26:02 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. 
Feb 20 03:26:02 localhost podman[86644]: 2026-02-20 08:26:02.568843491 +0000 UTC m=+0.194737360 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20260112.1, vcs-type=git, name=rhosp-rhel9/openstack-iscsid, io.buildah.version=1.41.5, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, url=https://www.redhat.com, config_id=tripleo_step3, container_name=iscsid, vendor=Red Hat, Inc., version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, 
vcs-ref=705339545363fec600102567c4e923938e0f43b3, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, architecture=x86_64, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public) Feb 20 03:26:02 localhost podman[86644]: 2026-02-20 08:26:02.704818774 +0000 UTC m=+0.330712653 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, release=1766032510, vcs-type=git, build-date=2026-01-12T22:34:43Z, name=rhosp-rhel9/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, version=17.1.13, 
vcs-ref=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true) Feb 20 03:26:02 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. 
Feb 20 03:26:02 localhost podman[86645]: 2026-02-20 08:26:02.673719029 +0000 UTC m=+0.295960139 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, release=1766032510, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:36:40Z, name=rhosp-rhel9/openstack-ovn-controller, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, vcs-type=git, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:36:40Z, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 20 03:26:02 localhost podman[86645]: 2026-02-20 08:26:02.754296468 +0000 UTC m=+0.376537578 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, container_name=ovn_controller, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:36:40Z, tcib_managed=true, name=rhosp-rhel9/openstack-ovn-controller, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, distribution-scope=public, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., release=1766032510, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:26:02 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Deactivated successfully. Feb 20 03:26:03 localhost systemd[1]: tmp-crun.WjiETV.mount: Deactivated successfully. Feb 20 03:26:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:26:05 localhost podman[86753]: 2026-02-20 08:26:05.442498793 +0000 UTC m=+0.081312501 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1766032510, vcs-type=git, config_id=tripleo_step4, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, build-date=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z) Feb 20 03:26:05 localhost podman[86753]: 2026-02-20 08:26:05.846969018 +0000 UTC m=+0.485782686 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, release=1766032510, version=17.1.13, 
architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:32:04Z) Feb 20 03:26:05 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:26:22 localhost sshd[86777]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:26:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:26:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:26:29 localhost podman[86779]: 2026-02-20 08:26:29.42104979 +0000 UTC m=+0.064209335 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, name=rhosp-rhel9/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, version=17.1.13, release=1766032510, org.opencontainers.image.created=2026-01-12T23:07:47Z, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 20 03:26:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. 
Feb 20 03:26:29 localhost podman[86779]: 2026-02-20 08:26:29.472700212 +0000 UTC m=+0.115859657 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, batch=17.1_20260112.1, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, distribution-scope=public, build-date=2026-01-12T23:07:47Z, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible) Feb 20 03:26:29 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. Feb 20 03:26:29 localhost podman[86807]: 2026-02-20 08:26:29.530220825 +0000 UTC m=+0.086422548 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, distribution-scope=public, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, build-date=2026-01-12T23:07:30Z, release=1766032510, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:07:30Z, vendor=Red Hat, Inc., io.buildah.version=1.41.5, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, managed_by=tripleo_ansible) Feb 20 03:26:29 localhost podman[86780]: 2026-02-20 08:26:29.495974405 +0000 UTC m=+0.133627951 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., distribution-scope=public, 
com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-cron, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, release=1766032510, version=17.1.13, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, build-date=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, 
architecture=x86_64, batch=17.1_20260112.1, url=https://www.redhat.com, io.buildah.version=1.41.5, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond) Feb 20 03:26:29 localhost podman[86780]: 2026-02-20 08:26:29.580955713 +0000 UTC m=+0.218609259 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, version=17.1.13, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, container_name=logrotate_crond, io.buildah.version=1.41.5, vendor=Red Hat, Inc., vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, managed_by=tripleo_ansible, config_id=tripleo_step4, name=rhosp-rhel9/openstack-cron, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron) Feb 20 03:26:29 localhost podman[86807]: 2026-02-20 08:26:29.61986165 +0000 UTC m=+0.176063343 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, url=https://www.redhat.com, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi, build-date=2026-01-12T23:07:30Z, release=1766032510, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:07:30Z, 
cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., io.buildah.version=1.41.5, distribution-scope=public) Feb 20 03:26:29 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. Feb 20 03:26:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:26:29 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. 
Feb 20 03:26:29 localhost podman[86847]: 2026-02-20 08:26:29.724153142 +0000 UTC m=+0.077403562 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, build-date=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, release=1766032510, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd) Feb 20 03:26:29 localhost podman[86847]: 2026-02-20 08:26:29.926145128 +0000 UTC m=+0.279395498 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, release=1766032510, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, build-date=2026-01-12T22:10:14Z, vcs-type=git, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public) Feb 20 03:26:29 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:26:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:26:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:26:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. 
Feb 20 03:26:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:26:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:26:33 localhost podman[86876]: 2026-02-20 08:26:33.443232107 +0000 UTC m=+0.080327754 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, vcs-ref=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, build-date=2026-01-12T22:34:43Z, managed_by=tripleo_ansible, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, io.k8s.description=Red Hat 
OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, name=rhosp-rhel9/openstack-iscsid, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, container_name=iscsid) Feb 20 03:26:33 localhost podman[86876]: 2026-02-20 08:26:33.457793232 +0000 UTC m=+0.094888849 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2026-01-12T22:34:43Z, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=iscsid, release=1766032510, architecture=x86_64, distribution-scope=public, batch=17.1_20260112.1, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:34:43Z, tcib_managed=true) Feb 20 03:26:33 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. 
Feb 20 03:26:33 localhost podman[86875]: 2026-02-20 08:26:33.506288789 +0000 UTC m=+0.146601443 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, batch=17.1_20260112.1, release=1766032510, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, architecture=x86_64, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:26:33 localhost podman[86882]: 2026-02-20 08:26:33.562082604 +0000 UTC m=+0.193121637 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, config_id=tripleo_step3, url=https://www.redhat.com, name=rhosp-rhel9/openstack-collectd, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, batch=17.1_20260112.1, release=1766032510, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, 
io.openshift.expose-services=) Feb 20 03:26:33 localhost podman[86882]: 2026-02-20 08:26:33.573806403 +0000 UTC m=+0.204845416 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, version=17.1.13, release=1766032510, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', 
'/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, name=rhosp-rhel9/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:26:33 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. 
Feb 20 03:26:33 localhost podman[86877]: 2026-02-20 08:26:33.61713449 +0000 UTC m=+0.249597601 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ovn_controller, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.buildah.version=1.41.5, version=17.1.13, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ovn-controller, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:36:40Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public) Feb 20 03:26:33 localhost podman[86875]: 2026-02-20 08:26:33.624947142 +0000 UTC m=+0.265259726 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.expose-services=, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, config_id=tripleo_step4, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, distribution-scope=public) Feb 20 03:26:33 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. 
Feb 20 03:26:33 localhost podman[86877]: 2026-02-20 08:26:33.669704108 +0000 UTC m=+0.302167239 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1766032510, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2026-01-12T22:36:40Z, version=17.1.13, io.openshift.expose-services=, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1, io.buildah.version=1.41.5, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., distribution-scope=public, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:36:40Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ovn-controller, container_name=ovn_controller, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 20 03:26:33 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Deactivated successfully. Feb 20 03:26:33 localhost podman[86889]: 2026-02-20 08:26:33.716760165 +0000 UTC m=+0.341874377 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=nova_compute, io.buildah.version=1.41.5, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, release=1766032510, distribution-scope=public, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Feb 20 03:26:33 localhost podman[86889]: 2026-02-20 08:26:33.748729514 +0000 UTC m=+0.373843716 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, distribution-scope=public, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, io.buildah.version=1.41.5, release=1766032510, url=https://www.redhat.com) Feb 20 03:26:33 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. Feb 20 03:26:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:26:36 localhost systemd[1]: tmp-crun.sqI5HL.mount: Deactivated successfully. 
Feb 20 03:26:36 localhost podman[86986]: 2026-02-20 08:26:36.443767164 +0000 UTC m=+0.078731089 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2026-01-12T23:32:04Z, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510, vcs-type=git, container_name=nova_migration_target, url=https://www.redhat.com, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-compute, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13) Feb 20 03:26:36 localhost podman[86986]: 2026-02-20 08:26:36.867952325 +0000 UTC m=+0.502916250 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, tcib_managed=true, io.buildah.version=1.41.5, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, url=https://www.redhat.com, container_name=nova_migration_target, vcs-type=git, com.redhat.component=openstack-nova-compute-container, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, build-date=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 20 03:26:36 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:26:46 localhost sshd[87085]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:26:57 localhost sshd[87132]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:27:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. 
Feb 20 03:27:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:27:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:27:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:27:00 localhost podman[87134]: 2026-02-20 08:27:00.456881964 +0000 UTC m=+0.093469373 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.created=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, tcib_managed=true, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, release=1766032510, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, container_name=metrics_qdr, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, vcs-type=git, build-date=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, architecture=x86_64) Feb 20 03:27:00 localhost podman[87136]: 2026-02-20 08:27:00.505200592 +0000 UTC m=+0.139864969 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, batch=17.1_20260112.1, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 
'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, distribution-scope=public, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, url=https://www.redhat.com, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:27:00 localhost podman[87136]: 2026-02-20 08:27:00.545104646 +0000 UTC m=+0.179768983 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, 
org.opencontainers.image.created=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, io.buildah.version=1.41.5, com.redhat.component=openstack-cron-container, tcib_managed=true, distribution-scope=public, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-cron, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, build-date=2026-01-12T22:10:15Z, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Feb 20 03:27:00 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. Feb 20 03:27:00 localhost podman[87135]: 2026-02-20 08:27:00.568097638 +0000 UTC m=+0.204406628 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:47Z, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.13, vcs-type=git, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:07:47Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_compute, url=https://www.redhat.com, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, release=1766032510) Feb 20 03:27:00 localhost podman[87137]: 2026-02-20 08:27:00.624255316 +0000 UTC m=+0.257730691 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, org.opencontainers.image.created=2026-01-12T23:07:30Z, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, architecture=x86_64, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, build-date=2026-01-12T23:07:30Z, vendor=Red Hat, Inc., vcs-type=git, version=17.1.13, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) 
Feb 20 03:27:00 localhost podman[87135]: 2026-02-20 08:27:00.629362871 +0000 UTC m=+0.265671861 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-compute, config_id=tripleo_step4, batch=17.1_20260112.1, build-date=2026-01-12T23:07:47Z, version=17.1.13, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, tcib_managed=true, vcs-type=git, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Feb 20 03:27:00 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. Feb 20 03:27:00 localhost podman[87137]: 2026-02-20 08:27:00.66271408 +0000 UTC m=+0.296189505 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 
'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, name=rhosp-rhel9/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, distribution-scope=public, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, build-date=2026-01-12T23:07:30Z, io.openshift.expose-services=, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:27:00 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. 
Feb 20 03:27:00 localhost podman[87134]: 2026-02-20 08:27:00.680888885 +0000 UTC m=+0.317476244 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, release=1766032510, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:14Z, build-date=2026-01-12T22:10:14Z, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.13 
17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-qdrouterd, container_name=metrics_qdr, tcib_managed=true, maintainer=OpenStack TripleO Team) Feb 20 03:27:00 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:27:01 localhost sshd[87238]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:27:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:27:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:27:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:27:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:27:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. 
Feb 20 03:27:04 localhost podman[87241]: 2026-02-20 08:27:04.4525799 +0000 UTC m=+0.088928542 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, distribution-scope=public, config_id=tripleo_step3, build-date=2026-01-12T22:34:43Z, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Feb 20 03:27:04 localhost podman[87240]: 2026-02-20 08:27:04.513077983 +0000 UTC m=+0.149820755 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, container_name=ovn_metadata_agent, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, io.buildah.version=1.41.5, version=17.1.13, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, tcib_managed=true) Feb 20 03:27:04 localhost systemd[1]: tmp-crun.lM8oJj.mount: Deactivated successfully. 
Feb 20 03:27:04 localhost podman[87243]: 2026-02-20 08:27:04.584354652 +0000 UTC m=+0.212500735 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, architecture=x86_64, vcs-type=git, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, release=1766032510, version=17.1.13, managed_by=tripleo_ansible, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, batch=17.1_20260112.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp-rhel9/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, url=https://www.redhat.com) Feb 20 03:27:04 localhost podman[87254]: 2026-02-20 08:27:04.623832294 +0000 UTC m=+0.247718383 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_id=tripleo_step5, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': 
['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20260112.1, container_name=nova_compute, release=1766032510, build-date=2026-01-12T23:32:04Z, version=17.1.13, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
io.openshift.expose-services=, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64) Feb 20 03:27:04 localhost podman[87242]: 2026-02-20 08:27:04.670003575 +0000 UTC m=+0.302046832 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, 
config_id=tripleo_step4, build-date=2026-01-12T22:36:40Z, distribution-scope=public, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:36:40Z, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, vcs-type=git, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller) Feb 20 03:27:04 localhost podman[87243]: 2026-02-20 08:27:04.677001782 +0000 UTC m=+0.305147895 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, architecture=x86_64, name=rhosp-rhel9/openstack-collectd, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, build-date=2026-01-12T22:10:15Z, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.expose-services=, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team) Feb 20 03:27:04 localhost podman[87240]: 2026-02-20 08:27:04.68742823 +0000 UTC m=+0.324171042 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, url=https://www.redhat.com, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, 
container_name=ovn_metadata_agent, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, 
konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, tcib_managed=true, version=17.1.13, build-date=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, release=1766032510, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Feb 20 03:27:04 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:27:04 localhost podman[87242]: 2026-02-20 08:27:04.698812793 +0000 UTC m=+0.330856060 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, managed_by=tripleo_ansible, version=17.1.13, vendor=Red Hat, Inc., build-date=2026-01-12T22:36:40Z, batch=17.1_20260112.1, config_id=tripleo_step4, 
org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ovn-controller, release=1766032510, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true) Feb 20 03:27:04 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. Feb 20 03:27:04 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Deactivated successfully. 
Feb 20 03:27:04 localhost podman[87254]: 2026-02-20 08:27:04.732109111 +0000 UTC m=+0.355995250 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1766032510, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, name=rhosp-rhel9/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, managed_by=tripleo_ansible, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:32:04Z, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, com.redhat.component=openstack-nova-compute-container, architecture=x86_64) Feb 20 03:27:04 localhost podman[87241]: 2026-02-20 08:27:04.74186817 +0000 UTC m=+0.378216862 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, architecture=x86_64, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, 
io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, release=1766032510, io.buildah.version=1.41.5, build-date=2026-01-12T22:34:43Z, distribution-scope=public, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, batch=17.1_20260112.1, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:34:43Z) Feb 20 03:27:04 localhost systemd[1]: 
d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. Feb 20 03:27:04 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. Feb 20 03:27:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:27:07 localhost podman[87353]: 2026-02-20 08:27:07.442508097 +0000 UTC m=+0.081390191 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, org.opencontainers.image.created=2026-01-12T23:32:04Z, distribution-scope=public, vcs-type=git, batch=17.1_20260112.1, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, 
build-date=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vendor=Red Hat, Inc., release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:27:07 localhost podman[87353]: 2026-02-20 08:27:07.809076218 +0000 UTC m=+0.447958312 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step4, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.13, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64) Feb 20 03:27:07 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. 
Feb 20 03:27:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:27:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:27:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:27:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:27:31 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 20 03:27:31 localhost recover_tripleo_nova_virtqemud[87401]: 63703 Feb 20 03:27:31 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 20 03:27:31 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 20 03:27:31 localhost podman[87377]: 2026-02-20 08:27:31.454115537 +0000 UTC m=+0.090188835 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., io.buildah.version=1.41.5, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, name=rhosp-rhel9/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, vcs-type=git, build-date=2026-01-12T23:07:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:07:47Z) Feb 20 03:27:31 localhost systemd[1]: tmp-crun.ksAlIp.mount: Deactivated successfully. 
Feb 20 03:27:31 localhost podman[87378]: 2026-02-20 08:27:31.50747131 +0000 UTC m=+0.142234503 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-cron-container, build-date=2026-01-12T22:10:15Z, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.5, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-cron) Feb 20 03:27:31 localhost podman[87378]: 2026-02-20 08:27:31.511960009 +0000 UTC m=+0.146723222 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20260112.1, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, release=1766032510, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-cron, managed_by=tripleo_ansible, vendor=Red Hat, Inc.) Feb 20 03:27:31 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. 
Feb 20 03:27:31 localhost podman[87376]: 2026-02-20 08:27:31.547596079 +0000 UTC m=+0.186711298 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, architecture=x86_64, build-date=2026-01-12T22:10:14Z, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr, vcs-type=git, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, version=17.1.13, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:27:31 localhost podman[87377]: 2026-02-20 08:27:31.55965839 +0000 UTC m=+0.195731657 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp-rhel9/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, release=1766032510, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.expose-services=, build-date=2026-01-12T23:07:47Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.buildah.version=1.41.5, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 20 03:27:31 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. 
Feb 20 03:27:31 localhost podman[87379]: 2026-02-20 08:27:31.609951751 +0000 UTC m=+0.240113631 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:07:30Z, build-date=2026-01-12T23:07:30Z, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, vendor=Red Hat, Inc., vcs-type=git, version=17.1.13, architecture=x86_64, distribution-scope=public, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 20 03:27:31 localhost podman[87379]: 2026-02-20 08:27:31.645215991 +0000 UTC m=+0.275377881 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, batch=17.1_20260112.1, 
org.opencontainers.image.created=2026-01-12T23:07:30Z, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true) Feb 20 03:27:31 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. 
Feb 20 03:27:31 localhost podman[87376]: 2026-02-20 08:27:31.740808599 +0000 UTC m=+0.379923828 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, name=rhosp-rhel9/openstack-qdrouterd, release=1766032510, url=https://www.redhat.com, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, build-date=2026-01-12T22:10:14Z, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20260112.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:14Z, distribution-scope=public, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible) Feb 20 03:27:31 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:27:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:27:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:27:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:27:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:27:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:27:35 localhost systemd[1]: tmp-crun.zOF5Es.mount: Deactivated successfully. 
Feb 20 03:27:35 localhost podman[87480]: 2026-02-20 08:27:35.454456297 +0000 UTC m=+0.094214863 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, version=17.1.13, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., architecture=x86_64, build-date=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.created=2026-01-12T22:56:19Z, release=1766032510, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 20 03:27:35 localhost systemd[1]: tmp-crun.m45Tid.mount: Deactivated successfully. 
Feb 20 03:27:35 localhost podman[87482]: 2026-02-20 08:27:35.513129951 +0000 UTC m=+0.147344709 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, batch=17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, tcib_managed=true, build-date=2026-01-12T22:36:40Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:36:40Z, release=1766032510, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, architecture=x86_64, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, url=https://www.redhat.com, io.openshift.expose-services=, version=17.1.13, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, description=Red Hat 
OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team) Feb 20 03:27:35 localhost podman[87481]: 2026-02-20 08:27:35.557046471 +0000 UTC m=+0.192062370 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, batch=17.1_20260112.1, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, managed_by=tripleo_ansible, release=1766032510, name=rhosp-rhel9/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, vendor=Red Hat, Inc., container_name=iscsid, com.redhat.component=openstack-iscsid-container, vcs-type=git, build-date=2026-01-12T22:34:43Z, maintainer=OpenStack TripleO Team) Feb 20 03:27:35 localhost podman[87482]: 2026-02-20 08:27:35.565812284 +0000 UTC m=+0.200027012 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, io.buildah.version=1.41.5, release=1766032510, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:36:40Z, vendor=Red Hat, Inc., tcib_managed=true, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20260112.1, build-date=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, io.openshift.expose-services=, container_name=ovn_controller, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64) Feb 20 03:27:35 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Deactivated successfully. 
Feb 20 03:27:35 localhost podman[87489]: 2026-02-20 08:27:35.61026834 +0000 UTC m=+0.238152199 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20260112.1, version=17.1.13, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:32:04Z, build-date=2026-01-12T23:32:04Z, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute) Feb 20 03:27:35 localhost podman[87481]: 2026-02-20 08:27:35.621934911 +0000 UTC m=+0.256950780 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.openshift.expose-services=, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, distribution-scope=public, vcs-ref=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, tcib_managed=true, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, batch=17.1_20260112.1, managed_by=tripleo_ansible, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container) Feb 20 03:27:35 localhost systemd[1]: 
47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. Feb 20 03:27:35 localhost podman[87489]: 2026-02-20 08:27:35.643758742 +0000 UTC m=+0.271642571 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, distribution-scope=public, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, url=https://www.redhat.com, vendor=Red Hat, Inc., release=1766032510, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, version=17.1.13, container_name=nova_compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, tcib_managed=true, batch=17.1_20260112.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 20 03:27:35 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. 
Feb 20 03:27:35 localhost podman[87480]: 2026-02-20 08:27:35.676827954 +0000 UTC m=+0.316586590 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.13, io.buildah.version=1.41.5, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, architecture=x86_64, build-date=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, url=https://www.redhat.com, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:56:19Z, release=1766032510, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f) Feb 20 03:27:35 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. 
Feb 20 03:27:35 localhost podman[87483]: 2026-02-20 08:27:35.766317859 +0000 UTC m=+0.396882380 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-collectd, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_id=tripleo_step3, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, version=17.1.13) Feb 20 03:27:35 localhost podman[87483]: 2026-02-20 08:27:35.806965113 +0000 UTC m=+0.437529604 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20260112.1, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 collectd, vcs-type=git, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, architecture=x86_64, vendor=Red Hat, Inc., release=1766032510, config_id=tripleo_step3, version=17.1.13, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public) Feb 20 03:27:35 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. 
Feb 20 03:27:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:27:38 localhost podman[87607]: 2026-02-20 08:27:38.401549452 +0000 UTC m=+0.082260404 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, managed_by=tripleo_ansible, version=17.1.13, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_migration_target, distribution-scope=public, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, release=1766032510, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 
'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 20 03:27:38 localhost sshd[87645]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:27:38 localhost podman[87607]: 2026-02-20 08:27:38.8077458 +0000 UTC m=+0.488456732 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, vcs-type=git, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-compute) Feb 20 03:27:38 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:27:43 localhost sshd[87697]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:28:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. 
Feb 20 03:28:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:28:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:28:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:28:02 localhost systemd[1]: tmp-crun.LHkbFh.mount: Deactivated successfully. Feb 20 03:28:02 localhost podman[87745]: 2026-02-20 08:28:02.516427946 +0000 UTC m=+0.150891763 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:07:47Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, release=1766032510, version=17.1.13, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, name=rhosp-rhel9/openstack-ceilometer-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, batch=17.1_20260112.1, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z) Feb 20 03:28:02 localhost podman[87747]: 2026-02-20 08:28:02.482464381 +0000 UTC m=+0.111241307 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1766032510, build-date=2026-01-12T23:07:30Z, io.buildah.version=1.41.5, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, batch=17.1_20260112.1, config_id=tripleo_step4, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, managed_by=tripleo_ansible, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=) Feb 20 03:28:02 localhost podman[87746]: 2026-02-20 08:28:02.554303545 +0000 UTC m=+0.185608428 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
tcib_managed=true, config_id=tripleo_step4, build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, name=rhosp-rhel9/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64) Feb 20 03:28:02 localhost podman[87746]: 2026-02-20 08:28:02.56200871 +0000 UTC m=+0.193313583 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, tcib_managed=true, io.openshift.expose-services=, version=17.1.13, vcs-type=git, url=https://www.redhat.com, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp-rhel9/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, release=1766032510, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:10:15Z, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:28:02 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. 
Feb 20 03:28:02 localhost podman[87745]: 2026-02-20 08:28:02.602153511 +0000 UTC m=+0.236617398 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp-rhel9/openstack-ceilometer-compute, batch=17.1_20260112.1, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, 
vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, release=1766032510, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, url=https://www.redhat.com, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:07:47Z, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:07:47Z, version=17.1.13) Feb 20 03:28:02 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. Feb 20 03:28:02 localhost podman[87747]: 2026-02-20 08:28:02.614572772 +0000 UTC m=+0.243349688 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, release=1766032510, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, tcib_managed=true, managed_by=tripleo_ansible, vcs-type=git, container_name=ceilometer_agent_ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ceilometer-ipmi, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.) Feb 20 03:28:02 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. 
Feb 20 03:28:02 localhost podman[87744]: 2026-02-20 08:28:02.655898954 +0000 UTC m=+0.292252502 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.5, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, distribution-scope=public, tcib_managed=true, release=1766032510, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:14Z, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, version=17.1.13, architecture=x86_64) Feb 20 03:28:02 localhost podman[87744]: 2026-02-20 08:28:02.846313869 +0000 UTC m=+0.482667607 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., release=1766032510, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step1, managed_by=tripleo_ansible, url=https://www.redhat.com, container_name=metrics_qdr, version=17.1.13, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20260112.1, architecture=x86_64) Feb 20 03:28:02 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:28:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:28:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:28:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. 
Feb 20 03:28:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:28:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:28:06 localhost podman[87844]: 2026-02-20 08:28:06.446646756 +0000 UTC m=+0.087101863 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step4, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:56:19Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vcs-type=git, batch=17.1_20260112.1, io.openshift.expose-services=, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, version=17.1.13, maintainer=OpenStack TripleO Team, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, 
release=1766032510, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Feb 20 03:28:06 localhost systemd[1]: tmp-crun.vBYyCK.mount: Deactivated successfully. 
Feb 20 03:28:06 localhost podman[87847]: 2026-02-20 08:28:06.469256299 +0000 UTC m=+0.100083419 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.expose-services=, batch=17.1_20260112.1, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, name=rhosp-rhel9/openstack-collectd, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vendor=Red Hat, Inc., container_name=collectd, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:10:15Z) Feb 20 03:28:06 localhost podman[87846]: 2026-02-20 08:28:06.503690716 +0000 UTC m=+0.136830838 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, version=17.1.13, build-date=2026-01-12T22:36:40Z, org.opencontainers.image.created=2026-01-12T22:36:40Z, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, url=https://www.redhat.com, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, maintainer=OpenStack TripleO Team, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, distribution-scope=public, release=1766032510, tcib_managed=true) Feb 20 03:28:06 localhost podman[87844]: 2026-02-20 08:28:06.511754391 +0000 UTC m=+0.152209468 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, release=1766032510, managed_by=tripleo_ansible, build-date=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, tcib_managed=true, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, architecture=x86_64, config_id=tripleo_step4, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Feb 20 03:28:06 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. Feb 20 03:28:06 localhost podman[87853]: 2026-02-20 08:28:06.547890965 +0000 UTC m=+0.176465145 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, distribution-scope=public, config_id=tripleo_step5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red 
Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Feb 20 03:28:06 localhost podman[87847]: 2026-02-20 08:28:06.551575372 +0000 UTC m=+0.182402522 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 
(image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, config_id=tripleo_step3, name=rhosp-rhel9/openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, io.buildah.version=1.41.5, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, container_name=collectd, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, build-date=2026-01-12T22:10:15Z, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.) Feb 20 03:28:06 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:28:06 localhost podman[87853]: 2026-02-20 08:28:06.576929818 +0000 UTC m=+0.205504058 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.13, url=https://www.redhat.com, managed_by=tripleo_ansible, io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 
'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.created=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.expose-services=, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public) Feb 20 03:28:06 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. Feb 20 03:28:06 localhost podman[87846]: 2026-02-20 08:28:06.602717836 +0000 UTC m=+0.235858008 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp-rhel9/openstack-ovn-controller, container_name=ovn_controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, vcs-type=git, config_id=tripleo_step4, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, build-date=2026-01-12T22:36:40Z, url=https://www.redhat.com) Feb 20 03:28:06 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Deactivated successfully. Feb 20 03:28:06 localhost podman[87845]: 2026-02-20 08:28:06.663236789 +0000 UTC m=+0.298377954 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, build-date=2026-01-12T22:34:43Z, distribution-scope=public, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20260112.1, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, url=https://www.redhat.com) Feb 20 03:28:06 localhost podman[87845]: 2026-02-20 08:28:06.671599272 +0000 UTC m=+0.306740447 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.expose-services=, vcs-ref=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, 
batch=17.1_20260112.1, vendor=Red Hat, Inc., build-date=2026-01-12T22:34:43Z, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=iscsid, url=https://www.redhat.com, architecture=x86_64, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-iscsid, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.5, vcs-type=git, release=1766032510) Feb 20 03:28:06 
localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. Feb 20 03:28:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:28:09 localhost systemd[1]: tmp-crun.Rj18Oy.mount: Deactivated successfully. Feb 20 03:28:09 localhost podman[87955]: 2026-02-20 08:28:09.451257284 +0000 UTC m=+0.089574199 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_migration_target, distribution-scope=public, name=rhosp-rhel9/openstack-nova-compute, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, tcib_managed=true, version=17.1.13, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5) Feb 20 03:28:09 localhost podman[87955]: 2026-02-20 08:28:09.817867966 +0000 UTC m=+0.456184881 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, batch=17.1_20260112.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 
17.1_20260112.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, container_name=nova_migration_target, distribution-scope=public, release=1766032510, tcib_managed=true, build-date=2026-01-12T23:32:04Z, version=17.1.13, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 20 03:28:09 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. 
Feb 20 03:28:10 localhost sshd[87980]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:28:31 localhost sshd[87982]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:28:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:28:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:28:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:28:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:28:33 localhost systemd[1]: tmp-crun.kPbmkM.mount: Deactivated successfully. Feb 20 03:28:33 localhost podman[87985]: 2026-02-20 08:28:33.459903086 +0000 UTC m=+0.095786584 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:07:47Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, tcib_managed=true, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20260112.1, vcs-type=git, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 20 03:28:33 localhost podman[87984]: 2026-02-20 08:28:33.500040135 +0000 UTC m=+0.137079255 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:14Z, container_name=metrics_qdr, managed_by=tripleo_ansible, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:14Z, release=1766032510, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, version=17.1.13, url=https://www.redhat.com, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, config_id=tripleo_step1) Feb 20 03:28:33 localhost podman[87985]: 2026-02-20 08:28:33.513818193 +0000 UTC m=+0.149701661 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:07:47Z, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.5, release=1766032510, vendor=Red Hat, Inc., batch=17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.created=2026-01-12T23:07:47Z, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, 
config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Feb 20 03:28:33 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. 
Feb 20 03:28:33 localhost podman[87991]: 2026-02-20 08:28:33.570837982 +0000 UTC m=+0.197615108 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1766032510, url=https://www.redhat.com, io.buildah.version=1.41.5, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, 
org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:30Z, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.13, distribution-scope=public, tcib_managed=true, vcs-type=git) Feb 20 03:28:33 localhost podman[87991]: 2026-02-20 08:28:33.608799264 +0000 UTC m=+0.235576380 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:07:30Z, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, vcs-type=git, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.5, tcib_managed=true, config_id=tripleo_step4, managed_by=tripleo_ansible, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, 
name=rhosp-rhel9/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 20 03:28:33 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. 
Feb 20 03:28:33 localhost podman[87986]: 2026-02-20 08:28:33.619458519 +0000 UTC m=+0.248836054 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, version=17.1.13, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.5, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-cron-container, name=rhosp-rhel9/openstack-cron, release=1766032510) Feb 20 03:28:33 localhost podman[87986]: 2026-02-20 08:28:33.706463828 +0000 UTC m=+0.335841383 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, io.buildah.version=1.41.5, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:15Z, container_name=logrotate_crond, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, name=rhosp-rhel9/openstack-cron, distribution-scope=public, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron) Feb 20 03:28:33 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. 
Feb 20 03:28:33 localhost podman[87984]: 2026-02-20 08:28:33.721033296 +0000 UTC m=+0.358072376 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, name=rhosp-rhel9/openstack-qdrouterd, tcib_managed=true, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1766032510, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, build-date=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, config_id=tripleo_step1, managed_by=tripleo_ansible) Feb 20 03:28:33 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:28:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:28:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:28:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:28:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:28:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:28:37 localhost systemd[1]: tmp-crun.gxEAJe.mount: Deactivated successfully. 
Feb 20 03:28:37 localhost podman[88088]: 2026-02-20 08:28:37.458847458 +0000 UTC m=+0.091830859 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20260112.1, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-collectd, config_id=tripleo_step3, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, container_name=collectd, tcib_managed=true) Feb 20 03:28:37 localhost podman[88085]: 2026-02-20 08:28:37.436070211 +0000 UTC m=+0.070324835 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.5, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:56:19Z, batch=17.1_20260112.1, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 
'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, maintainer=OpenStack TripleO Team, version=17.1.13, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, vcs-type=git, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z) Feb 20 
03:28:37 localhost podman[88086]: 2026-02-20 08:28:37.49870446 +0000 UTC m=+0.134255728 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, architecture=x86_64, container_name=iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, summary=Red Hat OpenStack Platform 17.1 iscsid, 
com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.5, release=1766032510, vendor=Red Hat, Inc., managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2026-01-12T22:34:43Z, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.openshift.expose-services=) Feb 20 03:28:37 localhost podman[88085]: 2026-02-20 08:28:37.515146109 +0000 UTC m=+0.149400673 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': 
True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.created=2026-01-12T22:56:19Z, build-date=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step4, release=1766032510, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, io.buildah.version=1.41.5, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:28:37 localhost podman[88088]: 2026-02-20 08:28:37.521084147 +0000 UTC m=+0.154067548 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.13, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, com.redhat.component=openstack-collectd-container, container_name=collectd, name=rhosp-rhel9/openstack-collectd, architecture=x86_64, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, url=https://www.redhat.com) Feb 20 03:28:37 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. Feb 20 03:28:37 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:28:37 localhost podman[88086]: 2026-02-20 08:28:37.532733888 +0000 UTC m=+0.168285176 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, container_name=iscsid, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-iscsid-container, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., build-date=2026-01-12T22:34:43Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp-rhel9/openstack-iscsid, version=17.1.13) Feb 20 03:28:37 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. 
Feb 20 03:28:37 localhost podman[88089]: 2026-02-20 08:28:37.609750801 +0000 UTC m=+0.240814710 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, version=17.1.13, name=rhosp-rhel9/openstack-nova-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, build-date=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, tcib_managed=true) Feb 20 03:28:37 localhost podman[88087]: 2026-02-20 08:28:37.658905981 +0000 UTC m=+0.291630254 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, build-date=2026-01-12T22:36:40Z, version=17.1.13, distribution-scope=public, vcs-type=git, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, managed_by=tripleo_ansible, batch=17.1_20260112.1, architecture=x86_64, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ovn-controller, url=https://www.redhat.com, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc.) 
Feb 20 03:28:37 localhost podman[88089]: 2026-02-20 08:28:37.665118827 +0000 UTC m=+0.296182736 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, build-date=2026-01-12T23:32:04Z, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-compute, batch=17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:32:04Z, container_name=nova_compute) Feb 20 03:28:37 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. 
Feb 20 03:28:37 localhost podman[88087]: 2026-02-20 08:28:37.715944691 +0000 UTC m=+0.348668934 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, build-date=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, config_id=tripleo_step4, io.openshift.expose-services=, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ovn-controller, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vcs-type=git, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ovn_controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, 
com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.buildah.version=1.41.5, vendor=Red Hat, Inc., version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 20 03:28:37 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Deactivated successfully. Feb 20 03:28:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:28:40 localhost systemd[1]: tmp-crun.nHY7cb.mount: Deactivated successfully. Feb 20 03:28:40 localhost podman[88211]: 2026-02-20 08:28:40.034223066 +0000 UTC m=+0.088579542 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, container_name=nova_migration_target, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, url=https://www.redhat.com, architecture=x86_64) Feb 20 03:28:40 localhost podman[88211]: 2026-02-20 08:28:40.469228691 +0000 UTC m=+0.523585117 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., io.buildah.version=1.41.5, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack 
Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, config_id=tripleo_step4, build-date=2026-01-12T23:32:04Z, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true) Feb 20 03:28:40 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:28:54 localhost sshd[88296]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:28:55 localhost sshd[88298]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:29:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:29:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:29:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:29:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:29:04 localhost systemd[1]: tmp-crun.DQsCQL.mount: Deactivated successfully. 
Feb 20 03:29:04 localhost podman[88347]: 2026-02-20 08:29:04.447165719 +0000 UTC m=+0.083092276 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, release=1766032510, vcs-type=git, version=17.1.13, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron, com.redhat.component=openstack-cron-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:15Z, container_name=logrotate_crond, io.openshift.expose-services=, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
batch=17.1_20260112.1, distribution-scope=public, config_id=tripleo_step4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64) Feb 20 03:29:04 localhost podman[88346]: 2026-02-20 08:29:04.494573223 +0000 UTC m=+0.130166230 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_compute, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, release=1766032510, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ceilometer-compute, build-date=2026-01-12T23:07:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20260112.1, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:07:47Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, vcs-type=git, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 20 03:29:04 localhost podman[88347]: 2026-02-20 08:29:04.511833483 +0000 UTC m=+0.147760080 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.13, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 
'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=logrotate_crond, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.expose-services=, tcib_managed=true, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 20 03:29:04 localhost 
systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. Feb 20 03:29:04 localhost podman[88346]: 2026-02-20 08:29:04.551770767 +0000 UTC m=+0.187363784 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, release=1766032510, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
batch=17.1_20260112.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, version=17.1.13, vcs-type=git, architecture=x86_64, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_compute, name=rhosp-rhel9/openstack-ceilometer-compute) Feb 20 03:29:04 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. 
Feb 20 03:29:04 localhost podman[88345]: 2026-02-20 08:29:04.606124696 +0000 UTC m=+0.241265021 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, vcs-type=git, tcib_managed=true, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, version=17.1.13, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:14Z) Feb 20 03:29:04 localhost podman[88348]: 2026-02-20 08:29:04.657154996 +0000 UTC m=+0.288804439 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, version=17.1.13, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:07:30Z, name=rhosp-rhel9/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-type=git, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:29:04 localhost podman[88348]: 2026-02-20 08:29:04.710709194 +0000 UTC m=+0.342358597 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, distribution-scope=public, 
build-date=2026-01-12T23:07:30Z, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, io.openshift.expose-services=, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, release=1766032510, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
io.buildah.version=1.41.5, architecture=x86_64) Feb 20 03:29:04 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. Feb 20 03:29:04 localhost podman[88345]: 2026-02-20 08:29:04.823762588 +0000 UTC m=+0.458902873 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, version=17.1.13, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, build-date=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, config_id=tripleo_step1, io.buildah.version=1.41.5, tcib_managed=true) Feb 20 03:29:04 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:29:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:29:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:29:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:29:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:29:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:29:08 localhost systemd[1]: tmp-crun.zJCmfX.mount: Deactivated successfully. 
Feb 20 03:29:08 localhost podman[88448]: 2026-02-20 08:29:08.467199263 +0000 UTC m=+0.099383409 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, vendor=Red Hat, Inc., vcs-type=git, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:15Z, batch=17.1_20260112.1, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=collectd) Feb 20 03:29:08 localhost podman[88446]: 2026-02-20 08:29:08.506168322 +0000 UTC m=+0.143202368 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, release=1766032510, tcib_managed=true, config_id=tripleo_step3, batch=17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=) Feb 20 03:29:08 localhost podman[88446]: 2026-02-20 08:29:08.519935579 +0000 UTC m=+0.156969565 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., 
maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, build-date=2026-01-12T22:34:43Z, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:34:43Z, architecture=x86_64, distribution-scope=public, name=rhosp-rhel9/openstack-iscsid, url=https://www.redhat.com, 
org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, container_name=iscsid, vcs-type=git, vcs-ref=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Feb 20 03:29:08 localhost podman[88445]: 2026-02-20 08:29:08.55034228 +0000 UTC m=+0.189546434 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:56:19Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, batch=17.1_20260112.1, managed_by=tripleo_ansible, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, io.openshift.expose-services=, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510) Feb 20 03:29:08 localhost podman[88448]: 2026-02-20 08:29:08.576352073 +0000 UTC m=+0.208536259 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.13, io.buildah.version=1.41.5, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 
'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, distribution-scope=public, build-date=2026-01-12T22:10:15Z, architecture=x86_64, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-collectd, io.openshift.expose-services=, batch=17.1_20260112.1, managed_by=tripleo_ansible) Feb 20 03:29:08 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. Feb 20 03:29:08 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:29:08 localhost podman[88447]: 2026-02-20 08:29:08.651170257 +0000 UTC m=+0.286264271 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, architecture=x86_64, build-date=2026-01-12T22:36:40Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, url=https://www.redhat.com, vcs-type=git, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, distribution-scope=public, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, 
container_name=ovn_controller, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1766032510) Feb 20 03:29:08 localhost podman[88445]: 2026-02-20 08:29:08.675953128 +0000 UTC m=+0.315157282 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, io.buildah.version=1.41.5, container_name=ovn_metadata_agent, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
tcib_managed=true, build-date=2026-01-12T22:56:19Z, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, architecture=x86_64, vendor=Red Hat, Inc.) Feb 20 03:29:08 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. 
Feb 20 03:29:08 localhost podman[88447]: 2026-02-20 08:29:08.70192403 +0000 UTC m=+0.337018044 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, architecture=x86_64, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, vcs-type=git, name=rhosp-rhel9/openstack-ovn-controller, distribution-scope=public, build-date=2026-01-12T22:36:40Z, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., tcib_managed=true) Feb 20 03:29:08 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Deactivated successfully. Feb 20 03:29:08 localhost podman[88454]: 2026-02-20 08:29:08.766572513 +0000 UTC m=+0.393641903 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, tcib_managed=true, batch=17.1_20260112.1, release=1766032510, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step5, vendor=Red Hat, Inc., container_name=nova_compute, url=https://www.redhat.com) Feb 20 03:29:08 localhost podman[88454]: 2026-02-20 08:29:08.7990887 +0000 UTC m=+0.426158070 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, konflux.additional-tags=17.1.13 
17.1_20260112.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.13, distribution-scope=public, 
maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step5, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:29:08 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. Feb 20 03:29:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. 
Feb 20 03:29:11 localhost podman[88556]: 2026-02-20 08:29:11.442015828 +0000 UTC m=+0.081507034 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, tcib_managed=true, container_name=nova_migration_target, batch=17.1_20260112.1, io.openshift.expose-services=, build-date=2026-01-12T23:32:04Z, version=17.1.13, url=https://www.redhat.com, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc.) Feb 20 03:29:11 localhost podman[88556]: 2026-02-20 08:29:11.784497916 +0000 UTC m=+0.423989142 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, config_id=tripleo_step4, name=rhosp-rhel9/openstack-nova-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.13, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, release=1766032510, io.openshift.expose-services=, tcib_managed=true, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., batch=17.1_20260112.1) Feb 20 03:29:11 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:29:21 localhost sshd[88579]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:29:24 localhost sshd[88581]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:29:30 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 20 03:29:30 localhost recover_tripleo_nova_virtqemud[88584]: 63703 Feb 20 03:29:30 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. 
Feb 20 03:29:30 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 20 03:29:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:29:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:29:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:29:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:29:35 localhost systemd[1]: tmp-crun.qDw0O1.mount: Deactivated successfully. Feb 20 03:29:35 localhost podman[88586]: 2026-02-20 08:29:35.46291001 +0000 UTC m=+0.092155577 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2026-01-12T23:07:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, distribution-scope=public, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20260112.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, url=https://www.redhat.com) Feb 20 03:29:35 localhost podman[88585]: 2026-02-20 08:29:35.432295184 +0000 UTC m=+0.061594342 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1766032510, url=https://www.redhat.com, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 
qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.created=2026-01-12T22:10:14Z, build-date=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, tcib_managed=true) Feb 20 03:29:35 localhost podman[88586]: 2026-02-20 08:29:35.512919464 +0000 UTC m=+0.142164951 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:07:47Z, tcib_managed=true, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ceilometer-compute, version=17.1.13, vcs-type=git, build-date=2026-01-12T23:07:47Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64) Feb 20 03:29:35 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. 
Feb 20 03:29:35 localhost podman[88588]: 2026-02-20 08:29:35.554965954 +0000 UTC m=+0.175456408 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, build-date=2026-01-12T23:07:30Z, name=rhosp-rhel9/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, io.buildah.version=1.41.5, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:07:30Z, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.openshift.expose-services=, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container, release=1766032510, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi) Feb 20 03:29:35 localhost podman[88588]: 2026-02-20 08:29:35.578028209 +0000 UTC m=+0.198518693 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
vcs-type=git, architecture=x86_64, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, batch=17.1_20260112.1, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:07:30Z, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:07:30Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, name=rhosp-rhel9/openstack-ceilometer-ipmi) Feb 20 03:29:35 localhost podman[88587]: 2026-02-20 08:29:35.484031783 +0000 UTC m=+0.113116136 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, tcib_managed=true, release=1766032510, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, managed_by=tripleo_ansible, distribution-scope=public, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, batch=17.1_20260112.1, io.openshift.expose-services=, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, name=rhosp-rhel9/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2026-01-12T22:10:15Z, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Feb 20 03:29:35 localhost systemd[1]: 
e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. Feb 20 03:29:35 localhost podman[88587]: 2026-02-20 08:29:35.621743903 +0000 UTC m=+0.250827966 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., version=17.1.13, name=rhosp-rhel9/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, architecture=x86_64, vcs-type=git, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com) Feb 20 03:29:35 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. Feb 20 03:29:35 localhost podman[88585]: 2026-02-20 08:29:35.642746324 +0000 UTC m=+0.272045502 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2026-01-12T22:10:14Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, batch=17.1_20260112.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, managed_by=tripleo_ansible, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, version=17.1.13, url=https://www.redhat.com, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.created=2026-01-12T22:10:14Z) Feb 20 03:29:35 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:29:36 localhost systemd[1]: tmp-crun.K92RUc.mount: Deactivated successfully. Feb 20 03:29:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:29:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. 
Feb 20 03:29:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:29:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:29:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:29:39 localhost systemd[1]: tmp-crun.BYTU6h.mount: Deactivated successfully. Feb 20 03:29:39 localhost systemd[1]: tmp-crun.JQNml4.mount: Deactivated successfully. Feb 20 03:29:39 localhost podman[88685]: 2026-02-20 08:29:39.453246623 +0000 UTC m=+0.079144871 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, com.redhat.component=openstack-collectd-container, build-date=2026-01-12T22:10:15Z, vcs-type=git, version=17.1.13, batch=17.1_20260112.1, config_id=tripleo_step3, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Feb 20 03:29:39 localhost podman[88683]: 2026-02-20 08:29:39.512234765 +0000 UTC m=+0.141004069 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.13, vcs-ref=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, 
io.buildah.version=1.41.5, com.redhat.component=openstack-iscsid-container, tcib_managed=true, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, release=1766032510, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=iscsid, 
managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Feb 20 03:29:39 localhost podman[88683]: 2026-02-20 08:29:39.521093541 +0000 UTC m=+0.149862805 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, architecture=x86_64, build-date=2026-01-12T22:34:43Z, io.buildah.version=1.41.5, managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team, release=1766032510, container_name=iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., vcs-ref=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 iscsid) Feb 20 03:29:39 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. Feb 20 03:29:39 localhost podman[88682]: 2026-02-20 08:29:39.563287736 +0000 UTC m=+0.193144020 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, url=https://www.redhat.com, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20260112.1, io.openshift.expose-services=, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.5, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, build-date=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_id=tripleo_step4, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Feb 20 03:29:39 localhost podman[88682]: 2026-02-20 08:29:39.602628154 +0000 UTC m=+0.232484448 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_id=tripleo_step4, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.expose-services=, release=1766032510, vcs-type=git, distribution-scope=public, io.buildah.version=1.41.5, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2026-01-12T22:56:19Z) Feb 20 03:29:39 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. 
Feb 20 03:29:39 localhost podman[88685]: 2026-02-20 08:29:39.63471374 +0000 UTC m=+0.260611928 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-collectd-container, release=1766032510, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.13, architecture=x86_64, config_id=tripleo_step3, description=Red Hat OpenStack Platform 
17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, tcib_managed=true, managed_by=tripleo_ansible, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2026-01-12T22:10:15Z, vcs-type=git, distribution-scope=public) Feb 20 03:29:39 localhost podman[88692]: 2026-02-20 08:29:39.480171371 +0000 UTC m=+0.098104146 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, build-date=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 
'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, vcs-type=git, name=rhosp-rhel9/openstack-nova-compute, release=1766032510, architecture=x86_64, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13) Feb 20 03:29:39 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:29:39 localhost podman[88684]: 2026-02-20 08:29:39.607812762 +0000 UTC m=+0.231856760 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, batch=17.1_20260112.1, vcs-type=git, managed_by=tripleo_ansible, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2026-01-12T22:36:40Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, tcib_managed=true, container_name=ovn_controller, release=1766032510, org.opencontainers.image.created=2026-01-12T22:36:40Z, vendor=Red Hat, Inc., org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
distribution-scope=public, name=rhosp-rhel9/openstack-ovn-controller, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com) Feb 20 03:29:39 localhost podman[88684]: 2026-02-20 08:29:39.689705776 +0000 UTC m=+0.313749814 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, name=rhosp-rhel9/openstack-ovn-controller, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.expose-services=, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.5, distribution-scope=public, url=https://www.redhat.com, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:36:40Z, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 20 03:29:39 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Deactivated successfully. 
Feb 20 03:29:39 localhost podman[88692]: 2026-02-20 08:29:39.716822888 +0000 UTC m=+0.334755673 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., build-date=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 20 03:29:39 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. Feb 20 03:29:40 localhost systemd[1]: tmp-crun.8rulTQ.mount: Deactivated successfully. Feb 20 03:29:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. 
Feb 20 03:29:42 localhost podman[88852]: 2026-02-20 08:29:42.443281232 +0000 UTC m=+0.082127769 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, managed_by=tripleo_ansible, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, container_name=nova_migration_target, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13, tcib_managed=true, batch=17.1_20260112.1, io.buildah.version=1.41.5, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Feb 20 03:29:42 localhost podman[88852]: 2026-02-20 08:29:42.839767541 +0000 UTC m=+0.478614038 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., batch=17.1_20260112.1, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, build-date=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, container_name=nova_migration_target, vcs-type=git, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13) Feb 20 03:29:42 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. 
Feb 20 03:29:47 localhost ceph-osd[31981]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 20 03:29:47 localhost ceph-osd[31981]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval#012Cumulative writes: 5073 writes, 22K keys, 5073 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5073 writes, 653 syncs, 7.77 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 507 writes, 1932 keys, 507 commit groups, 1.0 writes per commit group, ingest: 2.64 MB, 0.00 MB/s#012Interval WAL: 507 writes, 180 syncs, 2.82 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Feb 20 03:29:51 localhost ceph-osd[32921]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 20 03:29:51 localhost ceph-osd[32921]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval#012Cumulative writes: 5513 writes, 24K keys, 5513 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5513 writes, 750 syncs, 7.35 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 504 writes, 1926 keys, 504 commit groups, 1.0 writes per commit group, ingest: 2.17 MB, 0.00 MB/s#012Interval WAL: 504 writes, 184 syncs, 2.74 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Feb 20 03:29:55 localhost sshd[88890]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:30:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:30:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. 
Feb 20 03:30:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:30:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:30:06 localhost podman[88937]: 2026-02-20 08:30:06.449420559 +0000 UTC m=+0.081203775 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, 
batch=17.1_20260112.1, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-qdrouterd, url=https://www.redhat.com, version=17.1.13, architecture=x86_64, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, distribution-scope=public) Feb 20 03:30:06 localhost systemd[1]: tmp-crun.gfBjaI.mount: Deactivated successfully. 
Feb 20 03:30:06 localhost podman[88938]: 2026-02-20 08:30:06.505654568 +0000 UTC m=+0.136642833 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_id=tripleo_step4, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ceilometer-compute, managed_by=tripleo_ansible, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, container_name=ceilometer_agent_compute, batch=17.1_20260112.1, build-date=2026-01-12T23:07:47Z, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.created=2026-01-12T23:07:47Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute)
Feb 20 03:30:06 localhost podman[88939]: 2026-02-20 08:30:06.556567785 +0000 UTC m=+0.188512565 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, name=rhosp-rhel9/openstack-cron, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, batch=17.1_20260112.1, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.created=2026-01-12T22:10:15Z, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, build-date=2026-01-12T22:10:15Z, config_id=tripleo_step4, container_name=logrotate_crond, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64)
Feb 20 03:30:06 localhost podman[88939]: 2026-02-20 08:30:06.596621123 +0000 UTC m=+0.228565833 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, build-date=2026-01-12T22:10:15Z, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, io.buildah.version=1.41.5, batch=17.1_20260112.1, release=1766032510, config_id=tripleo_step4, name=rhosp-rhel9/openstack-cron, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, 
vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z)
Feb 20 03:30:06 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully.
Feb 20 03:30:06 localhost podman[88940]: 2026-02-20 08:30:06.609029054 +0000 UTC m=+0.236610888 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:07:30Z, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:07:30Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ceilometer-ipmi, version=17.1.13, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, release=1766032510, io.buildah.version=1.41.5, io.openshift.expose-services=)
Feb 20 03:30:06 localhost podman[88938]: 2026-02-20 08:30:06.63437491 +0000 UTC m=+0.265363205 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, architecture=x86_64, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:07:47Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ceilometer-compute, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public)
Feb 20 03:30:06 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. 
Feb 20 03:30:06 localhost podman[88940]: 2026-02-20 08:30:06.692735845 +0000 UTC m=+0.320317719 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, version=17.1.13, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:30Z, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, build-date=2026-01-12T23:07:30Z, name=rhosp-rhel9/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, vcs-type=git, managed_by=tripleo_ansible)
Feb 20 03:30:06 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully.
Feb 20 03:30:06 localhost podman[88937]: 2026-02-20 08:30:06.746792626 +0000 UTC m=+0.378575872 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2026-01-12T22:10:14Z, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, name=rhosp-rhel9/openstack-qdrouterd, batch=17.1_20260112.1, io.buildah.version=1.41.5, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, tcib_managed=true, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc.)
Feb 20 03:30:06 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully.
Feb 20 03:30:07 localhost systemd[1]: tmp-crun.5PTj5D.mount: Deactivated successfully.
Feb 20 03:30:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.
Feb 20 03:30:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. 
Feb 20 03:30:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.
Feb 20 03:30:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.
Feb 20 03:30:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.
Feb 20 03:30:10 localhost podman[89030]: 2026-02-20 08:30:10.452106293 +0000 UTC m=+0.089673371 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.13, batch=17.1_20260112.1, container_name=ovn_metadata_agent, build-date=2026-01-12T22:56:19Z, tcib_managed=true, config_id=tripleo_step4, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, release=1766032510, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Feb 20 03:30:10 localhost podman[89032]: 2026-02-20 08:30:10.504349005 +0000 UTC m=+0.137261190 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-type=git, summary=Red Hat 
OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.openshift.expose-services=, version=17.1.13, name=rhosp-rhel9/openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, build-date=2026-01-12T22:36:40Z, container_name=ovn_controller, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller)
Feb 20 03:30:10 localhost systemd[1]: tmp-crun.YRTVWk.mount: Deactivated successfully. 
Feb 20 03:30:10 localhost podman[89031]: 2026-02-20 08:30:10.55479888 +0000 UTC m=+0.190279723 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, build-date=2026-01-12T22:34:43Z, maintainer=OpenStack TripleO Team, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, 
com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, architecture=x86_64, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, version=17.1.13, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, tcib_managed=true, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid)
Feb 20 03:30:10 localhost podman[89032]: 2026-02-20 08:30:10.561675104 +0000 UTC m=+0.194587229 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.13, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20260112.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:36:40Z, container_name=ovn_controller, name=rhosp-rhel9/openstack-ovn-controller, io.buildah.version=1.41.5, config_id=tripleo_step4, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, distribution-scope=public, maintainer=OpenStack TripleO Team)
Feb 20 03:30:10 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Deactivated successfully. 
Feb 20 03:30:10 localhost podman[89039]: 2026-02-20 08:30:10.600278423 +0000 UTC m=+0.227554508 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, build-date=2026-01-12T23:32:04Z, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, version=17.1.13, distribution-scope=public, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:32:04Z) Feb 20 03:30:10 localhost podman[89031]: 2026-02-20 08:30:10.617205534 +0000 UTC m=+0.252686297 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:34:43Z, managed_by=tripleo_ansible, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, 
container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, name=rhosp-rhel9/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, url=https://www.redhat.com, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc.) 
Feb 20 03:30:10 localhost podman[89030]: 2026-02-20 08:30:10.626184593 +0000 UTC m=+0.263751671 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, batch=17.1_20260112.1, url=https://www.redhat.com, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, build-date=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1766032510, io.buildah.version=1.41.5, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f) Feb 20 03:30:10 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. Feb 20 03:30:10 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. 
Feb 20 03:30:10 localhost podman[89039]: 2026-02-20 08:30:10.65873361 +0000 UTC m=+0.286009675 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, batch=17.1_20260112.1, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, config_id=tripleo_step5, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, build-date=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z) Feb 20 03:30:10 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. 
Feb 20 03:30:10 localhost podman[89033]: 2026-02-20 08:30:10.710540922 +0000 UTC m=+0.340238720 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.41.5, batch=17.1_20260112.1, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd) Feb 20 03:30:10 localhost podman[89033]: 2026-02-20 08:30:10.723692572 +0000 UTC m=+0.353390360 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20260112.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.buildah.version=1.41.5, managed_by=tripleo_ansible, io.openshift.expose-services=, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, name=rhosp-rhel9/openstack-collectd, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-collectd, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1766032510, tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:10:15Z, container_name=collectd) Feb 20 03:30:10 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. 
Feb 20 03:30:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:30:13 localhost podman[89144]: 2026-02-20 08:30:13.443965882 +0000 UTC m=+0.082442539 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red 
Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, vcs-type=git, architecture=x86_64, build-date=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, version=17.1.13, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, release=1766032510, managed_by=tripleo_ansible) Feb 20 03:30:13 localhost podman[89144]: 2026-02-20 08:30:13.841362124 +0000 UTC m=+0.479838831 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, 
name=rhosp-rhel9/openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, architecture=x86_64, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container) Feb 20 03:30:13 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:30:14 localhost sshd[89166]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:30:32 localhost sshd[89168]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:30:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. 
Feb 20 03:30:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:30:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:30:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:30:37 localhost systemd[1]: tmp-crun.kipVHV.mount: Deactivated successfully. Feb 20 03:30:37 localhost podman[89170]: 2026-02-20 08:30:37.4479383 +0000 UTC m=+0.088120090 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, build-date=2026-01-12T22:10:14Z, batch=17.1_20260112.1, release=1766032510, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1) Feb 20 03:30:37 localhost podman[89171]: 2026-02-20 08:30:37.500556442 +0000 UTC m=+0.138436130 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, architecture=x86_64, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, release=1766032510, tcib_managed=true, managed_by=tripleo_ansible, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-compute, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2026-01-12T23:07:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, url=https://www.redhat.com, vendor=Red Hat, Inc.) 
Feb 20 03:30:37 localhost podman[89172]: 2026-02-20 08:30:37.555952109 +0000 UTC m=+0.191069903 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.expose-services=, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, distribution-scope=public, build-date=2026-01-12T22:10:15Z) Feb 20 03:30:37 localhost podman[89172]: 2026-02-20 08:30:37.566759267 +0000 UTC m=+0.201877011 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.expose-services=, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, tcib_managed=true, name=rhosp-rhel9/openstack-cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, release=1766032510) Feb 20 03:30:37 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. 
Feb 20 03:30:37 localhost podman[89171]: 2026-02-20 08:30:37.608173711 +0000 UTC m=+0.246053429 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-compute, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ceilometer_agent_compute, io.openshift.expose-services=, url=https://www.redhat.com, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2026-01-12T23:07:47Z, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, release=1766032510, vcs-type=git, version=17.1.13, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:30:37 localhost podman[89173]: 2026-02-20 08:30:37.619962755 +0000 UTC m=+0.251488814 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, version=17.1.13, com.redhat.component=openstack-ceilometer-ipmi-container, release=1766032510, io.buildah.version=1.41.5, architecture=x86_64, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., vcs-type=git) Feb 20 03:30:37 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. 
Feb 20 03:30:37 localhost podman[89170]: 2026-02-20 08:30:37.662815347 +0000 UTC m=+0.302997117 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, version=17.1.13, release=1766032510, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, build-date=2026-01-12T22:10:14Z, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public) Feb 20 03:30:37 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:30:37 localhost podman[89173]: 2026-02-20 08:30:37.719799416 +0000 UTC m=+0.351325465 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, version=17.1.13, url=https://www.redhat.com, io.openshift.expose-services=, release=1766032510, name=rhosp-rhel9/openstack-ceilometer-ipmi, batch=17.1_20260112.1, architecture=x86_64, config_id=tripleo_step4, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., 
build-date=2026-01-12T23:07:30Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:30Z) Feb 20 03:30:37 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. Feb 20 03:30:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:30:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. 
Feb 20 03:30:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:30:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:30:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:30:41 localhost podman[89272]: 2026-02-20 08:30:41.430587307 +0000 UTC m=+0.065291771 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., version=17.1.13, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, config_id=tripleo_step4, distribution-scope=public, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.openshift.expose-services=, 
managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Feb 20 03:30:41 localhost podman[89279]: 2026-02-20 08:30:41.464296686 +0000 UTC m=+0.088225012 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, vcs-type=git, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1766032510, managed_by=tripleo_ansible, architecture=x86_64, build-date=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-collectd, 
org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, batch=17.1_20260112.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., config_id=tripleo_step3, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, container_name=collectd, version=17.1.13, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd) Feb 20 03:30:41 localhost podman[89272]: 2026-02-20 08:30:41.515630614 +0000 UTC m=+0.150335088 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, distribution-scope=public, container_name=ovn_metadata_agent, build-date=2026-01-12T22:56:19Z, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, io.buildah.version=1.41.5, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, config_id=tripleo_step4, managed_by=tripleo_ansible, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 20 03:30:41 localhost podman[89279]: 2026-02-20 08:30:41.522433276 +0000 UTC m=+0.146361652 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp-rhel9/openstack-collectd, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, tcib_managed=true, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, batch=17.1_20260112.1, vendor=Red Hat, Inc., url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, build-date=2026-01-12T22:10:15Z, release=1766032510, version=17.1.13, architecture=x86_64) Feb 20 03:30:41 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. Feb 20 03:30:41 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:30:41 localhost podman[89274]: 2026-02-20 08:30:41.495724723 +0000 UTC m=+0.122380212 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, tcib_managed=true, managed_by=tripleo_ansible, config_id=tripleo_step4, vendor=Red Hat, Inc., container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:36:40Z, maintainer=OpenStack TripleO Team, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, name=rhosp-rhel9/openstack-ovn-controller, vcs-type=git, io.buildah.version=1.41.5, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Feb 20 03:30:41 localhost podman[89273]: 2026-02-20 08:30:41.498896658 +0000 UTC m=+0.131920107 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, tcib_managed=true, container_name=iscsid, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, vcs-ref=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, name=rhosp-rhel9/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.41.5, build-date=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, vendor=Red Hat, Inc., batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-type=git, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com) Feb 20 03:30:41 localhost podman[89274]: 2026-02-20 08:30:41.574469882 +0000 UTC m=+0.201125411 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 
(image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, name=rhosp-rhel9/openstack-ovn-controller, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, batch=17.1_20260112.1, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:36:40Z, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, 
release=1766032510, architecture=x86_64) Feb 20 03:30:41 localhost podman[89273]: 2026-02-20 08:30:41.581936862 +0000 UTC m=+0.214960321 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2026-01-12T22:34:43Z, version=17.1.13, com.redhat.component=openstack-iscsid-container, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:34:43Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, tcib_managed=true, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3) Feb 20 03:30:41 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Deactivated successfully. Feb 20 03:30:41 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. Feb 20 03:30:41 localhost podman[89285]: 2026-02-20 08:30:41.523177755 +0000 UTC m=+0.142262253 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, batch=17.1_20260112.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, 
org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, vcs-type=git, config_id=tripleo_step5, container_name=nova_compute, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Feb 20 03:30:41 localhost podman[89285]: 2026-02-20 08:30:41.653448278 +0000 UTC m=+0.272532786 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.5, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, container_name=nova_compute, release=1766032510, version=17.1.13, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step5) Feb 20 03:30:41 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. Feb 20 03:30:42 localhost systemd[1]: tmp-crun.nX2CFL.mount: Deactivated successfully. 
Feb 20 03:30:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:30:44 localhost podman[89384]: 2026-02-20 08:30:44.437094377 +0000 UTC m=+0.076120500 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2026-01-12T23:32:04Z, container_name=nova_migration_target, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, architecture=x86_64, 
com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, url=https://www.redhat.com, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, name=rhosp-rhel9/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 20 03:30:44 localhost podman[89384]: 2026-02-20 08:30:44.837056278 +0000 UTC m=+0.476082411 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, io.buildah.version=1.41.5, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, url=https://www.redhat.com, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true) Feb 20 03:30:44 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. 
Feb 20 03:31:03 localhost sshd[89577]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:31:04 localhost sshd[89579]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:31:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:31:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:31:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:31:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:31:08 localhost systemd[1]: tmp-crun.ISjnmP.mount: Deactivated successfully. Feb 20 03:31:08 localhost podman[89583]: 2026-02-20 08:31:08.464561533 +0000 UTC m=+0.096279247 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-cron, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.13, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, container_name=logrotate_crond, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
com.redhat.component=openstack-cron-container, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.5, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team) Feb 20 03:31:08 localhost podman[89583]: 2026-02-20 08:31:08.503791958 +0000 UTC m=+0.135509702 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, release=1766032510, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, container_name=logrotate_crond, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, distribution-scope=public, vendor=Red Hat, Inc., url=https://www.redhat.com, name=rhosp-rhel9/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 
17.1 cron, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1) Feb 20 03:31:08 localhost systemd[1]: tmp-crun.1b715U.mount: Deactivated successfully. Feb 20 03:31:08 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. Feb 20 03:31:08 localhost podman[89581]: 2026-02-20 08:31:08.518638005 +0000 UTC m=+0.153542804 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, container_name=metrics_qdr, name=rhosp-rhel9/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, 
vendor=Red Hat, Inc., config_id=tripleo_step1, release=1766032510, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:14Z, url=https://www.redhat.com, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true) Feb 20 03:31:08 localhost podman[89582]: 2026-02-20 08:31:08.558844006 +0000 UTC m=+0.191742452 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp-rhel9/openstack-ceilometer-compute, build-date=2026-01-12T23:07:47Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:07:47Z, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5) Feb 20 03:31:08 localhost podman[89584]: 2026-02-20 08:31:08.615881936 +0000 UTC m=+0.243095841 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-ipmi, version=17.1.13, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, architecture=x86_64, 
url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:30Z, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container) Feb 20 03:31:08 localhost podman[89582]: 2026-02-20 08:31:08.619705779 +0000 UTC m=+0.252604235 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:07:47Z, release=1766032510, version=17.1.13, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, build-date=2026-01-12T23:07:47Z, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 20 03:31:08 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. 
Feb 20 03:31:08 localhost podman[89584]: 2026-02-20 08:31:08.645834735 +0000 UTC m=+0.273048660 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, config_id=tripleo_step4, tcib_managed=true, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:07:30Z, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
name=rhosp-rhel9/openstack-ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi) Feb 20 03:31:08 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. Feb 20 03:31:08 localhost podman[89581]: 2026-02-20 08:31:08.706769759 +0000 UTC m=+0.341674508 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, batch=17.1_20260112.1, version=17.1.13, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, vcs-type=git, config_id=tripleo_step1, 
io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, container_name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:31:08 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:31:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:31:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:31:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. 
Feb 20 03:31:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:31:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:31:12 localhost systemd[1]: tmp-crun.D4gW6e.mount: Deactivated successfully. Feb 20 03:31:12 localhost podman[89687]: 2026-02-20 08:31:12.461964484 +0000 UTC m=+0.094078308 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 
build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, architecture=x86_64, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.openshift.expose-services=, url=https://www.redhat.com, managed_by=tripleo_ansible, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5) Feb 20 03:31:12 localhost systemd[1]: tmp-crun.9v4klO.mount: Deactivated successfully. 
Feb 20 03:31:12 localhost podman[89686]: 2026-02-20 08:31:12.50870164 +0000 UTC m=+0.140851125 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, batch=17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, release=1766032510, url=https://www.redhat.com, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_id=tripleo_step4, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:36:40Z, name=rhosp-rhel9/openstack-ovn-controller, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:36:40Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 20 03:31:12 localhost podman[89687]: 2026-02-20 08:31:12.526640118 +0000 UTC m=+0.158753952 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, architecture=x86_64, batch=17.1_20260112.1, managed_by=tripleo_ansible, version=17.1.13, name=rhosp-rhel9/openstack-collectd, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, container_name=collectd, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team) Feb 20 03:31:12 localhost podman[89686]: 2026-02-20 08:31:12.534770475 +0000 UTC m=+0.166919960 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, batch=17.1_20260112.1, distribution-scope=public, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.created=2026-01-12T22:36:40Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, release=1766032510, vcs-type=git, container_name=ovn_controller, vendor=Red Hat, Inc., tcib_managed=true, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, name=rhosp-rhel9/openstack-ovn-controller, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, build-date=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:31:12 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:31:12 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Deactivated successfully. 
Feb 20 03:31:12 localhost podman[89684]: 2026-02-20 08:31:12.611161511 +0000 UTC m=+0.248558396 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, version=17.1.13, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, container_name=ovn_metadata_agent, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:56:19Z, tcib_managed=true) Feb 20 03:31:12 localhost podman[89688]: 2026-02-20 08:31:12.664272837 +0000 UTC m=+0.293291558 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:32:04Z, build-date=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, config_id=tripleo_step5, vcs-type=git, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-compute-container, 
container_name=nova_compute, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, 
cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, url=https://www.redhat.com, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible) Feb 20 03:31:12 localhost podman[89685]: 2026-02-20 08:31:12.718040971 +0000 UTC m=+0.355076916 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, container_name=iscsid, build-date=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, batch=17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, tcib_managed=true, com.redhat.component=openstack-iscsid-container, release=1766032510, vcs-ref=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, io.openshift.expose-services=, name=rhosp-rhel9/openstack-iscsid, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, vcs-type=git, url=https://www.redhat.com) Feb 20 03:31:12 localhost podman[89684]: 2026-02-20 08:31:12.737424116 +0000 UTC m=+0.374821031 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, config_id=tripleo_step4, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1766032510, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, vcs-type=git, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.5, managed_by=tripleo_ansible, batch=17.1_20260112.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 20 03:31:12 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. Feb 20 03:31:12 localhost podman[89685]: 2026-02-20 08:31:12.755813407 +0000 UTC m=+0.392849342 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20260112.1, distribution-scope=public, name=rhosp-rhel9/openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, url=https://www.redhat.com, tcib_managed=true, version=17.1.13, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=iscsid, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:34:43Z, build-date=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, release=1766032510, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid) Feb 20 03:31:12 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. 
Feb 20 03:31:12 localhost podman[89688]: 2026-02-20 08:31:12.792470844 +0000 UTC m=+0.421489585 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', 
'/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, version=17.1.13, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step5, vcs-type=git, build-date=2026-01-12T23:32:04Z, batch=17.1_20260112.1, io.buildah.version=1.41.5, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible) Feb 20 03:31:12 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. Feb 20 03:31:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:31:15 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 20 03:31:15 localhost recover_tripleo_nova_virtqemud[89794]: 63703 Feb 20 03:31:15 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 20 03:31:15 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Feb 20 03:31:15 localhost podman[89792]: 2026-02-20 08:31:15.440276973 +0000 UTC m=+0.082610043 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.13, name=rhosp-rhel9/openstack-nova-compute, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 
nova-compute, build-date=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, architecture=x86_64, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, tcib_managed=true, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Feb 20 03:31:15 localhost podman[89792]: 2026-02-20 08:31:15.813995445 +0000 UTC m=+0.456328465 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, build-date=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, release=1766032510, 
io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute) Feb 20 03:31:15 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:31:28 localhost sshd[89817]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:31:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:31:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. 
Feb 20 03:31:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:31:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:31:39 localhost systemd[1]: tmp-crun.UyEKvy.mount: Deactivated successfully. Feb 20 03:31:39 localhost podman[89819]: 2026-02-20 08:31:39.471766106 +0000 UTC m=+0.106073509 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true, config_id=tripleo_step1, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, release=1766032510, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': 
False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:31:39 localhost podman[89820]: 2026-02-20 08:31:39.514097874 +0000 UTC m=+0.145210332 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, version=17.1.13, build-date=2026-01-12T23:07:47Z, distribution-scope=public, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:07:47Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vcs-type=git, 
name=rhosp-rhel9/openstack-ceilometer-compute, release=1766032510, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:31:39 localhost podman[89820]: 2026-02-20 08:31:39.541592467 +0000 UTC m=+0.172704975 container exec_died 
8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:47Z, container_name=ceilometer_agent_compute, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, io.buildah.version=1.41.5, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, config_id=tripleo_step4, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, build-date=2026-01-12T23:07:47Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:31:39 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. Feb 20 03:31:39 localhost podman[89821]: 2026-02-20 08:31:39.573761134 +0000 UTC m=+0.201634555 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, release=1766032510, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-cron, distribution-scope=public, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': 
{'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=logrotate_crond, vcs-type=git, managed_by=tripleo_ansible, config_id=tripleo_step4, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 cron) Feb 20 03:31:39 localhost podman[89822]: 2026-02-20 08:31:39.605064048 +0000 UTC m=+0.230297369 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp-rhel9/openstack-ceilometer-ipmi, tcib_managed=true, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:30Z, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, batch=17.1_20260112.1, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_ipmi, distribution-scope=public, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat 
OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13) Feb 20 03:31:39 localhost podman[89821]: 2026-02-20 08:31:39.65762973 +0000 UTC m=+0.285503171 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=logrotate_crond, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1766032510, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, build-date=2026-01-12T22:10:15Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron) Feb 20 03:31:39 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. Feb 20 03:31:39 localhost podman[89819]: 2026-02-20 08:31:39.698235433 +0000 UTC m=+0.332542786 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, name=rhosp-rhel9/openstack-qdrouterd, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., vcs-type=git, version=17.1.13, release=1766032510) Feb 20 03:31:39 localhost podman[89822]: 2026-02-20 08:31:39.709597515 +0000 UTC m=+0.334830896 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:07:30Z, config_id=tripleo_step4, version=17.1.13, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ceilometer-ipmi, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:07:30Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container) Feb 20 03:31:39 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:31:39 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. Feb 20 03:31:40 localhost systemd[1]: tmp-crun.wecJhP.mount: Deactivated successfully. Feb 20 03:31:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:31:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:31:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:31:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:31:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. 
Feb 20 03:31:43 localhost podman[89924]: 2026-02-20 08:31:43.459560571 +0000 UTC m=+0.080270910 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, container_name=collectd, io.openshift.expose-services=, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:15Z, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, release=1766032510, version=17.1.13, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:31:43 localhost systemd[1]: tmp-crun.qdVr6V.mount: Deactivated successfully. 
Feb 20 03:31:43 localhost podman[89921]: 2026-02-20 08:31:43.515477201 +0000 UTC m=+0.151627262 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, vcs-type=git, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:56:19Z, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, tcib_managed=true, batch=17.1_20260112.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4) Feb 20 03:31:43 localhost podman[89924]: 2026-02-20 08:31:43.552611951 +0000 UTC m=+0.173322180 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, version=17.1.13, url=https://www.redhat.com, io.openshift.expose-services=, config_id=tripleo_step3, container_name=collectd, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.5, io.k8s.description=Red 
Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd) Feb 20 03:31:43 localhost sshd[90003]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:31:43 localhost podman[89923]: 2026-02-20 08:31:43.563556382 +0000 UTC m=+0.195936633 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, build-date=2026-01-12T22:36:40Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.5, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, batch=17.1_20260112.1, architecture=x86_64, config_id=tripleo_step4, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:36:40Z, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, managed_by=tripleo_ansible, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc.) Feb 20 03:31:43 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:31:43 localhost podman[89935]: 2026-02-20 08:31:43.534745765 +0000 UTC m=+0.154647983 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:32:04Z, batch=17.1_20260112.1, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, config_id=tripleo_step5, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=nova_compute, vcs-type=git) Feb 20 03:31:43 
localhost podman[89921]: 2026-02-20 08:31:43.607689909 +0000 UTC m=+0.243839910 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, url=https://www.redhat.com, distribution-scope=public, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, release=1766032510, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, version=17.1.13, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 20 03:31:43 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. 
Feb 20 03:31:43 localhost podman[89923]: 2026-02-20 08:31:43.640558706 +0000 UTC m=+0.272938967 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, version=17.1.13, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, tcib_managed=true, name=rhosp-rhel9/openstack-ovn-controller, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.5, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, io.openshift.expose-services=, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, managed_by=tripleo_ansible) Feb 20 03:31:43 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Deactivated successfully. Feb 20 03:31:43 localhost podman[89922]: 2026-02-20 08:31:43.612099597 +0000 UTC m=+0.247458538 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:34:43Z, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, batch=17.1_20260112.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, distribution-scope=public, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., release=1766032510, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, config_id=tripleo_step3, container_name=iscsid, name=rhosp-rhel9/openstack-iscsid) Feb 20 03:31:43 localhost podman[89922]: 2026-02-20 08:31:43.69136217 +0000 UTC m=+0.326721091 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://www.redhat.com, name=rhosp-rhel9/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.created=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, tcib_managed=true, vcs-type=git, architecture=x86_64) Feb 20 03:31:43 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. 
Feb 20 03:31:43 localhost podman[89935]: 2026-02-20 08:31:43.714886846 +0000 UTC m=+0.334789064 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, build-date=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 
'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com) Feb 20 03:31:43 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. Feb 20 03:31:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. 
Feb 20 03:31:46 localhost podman[90032]: 2026-02-20 08:31:46.441544156 +0000 UTC m=+0.080369983 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, url=https://www.redhat.com, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, tcib_managed=true) Feb 20 03:31:46 localhost podman[90032]: 2026-02-20 08:31:46.810454649 +0000 UTC m=+0.449280506 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, container_name=nova_migration_target, managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-nova-compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git) Feb 20 03:31:46 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:32:02 localhost sshd[90154]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:32:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:32:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. 
Feb 20 03:32:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:32:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:32:10 localhost podman[90157]: 2026-02-20 08:32:10.457028167 +0000 UTC m=+0.088046167 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, version=17.1.13, container_name=ceilometer_agent_compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, architecture=x86_64, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ceilometer-compute, build-date=2026-01-12T23:07:47Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, io.buildah.version=1.41.5) Feb 20 03:32:10 localhost podman[90156]: 2026-02-20 08:32:10.511958372 +0000 UTC m=+0.144510414 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, distribution-scope=public, vcs-type=git, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step1, io.buildah.version=1.41.5, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:32:10 localhost podman[90157]: 2026-02-20 08:32:10.512827194 +0000 UTC m=+0.143845204 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, tcib_managed=true, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:47Z, distribution-scope=public, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ceilometer-compute, release=1766032510, architecture=x86_64, io.buildah.version=1.41.5) Feb 20 03:32:10 localhost podman[90158]: 2026-02-20 08:32:10.561662047 +0000 UTC m=+0.190504799 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, release=1766032510, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-cron, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.openshift.expose-services=, version=17.1.13, container_name=logrotate_crond, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:32:10 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. 
Feb 20 03:32:10 localhost podman[90159]: 2026-02-20 08:32:10.611228787 +0000 UTC m=+0.234843410 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, tcib_managed=true, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, distribution-scope=public, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2026-01-12T23:07:30Z, vendor=Red Hat, Inc., io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, architecture=x86_64) Feb 20 03:32:10 localhost podman[90159]: 2026-02-20 08:32:10.642793259 +0000 UTC m=+0.266407892 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, url=https://www.redhat.com, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.created=2026-01-12T23:07:30Z, release=1766032510, io.openshift.expose-services=, vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5, config_id=tripleo_step4) Feb 20 03:32:10 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. 
Feb 20 03:32:10 localhost podman[90158]: 2026-02-20 08:32:10.69723514 +0000 UTC m=+0.326078342 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, architecture=x86_64, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vendor=Red Hat, Inc., io.buildah.version=1.41.5, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:15Z, 
distribution-scope=public, container_name=logrotate_crond, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 20 03:32:10 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. Feb 20 03:32:10 localhost podman[90156]: 2026-02-20 08:32:10.729816709 +0000 UTC m=+0.362368741 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, tcib_managed=true, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 20 03:32:10 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:32:13 localhost sshd[90255]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:32:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:32:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. 
Feb 20 03:32:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:32:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:32:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:32:14 localhost systemd[1]: tmp-crun.4r1nfd.mount: Deactivated successfully. Feb 20 03:32:14 localhost podman[90259]: 2026-02-20 08:32:14.448362056 +0000 UTC m=+0.078999906 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:36:40Z, distribution-scope=public, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, 
com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:36:40Z, url=https://www.redhat.com, batch=17.1_20260112.1, io.buildah.version=1.41.5, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, container_name=ovn_controller, managed_by=tripleo_ansible) Feb 20 03:32:14 localhost podman[90259]: 2026-02-20 08:32:14.497662841 +0000 UTC m=+0.128300711 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, name=rhosp-rhel9/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 
org.opencontainers.image.created=2026-01-12T22:36:40Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, container_name=ovn_controller, build-date=2026-01-12T22:36:40Z, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, release=1766032510, managed_by=tripleo_ansible, io.buildah.version=1.41.5) Feb 20 03:32:14 localhost podman[90272]: 2026-02-20 08:32:14.505133749 +0000 UTC m=+0.132306867 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step5, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, vcs-type=git, description=Red Hat OpenStack Platform 
17.1 nova-compute, build-date=2026-01-12T23:32:04Z, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.41.5, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute) Feb 20 03:32:14 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Deactivated successfully. Feb 20 03:32:14 localhost podman[90265]: 2026-02-20 08:32:14.513837622 +0000 UTC m=+0.138324718 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., container_name=collectd, architecture=x86_64, url=https://www.redhat.com, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, version=17.1.13, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.5, tcib_managed=true, com.redhat.component=openstack-collectd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.openshift.expose-services=) Feb 20 03:32:14 localhost podman[90265]: 2026-02-20 08:32:14.522709279 +0000 UTC m=+0.147196345 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1766032510, 
io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.13, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:32:14 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:32:14 localhost podman[90258]: 2026-02-20 08:32:14.594335687 +0000 UTC m=+0.235033226 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:34:43Z, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20260112.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, distribution-scope=public, container_name=iscsid, release=1766032510, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-iscsid) Feb 20 03:32:14 localhost podman[90272]: 2026-02-20 08:32:14.626327441 +0000 UTC m=+0.253500599 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, url=https://www.redhat.com, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.13, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 
17.1_20260112.1, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.5, build-date=2026-01-12T23:32:04Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute, release=1766032510, managed_by=tripleo_ansible, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute) Feb 20 03:32:14 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. Feb 20 03:32:14 localhost podman[90258]: 2026-02-20 08:32:14.678419679 +0000 UTC m=+0.319117228 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.expose-services=, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-iscsid, distribution-scope=public, batch=17.1_20260112.1, com.redhat.component=openstack-iscsid-container, build-date=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.buildah.version=1.41.5) Feb 20 03:32:14 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. 
Feb 20 03:32:14 localhost podman[90257]: 2026-02-20 08:32:14.754686842 +0000 UTC m=+0.391116916 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, architecture=x86_64, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, release=1766032510) Feb 20 03:32:14 localhost podman[90257]: 2026-02-20 08:32:14.797790441 +0000 UTC m=+0.434220565 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.41.5, batch=17.1_20260112.1, architecture=x86_64, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:56:19Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, 
io.openshift.expose-services=, version=17.1.13, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Feb 20 03:32:14 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. Feb 20 03:32:15 localhost systemd[1]: tmp-crun.3rZWeC.mount: Deactivated successfully. Feb 20 03:32:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:32:17 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 20 03:32:17 localhost recover_tripleo_nova_virtqemud[90376]: 63703 Feb 20 03:32:17 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 20 03:32:17 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 20 03:32:17 localhost systemd[1]: tmp-crun.puy2FS.mount: Deactivated successfully. Feb 20 03:32:17 localhost podman[90369]: 2026-02-20 08:32:17.43890023 +0000 UTC m=+0.082238954 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, architecture=x86_64, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:32:04Z, version=17.1.13, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-type=git, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Feb 20 03:32:17 localhost podman[90369]: 2026-02-20 08:32:17.8347277 +0000 UTC m=+0.478066384 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, 
container_name=nova_migration_target, name=rhosp-rhel9/openstack-nova-compute, build-date=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1766032510, vcs-type=git, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Feb 20 03:32:17 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:32:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:32:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:32:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:32:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:32:41 localhost systemd[1]: tmp-crun.XF4c5l.mount: Deactivated successfully. 
Feb 20 03:32:41 localhost podman[90394]: 2026-02-20 08:32:41.512890192 +0000 UTC m=+0.141582315 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, name=rhosp-rhel9/openstack-ceilometer-compute, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2026-01-12T23:07:47Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, com.redhat.component=openstack-ceilometer-compute-container, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, vcs-type=git, tcib_managed=true, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:32:41 localhost podman[90393]: 2026-02-20 08:32:41.464452091 +0000 UTC m=+0.097879870 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, container_name=metrics_qdr, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-qdrouterd, release=1766032510, managed_by=tripleo_ansible, tcib_managed=true, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2026-01-12T22:10:14Z, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_id=tripleo_step1, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, distribution-scope=public, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:32:41 localhost podman[90394]: 2026-02-20 08:32:41.542693796 +0000 UTC m=+0.171385889 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ceilometer-compute, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, io.openshift.expose-services=, build-date=2026-01-12T23:07:47Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, architecture=x86_64, io.buildah.version=1.41.5, url=https://www.redhat.com, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, 
org.opencontainers.image.created=2026-01-12T23:07:47Z, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13) Feb 20 03:32:41 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. Feb 20 03:32:41 localhost podman[90395]: 2026-02-20 08:32:41.568915595 +0000 UTC m=+0.195379178 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.expose-services=, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:15Z, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, 
distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-cron, architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=logrotate_crond, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-cron-container) Feb 20 03:32:41 localhost podman[90397]: 2026-02-20 08:32:41.597091916 +0000 UTC m=+0.224544196 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, version=17.1.13, build-date=2026-01-12T23:07:30Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, vendor=Red Hat, Inc., 
com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.created=2026-01-12T23:07:30Z, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, distribution-scope=public, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1) Feb 20 03:32:41 localhost podman[90397]: 2026-02-20 08:32:41.624539098 +0000 UTC m=+0.251991328 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, tcib_managed=true, managed_by=tripleo_ansible, vcs-type=git, name=rhosp-rhel9/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.buildah.version=1.41.5, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:30Z, vendor=Red Hat, Inc., architecture=x86_64, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 20 03:32:41 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. Feb 20 03:32:41 localhost podman[90393]: 2026-02-20 08:32:41.662796068 +0000 UTC m=+0.296223847 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, version=17.1.13, distribution-scope=public, vendor=Red Hat, Inc., managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:14Z, tcib_managed=true, vcs-type=git, batch=17.1_20260112.1, config_id=tripleo_step1, io.buildah.version=1.41.5, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:32:41 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. 
Feb 20 03:32:41 localhost podman[90395]: 2026-02-20 08:32:41.677815888 +0000 UTC m=+0.304279501 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, build-date=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, io.buildah.version=1.41.5, batch=17.1_20260112.1, release=1766032510, container_name=logrotate_crond, name=rhosp-rhel9/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible) Feb 20 03:32:41 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. Feb 20 03:32:42 localhost systemd[1]: tmp-crun.xHntVO.mount: Deactivated successfully. Feb 20 03:32:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:32:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:32:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:32:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:32:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:32:45 localhost systemd[1]: tmp-crun.achYYH.mount: Deactivated successfully. 
Feb 20 03:32:45 localhost podman[90493]: 2026-02-20 08:32:45.438419309 +0000 UTC m=+0.079398438 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20260112.1, url=https://www.redhat.com, distribution-scope=public, com.redhat.component=openstack-iscsid-container, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step3, build-date=2026-01-12T22:34:43Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, container_name=iscsid, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Feb 20 03:32:45 localhost systemd[1]: tmp-crun.PPbupk.mount: Deactivated successfully. Feb 20 03:32:45 localhost podman[90494]: 2026-02-20 08:32:45.451779614 +0000 UTC m=+0.085908860 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-type=git, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, container_name=ovn_controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 
17.1 ovn-controller, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, build-date=2026-01-12T22:36:40Z, tcib_managed=true, name=rhosp-rhel9/openstack-ovn-controller, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, vendor=Red Hat, Inc., io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:32:45 localhost podman[90492]: 2026-02-20 08:32:45.484008103 +0000 UTC m=+0.124594332 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, build-date=2026-01-12T22:56:19Z, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:56:19Z, batch=17.1_20260112.1, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, release=1766032510, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:32:45 localhost podman[90505]: 2026-02-20 08:32:45.495309145 +0000 UTC m=+0.124710606 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp-rhel9/openstack-collectd, version=17.1.13, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, container_name=collectd, build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, vcs-type=git, managed_by=tripleo_ansible, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:32:45 localhost podman[90494]: 2026-02-20 08:32:45.524309607 +0000 UTC m=+0.158438893 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:36:40Z, release=1766032510, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:36:40Z, vcs-type=git, name=rhosp-rhel9/openstack-ovn-controller, io.buildah.version=1.41.5, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 20 03:32:45 localhost podman[90505]: 2026-02-20 08:32:45.533809351 +0000 UTC m=+0.163210752 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, 
org.opencontainers.image.created=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-collectd, url=https://www.redhat.com, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, release=1766032510, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=collectd, architecture=x86_64, config_id=tripleo_step3, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., 
io.buildah.version=1.41.5, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1) Feb 20 03:32:45 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Deactivated successfully. Feb 20 03:32:45 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:32:45 localhost podman[90493]: 2026-02-20 08:32:45.575097321 +0000 UTC m=+0.216076520 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:34:43Z, url=https://www.redhat.com, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-iscsid, release=1766032510, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, vcs-type=git, build-date=2026-01-12T22:34:43Z, container_name=iscsid, io.openshift.expose-services=, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Feb 20 03:32:45 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. 
Feb 20 03:32:45 localhost podman[90492]: 2026-02-20 08:32:45.587700367 +0000 UTC m=+0.228286606 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-type=git, 
vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, config_id=tripleo_step4, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Feb 20 03:32:45 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. 
Feb 20 03:32:45 localhost podman[90506]: 2026-02-20 08:32:45.661271398 +0000 UTC m=+0.289001564 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, org.opencontainers.image.created=2026-01-12T23:32:04Z, tcib_managed=true, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.buildah.version=1.41.5, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-compute-container, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, container_name=nova_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, managed_by=tripleo_ansible, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:32:45 localhost podman[90506]: 2026-02-20 08:32:45.688785102 +0000 UTC m=+0.316515258 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_id=tripleo_step5, container_name=nova_compute, vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, batch=17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, tcib_managed=true, architecture=x86_64, 
org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:32:45 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. Feb 20 03:32:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:32:48 localhost systemd[1]: tmp-crun.25TopV.mount: Deactivated successfully. 
Feb 20 03:32:48 localhost podman[90599]: 2026-02-20 08:32:48.446079258 +0000 UTC m=+0.085219302 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, vendor=Red Hat, Inc., tcib_managed=true, managed_by=tripleo_ansible, 
version=17.1.13, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, name=rhosp-rhel9/openstack-nova-compute, build-date=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, container_name=nova_migration_target, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:32:48 localhost podman[90599]: 2026-02-20 08:32:48.851474934 +0000 UTC m=+0.490614988 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, container_name=nova_migration_target, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-compute, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., architecture=x86_64, 
vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_id=tripleo_step4) Feb 20 03:32:48 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. 
Feb 20 03:32:50 localhost podman[90723]: 2026-02-20 08:32:50.264587731 +0000 UTC m=+0.090756700 container exec 17bcd3d9a6436ea04160ed4c4e04d839e51c8678402dc4e38301f79a98b3dc30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, ceph=True, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, distribution-scope=public, GIT_CLEAN=True, io.buildah.version=1.42.2, version=7, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347) Feb 20 03:32:50 localhost podman[90723]: 2026-02-20 08:32:50.360685643 +0000 UTC m=+0.186854612 container exec_died 17bcd3d9a6436ea04160ed4c4e04d839e51c8678402dc4e38301f79a98b3dc30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, 
GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, description=Red Hat Ceph Storage 7, release=1770267347, vcs-type=git, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, name=rhceph, io.buildah.version=1.42.2, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph) Feb 20 03:32:53 localhost sshd[90863]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:33:00 localhost sshd[90887]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:33:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:33:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:33:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:33:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. 
Feb 20 03:33:12 localhost podman[90891]: 2026-02-20 08:33:12.455686807 +0000 UTC m=+0.089102736 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:47Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-compute, managed_by=tripleo_ansible, release=1766032510, version=17.1.13, vcs-type=git, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.created=2026-01-12T23:07:47Z, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute) Feb 20 03:33:12 localhost podman[90891]: 2026-02-20 08:33:12.486742435 +0000 UTC m=+0.120158294 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:07:47Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5, batch=17.1_20260112.1, container_name=ceilometer_agent_compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, distribution-scope=public, vcs-type=git, version=17.1.13, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:33:12 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. 
Feb 20 03:33:12 localhost podman[90892]: 2026-02-20 08:33:12.503132742 +0000 UTC m=+0.134949598 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, io.openshift.expose-services=, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, build-date=2026-01-12T22:10:15Z, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 20 03:33:12 localhost systemd[1]: tmp-crun.7Br4vb.mount: Deactivated successfully. Feb 20 03:33:12 localhost podman[90892]: 2026-02-20 08:33:12.544765722 +0000 UTC m=+0.176582608 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-type=git, batch=17.1_20260112.1, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, container_name=logrotate_crond, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2026-01-12T22:10:15Z, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-cron, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Feb 20 03:33:12 localhost podman[90890]: 2026-02-20 08:33:12.55222403 +0000 UTC m=+0.186257275 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp-rhel9/openstack-qdrouterd, container_name=metrics_qdr, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, release=1766032510, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, build-date=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 20 03:33:12 localhost systemd[1]: 
df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. Feb 20 03:33:12 localhost podman[90893]: 2026-02-20 08:33:12.614022767 +0000 UTC m=+0.241566339 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:30Z, vendor=Red Hat, Inc., version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, release=1766032510, build-date=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true) Feb 20 03:33:12 localhost podman[90893]: 2026-02-20 08:33:12.669026984 +0000 UTC m=+0.296570536 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, version=17.1.13, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-ipmi, 
container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.created=2026-01-12T23:07:30Z, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 20 03:33:12 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. 
Feb 20 03:33:12 localhost podman[90890]: 2026-02-20 08:33:12.752621402 +0000 UTC m=+0.386654667 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, url=https://www.redhat.com, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.13, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr) Feb 20 03:33:12 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:33:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:33:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:33:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:33:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:33:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. 
Feb 20 03:33:16 localhost podman[90991]: 2026-02-20 08:33:16.443101532 +0000 UTC m=+0.083558738 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, release=1766032510, managed_by=tripleo_ansible, version=17.1.13, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, architecture=x86_64, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, vendor=Red Hat, Inc.) 
Feb 20 03:33:16 localhost podman[91000]: 2026-02-20 08:33:16.46661644 +0000 UTC m=+0.095775645 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, container_name=nova_compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-compute, release=1766032510, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, config_id=tripleo_step5, vendor=Red Hat, Inc., build-date=2026-01-12T23:32:04Z, tcib_managed=true, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, version=17.1.13) Feb 20 03:33:16 localhost podman[90992]: 2026-02-20 08:33:16.50453661 +0000 UTC m=+0.142437908 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.expose-services=, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, build-date=2026-01-12T22:34:43Z, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, tcib_managed=true, name=rhosp-rhel9/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, container_name=iscsid, batch=17.1_20260112.1, vcs-type=git) Feb 20 
03:33:16 localhost podman[90992]: 2026-02-20 08:33:16.539016969 +0000 UTC m=+0.176918217 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, batch=17.1_20260112.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, version=17.1.13, tcib_managed=true, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:34:43Z, url=https://www.redhat.com, distribution-scope=public, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=) Feb 20 03:33:16 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. Feb 20 03:33:16 localhost podman[91000]: 2026-02-20 08:33:16.553259909 +0000 UTC m=+0.182419154 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, release=1766032510, container_name=nova_compute, version=17.1.13, config_id=tripleo_step5, distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, batch=17.1_20260112.1, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, 
com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:32:04Z, vendor=Red Hat, Inc.) Feb 20 03:33:16 localhost podman[90993]: 2026-02-20 08:33:16.553557387 +0000 UTC m=+0.188336902 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, vendor=Red Hat, Inc., vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, version=17.1.13, container_name=ovn_controller, vcs-type=git, batch=17.1_20260112.1, release=1766032510, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, 
summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, build-date=2026-01-12T22:36:40Z, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public) Feb 20 03:33:16 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. Feb 20 03:33:16 localhost podman[90991]: 2026-02-20 08:33:16.628656878 +0000 UTC m=+0.269114134 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, distribution-scope=public, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2026-01-12T22:56:19Z, tcib_managed=true, release=1766032510, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step4, vendor=Red Hat, Inc., version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Feb 20 03:33:16 localhost podman[90993]: 2026-02-20 08:33:16.637775342 +0000 UTC m=+0.272554837 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, 
name=ovn_controller, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2026-01-12T22:36:40Z, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, release=1766032510, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, architecture=x86_64, config_id=tripleo_step4, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, container_name=ovn_controller, vcs-type=git, batch=17.1_20260112.1, version=17.1.13, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:36:40Z) Feb 20 03:33:16 localhost 
systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. Feb 20 03:33:16 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Deactivated successfully. Feb 20 03:33:16 localhost podman[90994]: 2026-02-20 08:33:16.606494198 +0000 UTC m=+0.239035153 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, architecture=x86_64, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, container_name=collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, tcib_managed=true, url=https://www.redhat.com, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, vcs-type=git, batch=17.1_20260112.1) Feb 20 03:33:16 localhost podman[90994]: 2026-02-20 08:33:16.689809248 +0000 UTC m=+0.322350223 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
com.redhat.component=openstack-collectd-container, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, url=https://www.redhat.com, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.13, io.buildah.version=1.41.5, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, vcs-type=git) Feb 
20 03:33:16 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:33:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:33:19 localhost podman[91102]: 2026-02-20 08:33:19.437267642 +0000 UTC m=+0.081396670 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, tcib_managed=true, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, url=https://www.redhat.com, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, version=17.1.13, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-compute, vendor=Red Hat, Inc., container_name=nova_migration_target, config_id=tripleo_step4, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 20 03:33:19 localhost podman[91102]: 2026-02-20 08:33:19.787916089 +0000 UTC m=+0.432045087 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, tcib_managed=true) Feb 20 03:33:19 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:33:42 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... 
Feb 20 03:33:42 localhost recover_tripleo_nova_virtqemud[91125]: 63703 Feb 20 03:33:42 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 20 03:33:42 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 20 03:33:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:33:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:33:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:33:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:33:43 localhost systemd[1]: tmp-crun.nAPEMQ.mount: Deactivated successfully. Feb 20 03:33:43 localhost podman[91127]: 2026-02-20 08:33:43.45844338 +0000 UTC m=+0.093255366 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2026-01-12T23:07:47Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:47Z, com.redhat.component=openstack-ceilometer-compute-container, release=1766032510, vendor=Red Hat, Inc., tcib_managed=true, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64) Feb 20 03:33:43 localhost podman[91127]: 2026-02-20 08:33:43.488728848 +0000 UTC m=+0.123540874 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, 
name=ceilometer_agent_compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, name=rhosp-rhel9/openstack-ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:07:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, release=1766032510, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, 
url=https://www.redhat.com, version=17.1.13, build-date=2026-01-12T23:07:47Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, batch=17.1_20260112.1, managed_by=tripleo_ansible, io.openshift.expose-services=) Feb 20 03:33:43 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. Feb 20 03:33:43 localhost podman[91126]: 2026-02-20 08:33:43.542759837 +0000 UTC m=+0.179971147 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, batch=17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, version=17.1.13, build-date=2026-01-12T22:10:14Z, vcs-type=git, release=1766032510, tcib_managed=true, io.openshift.expose-services=, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:33:43 localhost podman[91128]: 2026-02-20 08:33:43.49897025 +0000 UTC m=+0.129389150 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-cron, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.13, release=1766032510, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 20 03:33:43 localhost podman[91128]: 2026-02-20 
08:33:43.581975523 +0000 UTC m=+0.212394453 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-cron, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, build-date=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red 
Hat OpenStack Platform 17.1 cron, release=1766032510, io.openshift.expose-services=, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, tcib_managed=true) Feb 20 03:33:43 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. Feb 20 03:33:43 localhost podman[91129]: 2026-02-20 08:33:43.621065665 +0000 UTC m=+0.248692310 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.5, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.13, distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.created=2026-01-12T23:07:30Z, build-date=2026-01-12T23:07:30Z, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, release=1766032510, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com) Feb 20 03:33:43 localhost podman[91129]: 2026-02-20 08:33:43.676757629 +0000 UTC m=+0.304384264 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:07:30Z, version=17.1.13, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container) Feb 20 03:33:43 localhost 
systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. Feb 20 03:33:43 localhost podman[91126]: 2026-02-20 08:33:43.741749571 +0000 UTC m=+0.378960931 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, build-date=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20260112.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 
qdrouterd, container_name=metrics_qdr, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vcs-type=git, io.openshift.expose-services=, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, release=1766032510, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:33:43 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:33:46 localhost sshd[91229]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:33:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:33:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:33:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:33:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:33:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. 
Feb 20 03:33:47 localhost podman[91246]: 2026-02-20 08:33:47.467606095 +0000 UTC m=+0.091387797 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, container_name=nova_compute, release=1766032510, vendor=Red Hat, Inc., version=17.1.13, org.opencontainers.image.created=2026-01-12T23:32:04Z) Feb 20 03:33:47 localhost podman[91231]: 2026-02-20 08:33:47.500563553 +0000 UTC m=+0.138119792 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, version=17.1.13, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:56:19Z, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, io.buildah.version=1.41.5, batch=17.1_20260112.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, url=https://www.redhat.com, architecture=x86_64, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f) Feb 20 03:33:47 localhost podman[91232]: 2026-02-20 08:33:47.550959307 +0000 UTC m=+0.188793494 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:34:43Z, batch=17.1_20260112.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': 
True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, io.buildah.version=1.41.5, distribution-scope=public, name=rhosp-rhel9/openstack-iscsid, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13) Feb 20 03:33:47 localhost podman[91232]: 2026-02-20 08:33:47.587698406 +0000 UTC m=+0.225532603 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.created=2026-01-12T22:34:43Z, url=https://www.redhat.com, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, distribution-scope=public, container_name=iscsid, vcs-type=git, config_id=tripleo_step3, tcib_managed=true, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, build-date=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc.) Feb 20 03:33:47 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. 
Feb 20 03:33:47 localhost podman[91233]: 2026-02-20 08:33:47.607302169 +0000 UTC m=+0.240166123 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, name=rhosp-rhel9/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:36:40Z, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.5, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, io.openshift.expose-services=, build-date=2026-01-12T22:36:40Z, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 
ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, architecture=x86_64) Feb 20 03:33:47 localhost podman[91246]: 2026-02-20 08:33:47.624038625 +0000 UTC m=+0.247820357 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step5, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, tcib_managed=true, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, batch=17.1_20260112.1, vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13) Feb 20 03:33:47 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. 
Feb 20 03:33:47 localhost podman[91233]: 2026-02-20 08:33:47.660822895 +0000 UTC m=+0.293686819 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, container_name=ovn_controller, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, io.buildah.version=1.41.5, release=1766032510, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:36:40Z, tcib_managed=true, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, url=https://www.redhat.com, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:36:40Z) Feb 20 03:33:47 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Deactivated successfully. Feb 20 03:33:47 localhost podman[91231]: 2026-02-20 08:33:47.67789971 +0000 UTC m=+0.315455979 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, config_id=tripleo_step4, vendor=Red Hat, Inc., build-date=2026-01-12T22:56:19Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, tcib_managed=true, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, version=17.1.13, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent) Feb 20 03:33:47 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. 
Feb 20 03:33:47 localhost podman[91234]: 2026-02-20 08:33:47.762627998 +0000 UTC m=+0.390539710 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, url=https://www.redhat.com, io.buildah.version=1.41.5, release=1766032510, vendor=Red Hat, Inc., vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, config_id=tripleo_step3, managed_by=tripleo_ansible, io.openshift.expose-services=, container_name=collectd, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Feb 20 03:33:47 localhost podman[91234]: 2026-02-20 08:33:47.771731731 +0000 UTC m=+0.399643403 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, release=1766032510, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-collectd, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, architecture=x86_64, build-date=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Feb 20 03:33:47 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:33:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. 
Feb 20 03:33:50 localhost systemd[1]: tmp-crun.UfG5bn.mount: Deactivated successfully. Feb 20 03:33:50 localhost podman[91335]: 2026-02-20 08:33:50.445414778 +0000 UTC m=+0.085641515 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2026-01-12T23:32:04Z, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-compute, container_name=nova_migration_target, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container) Feb 20 03:33:50 localhost podman[91335]: 2026-02-20 08:33:50.822005225 +0000 UTC m=+0.462231952 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20260112.1, architecture=x86_64, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, name=rhosp-rhel9/openstack-nova-compute, release=1766032510, com.redhat.component=openstack-nova-compute-container, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, 
org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=) Feb 20 03:33:50 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:34:02 localhost sshd[91458]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:34:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. 
Feb 20 03:34:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:34:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:34:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:34:14 localhost systemd[1]: tmp-crun.UMSXuO.mount: Deactivated successfully. Feb 20 03:34:14 localhost podman[91461]: 2026-02-20 08:34:14.471752613 +0000 UTC m=+0.102150214 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, batch=17.1_20260112.1, release=1766032510, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:07:47Z, url=https://www.redhat.com, distribution-scope=public, io.buildah.version=1.41.5, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute) Feb 20 03:34:14 localhost podman[91461]: 2026-02-20 08:34:14.499633036 +0000 UTC m=+0.130030637 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vendor=Red Hat, Inc., version=17.1.13, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-compute, io.openshift.expose-services=, 
com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, build-date=2026-01-12T23:07:47Z, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:34:14 localhost podman[91460]: 2026-02-20 08:34:14.513147366 +0000 UTC m=+0.143659180 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr, name=rhosp-rhel9/openstack-qdrouterd, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20260112.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
io.buildah.version=1.41.5, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:34:14 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. Feb 20 03:34:14 localhost podman[91463]: 2026-02-20 08:34:14.552883585 +0000 UTC m=+0.178518939 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_id=tripleo_step4, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, distribution-scope=public, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., vcs-type=git, summary=Red Hat OpenStack 
Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, architecture=x86_64, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible) Feb 20 03:34:14 localhost podman[91462]: 2026-02-20 08:34:14.614911689 +0000 UTC m=+0.242672820 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, tcib_managed=true, name=rhosp-rhel9/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., batch=17.1_20260112.1, version=17.1.13, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, distribution-scope=public, 
io.buildah.version=1.41.5, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, release=1766032510, build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:34:14 localhost podman[91462]: 2026-02-20 08:34:14.627910555 +0000 UTC m=+0.255671646 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-cron, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 cron, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, org.opencontainers.image.created=2026-01-12T22:10:15Z, architecture=x86_64, url=https://www.redhat.com, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, vcs-type=git, com.redhat.component=openstack-cron-container) Feb 20 03:34:14 localhost podman[91463]: 2026-02-20 08:34:14.637889601 +0000 UTC m=+0.263524985 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, release=1766032510, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:30Z, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.41.5, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, name=rhosp-rhel9/openstack-ceilometer-ipmi) Feb 20 03:34:14 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. Feb 20 03:34:14 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. 
Feb 20 03:34:14 localhost podman[91460]: 2026-02-20 08:34:14.702612247 +0000 UTC m=+0.333123991 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, version=17.1.13, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.created=2026-01-12T22:10:14Z, release=1766032510, io.openshift.expose-services=) Feb 20 03:34:14 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:34:17 localhost sshd[91559]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:34:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:34:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:34:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:34:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. 
Feb 20 03:34:17 localhost podman[91562]: 2026-02-20 08:34:17.78692972 +0000 UTC m=+0.086433936 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step5, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_compute, tcib_managed=true, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, build-date=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, release=1766032510, io.openshift.expose-services=) Feb 20 03:34:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. 
Feb 20 03:34:17 localhost podman[91562]: 2026-02-20 08:34:17.842731687 +0000 UTC m=+0.142235863 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, version=17.1.13, build-date=2026-01-12T23:32:04Z, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, release=1766032510, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.buildah.version=1.41.5, config_id=tripleo_step5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:32:04Z, container_name=nova_compute, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:34:17 localhost systemd[1]: tmp-crun.hE8ALG.mount: Deactivated successfully. 
Feb 20 03:34:17 localhost podman[91561]: 2026-02-20 08:34:17.852282171 +0000 UTC m=+0.152393562 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.buildah.version=1.41.5, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, batch=17.1_20260112.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., tcib_managed=true, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, release=1766032510, build-date=2026-01-12T22:34:43Z, distribution-scope=public, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, maintainer=OpenStack TripleO Team) Feb 20 03:34:17 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. Feb 20 03:34:17 localhost podman[91561]: 2026-02-20 08:34:17.937690618 +0000 UTC m=+0.237802049 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-iscsid, release=1766032510, build-date=2026-01-12T22:34:43Z, vcs-ref=705339545363fec600102567c4e923938e0f43b3, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, batch=17.1_20260112.1, container_name=iscsid, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.13, vendor=Red Hat, Inc., architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, konflux.additional-tags=17.1.13 
17.1_20260112.1, tcib_managed=true, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Feb 20 03:34:17 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. 
Feb 20 03:34:17 localhost podman[91574]: 2026-02-20 08:34:17.951213379 +0000 UTC m=+0.235961321 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, version=17.1.13, architecture=x86_64, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f) Feb 20 03:34:17 localhost podman[91563]: 2026-02-20 08:34:17.90590544 +0000 UTC m=+0.196626872 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, name=rhosp-rhel9/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, vendor=Red Hat, Inc., 
release=1766032510, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.openshift.expose-services=, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, container_name=ovn_controller, build-date=2026-01-12T22:36:40Z, org.opencontainers.image.created=2026-01-12T22:36:40Z, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 20 03:34:17 localhost podman[91617]: 2026-02-20 08:34:17.929557571 +0000 UTC m=+0.092683301 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, 
vcs-type=git, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, version=17.1.13) Feb 20 03:34:17 localhost podman[91563]: 2026-02-20 08:34:17.987929867 +0000 UTC m=+0.278651359 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.buildah.version=1.41.5, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20260112.1, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, managed_by=tripleo_ansible, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.created=2026-01-12T22:36:40Z, build-date=2026-01-12T22:36:40Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, url=https://www.redhat.com, vcs-type=git, version=17.1.13, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ovn-controller) Feb 20 03:34:17 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Deactivated successfully. Feb 20 03:34:18 localhost podman[91617]: 2026-02-20 08:34:18.013762256 +0000 UTC m=+0.176887986 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.13, config_id=tripleo_step3, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, io.buildah.version=1.41.5, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1, architecture=x86_64) Feb 20 03:34:18 localhost podman[91574]: 2026-02-20 08:34:18.02479656 +0000 UTC m=+0.309544592 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.41.5, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, managed_by=tripleo_ansible, maintainer=OpenStack 
TripleO Team, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1766032510, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, distribution-scope=public, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64) Feb 20 03:34:18 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:34:18 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. Feb 20 03:34:20 localhost sshd[91674]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:34:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. 
Feb 20 03:34:21 localhost podman[91676]: 2026-02-20 08:34:21.439560011 +0000 UTC m=+0.080482206 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_id=tripleo_step4, version=17.1.13, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:34:21 localhost podman[91676]: 2026-02-20 08:34:21.778693561 +0000 UTC m=+0.419615776 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, 
Inc., io.openshift.expose-services=, io.buildah.version=1.41.5, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, architecture=x86_64, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 20 03:34:21 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:34:31 localhost sshd[91699]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:34:33 localhost sshd[91701]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:34:35 localhost sshd[91703]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:34:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. 
Feb 20 03:34:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:34:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:34:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:34:45 localhost systemd[1]: tmp-crun.g7IQNr.mount: Deactivated successfully. Feb 20 03:34:45 localhost podman[91705]: 2026-02-20 08:34:45.460461072 +0000 UTC m=+0.097246912 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_id=tripleo_step1, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, build-date=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, version=17.1.13, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, vcs-type=git, release=1766032510) Feb 20 03:34:45 localhost podman[91706]: 2026-02-20 08:34:45.525555598 +0000 UTC m=+0.153764990 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, architecture=x86_64, io.openshift.expose-services=, batch=17.1_20260112.1, 
org.opencontainers.image.created=2026-01-12T23:07:47Z, url=https://www.redhat.com, io.buildah.version=1.41.5, tcib_managed=true, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2026-01-12T23:07:47Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:34:45 localhost podman[91707]: 2026-02-20 08:34:45.575654923 +0000 UTC m=+0.201792459 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, version=17.1.13, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, container_name=logrotate_crond, io.buildah.version=1.41.5, vcs-type=git, url=https://www.redhat.com, config_id=tripleo_step4, distribution-scope=public, build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20260112.1) Feb 20 03:34:45 localhost podman[91707]: 2026-02-20 08:34:45.583471931 +0000 UTC m=+0.209609447 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, io.buildah.version=1.41.5, version=17.1.13, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp-rhel9/openstack-cron, build-date=2026-01-12T22:10:15Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.k8s.description=Red 
Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:34:45 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. 
Feb 20 03:34:45 localhost podman[91706]: 2026-02-20 08:34:45.604186654 +0000 UTC m=+0.232396086 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, build-date=2026-01-12T23:07:47Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, name=rhosp-rhel9/openstack-ceilometer-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public) Feb 20 03:34:45 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. Feb 20 03:34:45 localhost podman[91705]: 2026-02-20 08:34:45.657821943 +0000 UTC m=+0.294607813 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-qdrouterd, build-date=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20260112.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 
qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1766032510, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 20 03:34:45 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. 
Feb 20 03:34:45 localhost podman[91713]: 2026-02-20 08:34:45.659634722 +0000 UTC m=+0.283746865 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, release=1766032510, config_id=tripleo_step4, build-date=2026-01-12T23:07:30Z, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, version=17.1.13, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, url=https://www.redhat.com) Feb 20 03:34:45 localhost podman[91713]: 2026-02-20 08:34:45.739336576 +0000 UTC m=+0.363448719 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, batch=17.1_20260112.1, config_id=tripleo_step4, io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, version=17.1.13, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp-rhel9/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, build-date=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:34:45 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. Feb 20 03:34:46 localhost systemd[1]: tmp-crun.IEiQUE.mount: Deactivated successfully. Feb 20 03:34:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:34:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. 
Feb 20 03:34:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:34:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:34:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:34:48 localhost systemd[1]: tmp-crun.alqB0j.mount: Deactivated successfully. Feb 20 03:34:48 localhost podman[91805]: 2026-02-20 08:34:48.457098539 +0000 UTC m=+0.091777158 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=705339545363fec600102567c4e923938e0f43b3, release=1766032510, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, build-date=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 
'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step3, vcs-type=git, tcib_managed=true, vendor=Red Hat, Inc., org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid, architecture=x86_64, com.redhat.component=openstack-iscsid-container, distribution-scope=public) Feb 20 03:34:48 localhost podman[91805]: 2026-02-20 08:34:48.489911043 +0000 UTC m=+0.124589632 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, container_name=iscsid, distribution-scope=public, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, 
build-date=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp-rhel9/openstack-iscsid, release=1766032510, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, maintainer=OpenStack TripleO Team) Feb 20 03:34:48 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. 
Feb 20 03:34:48 localhost podman[91806]: 2026-02-20 08:34:48.502446487 +0000 UTC m=+0.132629586 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-type=git, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, 
url=https://www.redhat.com, io.openshift.expose-services=, distribution-scope=public, build-date=2026-01-12T22:36:40Z, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc.) Feb 20 03:34:48 localhost podman[91806]: 2026-02-20 08:34:48.516408409 +0000 UTC m=+0.146591518 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1766032510, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, tcib_managed=true, container_name=ovn_controller, distribution-scope=public, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ovn-controller, architecture=x86_64, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:34:48 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Deactivated successfully. Feb 20 03:34:48 localhost podman[91804]: 2026-02-20 08:34:48.567044239 +0000 UTC m=+0.203275459 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_id=tripleo_step4, build-date=2026-01-12T22:56:19Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, batch=17.1_20260112.1, url=https://www.redhat.com, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, release=1766032510, architecture=x86_64, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 20 03:34:48 localhost podman[91804]: 2026-02-20 08:34:48.605228087 +0000 UTC m=+0.241459257 container exec_died 
34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, architecture=x86_64, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.13, url=https://www.redhat.com, release=1766032510, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-type=git, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z) Feb 20 03:34:48 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. 
Feb 20 03:34:48 localhost podman[91818]: 2026-02-20 08:34:48.60872832 +0000 UTC m=+0.231661725 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, managed_by=tripleo_ansible, version=17.1.13, build-date=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp-rhel9/openstack-nova-compute, batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, tcib_managed=true, container_name=nova_compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 20 03:34:48 localhost podman[91808]: 2026-02-20 08:34:48.667784014 +0000 UTC m=+0.291772088 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, url=https://www.redhat.com, managed_by=tripleo_ansible, distribution-scope=public, vendor=Red Hat, Inc., container_name=collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, name=rhosp-rhel9/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, version=17.1.13, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 
vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5) Feb 20 03:34:48 localhost podman[91808]: 2026-02-20 08:34:48.675003167 +0000 UTC m=+0.298991251 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.5, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z, tcib_managed=true, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, release=1766032510, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': 
'512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp-rhel9/openstack-collectd) Feb 20 03:34:48 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. 
Feb 20 03:34:48 localhost podman[91818]: 2026-02-20 08:34:48.687913072 +0000 UTC m=+0.310846497 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.expose-services=, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, tcib_managed=true, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, config_id=tripleo_step5, release=1766032510, com.redhat.component=openstack-nova-compute-container, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, managed_by=tripleo_ansible, io.buildah.version=1.41.5, build-date=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:34:48 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. Feb 20 03:34:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. 
Feb 20 03:34:52 localhost podman[91916]: 2026-02-20 08:34:52.445260494 +0000 UTC m=+0.084935896 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, version=17.1.13, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.5, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, vcs-type=git, release=1766032510, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64) Feb 20 03:34:52 localhost podman[91916]: 2026-02-20 08:34:52.777972633 +0000 UTC m=+0.417647975 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2026-01-12T23:32:04Z, release=1766032510, url=https://www.redhat.com, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.5, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_migration_target, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, tcib_managed=true, name=rhosp-rhel9/openstack-nova-compute, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 20 03:34:52 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:35:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:35:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. 
Feb 20 03:35:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:35:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:35:16 localhost podman[92019]: 2026-02-20 08:35:16.430932316 +0000 UTC m=+0.067696535 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.buildah.version=1.41.5, distribution-scope=public, build-date=2026-01-12T23:07:47Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510) Feb 20 03:35:16 localhost podman[92021]: 2026-02-20 08:35:16.488158111 +0000 UTC m=+0.117497363 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, url=https://www.redhat.com, version=17.1.13, build-date=2026-01-12T23:07:30Z, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., tcib_managed=true) Feb 20 03:35:16 localhost podman[92019]: 2026-02-20 08:35:16.508075572 +0000 UTC m=+0.144839781 container exec_died 
8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.5, version=17.1.13, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, container_name=ceilometer_agent_compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.expose-services=, batch=17.1_20260112.1, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z) Feb 20 03:35:16 localhost podman[92021]: 2026-02-20 08:35:16.539921651 +0000 UTC m=+0.169260863 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat 
OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, url=https://www.redhat.com, architecture=x86_64, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:07:30Z, io.openshift.expose-services=, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, release=1766032510, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4) Feb 20 03:35:16 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. 
Feb 20 03:35:16 localhost podman[92020]: 2026-02-20 08:35:16.550525074 +0000 UTC m=+0.183789330 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, vendor=Red Hat, Inc., container_name=logrotate_crond, name=rhosp-rhel9/openstack-cron, release=1766032510, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:35:16 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. Feb 20 03:35:16 localhost podman[92020]: 2026-02-20 08:35:16.587839088 +0000 UTC m=+0.221103334 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, version=17.1.13, vcs-type=git, tcib_managed=true, com.redhat.component=openstack-cron-container, release=1766032510, url=https://www.redhat.com, io.buildah.version=1.41.5, managed_by=tripleo_ansible, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp-rhel9/openstack-cron, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron) Feb 20 03:35:16 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. 
Feb 20 03:35:16 localhost podman[92018]: 2026-02-20 08:35:16.6505689 +0000 UTC m=+0.285071939 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., container_name=metrics_qdr, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, version=17.1.13, url=https://www.redhat.com, vcs-type=git, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1) Feb 20 03:35:16 localhost sshd[92116]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:35:16 localhost podman[92018]: 2026-02-20 08:35:16.852362199 +0000 UTC m=+0.486865268 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, url=https://www.redhat.com, managed_by=tripleo_ansible, batch=17.1_20260112.1, vendor=Red Hat, Inc., io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, build-date=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, 
io.openshift.expose-services=, name=rhosp-rhel9/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step1, version=17.1.13) Feb 20 03:35:16 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:35:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:35:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:35:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. 
Feb 20 03:35:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:35:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:35:19 localhost systemd[1]: tmp-crun.GQ1r2P.mount: Deactivated successfully. Feb 20 03:35:19 localhost podman[92121]: 2026-02-20 08:35:19.472772741 +0000 UTC m=+0.096753481 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_id=tripleo_step4, container_name=ovn_controller, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.expose-services=, build-date=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, name=rhosp-rhel9/openstack-ovn-controller, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:36:40Z, batch=17.1_20260112.1, release=1766032510, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 
'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 20 03:35:19 localhost systemd[1]: tmp-crun.OwbQtQ.mount: Deactivated successfully. Feb 20 03:35:19 localhost podman[92120]: 2026-02-20 08:35:19.513196358 +0000 UTC m=+0.142871309 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-iscsid, version=17.1.13, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:34:43Z, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.5, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, vcs-ref=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:35:19 localhost podman[92121]: 2026-02-20 08:35:19.517304708 +0000 UTC m=+0.141285498 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, batch=17.1_20260112.1, 
description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:36:40Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, name=rhosp-rhel9/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, distribution-scope=public, container_name=ovn_controller, tcib_managed=true, version=17.1.13, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:36:40Z, url=https://www.redhat.com) Feb 20 03:35:19 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Deactivated successfully. 
Feb 20 03:35:19 localhost podman[92122]: 2026-02-20 08:35:19.568449061 +0000 UTC m=+0.187640083 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.13, config_id=tripleo_step3, release=1766032510, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:15Z, architecture=x86_64, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, vendor=Red Hat, Inc., tcib_managed=true, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd) Feb 20 03:35:19 localhost podman[92120]: 2026-02-20 08:35:19.621845694 +0000 UTC m=+0.251520635 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.13, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:34:43Z, batch=17.1_20260112.1, distribution-scope=public, managed_by=tripleo_ansible, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, release=1766032510, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.openshift.expose-services=, tcib_managed=true, build-date=2026-01-12T22:34:43Z, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Feb 20 03:35:19 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. 
Feb 20 03:35:19 localhost podman[92119]: 2026-02-20 08:35:19.624158686 +0000 UTC m=+0.253188971 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, url=https://www.redhat.com, io.buildah.version=1.41.5, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:56:19Z, architecture=x86_64, version=17.1.13, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, release=1766032510, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:35:19 localhost podman[92122]: 2026-02-20 08:35:19.706132841 +0000 UTC m=+0.325323933 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, 
managed_by=tripleo_ansible, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, container_name=collectd, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp-rhel9/openstack-collectd, release=1766032510, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 
03:35:19 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:35:19 localhost podman[92134]: 2026-02-20 08:35:19.676663235 +0000 UTC m=+0.289317063 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.13, url=https://www.redhat.com, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, config_id=tripleo_step5, io.buildah.version=1.41.5, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z) Feb 20 03:35:19 localhost podman[92134]: 2026-02-20 08:35:19.759156535 +0000 UTC m=+0.371810433 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 
'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, io.buildah.version=1.41.5, url=https://www.redhat.com, config_id=tripleo_step5, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team) Feb 20 03:35:19 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. Feb 20 03:35:19 localhost podman[92119]: 2026-02-20 08:35:19.810666267 +0000 UTC m=+0.439696502 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, architecture=x86_64, build-date=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:56:19Z, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20260112.1, release=1766032510, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, tcib_managed=true, 
container_name=ovn_metadata_agent, version=17.1.13) Feb 20 03:35:19 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. Feb 20 03:35:22 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 20 03:35:22 localhost recover_tripleo_nova_virtqemud[92229]: 63703 Feb 20 03:35:22 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 20 03:35:22 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 20 03:35:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:35:23 localhost podman[92230]: 2026-02-20 08:35:23.435733459 +0000 UTC m=+0.074502496 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, managed_by=tripleo_ansible, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, distribution-scope=public, 
description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.13, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z) Feb 20 03:35:23 localhost podman[92230]: 2026-02-20 08:35:23.769067295 +0000 UTC m=+0.407836402 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., 
com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, version=17.1.13, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step4, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, 
batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Feb 20 03:35:23 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:35:32 localhost sshd[92252]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:35:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:35:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:35:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:35:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:35:47 localhost podman[92254]: 2026-02-20 08:35:47.449833494 +0000 UTC m=+0.089128717 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp-rhel9/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:14Z, vcs-type=git, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, version=17.1.13, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, io.buildah.version=1.41.5, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container) Feb 20 03:35:47 localhost podman[92256]: 2026-02-20 08:35:47.505888198 +0000 UTC m=+0.140786334 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, 
io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:15Z, config_id=tripleo_step4, version=17.1.13, release=1766032510, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp-rhel9/openstack-cron, batch=17.1_20260112.1, container_name=logrotate_crond) Feb 20 03:35:47 localhost podman[92256]: 2026-02-20 08:35:47.518824683 +0000 UTC m=+0.153722839 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-cron, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, com.redhat.component=openstack-cron-container, org.opencontainers.image.created=2026-01-12T22:10:15Z, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, io.openshift.expose-services=) Feb 20 03:35:47 localhost podman[92257]: 2026-02-20 08:35:47.562842846 +0000 UTC m=+0.194790264 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20260112.1, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, release=1766032510, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-type=git, io.buildah.version=1.41.5, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, version=17.1.13, build-date=2026-01-12T23:07:30Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=) Feb 20 03:35:47 localhost podman[92255]: 2026-02-20 08:35:47.610586339 +0000 UTC m=+0.247108598 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, 
com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:47Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, url=https://www.redhat.com, version=17.1.13, batch=17.1_20260112.1, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, build-date=2026-01-12T23:07:47Z, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Feb 20 03:35:47 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. Feb 20 03:35:47 localhost podman[92254]: 2026-02-20 08:35:47.643702362 +0000 UTC m=+0.282997615 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, vendor=Red Hat, Inc., url=https://www.redhat.com, version=17.1.13, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.expose-services=, release=1766032510, build-date=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, tcib_managed=true, batch=17.1_20260112.1) Feb 20 03:35:47 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. 
Feb 20 03:35:47 localhost podman[92257]: 2026-02-20 08:35:47.663968322 +0000 UTC m=+0.295915740 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-ipmi, version=17.1.13, build-date=2026-01-12T23:07:30Z, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, vcs-type=git, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 20 03:35:47 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. Feb 20 03:35:47 localhost podman[92255]: 2026-02-20 08:35:47.685192008 +0000 UTC m=+0.321714257 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, name=rhosp-rhel9/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true, release=1766032510, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., config_id=tripleo_step4, build-date=2026-01-12T23:07:47Z, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 20 03:35:47 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. Feb 20 03:35:48 localhost systemd[1]: tmp-crun.ffcqiB.mount: Deactivated successfully. Feb 20 03:35:50 localhost sshd[92353]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:35:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. 
Feb 20 03:35:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:35:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:35:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:35:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:35:50 localhost systemd[1]: tmp-crun.9KGf3e.mount: Deactivated successfully. Feb 20 03:35:50 localhost podman[92356]: 2026-02-20 08:35:50.507158842 +0000 UTC m=+0.138660858 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-type=git, config_id=tripleo_step4, release=1766032510, io.buildah.version=1.41.5, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:36:40Z, url=https://www.redhat.com, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ovn-controller, io.openshift.expose-services=, build-date=2026-01-12T22:36:40Z, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, tcib_managed=true, managed_by=tripleo_ansible) Feb 20 03:35:50 localhost podman[92362]: 2026-02-20 08:35:50.473161146 +0000 UTC m=+0.097965923 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, version=17.1.13, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, distribution-scope=public, config_id=tripleo_step3, architecture=x86_64, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-collectd, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, container_name=collectd, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team) Feb 20 03:35:50 localhost 
podman[92354]: 2026-02-20 08:35:50.553867117 +0000 UTC m=+0.192324638 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, container_name=ovn_metadata_agent, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_id=tripleo_step4, build-date=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, batch=17.1_20260112.1, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.5, managed_by=tripleo_ansible) Feb 20 03:35:50 localhost podman[92362]: 2026-02-20 08:35:50.557884834 +0000 UTC m=+0.182689611 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, distribution-scope=public, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, 
vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd) Feb 20 03:35:50 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:35:50 localhost podman[92354]: 2026-02-20 08:35:50.597936141 +0000 UTC m=+0.236393712 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, batch=17.1_20260112.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1766032510, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.5, url=https://www.redhat.com, io.openshift.expose-services=, version=17.1.13, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_id=tripleo_step4, distribution-scope=public) Feb 20 03:35:50 localhost podman[92355]: 2026-02-20 08:35:50.603297674 +0000 UTC m=+0.237280295 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, name=rhosp-rhel9/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., container_name=iscsid, io.buildah.version=1.41.5, build-date=2026-01-12T22:34:43Z, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, 
org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:34:43Z, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20260112.1, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64) Feb 20 03:35:50 localhost systemd[1]: 
34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. Feb 20 03:35:50 localhost podman[92356]: 2026-02-20 08:35:50.608353079 +0000 UTC m=+0.239855015 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, release=1766032510, name=rhosp-rhel9/openstack-ovn-controller, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2026-01-12T22:36:40Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, vcs-type=git, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, 
org.opencontainers.image.created=2026-01-12T22:36:40Z, distribution-scope=public, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, config_id=tripleo_step4) Feb 20 03:35:50 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Deactivated successfully. Feb 20 03:35:50 localhost podman[92367]: 2026-02-20 08:35:50.529238891 +0000 UTC m=+0.151567182 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, distribution-scope=public, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, build-date=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, config_id=tripleo_step5, vcs-type=git, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510) Feb 20 03:35:50 localhost podman[92355]: 2026-02-20 08:35:50.662175814 +0000 UTC m=+0.296158435 container exec_died 
47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, com.redhat.component=openstack-iscsid-container, name=rhosp-rhel9/openstack-iscsid, tcib_managed=true, release=1766032510, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, config_id=tripleo_step3, batch=17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:34:43Z, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, build-date=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid) Feb 20 03:35:50 localhost podman[92367]: 2026-02-20 08:35:50.662840322 +0000 UTC m=+0.285168593 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_id=tripleo_step5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, batch=17.1_20260112.1, version=17.1.13, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-compute, release=1766032510, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, tcib_managed=true, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Feb 20 03:35:50 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: 
Deactivated successfully. Feb 20 03:35:50 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. Feb 20 03:35:51 localhost systemd[1]: tmp-crun.6reH88.mount: Deactivated successfully. Feb 20 03:35:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:35:54 localhost podman[92463]: 2026-02-20 08:35:54.439209767 +0000 UTC m=+0.079862270 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_id=tripleo_step4, tcib_managed=true, batch=17.1_20260112.1, vcs-type=git, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, managed_by=tripleo_ansible, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:35:54 localhost podman[92463]: 2026-02-20 08:35:54.867930335 +0000 UTC m=+0.508582848 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=nova_migration_target, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 
17.1_20260112.1, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute, managed_by=tripleo_ansible, build-date=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, batch=17.1_20260112.1, io.openshift.expose-services=, distribution-scope=public, url=https://www.redhat.com, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, vcs-type=git) Feb 20 03:35:54 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. 
Feb 20 03:36:02 localhost sshd[92563]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:36:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:36:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:36:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:36:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:36:18 localhost podman[92566]: 2026-02-20 08:36:18.47123989 +0000 UTC m=+0.098402694 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, container_name=ceilometer_agent_compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ceilometer-compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, build-date=2026-01-12T23:07:47Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.13, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, tcib_managed=true, managed_by=tripleo_ansible, release=1766032510, batch=17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=) Feb 20 03:36:18 localhost podman[92566]: 2026-02-20 08:36:18.502782581 +0000 UTC m=+0.129945395 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, release=1766032510, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, build-date=2026-01-12T23:07:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.13, batch=17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:36:18 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. Feb 20 03:36:18 localhost podman[92565]: 2026-02-20 08:36:18.505807562 +0000 UTC m=+0.135268347 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, io.buildah.version=1.41.5, release=1766032510, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, build-date=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, vcs-type=git, managed_by=tripleo_ansible, distribution-scope=public, url=https://www.redhat.com, config_id=tripleo_step1, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 20 03:36:18 localhost podman[92567]: 2026-02-20 08:36:18.570178807 +0000 UTC m=+0.193122428 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vcs-type=git, distribution-scope=public, release=1766032510, build-date=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:36:18 localhost podman[92567]: 2026-02-20 08:36:18.579951998 +0000 UTC m=+0.202895609 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, 
distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, tcib_managed=true, build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, name=rhosp-rhel9/openstack-cron, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1766032510, container_name=logrotate_crond) Feb 20 03:36:18 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. Feb 20 03:36:18 localhost podman[92573]: 2026-02-20 08:36:18.626064467 +0000 UTC m=+0.242015322 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-type=git, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:07:30Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 20 03:36:18 localhost podman[92573]: 2026-02-20 08:36:18.650771186 +0000 UTC m=+0.266722081 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, config_id=tripleo_step4, version=17.1.13, build-date=2026-01-12T23:07:30Z, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, 
com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp-rhel9/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, url=https://www.redhat.com) Feb 20 03:36:18 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. 
Feb 20 03:36:18 localhost podman[92565]: 2026-02-20 08:36:18.699927057 +0000 UTC m=+0.329387822 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, io.openshift.expose-services=, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, batch=17.1_20260112.1, managed_by=tripleo_ansible, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, architecture=x86_64, container_name=metrics_qdr, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, release=1766032510) Feb 20 03:36:18 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:36:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:36:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:36:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:36:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:36:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. 
Feb 20 03:36:21 localhost podman[92665]: 2026-02-20 08:36:21.452288896 +0000 UTC m=+0.091447058 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510, io.buildah.version=1.41.5, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:56:19Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, version=17.1.13, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 20 03:36:21 localhost podman[92666]: 2026-02-20 08:36:21.51055308 +0000 UTC m=+0.144932445 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.buildah.version=1.41.5, vcs-ref=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, 
Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, config_id=tripleo_step3, version=17.1.13, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, url=https://www.redhat.com, vcs-type=git, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:34:43Z, name=rhosp-rhel9/openstack-iscsid, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:36:21 localhost podman[92665]: 2026-02-20 08:36:21.520338401 +0000 UTC m=+0.159496623 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, 
name=ovn_metadata_agent, batch=17.1_20260112.1, io.buildah.version=1.41.5, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, config_id=tripleo_step4, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2026-01-12T22:56:19Z, architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, tcib_managed=true, vcs-type=git) Feb 20 03:36:21 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. Feb 20 03:36:21 localhost systemd[1]: tmp-crun.dfTS1A.mount: Deactivated successfully. 
Feb 20 03:36:21 localhost podman[92667]: 2026-02-20 08:36:21.575443369 +0000 UTC m=+0.207611635 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, tcib_managed=true, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, maintainer=OpenStack TripleO Team, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, version=17.1.13, config_id=tripleo_step4, build-date=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.5, release=1766032510, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com) Feb 20 03:36:21 localhost podman[92673]: 2026-02-20 08:36:21.625317449 +0000 UTC m=+0.252448711 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, distribution-scope=public, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., url=https://www.redhat.com, tcib_managed=true, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.5, release=1766032510, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:15Z, architecture=x86_64, container_name=collectd) Feb 20 03:36:21 localhost podman[92673]: 2026-02-20 08:36:21.635347917 +0000 UTC m=+0.262479169 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, 
build-date=2026-01-12T22:10:15Z, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, name=rhosp-rhel9/openstack-collectd, batch=17.1_20260112.1, release=1766032510, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, io.buildah.version=1.41.5, version=17.1.13) Feb 20 
03:36:21 localhost podman[92679]: 2026-02-20 08:36:21.671533651 +0000 UTC m=+0.292992232 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, batch=17.1_20260112.1, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_id=tripleo_step5, container_name=nova_compute, tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, build-date=2026-01-12T23:32:04Z, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute) Feb 20 03:36:21 localhost podman[92679]: 2026-02-20 08:36:21.69697616 +0000 UTC m=+0.318434701 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, architecture=x86_64, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_id=tripleo_step5, 
tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, build-date=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, container_name=nova_compute, vcs-type=git) Feb 20 03:36:21 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:36:21 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. Feb 20 03:36:21 localhost podman[92666]: 2026-02-20 08:36:21.748889153 +0000 UTC m=+0.383268488 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-iscsid, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, tcib_managed=true, release=1766032510, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, architecture=x86_64, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.buildah.version=1.41.5, build-date=2026-01-12T22:34:43Z, distribution-scope=public, io.openshift.expose-services=, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, batch=17.1_20260112.1, vendor=Red Hat, Inc.) Feb 20 03:36:21 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. 
Feb 20 03:36:21 localhost podman[92667]: 2026-02-20 08:36:21.804502996 +0000 UTC m=+0.436671272 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.13, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp-rhel9/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, build-date=2026-01-12T22:36:40Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:36:40Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, url=https://www.redhat.com, io.buildah.version=1.41.5, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, 
cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 20 03:36:21 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Deactivated successfully. Feb 20 03:36:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:36:25 localhost podman[92778]: 2026-02-20 08:36:25.438409825 +0000 UTC m=+0.078214596 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, com.redhat.component=openstack-nova-compute-container, build-date=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-compute) Feb 20 03:36:25 localhost podman[92778]: 2026-02-20 08:36:25.766733567 +0000 UTC m=+0.406538378 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, tcib_managed=true, batch=17.1_20260112.1, container_name=nova_migration_target, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, 
io.buildah.version=1.41.5, release=1766032510, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, build-date=2026-01-12T23:32:04Z, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:36:25 localhost systemd[1]: 
cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:36:40 localhost sshd[92800]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:36:48 localhost sshd[92802]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:36:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:36:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:36:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:36:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:36:49 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 20 03:36:49 localhost recover_tripleo_nova_virtqemud[92831]: 63703 Feb 20 03:36:49 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 20 03:36:49 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 20 03:36:49 localhost systemd[1]: tmp-crun.qqT8Z9.mount: Deactivated successfully. Feb 20 03:36:49 localhost systemd[1]: tmp-crun.2iOtCi.mount: Deactivated successfully. 
Feb 20 03:36:49 localhost podman[92806]: 2026-02-20 08:36:49.473082151 +0000 UTC m=+0.100791138 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, tcib_managed=true, name=rhosp-rhel9/openstack-cron, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, config_id=tripleo_step4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.13, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:36:49 localhost podman[92805]: 2026-02-20 08:36:49.440660126 +0000 UTC m=+0.071906677 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=tripleo_step4, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, release=1766032510, org.opencontainers.image.created=2026-01-12T23:07:47Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, 
container_name=ceilometer_agent_compute, tcib_managed=true, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.13, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team) Feb 20 03:36:49 localhost podman[92807]: 2026-02-20 08:36:49.501525429 +0000 UTC m=+0.125942868 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, 
maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:07:30Z, build-date=2026-01-12T23:07:30Z, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, container_name=ceilometer_agent_ipmi) Feb 20 03:36:49 localhost podman[92807]: 2026-02-20 08:36:49.529635568 +0000 UTC m=+0.154052967 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.buildah.version=1.41.5, tcib_managed=true, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., batch=17.1_20260112.1, architecture=x86_64, vcs-type=git, 
build-date=2026-01-12T23:07:30Z, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 20 03:36:49 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. Feb 20 03:36:49 localhost podman[92804]: 2026-02-20 08:36:49.545410589 +0000 UTC m=+0.177187585 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20260112.1, version=17.1.13, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.created=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, config_id=tripleo_step1, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public) Feb 20 03:36:49 localhost podman[92805]: 2026-02-20 08:36:49.572913401 +0000 UTC m=+0.204159932 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, release=1766032510, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.created=2026-01-12T23:07:47Z, batch=17.1_20260112.1, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, architecture=x86_64, build-date=2026-01-12T23:07:47Z, name=rhosp-rhel9/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc.) Feb 20 03:36:49 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. Feb 20 03:36:49 localhost podman[92806]: 2026-02-20 08:36:49.655040991 +0000 UTC m=+0.282749968 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, tcib_managed=true, batch=17.1_20260112.1, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, build-date=2026-01-12T22:10:15Z, architecture=x86_64, version=17.1.13, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, distribution-scope=public, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp-rhel9/openstack-cron, io.openshift.expose-services=) Feb 20 03:36:49 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. 
Feb 20 03:36:49 localhost podman[92804]: 2026-02-20 08:36:49.777833075 +0000 UTC m=+0.409610131 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, release=1766032510, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, container_name=metrics_qdr, architecture=x86_64, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd) Feb 20 03:36:49 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:36:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:36:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:36:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:36:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:36:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. 
Feb 20 03:36:52 localhost podman[92909]: 2026-02-20 08:36:52.456256802 +0000 UTC m=+0.090339370 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20260112.1, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.created=2026-01-12T22:34:43Z, distribution-scope=public, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, 
release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, vcs-ref=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, architecture=x86_64, tcib_managed=true, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com) Feb 20 03:36:52 localhost podman[92908]: 2026-02-20 08:36:52.514030682 +0000 UTC m=+0.150024750 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ovn_metadata_agent, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1766032510, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, build-date=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, version=17.1.13, config_id=tripleo_step4) Feb 20 03:36:52 localhost podman[92908]: 2026-02-20 08:36:52.559027111 +0000 UTC m=+0.195021179 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 
(image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1766032510, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, tcib_managed=true, build-date=2026-01-12T22:56:19Z, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20260112.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, architecture=x86_64, io.buildah.version=1.41.5) Feb 20 03:36:52 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. 
Feb 20 03:36:52 localhost podman[92909]: 2026-02-20 08:36:52.596642925 +0000 UTC m=+0.230725493 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, io.buildah.version=1.41.5, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, 
vcs-ref=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, tcib_managed=true, architecture=x86_64, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, container_name=iscsid, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:34:43Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:34:43Z) Feb 20 03:36:52 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. Feb 20 03:36:52 localhost podman[92910]: 2026-02-20 08:36:52.562763892 +0000 UTC m=+0.191061854 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, architecture=x86_64, config_id=tripleo_step4, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, build-date=2026-01-12T22:36:40Z, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:36:40Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vendor=Red Hat, 
Inc., version=17.1.13, release=1766032510, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp-rhel9/openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Feb 20 03:36:52 localhost podman[92911]: 2026-02-20 08:36:52.67824776 +0000 UTC m=+0.303308906 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp-rhel9/openstack-collectd, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.expose-services=, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, distribution-scope=public, architecture=x86_64, batch=17.1_20260112.1, vendor=Red Hat, Inc., tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, container_name=collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Feb 20 03:36:52 localhost podman[92911]: 2026-02-20 08:36:52.686545231 +0000 UTC m=+0.311606367 container 
exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, 
vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.13, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-collectd, container_name=collectd, build-date=2026-01-12T22:10:15Z, url=https://www.redhat.com, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team) Feb 20 03:36:52 localhost podman[92910]: 2026-02-20 08:36:52.697052271 +0000 UTC m=+0.325350263 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, config_id=tripleo_step4, io.buildah.version=1.41.5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ovn-controller, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, tcib_managed=true, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ovn_controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, version=17.1.13, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z) Feb 20 03:36:52 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:36:52 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Deactivated successfully. 
Feb 20 03:36:52 localhost podman[92922]: 2026-02-20 08:36:52.771956297 +0000 UTC m=+0.392437322 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, distribution-scope=public, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., release=1766032510, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, container_name=nova_compute, architecture=x86_64, build-date=2026-01-12T23:32:04Z, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-compute, io.openshift.expose-services=) Feb 20 03:36:52 localhost podman[92922]: 2026-02-20 08:36:52.799964614 +0000 UTC m=+0.420445679 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, name=rhosp-rhel9/openstack-nova-compute, 
distribution-scope=public, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', 
'/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, build-date=2026-01-12T23:32:04Z, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, tcib_managed=true) Feb 20 03:36:52 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. Feb 20 03:36:53 localhost systemd[1]: tmp-crun.NZRCMn.mount: Deactivated successfully. Feb 20 03:36:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. 
Feb 20 03:36:56 localhost podman[93015]: 2026-02-20 08:36:56.440504479 +0000 UTC m=+0.080268381 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, vcs-type=git, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, vendor=Red Hat, Inc., release=1766032510, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5) Feb 20 03:36:56 localhost sshd[93037]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:36:56 localhost podman[93015]: 2026-02-20 08:36:56.805834077 +0000 UTC m=+0.445598019 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, 
com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:32:04Z, architecture=x86_64, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Feb 20 03:36:56 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:36:58 localhost sshd[93103]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:37:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. 
Feb 20 03:37:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:37:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:37:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:37:20 localhost podman[93121]: 2026-02-20 08:37:20.460056832 +0000 UTC m=+0.084854153 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:47Z, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, distribution-scope=public, config_id=tripleo_step4, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, vcs-type=git) Feb 20 03:37:20 localhost systemd[1]: tmp-crun.V5FuJ2.mount: Deactivated successfully. 
Feb 20 03:37:20 localhost podman[93121]: 2026-02-20 08:37:20.516472705 +0000 UTC m=+0.141270016 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, name=rhosp-rhel9/openstack-ceilometer-compute, io.openshift.expose-services=, vcs-type=git, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, managed_by=tripleo_ansible, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, batch=17.1_20260112.1, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, build-date=2026-01-12T23:07:47Z, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.created=2026-01-12T23:07:47Z, tcib_managed=true, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:37:20 localhost podman[93128]: 2026-02-20 08:37:20.51664589 +0000 UTC m=+0.132750219 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_id=tripleo_step4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:07:30Z, container_name=ceilometer_agent_ipmi, distribution-scope=public, batch=17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:07:30Z, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, release=1766032510) Feb 20 03:37:20 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. 
Feb 20 03:37:20 localhost podman[93127]: 2026-02-20 08:37:20.482602413 +0000 UTC m=+0.101378454 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, version=17.1.13, name=rhosp-rhel9/openstack-cron, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=logrotate_crond, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5) Feb 20 03:37:20 localhost podman[93120]: 2026-02-20 08:37:20.575345485 +0000 UTC m=+0.206794894 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:14Z, release=1766032510, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, vcs-type=git, build-date=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
io.buildah.version=1.41.5, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 20 03:37:20 localhost podman[93128]: 2026-02-20 08:37:20.600142476 +0000 UTC m=+0.216246755 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:07:30Z, tcib_managed=true, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, batch=17.1_20260112.1, com.redhat.component=openstack-ceilometer-ipmi-container, release=1766032510, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, architecture=x86_64, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com) Feb 20 03:37:20 localhost systemd[1]: 
e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. Feb 20 03:37:20 localhost podman[93127]: 2026-02-20 08:37:20.618062974 +0000 UTC m=+0.236839035 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, distribution-scope=public, name=rhosp-rhel9/openstack-cron, url=https://www.redhat.com, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, vendor=Red Hat, Inc., tcib_managed=true, container_name=logrotate_crond, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, release=1766032510, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, vcs-type=git, version=17.1.13, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Feb 20 03:37:20 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. Feb 20 03:37:20 localhost podman[93120]: 2026-02-20 08:37:20.776840356 +0000 UTC m=+0.408289815 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20260112.1, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, distribution-scope=public, version=17.1.13, vcs-type=git, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, release=1766032510, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 
'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step1, vendor=Red Hat, Inc., url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:37:20 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:37:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:37:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. 
Feb 20 03:37:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:37:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:37:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:37:23 localhost systemd[1]: tmp-crun.SoqufR.mount: Deactivated successfully. Feb 20 03:37:23 localhost podman[93223]: 2026-02-20 08:37:23.454947465 +0000 UTC m=+0.092074245 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_id=tripleo_step4, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, url=https://www.redhat.com, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, vendor=Red Hat, Inc.) Feb 20 03:37:23 localhost systemd[1]: tmp-crun.SkS4tw.mount: Deactivated successfully. 
Feb 20 03:37:23 localhost podman[93223]: 2026-02-20 08:37:23.504809465 +0000 UTC m=+0.141936305 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:56:19Z, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, managed_by=tripleo_ansible, version=17.1.13, build-date=2026-01-12T22:56:19Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5) Feb 20 03:37:23 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. 
Feb 20 03:37:23 localhost podman[93227]: 2026-02-20 08:37:23.507986009 +0000 UTC m=+0.138298437 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, config_id=tripleo_step5, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, release=1766032510, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64) Feb 20 03:37:23 localhost podman[93226]: 2026-02-20 08:37:23.574529003 +0000 UTC m=+0.206962238 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, com.redhat.component=openstack-collectd-container, name=rhosp-rhel9/openstack-collectd, container_name=collectd, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, batch=17.1_20260112.1, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, managed_by=tripleo_ansible, io.openshift.expose-services=, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:37:23 localhost podman[93227]: 2026-02-20 08:37:23.593749586 +0000 UTC m=+0.224061974 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20260112.1, container_name=nova_compute, tcib_managed=true, release=1766032510, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.expose-services=, build-date=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_id=tripleo_step5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, architecture=x86_64, managed_by=tripleo_ansible, version=17.1.13, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute) Feb 20 03:37:23 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. 
Feb 20 03:37:23 localhost podman[93226]: 2026-02-20 08:37:23.610872532 +0000 UTC m=+0.243305737 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp-rhel9/openstack-collectd, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step3, architecture=x86_64, maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=collectd, version=17.1.13, build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, 
com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Feb 20 03:37:23 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:37:23 localhost podman[93224]: 2026-02-20 08:37:23.613710018 +0000 UTC m=+0.249710007 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1766032510, name=rhosp-rhel9/openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, batch=17.1_20260112.1, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, version=17.1.13) Feb 20 03:37:23 localhost podman[93225]: 2026-02-20 08:37:23.6685904 +0000 UTC m=+0.300721267 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, io.openshift.expose-services=, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, name=rhosp-rhel9/openstack-ovn-controller, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, build-date=2026-01-12T22:36:40Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1, io.buildah.version=1.41.5, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 20 03:37:23 localhost podman[93225]: 
2026-02-20 08:37:23.689036106 +0000 UTC m=+0.321166933 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, url=https://www.redhat.com, release=1766032510, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, version=17.1.13, build-date=2026-01-12T22:36:40Z, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, name=rhosp-rhel9/openstack-ovn-controller, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., vcs-type=git, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:36:40Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Feb 20 03:37:23 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Deactivated successfully. Feb 20 03:37:23 localhost podman[93224]: 2026-02-20 08:37:23.743555819 +0000 UTC m=+0.379555788 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2026-01-12T22:34:43Z, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, vcs-type=git, version=17.1.13, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', 
'/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp-rhel9/openstack-iscsid, config_id=tripleo_step3, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.buildah.version=1.41.5) Feb 20 03:37:23 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. Feb 20 03:37:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. 
Feb 20 03:37:27 localhost podman[93333]: 2026-02-20 08:37:27.432802562 +0000 UTC m=+0.076590903 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.13, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_migration_target, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, distribution-scope=public, managed_by=tripleo_ansible) Feb 20 03:37:27 localhost podman[93333]: 2026-02-20 08:37:27.798711906 +0000 UTC m=+0.442500217 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, release=1766032510, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, vcs-type=git, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Feb 20 03:37:27 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:37:35 localhost sshd[93356]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:37:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:37:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. 
Feb 20 03:37:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:37:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:37:51 localhost systemd[1]: tmp-crun.mihhEF.mount: Deactivated successfully. Feb 20 03:37:51 localhost podman[93359]: 2026-02-20 08:37:51.466978133 +0000 UTC m=+0.101953619 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, vendor=Red Hat, Inc., batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-compute, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:47Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, config_id=tripleo_step4, url=https://www.redhat.com, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:37:51 localhost podman[93361]: 2026-02-20 08:37:51.512552908 +0000 UTC m=+0.141742660 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:30Z, com.redhat.component=openstack-ceilometer-ipmi-container, release=1766032510, vcs-type=git, version=17.1.13, batch=17.1_20260112.1, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team) Feb 20 03:37:51 localhost podman[93359]: 2026-02-20 08:37:51.523698865 +0000 UTC m=+0.158674331 container exec_died 
8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:07:47Z, release=1766032510, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, container_name=ceilometer_agent_compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.13, batch=17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible) Feb 20 03:37:51 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. Feb 20 03:37:51 localhost podman[93361]: 2026-02-20 08:37:51.542708772 +0000 UTC m=+0.171898534 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, io.buildah.version=1.41.5, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, build-date=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, version=17.1.13, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:37:51 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. 
Feb 20 03:37:51 localhost podman[93358]: 2026-02-20 08:37:51.606799829 +0000 UTC m=+0.242192876 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, vcs-type=git, build-date=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=metrics_qdr, io.openshift.expose-services=, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.display-name=Red 
Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, url=https://www.redhat.com, config_id=tripleo_step1, vendor=Red Hat, Inc., version=17.1.13, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:37:51 localhost podman[93360]: 2026-02-20 08:37:51.665082723 +0000 UTC m=+0.294505241 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, release=1766032510, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': 
'/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.5, com.redhat.component=openstack-cron-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, batch=17.1_20260112.1, vcs-type=git, config_id=tripleo_step4, build-date=2026-01-12T22:10:15Z) Feb 20 03:37:51 localhost podman[93360]: 2026-02-20 08:37:51.67281986 +0000 UTC m=+0.302242408 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1766032510, io.buildah.version=1.41.5, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, tcib_managed=true, name=rhosp-rhel9/openstack-cron, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, batch=17.1_20260112.1, vcs-type=git) Feb 20 03:37:51 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. 
Feb 20 03:37:51 localhost podman[93358]: 2026-02-20 08:37:51.863823681 +0000 UTC m=+0.499216758 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.5, io.openshift.expose-services=, url=https://www.redhat.com, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 20 03:37:51 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:37:52 localhost systemd[1]: tmp-crun.p34pon.mount: Deactivated successfully. Feb 20 03:37:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:37:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:37:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:37:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:37:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. 
Feb 20 03:37:54 localhost podman[93458]: 2026-02-20 08:37:54.454409298 +0000 UTC m=+0.092325552 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, version=17.1.13, release=1766032510, architecture=x86_64, 
name=rhosp-rhel9/openstack-iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_id=tripleo_step3, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, container_name=iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container, vcs-ref=705339545363fec600102567c4e923938e0f43b3, managed_by=tripleo_ansible) Feb 20 03:37:54 localhost podman[93458]: 2026-02-20 08:37:54.467846707 +0000 UTC m=+0.105762971 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.13, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, build-date=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, batch=17.1_20260112.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true) Feb 20 03:37:54 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. 
Feb 20 03:37:54 localhost podman[93459]: 2026-02-20 08:37:54.520022237 +0000 UTC m=+0.153883433 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, name=rhosp-rhel9/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2026-01-12T22:36:40Z, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:36:40Z, release=1766032510, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20260112.1, architecture=x86_64, 
container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, vcs-type=git) Feb 20 03:37:54 localhost podman[93459]: 2026-02-20 08:37:54.549060721 +0000 UTC m=+0.182921927 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:36:40Z, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, architecture=x86_64, build-date=2026-01-12T22:36:40Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, release=1766032510, container_name=ovn_controller, url=https://www.redhat.com, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vcs-type=git) Feb 20 03:37:54 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Deactivated successfully. Feb 20 03:37:54 localhost podman[93457]: 2026-02-20 08:37:54.609589634 +0000 UTC m=+0.248638028 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.openshift.expose-services=, build-date=2026-01-12T22:56:19Z, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, architecture=x86_64, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, vcs-type=git, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f) Feb 20 03:37:54 localhost podman[93467]: 2026-02-20 08:37:54.659543216 +0000 UTC m=+0.287879655 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-type=git, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-compute, managed_by=tripleo_ansible, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, io.buildah.version=1.41.5, release=1766032510, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 20 03:37:54 localhost podman[93457]: 2026-02-20 08:37:54.666988355 +0000 UTC m=+0.306036749 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=tripleo_step4, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, version=17.1.13, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, architecture=x86_64) Feb 20 03:37:54 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. Feb 20 03:37:54 localhost podman[93467]: 2026-02-20 08:37:54.695792973 +0000 UTC m=+0.324129472 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.buildah.version=1.41.5, container_name=nova_compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, distribution-scope=public, config_id=tripleo_step5, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 
'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1766032510, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=) Feb 20 03:37:54 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. 
Feb 20 03:37:54 localhost podman[93460]: 2026-02-20 08:37:54.71259786 +0000 UTC m=+0.344516634 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp-rhel9/openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, version=17.1.13, distribution-scope=public, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, tcib_managed=true, architecture=x86_64, vcs-type=git, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Feb 20 03:37:54 localhost podman[93460]: 2026-02-20 08:37:54.75081658 +0000 UTC m=+0.382735314 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, distribution-scope=public, config_id=tripleo_step3, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, maintainer=OpenStack TripleO Team, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, 
summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, managed_by=tripleo_ansible, vcs-type=git, name=rhosp-rhel9/openstack-collectd) Feb 20 03:37:54 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. 
Feb 20 03:37:58 localhost sshd[93567]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:37:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:37:58 localhost podman[93569]: 2026-02-20 08:37:58.453684056 +0000 UTC m=+0.086059606 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.41.5, vendor=Red Hat, Inc., container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, tcib_managed=true, io.openshift.expose-services=, version=17.1.13, distribution-scope=public, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 20 03:37:58 localhost podman[93569]: 2026-02-20 08:37:58.802693699 +0000 UTC m=+0.435069249 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, release=1766032510, io.buildah.version=1.41.5, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public) Feb 20 03:37:58 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. 
Feb 20 03:38:07 localhost sshd[93668]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:38:22 localhost sshd[93670]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:38:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:38:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:38:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:38:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:38:22 localhost systemd[1]: tmp-crun.QdW1Ja.mount: Deactivated successfully. Feb 20 03:38:22 localhost podman[93674]: 2026-02-20 08:38:22.473553177 +0000 UTC m=+0.095548478 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2026-01-12T23:07:30Z, managed_by=tripleo_ansible, release=1766032510, io.openshift.expose-services=, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, name=rhosp-rhel9/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.buildah.version=1.41.5, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team) Feb 20 03:38:22 localhost podman[93674]: 2026-02-20 08:38:22.49502477 +0000 UTC m=+0.117020081 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1766032510, name=rhosp-rhel9/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, distribution-scope=public, vcs-type=git, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, build-date=2026-01-12T23:07:30Z, batch=17.1_20260112.1, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:07:30Z) Feb 20 03:38:22 localhost systemd[1]: tmp-crun.HmuEWv.mount: Deactivated successfully. Feb 20 03:38:22 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. Feb 20 03:38:22 localhost podman[93671]: 2026-02-20 08:38:22.516870161 +0000 UTC m=+0.147824860 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:14Z, batch=17.1_20260112.1, build-date=2026-01-12T22:10:14Z, architecture=x86_64, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., config_id=tripleo_step1, io.openshift.expose-services=, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, version=17.1.13, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 20 03:38:22 localhost podman[93673]: 2026-02-20 08:38:22.560905816 +0000 UTC m=+0.185094725 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, tcib_managed=true, container_name=logrotate_crond, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, batch=17.1_20260112.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, build-date=2026-01-12T22:10:15Z, url=https://www.redhat.com, vcs-type=git, com.redhat.component=openstack-cron-container) Feb 20 03:38:22 localhost podman[93673]: 2026-02-20 08:38:22.57271671 +0000 UTC m=+0.196905609 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, 
org.opencontainers.image.created=2026-01-12T22:10:15Z, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, release=1766032510, io.buildah.version=1.41.5, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible) Feb 20 03:38:22 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. Feb 20 03:38:22 localhost podman[93672]: 2026-02-20 08:38:22.624514961 +0000 UTC m=+0.252056980 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:07:47Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20260112.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, managed_by=tripleo_ansible, version=17.1.13, vcs-type=git, config_id=tripleo_step4, container_name=ceilometer_agent_compute, distribution-scope=public, 
org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:38:22 localhost podman[93672]: 2026-02-20 08:38:22.670935018 +0000 UTC m=+0.298476997 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, batch=17.1_20260112.1, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ceilometer-compute-container, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, build-date=2026-01-12T23:07:47Z, config_id=tripleo_step4, version=17.1.13, container_name=ceilometer_agent_compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, architecture=x86_64) Feb 20 03:38:22 localhost podman[93671]: 2026-02-20 08:38:22.680211846 +0000 UTC m=+0.311166505 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2026-01-12T22:10:14Z, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-qdrouterd-container, version=17.1.13, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-type=git, name=rhosp-rhel9/openstack-qdrouterd) Feb 20 03:38:22 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. Feb 20 03:38:22 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:38:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:38:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:38:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:38:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:38:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. 
Feb 20 03:38:25 localhost podman[93771]: 2026-02-20 08:38:25.455512766 +0000 UTC m=+0.090717539 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_id=tripleo_step4, vcs-type=git, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
release=1766032510, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, io.buildah.version=1.41.5, url=https://www.redhat.com, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, build-date=2026-01-12T22:56:19Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public) Feb 20 03:38:25 localhost systemd[1]: tmp-crun.EnEnq5.mount: Deactivated successfully. 
Feb 20 03:38:25 localhost podman[93772]: 2026-02-20 08:38:25.510105991 +0000 UTC m=+0.142498939 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=705339545363fec600102567c4e923938e0f43b3, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.13, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:34:43Z, batch=17.1_20260112.1, config_id=tripleo_step3, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, name=rhosp-rhel9/openstack-iscsid) Feb 20 03:38:25 localhost podman[93772]: 2026-02-20 08:38:25.520829897 +0000 UTC m=+0.153222845 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, distribution-scope=public, vcs-ref=705339545363fec600102567c4e923938e0f43b3, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_id=tripleo_step3, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-iscsid, build-date=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, container_name=iscsid, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, batch=17.1_20260112.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, release=1766032510, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible) Feb 20 03:38:25 localhost podman[93771]: 2026-02-20 08:38:25.519970464 +0000 UTC m=+0.155175237 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ovn_metadata_agent, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, 
io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.13, config_id=tripleo_step4, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, 
url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, build-date=2026-01-12T22:56:19Z, io.openshift.expose-services=, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 20 03:38:25 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. Feb 20 03:38:25 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. Feb 20 03:38:25 localhost podman[93773]: 2026-02-20 08:38:25.574260472 +0000 UTC m=+0.203726842 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, architecture=x86_64, version=17.1.13, vendor=Red Hat, Inc., config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ovn-controller, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.buildah.version=1.41.5, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, tcib_managed=true, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible) Feb 20 03:38:25 localhost podman[93774]: 2026-02-20 08:38:25.629985587 +0000 UTC m=+0.258017270 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, batch=17.1_20260112.1, architecture=x86_64, distribution-scope=public, build-date=2026-01-12T22:10:15Z, vendor=Red Hat, Inc.) 
Feb 20 03:38:25 localhost podman[93774]: 2026-02-20 08:38:25.641703469 +0000 UTC m=+0.269735142 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp-rhel9/openstack-collectd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, batch=17.1_20260112.1, tcib_managed=true, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:38:25 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:38:25 localhost podman[93773]: 2026-02-20 08:38:25.659835972 +0000 UTC m=+0.289302342 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp-rhel9/openstack-ovn-controller, vendor=Red Hat, Inc., config_id=tripleo_step4, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:36:40Z, build-date=2026-01-12T22:36:40Z, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, distribution-scope=public, batch=17.1_20260112.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, url=https://www.redhat.com) Feb 20 03:38:25 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Deactivated successfully. 
Feb 20 03:38:25 localhost podman[93780]: 2026-02-20 08:38:25.728265807 +0000 UTC m=+0.352080437 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, batch=17.1_20260112.1, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-compute, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., build-date=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, container_name=nova_compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, version=17.1.13, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, architecture=x86_64) Feb 20 03:38:25 localhost podman[93780]: 2026-02-20 08:38:25.759689254 +0000 UTC m=+0.383503844 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, 
org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step5, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, container_name=nova_compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20260112.1, url=https://www.redhat.com) Feb 20 03:38:25 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. Feb 20 03:38:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:38:29 localhost podman[93886]: 2026-02-20 08:38:29.444731804 +0000 UTC m=+0.082934431 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., io.buildah.version=1.41.5, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:32:04Z, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute) Feb 20 03:38:29 localhost podman[93886]: 2026-02-20 08:38:29.827839017 +0000 UTC m=+0.466041634 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, 
description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp-rhel9/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, version=17.1.13, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, 
batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, release=1766032510) Feb 20 03:38:29 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:38:31 localhost sshd[93909]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:38:52 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 20 03:38:52 localhost recover_tripleo_nova_virtqemud[93912]: 63703 Feb 20 03:38:52 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 20 03:38:52 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 20 03:38:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:38:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:38:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:38:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:38:53 localhost systemd[1]: tmp-crun.GRumri.mount: Deactivated successfully. 
Feb 20 03:38:53 localhost podman[93916]: 2026-02-20 08:38:53.445600887 +0000 UTC m=+0.077851206 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, build-date=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:07:30Z, 
version=17.1.13, release=1766032510, distribution-scope=public, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 20 03:38:53 localhost systemd[1]: tmp-crun.ij1dWc.mount: Deactivated successfully. Feb 20 03:38:53 localhost podman[93915]: 2026-02-20 08:38:53.46373231 +0000 UTC m=+0.095071635 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-cron-container, release=1766032510, io.openshift.expose-services=, url=https://www.redhat.com, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 
'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, version=17.1.13, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:15Z, config_id=tripleo_step4, container_name=logrotate_crond) Feb 20 03:38:53 localhost podman[93913]: 2026-02-20 08:38:53.499291099 +0000 UTC m=+0.137250680 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1766032510, vcs-type=git, config_id=tripleo_step1, managed_by=tripleo_ansible, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-qdrouterd, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, distribution-scope=public, version=17.1.13, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:38:53 localhost podman[93914]: 2026-02-20 08:38:53.551252314 +0000 UTC m=+0.187703534 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:07:47Z, name=rhosp-rhel9/openstack-ceilometer-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, build-date=2026-01-12T23:07:47Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, url=https://www.redhat.com, architecture=x86_64, tcib_managed=true, release=1766032510, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, managed_by=tripleo_ansible) Feb 20 03:38:53 localhost podman[93916]: 2026-02-20 08:38:53.568115804 +0000 UTC m=+0.200366143 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, architecture=x86_64, distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., build-date=2026-01-12T23:07:30Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, io.openshift.expose-services=, 
org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, managed_by=tripleo_ansible) Feb 20 03:38:53 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. 
Feb 20 03:38:53 localhost podman[93914]: 2026-02-20 08:38:53.610853323 +0000 UTC m=+0.247304493 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, version=17.1.13, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, name=rhosp-rhel9/openstack-ceilometer-compute, url=https://www.redhat.com, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:07:47Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public) Feb 20 03:38:53 localhost podman[93915]: 2026-02-20 08:38:53.622559725 +0000 UTC m=+0.253899120 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, org.opencontainers.image.created=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, tcib_managed=true, version=17.1.13, config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, release=1766032510, vcs-type=git) Feb 20 03:38:53 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. Feb 20 03:38:53 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. 
Feb 20 03:38:53 localhost podman[93913]: 2026-02-20 08:38:53.68577921 +0000 UTC m=+0.323738861 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:14Z, container_name=metrics_qdr, io.openshift.expose-services=, build-date=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20260112.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, 
vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vcs-type=git, name=rhosp-rhel9/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1766032510) Feb 20 03:38:53 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:38:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:38:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:38:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:38:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:38:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. 
Feb 20 03:38:56 localhost podman[94017]: 2026-02-20 08:38:56.446629704 +0000 UTC m=+0.079507270 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, version=17.1.13, managed_by=tripleo_ansible, container_name=collectd, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, build-date=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team) Feb 20 03:38:56 localhost systemd[1]: tmp-crun.OcA8R2.mount: Deactivated successfully. Feb 20 03:38:56 localhost podman[94015]: 2026-02-20 08:38:56.459891288 +0000 UTC m=+0.093307098 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, container_name=iscsid, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-iscsid, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.13, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, batch=17.1_20260112.1, build-date=2026-01-12T22:34:43Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid) Feb 20 03:38:56 localhost podman[94017]: 2026-02-20 08:38:56.485776618 +0000 UTC m=+0.118654114 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, version=17.1.13, description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, vendor=Red Hat, Inc., io.buildah.version=1.41.5, build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, vcs-type=git, distribution-scope=public, name=rhosp-rhel9/openstack-collectd, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, container_name=collectd, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:38:56 localhost podman[94015]: 2026-02-20 08:38:56.500920331 +0000 UTC m=+0.134336171 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, name=rhosp-rhel9/openstack-iscsid, 
distribution-scope=public, io.buildah.version=1.41.5, release=1766032510, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, vcs-ref=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, build-date=2026-01-12T22:34:43Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_id=tripleo_step3, version=17.1.13) Feb 20 03:38:56 localhost systemd[1]: tmp-crun.5PMQY2.mount: Deactivated successfully. Feb 20 03:38:56 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:38:56 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. 
Feb 20 03:38:56 localhost podman[94016]: 2026-02-20 08:38:56.554445619 +0000 UTC m=+0.187802467 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, build-date=2026-01-12T22:36:40Z, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, config_id=tripleo_step4, io.openshift.expose-services=, tcib_managed=true, release=1766032510, io.buildah.version=1.41.5, batch=17.1_20260112.1, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat 
OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, vendor=Red Hat, Inc., org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 20 03:38:56 localhost podman[94016]: 2026-02-20 08:38:56.602805458 +0000 UTC m=+0.236162366 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1, io.buildah.version=1.41.5, url=https://www.redhat.com, build-date=2026-01-12T22:36:40Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, release=1766032510, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, 
cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, name=rhosp-rhel9/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., managed_by=tripleo_ansible, distribution-scope=public) Feb 20 03:38:56 localhost podman[94016]: unhealthy Feb 20 03:38:56 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:38:56 localhost podman[94014]: 2026-02-20 08:38:56.613733799 +0000 UTC m=+0.249002489 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, build-date=2026-01-12T22:56:19Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1766032510, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, architecture=x86_64, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, batch=17.1_20260112.1, version=17.1.13, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:56:19Z, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true) Feb 20 03:38:56 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Failed with result 'exit-code'. 
Feb 20 03:38:56 localhost podman[94018]: 2026-02-20 08:38:56.506000197 +0000 UTC m=+0.133738596 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step5, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, vendor=Red Hat, Inc.) 
Feb 20 03:38:56 localhost podman[94014]: 2026-02-20 08:38:56.661671707 +0000 UTC m=+0.296940397 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1766032510, version=17.1.13, container_name=ovn_metadata_agent, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, 
tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, build-date=2026-01-12T22:56:19Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:56:19Z, vendor=Red Hat, Inc., io.buildah.version=1.41.5) Feb 20 03:38:56 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. 
Feb 20 03:38:56 localhost podman[94018]: 2026-02-20 08:38:56.691173313 +0000 UTC m=+0.318911702 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-compute, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, config_id=tripleo_step5, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.expose-services=) Feb 20 03:38:56 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. Feb 20 03:39:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:39:00 localhost systemd[1]: tmp-crun.dmd7bn.mount: Deactivated successfully. 
Feb 20 03:39:00 localhost podman[94131]: 2026-02-20 08:39:00.446855288 +0000 UTC m=+0.086802055 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, distribution-scope=public, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 
17.1_20260112.1, url=https://www.redhat.com, release=1766032510, managed_by=tripleo_ansible, build-date=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 20 03:39:00 localhost podman[94131]: 2026-02-20 08:39:00.83820939 +0000 UTC m=+0.478156157 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, build-date=2026-01-12T23:32:04Z, url=https://www.redhat.com, config_id=tripleo_step4, vendor=Red Hat, Inc., version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:39:00 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:39:20 localhost sshd[94280]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:39:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:39:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. 
Feb 20 03:39:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:39:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:39:24 localhost podman[94283]: 2026-02-20 08:39:24.452288413 +0000 UTC m=+0.089597959 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-type=git, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, release=1766032510, container_name=ceilometer_agent_compute, batch=17.1_20260112.1, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:07:47Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, config_id=tripleo_step4, io.openshift.expose-services=, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:07:47Z, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 20 03:39:24 localhost podman[94283]: 2026-02-20 08:39:24.479560001 +0000 UTC m=+0.116869537 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:07:47Z, build-date=2026-01-12T23:07:47Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, batch=17.1_20260112.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, version=17.1.13, io.buildah.version=1.41.5, tcib_managed=true, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:39:24 localhost systemd[1]: 
tmp-crun.tdNpen.mount: Deactivated successfully. Feb 20 03:39:24 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. Feb 20 03:39:24 localhost podman[94282]: 2026-02-20 08:39:24.504533096 +0000 UTC m=+0.141721309 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step1, tcib_managed=true, distribution-scope=public, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.created=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team) Feb 20 03:39:24 localhost podman[94284]: 2026-02-20 08:39:24.555996928 +0000 UTC m=+0.190044937 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-cron-container, release=1766032510, io.buildah.version=1.41.5, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, container_name=logrotate_crond, batch=17.1_20260112.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Feb 20 03:39:24 localhost podman[94284]: 2026-02-20 08:39:24.57071575 +0000 UTC m=+0.204763749 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, vcs-type=git, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 
17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, 
architecture=x86_64, distribution-scope=public, io.openshift.expose-services=) Feb 20 03:39:24 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. Feb 20 03:39:24 localhost podman[94285]: 2026-02-20 08:39:24.677594259 +0000 UTC m=+0.307693482 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, distribution-scope=public, release=1766032510, name=rhosp-rhel9/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, build-date=2026-01-12T23:07:30Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, io.openshift.expose-services=, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 20 03:39:24 localhost podman[94285]: 2026-02-20 08:39:24.703698325 +0000 UTC m=+0.333797548 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.41.5, architecture=x86_64, config_id=tripleo_step4, io.openshift.expose-services=, url=https://www.redhat.com, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:07:30Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, vendor=Red Hat, Inc., vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:30Z, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi) Feb 20 03:39:24 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. 
Feb 20 03:39:24 localhost podman[94282]: 2026-02-20 08:39:24.737676821 +0000 UTC m=+0.374864984 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.created=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, 
io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, release=1766032510, architecture=x86_64, batch=17.1_20260112.1, container_name=metrics_qdr, version=17.1.13, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, distribution-scope=public) Feb 20 03:39:24 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:39:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:39:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:39:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:39:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:39:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:39:27 localhost systemd[1]: tmp-crun.QBfIDF.mount: Deactivated successfully. 
Feb 20 03:39:27 localhost podman[94395]: 2026-02-20 08:39:27.482315555 +0000 UTC m=+0.107811545 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.5, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., batch=17.1_20260112.1, release=1766032510, vcs-type=git, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:32:04Z, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:39:27 localhost podman[94382]: 2026-02-20 08:39:27.454789641 +0000 UTC m=+0.092197399 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
maintainer=OpenStack TripleO Team, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, org.opencontainers.image.created=2026-01-12T22:34:43Z, url=https://www.redhat.com, batch=17.1_20260112.1, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, build-date=2026-01-12T22:34:43Z, name=rhosp-rhel9/openstack-iscsid, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.13, io.buildah.version=1.41.5, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Feb 20 03:39:27 localhost 
podman[94383]: 2026-02-20 08:39:27.52042351 +0000 UTC m=+0.154159730 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, container_name=ovn_controller, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, io.openshift.expose-services=, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, name=rhosp-rhel9/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., tcib_managed=true, io.buildah.version=1.41.5, config_id=tripleo_step4, distribution-scope=public, vcs-type=git) Feb 20 03:39:27 localhost podman[94381]: 2026-02-20 08:39:27.566105769 +0000 UTC m=+0.206339322 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, tcib_managed=true, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, managed_by=tripleo_ansible, io.buildah.version=1.41.5, vcs-type=git, version=17.1.13, url=https://www.redhat.com, build-date=2026-01-12T22:56:19Z, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 20 03:39:27 localhost podman[94387]: 2026-02-20 08:39:27.614695084 +0000 UTC m=+0.245466755 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.13, batch=17.1_20260112.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=collectd, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, tcib_managed=true, name=rhosp-rhel9/openstack-collectd, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
org.opencontainers.image.created=2026-01-12T22:10:15Z, build-date=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3) Feb 20 03:39:27 localhost podman[94387]: 2026-02-20 08:39:27.626681153 +0000 UTC m=+0.257452814 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, version=17.1.13, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-collectd, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, container_name=collectd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public) Feb 20 03:39:27 localhost podman[94383]: 2026-02-20 08:39:27.636195837 +0000 UTC m=+0.269932107 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, name=rhosp-rhel9/openstack-ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, build-date=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 
'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, release=1766032510, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, url=https://www.redhat.com, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, container_name=ovn_controller, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public) Feb 20 03:39:27 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:39:27 localhost podman[94383]: unhealthy Feb 20 03:39:27 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:39:27 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Failed with result 'exit-code'. 
Feb 20 03:39:27 localhost podman[94395]: 2026-02-20 08:39:27.669978937 +0000 UTC m=+0.295474917 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-compute, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, release=1766032510, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, architecture=x86_64, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 20 03:39:27 localhost podman[94381]: 2026-02-20 08:39:27.681506715 +0000 UTC m=+0.321740318 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, vendor=Red Hat, Inc., version=17.1.13, url=https://www.redhat.com, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, architecture=x86_64, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2026-01-12T22:56:19Z, distribution-scope=public, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, 
vcs-type=git, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:56:19Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 20 03:39:27 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. Feb 20 03:39:27 localhost podman[94382]: 2026-02-20 08:39:27.6873302 +0000 UTC m=+0.324737968 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-iscsid, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', 
'/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, release=1766032510, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, build-date=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, container_name=iscsid, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid) Feb 20 03:39:27 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Deactivated successfully. Feb 20 03:39:27 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. Feb 20 03:39:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. 
Feb 20 03:39:31 localhost podman[94497]: 2026-02-20 08:39:31.437016624 +0000 UTC m=+0.080201169 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://www.redhat.com, 
io.openshift.expose-services=, config_id=tripleo_step4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, vcs-type=git, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, batch=17.1_20260112.1) Feb 20 03:39:31 localhost podman[94497]: 2026-02-20 08:39:31.843761117 +0000 UTC m=+0.486945582 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, io.openshift.expose-services=, io.buildah.version=1.41.5, container_name=nova_migration_target, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, tcib_managed=true, config_id=tripleo_step4, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git) Feb 20 03:39:31 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. 
Feb 20 03:39:40 localhost sshd[94520]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:39:45 localhost sshd[94522]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:39:47 localhost sshd[94524]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:39:47 localhost ceph-osd[31981]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 20 03:39:47 localhost ceph-osd[31981]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.1 total, 600.0 interval#012Cumulative writes: 5073 writes, 22K keys, 5073 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5073 writes, 653 syncs, 7.77 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Feb 20 03:39:48 localhost sshd[94526]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:39:51 localhost ceph-osd[32921]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 20 03:39:51 localhost ceph-osd[32921]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.1 total, 600.0 interval#012Cumulative writes: 5513 writes, 24K keys, 5513 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5513 writes, 750 syncs, 7.35 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Feb 20 03:39:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:39:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:39:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:39:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:39:55 localhost systemd[1]: tmp-crun.e6zY7j.mount: Deactivated successfully. Feb 20 03:39:55 localhost podman[94529]: 2026-02-20 08:39:55.456459741 +0000 UTC m=+0.094546861 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, batch=17.1_20260112.1, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, com.redhat.component=openstack-ceilometer-compute-container, build-date=2026-01-12T23:07:47Z, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:07:47Z, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 20 03:39:55 localhost podman[94529]: 2026-02-20 08:39:55.483723807 +0000 UTC m=+0.121810927 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:47Z, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:47Z, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, architecture=x86_64, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, 
Inc., release=1766032510, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_compute) Feb 20 03:39:55 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. Feb 20 03:39:55 localhost podman[94530]: 2026-02-20 08:39:55.502622881 +0000 UTC m=+0.138650976 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:15Z, tcib_managed=true, io.openshift.expose-services=, config_id=tripleo_step4, name=rhosp-rhel9/openstack-cron, container_name=logrotate_crond, url=https://www.redhat.com, version=17.1.13, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 
'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, batch=17.1_20260112.1, architecture=x86_64, com.redhat.component=openstack-cron-container) Feb 20 03:39:55 localhost podman[94530]: 2026-02-20 08:39:55.544749264 +0000 UTC m=+0.180777409 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, release=1766032510, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=logrotate_crond, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, build-date=2026-01-12T22:10:15Z, batch=17.1_20260112.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, vcs-type=git, config_id=tripleo_step4) Feb 20 03:39:55 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. 
Feb 20 03:39:55 localhost podman[94528]: 2026-02-20 08:39:55.545933496 +0000 UTC m=+0.185356732 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1766032510, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.13, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20260112.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z, build-date=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, name=rhosp-rhel9/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 20 03:39:55 localhost podman[94531]: 2026-02-20 08:39:55.5997065 +0000 UTC m=+0.235174560 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, release=1766032510, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13) Feb 20 03:39:55 localhost podman[94531]: 2026-02-20 08:39:55.632062312 +0000 UTC m=+0.267530422 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack 
Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, io.buildah.version=1.41.5, vcs-type=git, release=1766032510, distribution-scope=public, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:30Z, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., 
config_id=tripleo_step4, tcib_managed=true) Feb 20 03:39:55 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. Feb 20 03:39:55 localhost podman[94528]: 2026-02-20 08:39:55.770811711 +0000 UTC m=+0.410234887 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:14Z, container_name=metrics_qdr, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, build-date=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, 
maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, architecture=x86_64, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, managed_by=tripleo_ansible, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public) Feb 20 03:39:55 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:39:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:39:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:39:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:39:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:39:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:39:58 localhost systemd[1]: tmp-crun.gjoJyM.mount: Deactivated successfully. 
Feb 20 03:39:58 localhost podman[94630]: 2026-02-20 08:39:58.457064677 +0000 UTC m=+0.094054809 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, container_name=iscsid, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:34:43Z, architecture=x86_64, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.5, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 iscsid, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.openshift.expose-services=, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-iscsid, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, release=1766032510, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:34:43Z) Feb 20 03:39:58 localhost podman[94630]: 2026-02-20 08:39:58.46996581 +0000 UTC m=+0.106955932 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.5, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, build-date=2026-01-12T22:34:43Z, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, distribution-scope=public, container_name=iscsid, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510) Feb 20 03:39:58 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. 
Feb 20 03:39:58 localhost podman[94632]: 2026-02-20 08:39:58.51797232 +0000 UTC m=+0.147163734 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.expose-services=, release=1766032510, config_id=tripleo_step3, container_name=collectd, distribution-scope=public, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, 
architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, url=https://www.redhat.com, vcs-type=git, com.redhat.component=openstack-collectd-container, name=rhosp-rhel9/openstack-collectd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible) Feb 20 03:39:58 localhost podman[94629]: 2026-02-20 08:39:58.551765801 +0000 UTC m=+0.190771457 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, config_id=tripleo_step4, io.openshift.expose-services=, version=17.1.13, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:56:19Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, release=1766032510, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn) Feb 20 03:39:58 localhost podman[94629]: 2026-02-20 08:39:58.569852163 +0000 UTC m=+0.208857819 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:56:19Z, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, release=1766032510, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, version=17.1.13, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, url=https://www.redhat.com, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc.) Feb 20 03:39:58 localhost podman[94629]: unhealthy Feb 20 03:39:58 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:39:58 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Failed with result 'exit-code'. 
Feb 20 03:39:58 localhost podman[94631]: 2026-02-20 08:39:58.61476921 +0000 UTC m=+0.247152679 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, build-date=2026-01-12T22:36:40Z, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ovn_controller, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, name=rhosp-rhel9/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20260112.1, url=https://www.redhat.com, release=1766032510, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:36:40Z) Feb 20 03:39:58 localhost podman[94632]: 2026-02-20 08:39:58.630861179 +0000 UTC m=+0.260052603 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, io.openshift.expose-services=, url=https://www.redhat.com, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, batch=17.1_20260112.1, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, release=1766032510, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Feb 20 03:39:58 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. 
Feb 20 03:39:58 localhost podman[94631]: 2026-02-20 08:39:58.658865636 +0000 UTC m=+0.291249075 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, name=rhosp-rhel9/openstack-ovn-controller, build-date=2026-01-12T22:36:40Z, io.buildah.version=1.41.5, config_id=tripleo_step4, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, io.openshift.expose-services=, release=1766032510, vcs-type=git, container_name=ovn_controller, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:36:40Z, 
vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, batch=17.1_20260112.1, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 20 03:39:58 localhost podman[94631]: unhealthy Feb 20 03:39:58 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:39:58 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Failed with result 'exit-code'. Feb 20 03:39:58 localhost podman[94638]: 2026-02-20 08:39:58.71228735 +0000 UTC m=+0.340399416 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, tcib_managed=true) Feb 
20 03:39:58 localhost podman[94638]: 2026-02-20 08:39:58.739999579 +0000 UTC m=+0.368111605 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step5, container_name=nova_compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, version=17.1.13, batch=17.1_20260112.1, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, build-date=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510) Feb 20 03:39:58 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. Feb 20 03:40:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. 
Feb 20 03:40:02 localhost podman[94733]: 2026-02-20 08:40:02.447292822 +0000 UTC m=+0.085331115 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, url=https://www.redhat.com, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, container_name=nova_migration_target, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.5, tcib_managed=true, config_id=tripleo_step4) Feb 20 03:40:02 localhost podman[94733]: 2026-02-20 08:40:02.920878987 +0000 UTC m=+0.558917270 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, name=rhosp-rhel9/openstack-nova-compute, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, 
org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, release=1766032510, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z) Feb 20 03:40:02 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. 
Feb 20 03:40:10 localhost podman[94889]: Feb 20 03:40:10 localhost podman[94889]: 2026-02-20 08:40:10.736485064 +0000 UTC m=+0.061260224 container create c49d9d21346d19170e50cfa3d7ebdb2c5844071f8dee0b46830d24ed295c467b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_panini, description=Red Hat Ceph Storage 7, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, release=1770267347, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, vcs-type=git, io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , ceph=True, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True) Feb 20 03:40:10 localhost systemd[1]: Started libpod-conmon-c49d9d21346d19170e50cfa3d7ebdb2c5844071f8dee0b46830d24ed295c467b.scope. Feb 20 03:40:10 localhost systemd[1]: Started libcrun container. 
Feb 20 03:40:10 localhost podman[94889]: 2026-02-20 08:40:10.806180242 +0000 UTC m=+0.130955432 container init c49d9d21346d19170e50cfa3d7ebdb2c5844071f8dee0b46830d24ed295c467b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_panini, CEPH_POINT_RELEASE=, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, build-date=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z, architecture=x86_64, com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, release=1770267347, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., distribution-scope=public, name=rhceph, GIT_BRANCH=main, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Feb 20 03:40:10 localhost podman[94889]: 2026-02-20 08:40:10.707291346 +0000 UTC m=+0.032066586 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 03:40:10 localhost podman[94889]: 2026-02-20 08:40:10.817378851 +0000 UTC m=+0.142154021 container start c49d9d21346d19170e50cfa3d7ebdb2c5844071f8dee0b46830d24ed295c467b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_panini, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2026-02-09T10:25:24Z, org.opencontainers.image.created=2026-02-09T10:25:24Z, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, vcs-type=git, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , distribution-scope=public, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, GIT_CLEAN=True, vendor=Red Hat, Inc., architecture=x86_64) Feb 20 03:40:10 localhost podman[94889]: 2026-02-20 08:40:10.817653078 +0000 UTC m=+0.142428278 container attach c49d9d21346d19170e50cfa3d7ebdb2c5844071f8dee0b46830d24ed295c467b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_panini, build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, 
architecture=x86_64, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_BRANCH=main, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.openshift.tags=rhceph ceph, name=rhceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1770267347, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, version=7, vendor=Red Hat, Inc.) Feb 20 03:40:10 localhost musing_panini[94904]: 167 167 Feb 20 03:40:10 localhost systemd[1]: libpod-c49d9d21346d19170e50cfa3d7ebdb2c5844071f8dee0b46830d24ed295c467b.scope: Deactivated successfully. 
Feb 20 03:40:10 localhost podman[94889]: 2026-02-20 08:40:10.822831336 +0000 UTC m=+0.147606556 container died c49d9d21346d19170e50cfa3d7ebdb2c5844071f8dee0b46830d24ed295c467b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_panini, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, io.buildah.version=1.42.2, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1770267347, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , GIT_CLEAN=True, version=7, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Feb 20 03:40:10 localhost podman[94909]: 2026-02-20 08:40:10.904697548 +0000 UTC m=+0.072736970 container remove c49d9d21346d19170e50cfa3d7ebdb2c5844071f8dee0b46830d24ed295c467b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_panini, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, name=rhceph, build-date=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , 
release=1770267347, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, distribution-scope=public, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, com.redhat.component=rhceph-container, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, ceph=True, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Feb 20 03:40:10 localhost systemd[1]: libpod-conmon-c49d9d21346d19170e50cfa3d7ebdb2c5844071f8dee0b46830d24ed295c467b.scope: Deactivated successfully. Feb 20 03:40:11 localhost podman[94930]: Feb 20 03:40:11 localhost podman[94930]: 2026-02-20 08:40:11.116423672 +0000 UTC m=+0.068575689 container create 8d6d8dda658277d0d71f7a330a7f272fa748eb6b00d9cd81bf795d9a94239612 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_poincare, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, GIT_CLEAN=True, CEPH_POINT_RELEASE=, vcs-type=git, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, io.openshift.expose-services=, release=1770267347, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , io.k8s.description=Red 
Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, architecture=x86_64) Feb 20 03:40:11 localhost systemd[1]: Started libpod-conmon-8d6d8dda658277d0d71f7a330a7f272fa748eb6b00d9cd81bf795d9a94239612.scope. Feb 20 03:40:11 localhost systemd[1]: Started libcrun container. Feb 20 03:40:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a51d313d3d2a1f8c49202de997ff76106f4cdbf7f2c615fe3f3d530f7a40e90/merged/rootfs supports timestamps until 2038 (0x7fffffff) Feb 20 03:40:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a51d313d3d2a1f8c49202de997ff76106f4cdbf7f2c615fe3f3d530f7a40e90/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Feb 20 03:40:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a51d313d3d2a1f8c49202de997ff76106f4cdbf7f2c615fe3f3d530f7a40e90/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Feb 20 03:40:11 localhost podman[94930]: 2026-02-20 08:40:11.179530114 +0000 UTC m=+0.131682151 container init 8d6d8dda658277d0d71f7a330a7f272fa748eb6b00d9cd81bf795d9a94239612 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_poincare, GIT_CLEAN=True, com.redhat.component=rhceph-container, GIT_BRANCH=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., io.buildah.version=1.42.2, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, vcs-type=git, build-date=2026-02-09T10:25:24Z, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , io.openshift.expose-services=, name=rhceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, release=1770267347, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 20 03:40:11 localhost podman[94930]: 2026-02-20 08:40:11.081712746 +0000 UTC m=+0.033864803 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 03:40:11 localhost podman[94930]: 2026-02-20 08:40:11.189934402 +0000 UTC m=+0.142086449 container start 8d6d8dda658277d0d71f7a330a7f272fa748eb6b00d9cd81bf795d9a94239612 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_poincare, org.opencontainers.image.created=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, version=7, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, description=Red Hat Ceph Storage 7, distribution-scope=public, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1770267347, 
org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, build-date=2026-02-09T10:25:24Z) Feb 20 03:40:11 localhost podman[94930]: 2026-02-20 08:40:11.19026339 +0000 UTC m=+0.142415427 container attach 8d6d8dda658277d0d71f7a330a7f272fa748eb6b00d9cd81bf795d9a94239612 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_poincare, description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, ceph=True, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, vcs-type=git, com.redhat.component=rhceph-container, release=1770267347, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , io.buildah.version=1.42.2, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64) Feb 20 03:40:11 localhost systemd[1]: var-lib-containers-storage-overlay-766b25faa8dabfcbbde92e27bbcf7224b97243cbec536d9a9d0bc86bd7356172-merged.mount: Deactivated successfully. 
Feb 20 03:40:12 localhost silly_poincare[94945]: [
Feb 20 03:40:12 localhost silly_poincare[94945]: {
Feb 20 03:40:12 localhost silly_poincare[94945]: "available": false,
Feb 20 03:40:12 localhost silly_poincare[94945]: "ceph_device": false,
Feb 20 03:40:12 localhost silly_poincare[94945]: "device_id": "QEMU_DVD-ROM_QM00001",
Feb 20 03:40:12 localhost silly_poincare[94945]: "lsm_data": {},
Feb 20 03:40:12 localhost silly_poincare[94945]: "lvs": [],
Feb 20 03:40:12 localhost silly_poincare[94945]: "path": "/dev/sr0",
Feb 20 03:40:12 localhost silly_poincare[94945]: "rejected_reasons": [
Feb 20 03:40:12 localhost silly_poincare[94945]: "Insufficient space (<5GB)",
Feb 20 03:40:12 localhost silly_poincare[94945]: "Has a FileSystem"
Feb 20 03:40:12 localhost silly_poincare[94945]: ],
Feb 20 03:40:12 localhost silly_poincare[94945]: "sys_api": {
Feb 20 03:40:12 localhost silly_poincare[94945]: "actuators": null,
Feb 20 03:40:12 localhost silly_poincare[94945]: "device_nodes": "sr0",
Feb 20 03:40:12 localhost silly_poincare[94945]: "human_readable_size": "482.00 KB",
Feb 20 03:40:12 localhost silly_poincare[94945]: "id_bus": "ata",
Feb 20 03:40:12 localhost silly_poincare[94945]: "model": "QEMU DVD-ROM",
Feb 20 03:40:12 localhost silly_poincare[94945]: "nr_requests": "2",
Feb 20 03:40:12 localhost silly_poincare[94945]: "partitions": {},
Feb 20 03:40:12 localhost silly_poincare[94945]: "path": "/dev/sr0",
Feb 20 03:40:12 localhost silly_poincare[94945]: "removable": "1",
Feb 20 03:40:12 localhost silly_poincare[94945]: "rev": "2.5+",
Feb 20 03:40:12 localhost silly_poincare[94945]: "ro": "0",
Feb 20 03:40:12 localhost silly_poincare[94945]: "rotational": "1",
Feb 20 03:40:12 localhost silly_poincare[94945]: "sas_address": "",
Feb 20 03:40:12 localhost silly_poincare[94945]: "sas_device_handle": "",
Feb 20 03:40:12 localhost silly_poincare[94945]: "scheduler_mode": "mq-deadline",
Feb 20 03:40:12 localhost silly_poincare[94945]: "sectors": 0,
Feb 20 03:40:12 localhost silly_poincare[94945]: "sectorsize": "2048",
Feb 20 03:40:12 localhost silly_poincare[94945]: "size": 493568.0,
Feb 20 03:40:12 localhost silly_poincare[94945]: "support_discard": "0",
Feb 20 03:40:12 localhost silly_poincare[94945]: "type": "disk",
Feb 20 03:40:12 localhost silly_poincare[94945]: "vendor": "QEMU"
Feb 20 03:40:12 localhost silly_poincare[94945]: }
Feb 20 03:40:12 localhost silly_poincare[94945]: }
Feb 20 03:40:12 localhost silly_poincare[94945]: ]
Feb 20 03:40:12 localhost systemd[1]: libpod-8d6d8dda658277d0d71f7a330a7f272fa748eb6b00d9cd81bf795d9a94239612.scope: Deactivated successfully.
Feb 20 03:40:12 localhost podman[94930]: 2026-02-20 08:40:12.152508861 +0000 UTC m=+1.104660918 container died 8d6d8dda658277d0d71f7a330a7f272fa748eb6b00d9cd81bf795d9a94239612 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_poincare, io.buildah.version=1.42.2, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., vcs-type=git, ceph=True, architecture=x86_64, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, maintainer=Guillaume Abrioux , RELEASE=main, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, version=7, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, build-date=2026-02-09T10:25:24Z, GIT_BRANCH=main,
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:40:12 localhost systemd[1]: tmp-crun.ao5VsJ.mount: Deactivated successfully. Feb 20 03:40:12 localhost systemd[1]: var-lib-containers-storage-overlay-3a51d313d3d2a1f8c49202de997ff76106f4cdbf7f2c615fe3f3d530f7a40e90-merged.mount: Deactivated successfully. Feb 20 03:40:12 localhost podman[96723]: 2026-02-20 08:40:12.235194935 +0000 UTC m=+0.077667932 container remove 8d6d8dda658277d0d71f7a330a7f272fa748eb6b00d9cd81bf795d9a94239612 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_poincare, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, distribution-scope=public, name=rhceph, io.openshift.expose-services=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., release=1770267347, org.opencontainers.image.created=2026-02-09T10:25:24Z, ceph=True, com.redhat.component=rhceph-container, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, RELEASE=main, io.buildah.version=1.42.2, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7) Feb 20 03:40:12 localhost systemd[1]: libpod-conmon-8d6d8dda658277d0d71f7a330a7f272fa748eb6b00d9cd81bf795d9a94239612.scope: Deactivated successfully. 
Feb 20 03:40:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:40:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:40:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:40:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:40:26 localhost systemd[1]: tmp-crun.Y7sMKY.mount: Deactivated successfully. Feb 20 03:40:26 localhost podman[96753]: 2026-02-20 08:40:26.447277523 +0000 UTC m=+0.084311079 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, build-date=2026-01-12T23:07:47Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, vcs-type=git, io.openshift.expose-services=) Feb 20 03:40:26 localhost podman[96755]: 2026-02-20 08:40:26.501622132 +0000 UTC m=+0.135802172 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, url=https://www.redhat.com, batch=17.1_20260112.1, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-ipmi, 
vcs-type=git, tcib_managed=true, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:30Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, 
vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team) Feb 20 03:40:26 localhost podman[96753]: 2026-02-20 08:40:26.506903493 +0000 UTC m=+0.143937019 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, io.buildah.version=1.41.5, url=https://www.redhat.com, batch=17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, 
org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, build-date=2026-01-12T23:07:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-compute, managed_by=tripleo_ansible) Feb 20 03:40:26 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. 
Feb 20 03:40:26 localhost podman[96752]: 2026-02-20 08:40:26.555118858 +0000 UTC m=+0.191825085 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, tcib_managed=true, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, 
com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, managed_by=tripleo_ansible, vcs-type=git, version=17.1.13, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:14Z, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 20 03:40:26 localhost podman[96755]: 2026-02-20 08:40:26.584757757 +0000 UTC m=+0.218937827 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, batch=17.1_20260112.1, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, build-date=2026-01-12T23:07:30Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public) Feb 20 03:40:26 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. 
Feb 20 03:40:26 localhost podman[96754]: 2026-02-20 08:40:26.597902128 +0000 UTC m=+0.233797383 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp-rhel9/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container) Feb 20 03:40:26 localhost podman[96754]: 2026-02-20 08:40:26.609941599 +0000 UTC m=+0.245836854 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, distribution-scope=public, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-cron, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, release=1766032510, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, version=17.1.13) Feb 20 03:40:26 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. 
Feb 20 03:40:26 localhost podman[96752]: 2026-02-20 08:40:26.752986272 +0000 UTC m=+0.389692489 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-qdrouterd-container, release=1766032510, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, url=https://www.redhat.com, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, batch=17.1_20260112.1, build-date=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:14Z, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, managed_by=tripleo_ansible, vcs-type=git, version=17.1.13) Feb 20 03:40:26 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:40:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:40:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:40:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:40:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:40:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:40:29 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 20 03:40:29 localhost recover_tripleo_nova_virtqemud[96884]: 63703 Feb 20 03:40:29 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 20 03:40:29 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Feb 20 03:40:29 localhost podman[96864]: 2026-02-20 08:40:29.484044423 +0000 UTC m=+0.105621906 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, distribution-scope=public, io.openshift.expose-services=, container_name=nova_compute, architecture=x86_64, batch=17.1_20260112.1, config_id=tripleo_step5, vendor=Red Hat, Inc., url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, release=1766032510, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Feb 20 03:40:29 localhost podman[96856]: 2026-02-20 08:40:29.45506462 +0000 UTC m=+0.083176058 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, url=https://www.redhat.com, container_name=collectd, name=rhosp-rhel9/openstack-collectd, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 collectd, 
com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, distribution-scope=public, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, build-date=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:10:15Z, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', 
'/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 20 03:40:29 localhost podman[96864]: 2026-02-20 08:40:29.512585994 +0000 UTC m=+0.134163477 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, version=17.1.13, vcs-type=git, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.5, build-date=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 20 03:40:29 localhost podman[96855]: 2026-02-20 08:40:29.512540332 +0000 UTC m=+0.145084758 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, build-date=2026-01-12T22:36:40Z, managed_by=tripleo_ansible, distribution-scope=public, batch=17.1_20260112.1, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, release=1766032510, vcs-type=git, name=rhosp-rhel9/openstack-ovn-controller, io.buildah.version=1.41.5, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 20 03:40:29 localhost podman[96855]: 2026-02-20 08:40:29.548699176 +0000 UTC m=+0.181243672 container exec_died 
5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, tcib_managed=true, io.buildah.version=1.41.5, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ovn-controller, vcs-type=git, version=17.1.13, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:36:40Z, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 
org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4) Feb 20 03:40:29 localhost podman[96855]: unhealthy Feb 20 03:40:29 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:40:29 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Failed with result 'exit-code'. Feb 20 03:40:29 localhost podman[96856]: 2026-02-20 08:40:29.585065745 +0000 UTC m=+0.213177133 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-collectd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:15Z, build-date=2026-01-12T22:10:15Z, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, container_name=collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13) Feb 20 03:40:29 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. Feb 20 03:40:29 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. 
Feb 20 03:40:29 localhost podman[96853]: 2026-02-20 08:40:29.568218666 +0000 UTC m=+0.205183760 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, architecture=x86_64, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, container_name=ovn_metadata_agent, build-date=2026-01-12T22:56:19Z, vcs-type=git, batch=17.1_20260112.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.5, config_id=tripleo_step4) Feb 20 03:40:29 localhost podman[96854]: 2026-02-20 08:40:29.671416217 +0000 UTC m=+0.305249368 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, config_id=tripleo_step3, batch=17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, tcib_managed=true, distribution-scope=public, container_name=iscsid, name=rhosp-rhel9/openstack-iscsid, vcs-type=git, 
vcs-ref=705339545363fec600102567c4e923938e0f43b3, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., build-date=2026-01-12T22:34:43Z) Feb 20 03:40:29 localhost podman[96853]: 2026-02-20 08:40:29.70115592 +0000 UTC m=+0.338120944 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 
(image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 
vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:56:19Z, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, version=17.1.13, build-date=2026-01-12T22:56:19Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.5, architecture=x86_64, io.openshift.expose-services=, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true) Feb 20 03:40:29 localhost podman[96853]: unhealthy Feb 20 03:40:29 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:40:29 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Failed with result 'exit-code'. 
Feb 20 03:40:29 localhost podman[96854]: 2026-02-20 08:40:29.754167824 +0000 UTC m=+0.388001035 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp-rhel9/openstack-iscsid, com.redhat.component=openstack-iscsid-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, config_id=tripleo_step3, 
tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, vcs-ref=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, version=17.1.13, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:40:29 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. Feb 20 03:40:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:40:33 localhost podman[96959]: 2026-02-20 08:40:33.437826097 +0000 UTC m=+0.077492977 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, url=https://www.redhat.com, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, 
distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, container_name=nova_migration_target, org.opencontainers.image.created=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.5, io.openshift.expose-services=, build-date=2026-01-12T23:32:04Z) Feb 20 03:40:33 localhost podman[96959]: 2026-02-20 08:40:33.797055952 +0000 UTC m=+0.436722882 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, 
org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, architecture=x86_64, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:32:04Z, tcib_managed=true, 
name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1) Feb 20 03:40:33 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:40:56 localhost sshd[96982]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:40:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:40:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:40:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:40:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:40:57 localhost systemd[1]: tmp-crun.Z7cL1H.mount: Deactivated successfully. 
Feb 20 03:40:57 localhost podman[96983]: 2026-02-20 08:40:57.453589305 +0000 UTC m=+0.089455205 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, build-date=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, tcib_managed=true, version=17.1.13, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=metrics_qdr, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, config_id=tripleo_step1, architecture=x86_64, io.buildah.version=1.41.5) Feb 20 03:40:57 localhost systemd[1]: tmp-crun.Gl5Ytu.mount: Deactivated successfully. Feb 20 03:40:57 localhost podman[96986]: 2026-02-20 08:40:57.514639392 +0000 UTC m=+0.145513120 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1766032510, container_name=logrotate_crond, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, version=17.1.13, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-cron-container, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, architecture=x86_64, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team) Feb 20 03:40:57 localhost podman[96986]: 2026-02-20 08:40:57.55618768 +0000 UTC m=+0.187061428 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, maintainer=OpenStack TripleO Team, release=1766032510, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-cron, 
com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, container_name=logrotate_crond, org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, managed_by=tripleo_ansible, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, 
version=17.1.13) Feb 20 03:40:57 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. Feb 20 03:40:57 localhost podman[96985]: 2026-02-20 08:40:57.555903902 +0000 UTC m=+0.189196645 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, org.opencontainers.image.created=2026-01-12T23:07:47Z, config_id=tripleo_step4, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, url=https://www.redhat.com, build-date=2026-01-12T23:07:47Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., io.buildah.version=1.41.5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, version=17.1.13, container_name=ceilometer_agent_compute) Feb 20 03:40:57 localhost podman[96987]: 2026-02-20 08:40:57.618204723 +0000 UTC m=+0.243153693 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, distribution-scope=public, architecture=x86_64, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1766032510, org.opencontainers.image.created=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team) Feb 20 03:40:57 localhost podman[96985]: 2026-02-20 08:40:57.638939566 +0000 UTC m=+0.272232319 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, release=1766032510, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-compute, url=https://www.redhat.com, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, config_id=tripleo_step4, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:07:47Z, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container) Feb 20 03:40:57 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. Feb 20 03:40:57 localhost podman[96983]: 2026-02-20 08:40:57.667639171 +0000 UTC m=+0.303505071 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step1, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, release=1766032510, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, url=https://www.redhat.com, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z) Feb 20 03:40:57 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. 
Feb 20 03:40:57 localhost podman[96987]: 2026-02-20 08:40:57.722335308 +0000 UTC m=+0.347284288 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:07:30Z, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-type=git, architecture=x86_64, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:30Z, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, config_id=tripleo_step4, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true) Feb 20 03:40:57 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. Feb 20 03:40:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:40:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:40:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:40:59 localhost systemd[1]: tmp-crun.caPXPu.mount: Deactivated successfully. Feb 20 03:40:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:40:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. 
Feb 20 03:40:59 localhost podman[97090]: 2026-02-20 08:40:59.800411023 +0000 UTC m=+0.087796561 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, version=17.1.13, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, container_name=nova_compute, name=rhosp-rhel9/openstack-nova-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, release=1766032510, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:40:59 localhost podman[97130]: 2026-02-20 08:40:59.876341456 +0000 UTC m=+0.068007073 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_id=tripleo_step4, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, 
config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, org.opencontainers.image.created=2026-01-12T22:56:19Z, distribution-scope=public, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.expose-services=, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
architecture=x86_64, version=17.1.13, io.buildah.version=1.41.5, tcib_managed=true, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Feb 20 03:40:59 localhost podman[97089]: 2026-02-20 08:40:59.801684247 +0000 UTC m=+0.092525917 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.41.5, release=1766032510, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-collectd-container, distribution-scope=public, url=https://www.redhat.com, name=rhosp-rhel9/openstack-collectd, container_name=collectd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, build-date=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 20 03:40:59 localhost podman[97130]: 2026-02-20 08:40:59.91172459 +0000 UTC m=+0.103390227 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, config_id=tripleo_step4, vendor=Red Hat, Inc., version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:56:19Z, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, io.buildah.version=1.41.5, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Feb 20 03:40:59 localhost podman[97130]: unhealthy Feb 20 03:40:59 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:40:59 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Failed with result 'exit-code'. Feb 20 03:40:59 localhost podman[97089]: 2026-02-20 08:40:59.936695676 +0000 UTC m=+0.227537266 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, distribution-scope=public, io.openshift.expose-services=, version=17.1.13, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.5, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, container_name=collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:40:59 localhost podman[97088]: 2026-02-20 08:40:59.846617015 +0000 UTC m=+0.139720186 container health_status 
5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, architecture=x86_64, name=rhosp-rhel9/openstack-ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team, version=17.1.13, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, container_name=ovn_controller, io.buildah.version=1.41.5, url=https://www.redhat.com, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 
ovn-controller, io.openshift.expose-services=, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z) Feb 20 03:40:59 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:40:59 localhost podman[97090]: 2026-02-20 08:40:59.953251207 +0000 UTC m=+0.240636715 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:32:04Z, container_name=nova_compute, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z) Feb 20 03:40:59 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. 
Feb 20 03:40:59 localhost podman[97132]: 2026-02-20 08:40:59.916143618 +0000 UTC m=+0.108922854 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1766032510, container_name=iscsid, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, batch=17.1_20260112.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp-rhel9/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, build-date=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:40:59 localhost podman[97088]: 2026-02-20 08:40:59.978643184 +0000 UTC m=+0.271746385 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp-rhel9/openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, vendor=Red Hat, Inc., release=1766032510, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.openshift.expose-services=, build-date=2026-01-12T22:36:40Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20260112.1, vcs-type=git, io.buildah.version=1.41.5) Feb 20 03:40:59 localhost podman[97088]: unhealthy Feb 20 03:40:59 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:40:59 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Failed with result 'exit-code'. 
Feb 20 03:40:59 localhost podman[97132]: 2026-02-20 08:40:59.998643047 +0000 UTC m=+0.191422263 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, batch=17.1_20260112.1, url=https://www.redhat.com, vcs-ref=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-iscsid-container, name=rhosp-rhel9/openstack-iscsid, vendor=Red Hat, Inc., 
build-date=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, container_name=iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, release=1766032510, version=17.1.13, maintainer=OpenStack TripleO Team) Feb 20 03:41:00 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. Feb 20 03:41:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:41:04 localhost systemd[1]: tmp-crun.tg81BI.mount: Deactivated successfully. 
Feb 20 03:41:04 localhost podman[97193]: 2026-02-20 08:41:04.441357225 +0000 UTC m=+0.080525727 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp-rhel9/openstack-nova-compute, url=https://www.redhat.com, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, batch=17.1_20260112.1, version=17.1.13, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible) Feb 20 03:41:04 localhost podman[97193]: 2026-02-20 08:41:04.818785727 +0000 UTC m=+0.457954259 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, build-date=2026-01-12T23:32:04Z, version=17.1.13, name=rhosp-rhel9/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:41:04 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:41:17 localhost sshd[97344]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:41:26 localhost sshd[97346]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:41:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:41:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. 
Feb 20 03:41:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:41:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:41:28 localhost podman[97350]: 2026-02-20 08:41:28.453052677 +0000 UTC m=+0.084296508 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-type=git, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, container_name=logrotate_crond, io.buildah.version=1.41.5, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.13, build-date=2026-01-12T22:10:15Z, architecture=x86_64, name=rhosp-rhel9/openstack-cron, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:41:28 localhost podman[97350]: 2026-02-20 08:41:28.4629014 +0000 UTC m=+0.094145231 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, distribution-scope=public, tcib_managed=true, vcs-type=git, io.buildah.version=1.41.5, managed_by=tripleo_ansible, version=17.1.13, build-date=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-cron-container, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, batch=17.1_20260112.1, config_id=tripleo_step4) Feb 20 03:41:28 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. 
Feb 20 03:41:28 localhost podman[97351]: 2026-02-20 08:41:28.515577794 +0000 UTC m=+0.145059538 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, io.buildah.version=1.41.5, config_id=tripleo_step4, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Feb 20 03:41:28 localhost podman[97348]: 2026-02-20 08:41:28.565971748 +0000 UTC m=+0.197334312 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, release=1766032510, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step1, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, 
vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Feb 20 03:41:28 localhost podman[97351]: 2026-02-20 08:41:28.588537969 +0000 UTC m=+0.218019733 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, distribution-scope=public, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, tcib_managed=true, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:30Z, release=1766032510, vendor=Red Hat, Inc., 
org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:07:30Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container) Feb 20 03:41:28 localhost 
systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. Feb 20 03:41:28 localhost podman[97349]: 2026-02-20 08:41:28.467162113 +0000 UTC m=+0.099015870 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, build-date=2026-01-12T23:07:47Z, io.openshift.expose-services=, managed_by=tripleo_ansible, distribution-scope=public, 
vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, version=17.1.13, name=rhosp-rhel9/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ceilometer_agent_compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com) Feb 20 03:41:28 localhost podman[97349]: 2026-02-20 08:41:28.658134714 +0000 UTC m=+0.289988481 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp-rhel9/openstack-ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., io.buildah.version=1.41.5, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, build-date=2026-01-12T23:07:47Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:07:47Z, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1) Feb 20 03:41:28 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. 
Feb 20 03:41:28 localhost podman[97348]: 2026-02-20 08:41:28.767976752 +0000 UTC m=+0.399339286 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, distribution-scope=public, tcib_managed=true, url=https://www.redhat.com, build-date=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, config_id=tripleo_step1, name=rhosp-rhel9/openstack-qdrouterd, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:14Z) Feb 20 03:41:28 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:41:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:41:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:41:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:41:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:41:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. 
Feb 20 03:41:30 localhost podman[97449]: 2026-02-20 08:41:30.455554519 +0000 UTC m=+0.096104803 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, config_id=tripleo_step4, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, architecture=x86_64, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, distribution-scope=public, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, url=https://www.redhat.com, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 20 03:41:30 localhost podman[97451]: 2026-02-20 08:41:30.504534035 +0000 UTC m=+0.139314245 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, release=1766032510, tcib_managed=true, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.buildah.version=1.41.5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, build-date=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=tripleo_step4, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c) Feb 20 03:41:30 localhost podman[97451]: 2026-02-20 08:41:30.519480093 +0000 UTC m=+0.154260343 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp-rhel9/openstack-ovn-controller, url=https://www.redhat.com, architecture=x86_64, io.buildah.version=1.41.5, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, 
managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-type=git, tcib_managed=true, vendor=Red Hat, Inc., org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, container_name=ovn_controller, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Feb 20 03:41:30 localhost podman[97451]: unhealthy Feb 20 03:41:30 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:41:30 localhost systemd[1]: 
5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Failed with result 'exit-code'. Feb 20 03:41:30 localhost podman[97450]: 2026-02-20 08:41:30.566606779 +0000 UTC m=+0.202864389 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:34:43Z, version=17.1.13, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, url=https://www.redhat.com, io.buildah.version=1.41.5, batch=17.1_20260112.1, distribution-scope=public, container_name=iscsid, name=rhosp-rhel9/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, config_id=tripleo_step3, vendor=Red Hat, Inc., vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Feb 20 03:41:30 localhost podman[97450]: 2026-02-20 08:41:30.573541793 +0000 UTC m=+0.209799373 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, vcs-ref=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-type=git, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, url=https://www.redhat.com, io.openshift.expose-services=, batch=17.1_20260112.1, distribution-scope=public, architecture=x86_64) Feb 20 03:41:30 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. 
Feb 20 03:41:30 localhost podman[97449]: 2026-02-20 08:41:30.626241688 +0000 UTC m=+0.266792012 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, 
org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, io.openshift.expose-services=, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-type=git, io.buildah.version=1.41.5, managed_by=tripleo_ansible, release=1766032510, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:41:30 localhost podman[97458]: 2026-02-20 08:41:30.665358291 +0000 UTC m=+0.294031349 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step5, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, 
batch=17.1_20260112.1, release=1766032510, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, build-date=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.13, managed_by=tripleo_ansible, io.openshift.expose-services=) Feb 20 03:41:30 localhost podman[97452]: 2026-02-20 08:41:30.717735888 +0000 UTC m=+0.349769835 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, managed_by=tripleo_ansible, version=17.1.13, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, com.redhat.component=openstack-collectd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z) Feb 20 03:41:30 localhost podman[97449]: unhealthy Feb 20 03:41:30 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:41:30 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Failed with result 'exit-code'. 
Feb 20 03:41:30 localhost podman[97458]: 2026-02-20 08:41:30.746595627 +0000 UTC m=+0.375268675 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step5, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, url=https://www.redhat.com, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 
'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 20 03:41:30 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. 
Feb 20 03:41:30 localhost podman[97452]: 2026-02-20 08:41:30.80408305 +0000 UTC m=+0.436116957 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, container_name=collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, build-date=2026-01-12T22:10:15Z, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, com.redhat.component=openstack-collectd-container, batch=17.1_20260112.1, url=https://www.redhat.com, name=rhosp-rhel9/openstack-collectd, vendor=Red Hat, Inc., io.buildah.version=1.41.5, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true) Feb 20 03:41:30 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:41:33 localhost sshd[97553]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:41:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. 
Feb 20 03:41:35 localhost podman[97555]: 2026-02-20 08:41:35.449199694 +0000 UTC m=+0.088474100 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:41:35 localhost podman[97555]: 2026-02-20 08:41:35.816229948 +0000 UTC m=+0.455504344 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-type=git, tcib_managed=true, config_id=tripleo_step4, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=nova_migration_target, url=https://www.redhat.com, version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, build-date=2026-01-12T23:32:04Z) Feb 20 03:41:35 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:41:52 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 20 03:41:52 localhost recover_tripleo_nova_virtqemud[97579]: 63703 Feb 20 03:41:52 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 20 03:41:52 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Feb 20 03:41:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:41:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:41:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:41:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:41:59 localhost podman[97583]: 2026-02-20 08:41:59.437915672 +0000 UTC m=+0.076820969 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, build-date=2026-01-12T23:07:30Z, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, managed_by=tripleo_ansible, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:30Z, description=Red Hat OpenStack 
Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 20 03:41:59 localhost podman[97582]: 2026-02-20 08:41:59.494779588 +0000 UTC m=+0.135511113 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, batch=17.1_20260112.1, managed_by=tripleo_ansible, io.buildah.version=1.41.5, release=1766032510, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-cron, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 20 03:41:59 localhost podman[97582]: 2026-02-20 08:41:59.527231873 +0000 UTC m=+0.167963388 container exec_died 
df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, build-date=2026-01-12T22:10:15Z, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, version=17.1.13, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond) Feb 20 03:41:59 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. Feb 20 03:41:59 localhost podman[97580]: 2026-02-20 08:41:59.546849216 +0000 UTC m=+0.187910250 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, release=1766032510, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, container_name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, managed_by=tripleo_ansible) Feb 20 03:41:59 localhost podman[97581]: 2026-02-20 08:41:59.604967255 +0000 UTC m=+0.246057990 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ceilometer-compute, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, version=17.1.13, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, io.buildah.version=1.41.5, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, architecture=x86_64, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:41:59 localhost podman[97583]: 2026-02-20 08:41:59.614815668 +0000 UTC m=+0.253720995 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi, architecture=x86_64, name=rhosp-rhel9/openstack-ceilometer-ipmi, 
io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:07:30Z, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, version=17.1.13, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://www.redhat.com, build-date=2026-01-12T23:07:30Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container) Feb 20 03:41:59 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. Feb 20 03:41:59 localhost podman[97581]: 2026-02-20 08:41:59.635909751 +0000 UTC m=+0.277000516 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, batch=17.1_20260112.1, release=1766032510, name=rhosp-rhel9/openstack-ceilometer-compute, config_id=tripleo_step4, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.5, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:47Z) Feb 20 03:41:59 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. 
Feb 20 03:41:59 localhost podman[97580]: 2026-02-20 08:41:59.7619492 +0000 UTC m=+0.403010174 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, architecture=x86_64, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, version=17.1.13, distribution-scope=public, vcs-type=git) Feb 20 03:41:59 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:42:00 localhost systemd[1]: tmp-crun.S3GISF.mount: Deactivated successfully. Feb 20 03:42:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:42:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:42:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:42:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:42:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:42:01 localhost systemd[1]: tmp-crun.kBhwMn.mount: Deactivated successfully. 
Feb 20 03:42:01 localhost podman[97680]: 2026-02-20 08:42:01.459474589 +0000 UTC m=+0.094802477 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, 
org.opencontainers.image.created=2026-01-12T22:56:19Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.openshift.expose-services=, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20260112.1, managed_by=tripleo_ansible, release=1766032510, build-date=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f) Feb 20 03:42:01 localhost podman[97680]: 2026-02-20 08:42:01.501917591 +0000 UTC m=+0.137245529 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.41.5, build-date=2026-01-12T22:56:19Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, url=https://www.redhat.com, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, batch=17.1_20260112.1, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 
io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, version=17.1.13) Feb 20 03:42:01 localhost podman[97680]: unhealthy Feb 20 03:42:01 localhost podman[97682]: 2026-02-20 08:42:01.513464769 +0000 UTC m=+0.143684942 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2026-01-12T22:36:40Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=ovn_controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, tcib_managed=true, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, version=17.1.13, 
cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ovn-controller, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com) Feb 20 03:42:01 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:42:01 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Failed with result 'exit-code'. Feb 20 03:42:01 localhost podman[97682]: 2026-02-20 08:42:01.526092135 +0000 UTC m=+0.156312378 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:36:40Z, release=1766032510, tcib_managed=true, name=rhosp-rhel9/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, build-date=2026-01-12T22:36:40Z, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20260112.1, io.buildah.version=1.41.5, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., architecture=x86_64, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public) Feb 20 03:42:01 localhost podman[97682]: unhealthy Feb 20 03:42:01 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:42:01 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Failed with result 'exit-code'. 
Feb 20 03:42:01 localhost podman[97687]: 2026-02-20 08:42:01.563570035 +0000 UTC m=+0.189780150 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=nova_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': 
['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, release=1766032510, vendor=Red Hat, Inc.) 
Feb 20 03:42:01 localhost podman[97687]: 2026-02-20 08:42:01.610859235 +0000 UTC m=+0.237069350 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-compute-container, release=1766032510, io.openshift.expose-services=, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, name=rhosp-rhel9/openstack-nova-compute, build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 
'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.13, maintainer=OpenStack TripleO Team) Feb 20 03:42:01 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. 
Feb 20 03:42:01 localhost podman[97683]: 2026-02-20 08:42:01.616183426 +0000 UTC m=+0.242716760 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, tcib_managed=true, name=rhosp-rhel9/openstack-collectd, config_id=tripleo_step3, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, version=17.1.13, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team) Feb 20 03:42:01 localhost podman[97681]: 2026-02-20 08:42:01.669984191 +0000 UTC m=+0.301196000 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp-rhel9/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, batch=17.1_20260112.1, build-date=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, distribution-scope=public, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 
17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, release=1766032510, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git) Feb 20 03:42:01 localhost podman[97683]: 2026-02-20 08:42:01.698809009 +0000 UTC m=+0.325342383 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp-rhel9/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack 
Platform 17.1 collectd, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, distribution-scope=public, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, 
tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:15Z, container_name=collectd, build-date=2026-01-12T22:10:15Z, config_id=tripleo_step3, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git) Feb 20 03:42:01 localhost podman[97681]: 2026-02-20 08:42:01.70784417 +0000 UTC m=+0.339056009 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=705339545363fec600102567c4e923938e0f43b3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step3, architecture=x86_64, batch=17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, io.buildah.version=1.41.5, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, container_name=iscsid, build-date=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, name=rhosp-rhel9/openstack-iscsid) Feb 20 03:42:01 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:42:01 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. Feb 20 03:42:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. 
Feb 20 03:42:06 localhost podman[97788]: 2026-02-20 08:42:06.448052098 +0000 UTC m=+0.087978896 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, name=rhosp-rhel9/openstack-nova-compute, batch=17.1_20260112.1, distribution-scope=public, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., url=https://www.redhat.com, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5) Feb 20 03:42:06 localhost podman[97788]: 2026-02-20 08:42:06.818260096 +0000 UTC m=+0.458186904 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2026-01-12T23:32:04Z, release=1766032510, managed_by=tripleo_ansible, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.41.5, vendor=Red Hat, Inc., url=https://www.redhat.com) Feb 20 03:42:06 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:42:08 localhost sshd[97811]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:42:20 localhost sshd[97891]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:42:26 localhost sshd[97893]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:42:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. 
Feb 20 03:42:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:42:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:42:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:42:30 localhost systemd[1]: tmp-crun.At9ukX.mount: Deactivated successfully. Feb 20 03:42:30 localhost podman[97895]: 2026-02-20 08:42:30.525986265 +0000 UTC m=+0.161056504 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, io.openshift.expose-services=, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:14Z, release=1766032510, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, batch=17.1_20260112.1, 
build-date=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Feb 20 03:42:30 localhost podman[97896]: 2026-02-20 08:42:30.559113538 +0000 UTC m=+0.191339321 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, org.opencontainers.image.created=2026-01-12T23:07:47Z, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, batch=17.1_20260112.1, build-date=2026-01-12T23:07:47Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, version=17.1.13, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, name=rhosp-rhel9/openstack-ceilometer-compute, release=1766032510, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, io.buildah.version=1.41.5, url=https://www.redhat.com, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc.) 
Feb 20 03:42:30 localhost podman[97897]: 2026-02-20 08:42:30.486100782 +0000 UTC m=+0.117049871 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-cron, batch=17.1_20260112.1, build-date=2026-01-12T22:10:15Z, version=17.1.13, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:15Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:42:30 localhost podman[97897]: 2026-02-20 08:42:30.619610081 +0000 UTC m=+0.250559100 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
container_name=logrotate_crond, com.redhat.component=openstack-cron-container, build-date=2026-01-12T22:10:15Z, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1766032510, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-cron, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.expose-services=, io.buildah.version=1.41.5, version=17.1.13, url=https://www.redhat.com, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 20 03:42:30 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. 
Feb 20 03:42:30 localhost podman[97898]: 2026-02-20 08:42:30.667049536 +0000 UTC m=+0.290848324 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp-rhel9/openstack-ceilometer-ipmi, version=17.1.13, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:30Z, batch=17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:07:30Z) Feb 20 03:42:30 localhost podman[97896]: 2026-02-20 08:42:30.69084191 +0000 UTC m=+0.323067703 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, io.buildah.version=1.41.5, build-date=2026-01-12T23:07:47Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20260112.1, managed_by=tripleo_ansible, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ceilometer-compute) Feb 20 03:42:30 localhost podman[97898]: 2026-02-20 08:42:30.698750961 +0000 UTC m=+0.322549819 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-ipmi, 
batch=17.1_20260112.1, release=1766032510, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, tcib_managed=true, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com) Feb 20 03:42:30 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. Feb 20 03:42:30 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. Feb 20 03:42:30 localhost podman[97895]: 2026-02-20 08:42:30.754770544 +0000 UTC m=+0.389840833 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, release=1766032510, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:14Z, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-qdrouterd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 20 03:42:30 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:42:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:42:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:42:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:42:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:42:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. 
Feb 20 03:42:32 localhost systemd[1]: tmp-crun.zuUYVF.mount: Deactivated successfully. Feb 20 03:42:32 localhost podman[98008]: 2026-02-20 08:42:32.470435508 +0000 UTC m=+0.090168585 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, build-date=2026-01-12T23:32:04Z, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, version=17.1.13, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute) Feb 20 03:42:32 localhost podman[98008]: 2026-02-20 08:42:32.499799961 +0000 UTC m=+0.119533038 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, build-date=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, url=https://www.redhat.com, container_name=nova_compute, version=17.1.13, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git) Feb 20 03:42:32 localhost systemd[1]: tmp-crun.6vtA3z.mount: Deactivated successfully. Feb 20 03:42:32 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. Feb 20 03:42:32 localhost podman[97995]: 2026-02-20 08:42:32.523011469 +0000 UTC m=+0.158346361 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, name=rhosp-rhel9/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, com.redhat.component=openstack-iscsid-container, version=17.1.13, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, url=https://www.redhat.com, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, 
managed_by=tripleo_ansible, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, container_name=iscsid, io.openshift.expose-services=, build-date=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Feb 20 03:42:32 localhost podman[97994]: 2026-02-20 08:42:32.450078065 +0000 UTC m=+0.090558994 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, managed_by=tripleo_ansible, 
maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, build-date=2026-01-12T22:56:19Z, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, distribution-scope=public, release=1766032510, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, config_id=tripleo_step4, url=https://www.redhat.com, container_name=ovn_metadata_agent, io.buildah.version=1.41.5, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Feb 20 03:42:32 localhost systemd[1]: tmp-crun.VvUT7W.mount: Deactivated successfully. Feb 20 03:42:32 localhost podman[97996]: 2026-02-20 08:42:32.567936627 +0000 UTC m=+0.201595005 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, build-date=2026-01-12T22:36:40Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 
io.openshift.expose-services=, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, url=https://www.redhat.com, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ovn-controller, io.buildah.version=1.41.5, vendor=Red Hat, Inc., release=1766032510, managed_by=tripleo_ansible, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, version=17.1.13, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:42:32 localhost podman[97994]: 2026-02-20 08:42:32.597761342 +0000 UTC m=+0.238242261 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, vendor=Red Hat, Inc., version=17.1.13, batch=17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, release=1766032510, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, 
architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.5, config_id=tripleo_step4, url=https://www.redhat.com, io.openshift.expose-services=, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 20 03:42:32 localhost podman[97994]: unhealthy Feb 20 03:42:32 localhost podman[97995]: 2026-02-20 08:42:32.610873681 +0000 UTC m=+0.246208623 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, batch=17.1_20260112.1, build-date=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, config_id=tripleo_step3, tcib_managed=true, release=1766032510, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible) Feb 20 03:42:32 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:42:32 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Failed with result 'exit-code'. Feb 20 03:42:32 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. 
Feb 20 03:42:32 localhost podman[97996]: 2026-02-20 08:42:32.637728837 +0000 UTC m=+0.271387275 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, container_name=ovn_controller, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:36:40Z, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container, release=1766032510, description=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 20 03:42:32 localhost podman[97996]: unhealthy Feb 20 03:42:32 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:42:32 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Failed with result 'exit-code'. Feb 20 03:42:32 localhost podman[98002]: 2026-02-20 08:42:32.613767969 +0000 UTC m=+0.243835301 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:15Z, release=1766032510, distribution-scope=public, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-collectd, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, container_name=collectd, build-date=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, architecture=x86_64, config_id=tripleo_step3) Feb 20 03:42:32 localhost podman[98002]: 2026-02-20 08:42:32.69672128 +0000 UTC m=+0.326788642 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2026-01-12T22:10:15Z, architecture=x86_64, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
collectd, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.13, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20260112.1, vcs-type=git, distribution-scope=public, container_name=collectd, 
managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-collectd, io.buildah.version=1.41.5, io.openshift.expose-services=, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd) Feb 20 03:42:32 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:42:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:42:37 localhost podman[98098]: 2026-02-20 08:42:37.439564698 +0000 UTC m=+0.078850943 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, release=1766032510, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, build-date=2026-01-12T23:32:04Z, io.openshift.expose-services=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.5, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=nova_migration_target, vendor=Red Hat, Inc., architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:42:37 localhost podman[98098]: 2026-02-20 08:42:37.797920651 +0000 UTC m=+0.437206856 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, architecture=x86_64, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 
nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, com.redhat.component=openstack-nova-compute-container, build-date=2026-01-12T23:32:04Z, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 
nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.) Feb 20 03:42:37 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:42:43 localhost sshd[98122]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:43:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:43:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:43:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:43:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:43:01 localhost podman[98124]: 2026-02-20 08:43:01.439996811 +0000 UTC m=+0.079706216 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.13, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, config_id=tripleo_step1, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, vcs-type=git, tcib_managed=true, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:43:01 localhost podman[98125]: 2026-02-20 08:43:01.493449226 +0000 UTC m=+0.129168335 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_id=tripleo_step4, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2026-01-12T23:07:47Z, release=1766032510, distribution-scope=public, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:47Z, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:43:01 localhost podman[98132]: 2026-02-20 08:43:01.565008733 +0000 UTC m=+0.193198261 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, version=17.1.13, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, release=1766032510, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2026-01-12T23:07:30Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ceilometer-ipmi, distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, vcs-type=git) Feb 20 03:43:01 localhost podman[98125]: 2026-02-20 08:43:01.574005293 +0000 UTC m=+0.209724372 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.5, build-date=2026-01-12T23:07:47Z, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:47Z, container_name=ceilometer_agent_compute, io.openshift.expose-services=, distribution-scope=public, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1766032510, managed_by=tripleo_ansible, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team) Feb 20 03:43:01 localhost podman[98126]: 2026-02-20 08:43:01.606169411 +0000 UTC m=+0.237132963 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 
(image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., version=17.1.13, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public) Feb 20 03:43:01 localhost podman[98126]: 2026-02-20 08:43:01.617728959 +0000 UTC m=+0.248692491 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, name=rhosp-rhel9/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, container_name=logrotate_crond, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z, vcs-type=git, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510) Feb 20 03:43:01 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. 
Feb 20 03:43:01 localhost podman[98124]: 2026-02-20 08:43:01.633676144 +0000 UTC m=+0.273385499 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, name=rhosp-rhel9/openstack-qdrouterd, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, vcs-type=git, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, release=1766032510, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 20 03:43:01 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. Feb 20 03:43:01 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:43:01 localhost podman[98132]: 2026-02-20 08:43:01.676859615 +0000 UTC m=+0.305049113 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:30Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., build-date=2026-01-12T23:07:30Z, name=rhosp-rhel9/openstack-ceilometer-ipmi, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi) Feb 20 03:43:01 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. Feb 20 03:43:02 localhost systemd[1]: tmp-crun.NLANUf.mount: Deactivated successfully. Feb 20 03:43:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. 
Feb 20 03:43:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:43:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:43:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:43:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:43:03 localhost podman[98225]: 2026-02-20 08:43:03.454474191 +0000 UTC m=+0.091558942 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, tcib_managed=true, release=1766032510, distribution-scope=public, name=rhosp-rhel9/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, container_name=iscsid, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.5, build-date=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid) Feb 20 03:43:03 localhost podman[98225]: 2026-02-20 08:43:03.466690627 +0000 UTC m=+0.103775358 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:34:43Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, managed_by=tripleo_ansible, build-date=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, container_name=iscsid, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, batch=17.1_20260112.1, release=1766032510, config_id=tripleo_step3, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:43:03 localhost systemd[1]: tmp-crun.4Fh4jJ.mount: Deactivated successfully. 
Feb 20 03:43:03 localhost podman[98224]: 2026-02-20 08:43:03.514737647 +0000 UTC m=+0.155552797 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, container_name=ovn_metadata_agent, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, url=https://www.redhat.com, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, release=1766032510, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 20 03:43:03 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. 
Feb 20 03:43:03 localhost podman[98226]: 2026-02-20 08:43:03.562909421 +0000 UTC m=+0.198482132 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, name=rhosp-rhel9/openstack-ovn-controller, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., build-date=2026-01-12T22:36:40Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, url=https://www.redhat.com, distribution-scope=public, config_id=tripleo_step4, version=17.1.13, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, batch=17.1_20260112.1, architecture=x86_64, container_name=ovn_controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 20 03:43:03 localhost podman[98227]: 2026-02-20 08:43:03.609506124 +0000 UTC m=+0.241573931 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20260112.1, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, name=rhosp-rhel9/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, version=17.1.13, config_id=tripleo_step3, container_name=collectd, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, com.redhat.component=openstack-collectd-container, release=1766032510) Feb 20 03:43:03 localhost podman[98227]: 2026-02-20 08:43:03.619598283 +0000 UTC m=+0.251666120 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.41.5, 
config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, batch=17.1_20260112.1, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp-rhel9/openstack-collectd, container_name=collectd, tcib_managed=true, distribution-scope=public, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible) Feb 20 03:43:03 localhost podman[98226]: 2026-02-20 08:43:03.627963386 +0000 UTC m=+0.263536087 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.13, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:36:40Z, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:36:40Z, name=rhosp-rhel9/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, release=1766032510, vcs-type=git, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4) Feb 20 03:43:03 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:43:03 localhost podman[98226]: unhealthy Feb 20 03:43:03 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:43:03 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Failed with result 'exit-code'. Feb 20 03:43:03 localhost podman[98224]: 2026-02-20 08:43:03.681957745 +0000 UTC m=+0.322772855 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_id=tripleo_step4, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, io.buildah.version=1.41.5, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, batch=17.1_20260112.1, container_name=ovn_metadata_agent, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, distribution-scope=public) Feb 20 03:43:03 localhost podman[98224]: unhealthy Feb 20 03:43:03 localhost systemd[1]: 
34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:43:03 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Failed with result 'exit-code'. Feb 20 03:43:03 localhost podman[98233]: 2026-02-20 08:43:03.76619622 +0000 UTC m=+0.395729829 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2026-01-12T23:32:04Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, io.buildah.version=1.41.5, batch=17.1_20260112.1, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-compute, tcib_managed=true) Feb 20 03:43:03 localhost podman[98233]: 2026-02-20 08:43:03.8194601 +0000 UTC m=+0.448993709 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, 
com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20260112.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:32:04Z, container_name=nova_compute, io.buildah.version=1.41.5, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, vcs-type=git, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, tcib_managed=true, release=1766032510, distribution-scope=public) Feb 20 03:43:03 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. Feb 20 03:43:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. 
Feb 20 03:43:08 localhost podman[98329]: 2026-02-20 08:43:08.47116471 +0000 UTC m=+0.110733573 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:32:04Z, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.buildah.version=1.41.5, config_id=tripleo_step4, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.13, vcs-type=git, architecture=x86_64, distribution-scope=public, tcib_managed=true) Feb 20 03:43:08 localhost podman[98329]: 2026-02-20 08:43:08.847765379 +0000 UTC m=+0.487334272 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, config_id=tripleo_step4, io.buildah.version=1.41.5, build-date=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-compute, url=https://www.redhat.com, container_name=nova_migration_target, version=17.1.13, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:43:08 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. 
Feb 20 03:43:17 localhost podman[98453]: 2026-02-20 08:43:17.888516896 +0000 UTC m=+0.094505910 container exec 17bcd3d9a6436ea04160ed4c4e04d839e51c8678402dc4e38301f79a98b3dc30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, ceph=True, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, version=7, vcs-type=git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, distribution-scope=public, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, RELEASE=main, GIT_BRANCH=main, architecture=x86_64, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, release=1770267347, io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, CEPH_POINT_RELEASE=) Feb 20 03:43:17 localhost podman[98453]: 2026-02-20 08:43:17.996134325 +0000 UTC m=+0.202123309 container exec_died 17bcd3d9a6436ea04160ed4c4e04d839e51c8678402dc4e38301f79a98b3dc30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, io.buildah.version=1.42.2, vendor=Red Hat, Inc., version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, 
vcs-type=git, CEPH_POINT_RELEASE=, architecture=x86_64, RELEASE=main, build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, ceph=True) Feb 20 03:43:21 localhost sshd[98596]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:43:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:43:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:43:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:43:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. 
Feb 20 03:43:32 localhost podman[98601]: 2026-02-20 08:43:32.476115083 +0000 UTC m=+0.095593029 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:07:30Z, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5, batch=17.1_20260112.1, tcib_managed=true, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, architecture=x86_64, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, release=1766032510, build-date=2026-01-12T23:07:30Z, name=rhosp-rhel9/openstack-ceilometer-ipmi) Feb 20 03:43:32 localhost systemd[1]: tmp-crun.m9IFan.mount: Deactivated successfully. Feb 20 03:43:32 localhost podman[98600]: 2026-02-20 08:43:32.525445598 +0000 UTC m=+0.150105452 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z, container_name=logrotate_crond, org.opencontainers.image.created=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, io.openshift.expose-services=, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4) Feb 20 03:43:32 localhost podman[98600]: 2026-02-20 08:43:32.535652081 +0000 UTC m=+0.160311935 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.expose-services=, io.buildah.version=1.41.5, architecture=x86_64, distribution-scope=public, build-date=2026-01-12T22:10:15Z, version=17.1.13, name=rhosp-rhel9/openstack-cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, vcs-type=git, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20260112.1) Feb 20 03:43:32 localhost 
systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. Feb 20 03:43:32 localhost podman[98601]: 2026-02-20 08:43:32.610776522 +0000 UTC m=+0.230254488 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.openshift.expose-services=, batch=17.1_20260112.1, release=1766032510, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:07:30Z, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, vcs-type=git, architecture=x86_64, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true) Feb 20 03:43:32 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. Feb 20 03:43:32 localhost podman[98598]: 2026-02-20 08:43:32.626776089 +0000 UTC m=+0.254142076 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, version=17.1.13, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, vcs-type=git, build-date=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:14Z, release=1766032510, config_id=tripleo_step1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, url=https://www.redhat.com) Feb 20 03:43:32 localhost podman[98599]: 2026-02-20 08:43:32.665968234 +0000 UTC m=+0.293249128 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:07:47Z, vcs-type=git, config_id=tripleo_step4, version=17.1.13, managed_by=tripleo_ansible, io.openshift.expose-services=, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:47Z, name=rhosp-rhel9/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container) Feb 20 03:43:32 localhost podman[98599]: 2026-02-20 08:43:32.699825736 +0000 UTC m=+0.327106620 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, release=1766032510, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ceilometer-compute, io.buildah.version=1.41.5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., tcib_managed=true, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, url=https://www.redhat.com, version=17.1.13) Feb 20 03:43:32 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. 
Feb 20 03:43:32 localhost podman[98598]: 2026-02-20 08:43:32.826876334 +0000 UTC m=+0.454242301 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:14Z, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, release=1766032510, batch=17.1_20260112.1, io.buildah.version=1.41.5, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-qdrouterd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, version=17.1.13, maintainer=OpenStack TripleO Team) Feb 20 03:43:32 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:43:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:43:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:43:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:43:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:43:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. 
Feb 20 03:43:34 localhost podman[98697]: 2026-02-20 08:43:34.453283158 +0000 UTC m=+0.087665058 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, architecture=x86_64, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1766032510, vcs-ref=705339545363fec600102567c4e923938e0f43b3, url=https://www.redhat.com, name=rhosp-rhel9/openstack-iscsid, maintainer=OpenStack TripleO Team, 
org.opencontainers.image.created=2026-01-12T22:34:43Z, tcib_managed=true, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, config_id=tripleo_step3, batch=17.1_20260112.1, build-date=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=iscsid, vendor=Red Hat, Inc., distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible) Feb 20 03:43:34 localhost podman[98697]: 2026-02-20 08:43:34.466577512 +0000 UTC m=+0.100959432 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, name=rhosp-rhel9/openstack-iscsid, tcib_managed=true, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., config_id=tripleo_step3, maintainer=OpenStack TripleO Team, version=17.1.13, architecture=x86_64, release=1766032510, batch=17.1_20260112.1, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container) Feb 20 03:43:34 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. 
Feb 20 03:43:34 localhost podman[98696]: 2026-02-20 08:43:34.510439892 +0000 UTC m=+0.148678275 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, version=17.1.13, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, build-date=2026-01-12T22:56:19Z, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:56:19Z, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, io.buildah.version=1.41.5, config_id=tripleo_step4, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:43:34 localhost podman[98696]: 2026-02-20 08:43:34.549955015 +0000 UTC m=+0.188193428 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, build-date=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, org.opencontainers.image.created=2026-01-12T22:56:19Z, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510) Feb 20 03:43:34 localhost podman[98696]: unhealthy Feb 20 03:43:34 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:43:34 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Failed with result 'exit-code'. Feb 20 03:43:34 localhost podman[98698]: 2026-02-20 08:43:34.556455078 +0000 UTC m=+0.189262056 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, url=https://www.redhat.com, build-date=2026-01-12T22:36:40Z, architecture=x86_64, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:36:40Z, distribution-scope=public, summary=Red Hat OpenStack Platform 
17.1 ovn-controller, managed_by=tripleo_ansible, version=17.1.13, vcs-type=git, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ovn-controller, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, tcib_managed=true, release=1766032510, io.buildah.version=1.41.5, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c) Feb 20 03:43:34 localhost podman[98705]: 2026-02-20 08:43:34.617972908 +0000 UTC m=+0.245339871 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, managed_by=tripleo_ansible, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, version=17.1.13, container_name=nova_compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 
'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, build-date=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., release=1766032510, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
nova-compute, name=rhosp-rhel9/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-type=git, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 20 03:43:34 localhost podman[98698]: 2026-02-20 08:43:34.637453497 +0000 UTC m=+0.270260535 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z, io.openshift.expose-services=, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.5, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, release=1766032510, batch=17.1_20260112.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.created=2026-01-12T22:36:40Z, distribution-scope=public, container_name=ovn_controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git) Feb 20 03:43:34 localhost podman[98699]: 2026-02-20 08:43:34.675588373 +0000 UTC m=+0.305462143 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, name=rhosp-rhel9/openstack-collectd, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.buildah.version=1.41.5, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=collectd, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, release=1766032510) Feb 20 03:43:34 localhost podman[98699]: 2026-02-20 08:43:34.684609035 +0000 UTC m=+0.314482795 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, config_id=tripleo_step3, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
collectd, io.openshift.expose-services=, tcib_managed=true, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:15Z, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, vcs-type=git, name=rhosp-rhel9/openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Feb 20 03:43:34 localhost podman[98705]: 2026-02-20 08:43:34.695217287 +0000 UTC m=+0.322584250 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, url=https://www.redhat.com, vcs-type=git, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, io.openshift.expose-services=, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510) Feb 20 03:43:34 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:43:34 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. 
Feb 20 03:43:34 localhost podman[98698]: unhealthy Feb 20 03:43:34 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:43:34 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Failed with result 'exit-code'. Feb 20 03:43:35 localhost sshd[98802]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:43:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:43:39 localhost podman[98804]: 2026-02-20 08:43:39.43408902 +0000 UTC m=+0.073851599 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, batch=17.1_20260112.1, architecture=x86_64, build-date=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, container_name=nova_migration_target, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, maintainer=OpenStack TripleO Team) Feb 20 03:43:39 localhost podman[98804]: 2026-02-20 08:43:39.781634025 +0000 UTC m=+0.421396594 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, url=https://www.redhat.com, 
konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, distribution-scope=public, batch=17.1_20260112.1, 
container_name=nova_migration_target)
Feb 20 03:43:39 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully.
Feb 20 03:44:02 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Feb 20 03:44:02 localhost recover_tripleo_nova_virtqemud[98828]: 63703
Feb 20 03:44:02 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Feb 20 03:44:02 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Feb 20 03:44:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.
Feb 20 03:44:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.
Feb 20 03:44:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.
Feb 20 03:44:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.
Feb 20 03:44:03 localhost podman[98830]: 2026-02-20 08:44:03.45060015 +0000 UTC m=+0.088303228 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:07:47Z, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-compute, release=1766032510, container_name=ceilometer_agent_compute, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute) Feb 20 03:44:03 localhost podman[98829]: 2026-02-20 08:44:03.49767626 +0000 UTC m=+0.137911716 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, config_id=tripleo_step1, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., architecture=x86_64, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:14Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, vcs-type=git) Feb 20 03:44:03 localhost podman[98830]: 2026-02-20 08:44:03.508774212 +0000 UTC m=+0.146477310 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 
ceilometer-compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:07:47Z, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ceilometer-compute, release=1766032510, container_name=ceilometer_agent_compute, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, architecture=x86_64, batch=17.1_20260112.1, vendor=Red Hat, Inc., io.buildah.version=1.41.5, url=https://www.redhat.com, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 20 03:44:03 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. Feb 20 03:44:03 localhost podman[98837]: 2026-02-20 08:44:03.5579254 +0000 UTC m=+0.187007232 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, release=1766032510, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, name=rhosp-rhel9/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 
'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, tcib_managed=true, build-date=2026-01-12T23:07:30Z, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, io.openshift.expose-services=) Feb 20 03:44:03 localhost podman[98831]: 2026-02-20 08:44:03.604902987 +0000 UTC m=+0.237335255 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, url=https://www.redhat.com, name=rhosp-rhel9/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., release=1766032510, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, io.buildah.version=1.41.5, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, build-date=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, vcs-type=git) Feb 20 03:44:03 localhost podman[98837]: 2026-02-20 08:44:03.636951696 +0000 UTC m=+0.266033588 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, 
org.opencontainers.image.created=2026-01-12T23:07:30Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, 
build-date=2026-01-12T23:07:30Z, architecture=x86_64, io.openshift.expose-services=, release=1766032510, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.41.5, vcs-type=git, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc.) Feb 20 03:44:03 localhost podman[98831]: 2026-02-20 08:44:03.637168091 +0000 UTC m=+0.269600359 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., batch=17.1_20260112.1, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
description=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, architecture=x86_64, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, io.openshift.expose-services=, config_id=tripleo_step4, managed_by=tripleo_ansible, container_name=logrotate_crond, url=https://www.redhat.com) Feb 20 03:44:03 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. Feb 20 03:44:03 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. 
Feb 20 03:44:03 localhost podman[98829]: 2026-02-20 08:44:03.705052154 +0000 UTC m=+0.345287610 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, 
tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, release=1766032510, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.created=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 20 03:44:03 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:44:04 localhost systemd[1]: tmp-crun.P3Gvdf.mount: Deactivated successfully. Feb 20 03:44:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:44:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:44:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:44:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:44:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. 
Feb 20 03:44:05 localhost podman[98925]: 2026-02-20 08:44:05.440811361 +0000 UTC m=+0.076922240 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, managed_by=tripleo_ansible, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., vcs-type=git, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, release=1766032510, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.5, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:44:05 localhost systemd[1]: tmp-crun.S7D3Um.mount: Deactivated successfully. 
Feb 20 03:44:05 localhost podman[98925]: 2026-02-20 08:44:05.460602491 +0000 UTC m=+0.096713380 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2026-01-12T22:56:19Z, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-type=git, distribution-scope=public, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, url=https://www.redhat.com, batch=17.1_20260112.1, release=1766032510) Feb 20 03:44:05 localhost podman[98925]: unhealthy Feb 20 03:44:05 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:44:05 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Failed with result 'exit-code'. 
Feb 20 03:44:05 localhost podman[98931]: 2026-02-20 08:44:05.497173592 +0000 UTC m=+0.124477041 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, container_name=collectd, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.5, architecture=x86_64, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
com.redhat.component=openstack-collectd-container, batch=17.1_20260112.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-collectd, build-date=2026-01-12T22:10:15Z, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team) Feb 20 03:44:05 localhost podman[98931]: 2026-02-20 08:44:05.5058695 +0000 UTC m=+0.133172959 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, architecture=x86_64, build-date=2026-01-12T22:10:15Z, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, 
io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, release=1766032510, com.redhat.component=openstack-collectd-container, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Feb 20 03:44:05 localhost podman[98927]: 2026-02-20 08:44:05.462186486 +0000 UTC m=+0.090932567 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, 
description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ovn-controller, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, url=https://www.redhat.com, release=1766032510, vcs-type=git, batch=17.1_20260112.1, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.created=2026-01-12T22:36:40Z, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, io.openshift.expose-services=) Feb 20 03:44:05 localhost systemd[1]: 
ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:44:05 localhost podman[98927]: 2026-02-20 08:44:05.54195616 +0000 UTC m=+0.170702231 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ovn-controller, url=https://www.redhat.com, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:36:40Z, release=1766032510, config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.41.5, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., 
version=17.1.13, batch=17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c) Feb 20 03:44:05 localhost podman[98927]: unhealthy Feb 20 03:44:05 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:44:05 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Failed with result 'exit-code'. Feb 20 03:44:05 localhost podman[98926]: 2026-02-20 08:44:05.546266717 +0000 UTC m=+0.178844856 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:34:43Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, vendor=Red Hat, Inc., batch=17.1_20260112.1, com.redhat.component=openstack-iscsid-container, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, name=rhosp-rhel9/openstack-iscsid, managed_by=tripleo_ansible, vcs-ref=705339545363fec600102567c4e923938e0f43b3, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, 
config_id=tripleo_step3, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3) Feb 20 03:44:05 localhost podman[98934]: 2026-02-20 08:44:05.602887035 +0000 UTC m=+0.227718678 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, io.openshift.expose-services=, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, build-date=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, name=rhosp-rhel9/openstack-nova-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, batch=17.1_20260112.1, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, container_name=nova_compute, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:44:05 localhost podman[98926]: 2026-02-20 08:44:05.628789054 +0000 UTC m=+0.261367263 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, version=17.1.13, build-date=2026-01-12T22:34:43Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1766032510, io.buildah.version=1.41.5, config_id=tripleo_step3, name=rhosp-rhel9/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-type=git, vcs-ref=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, batch=17.1_20260112.1) Feb 20 03:44:05 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. 
Feb 20 03:44:05 localhost podman[98934]: 2026-02-20 08:44:05.680459208 +0000 UTC m=+0.305290761 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, vendor=Red Hat, Inc., release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, build-date=2026-01-12T23:32:04Z, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, batch=17.1_20260112.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, version=17.1.13, distribution-scope=public, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 20 03:44:05 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. Feb 20 03:44:07 localhost sshd[99030]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:44:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. 
Feb 20 03:44:10 localhost podman[99032]: 2026-02-20 08:44:10.443561379 +0000 UTC m=+0.078682380 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.5, batch=17.1_20260112.1, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, com.redhat.component=openstack-nova-compute-container, vcs-type=git, url=https://www.redhat.com, distribution-scope=public, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, container_name=nova_migration_target, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:44:10 localhost podman[99032]: 2026-02-20 08:44:10.835853267 +0000 UTC m=+0.470974228 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, org.opencontainers.image.created=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, io.openshift.expose-services=, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.13, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, distribution-scope=public, container_name=nova_migration_target, config_id=tripleo_step4, batch=17.1_20260112.1, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.5) Feb 20 03:44:10 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:44:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:44:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. 
Feb 20 03:44:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:44:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:44:34 localhost systemd[1]: tmp-crun.2AKE7l.mount: Deactivated successfully. Feb 20 03:44:34 localhost podman[99133]: 2026-02-20 08:44:34.519301901 +0000 UTC m=+0.153503560 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1766032510, url=https://www.redhat.com, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-qdrouterd, managed_by=tripleo_ansible, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.created=2026-01-12T22:10:14Z, container_name=metrics_qdr, batch=17.1_20260112.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.buildah.version=1.41.5, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64) Feb 20 03:44:34 localhost podman[99135]: 2026-02-20 08:44:34.559979035 +0000 UTC m=+0.191557085 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, architecture=x86_64, container_name=logrotate_crond, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 
'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, version=17.1.13, build-date=2026-01-12T22:10:15Z, io.openshift.expose-services=, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-cron, release=1766032510, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron) Feb 20 03:44:34 localhost podman[99135]: 2026-02-20 08:44:34.56811956 +0000 UTC m=+0.199697610 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, 
batch=17.1_20260112.1, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, name=rhosp-rhel9/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, version=17.1.13, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.created=2026-01-12T22:10:15Z, architecture=x86_64) Feb 20 03:44:34 localhost podman[99134]: 2026-02-20 08:44:34.477257735 +0000 UTC m=+0.110679237 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, config_id=tripleo_step4, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, release=1766032510, vendor=Red Hat, Inc., vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z) Feb 20 03:44:34 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. 
Feb 20 03:44:34 localhost podman[99134]: 2026-02-20 08:44:34.611776623 +0000 UTC m=+0.245198055 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, version=17.1.13, io.openshift.expose-services=, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, container_name=ceilometer_agent_compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, release=1766032510, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:07:47Z, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:44:34 localhost podman[99136]: 2026-02-20 08:44:34.621725499 +0000 UTC m=+0.247647680 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-type=git, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, vendor=Red Hat, Inc., release=1766032510, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi, version=17.1.13, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, 
build-date=2026-01-12T23:07:30Z, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:44:34 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. 
Feb 20 03:44:34 localhost podman[99136]: 2026-02-20 08:44:34.704835968 +0000 UTC m=+0.330758169 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, release=1766032510, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:07:30Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:07:30Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, tcib_managed=true, io.buildah.version=1.41.5, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 20 03:44:34 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. Feb 20 03:44:34 localhost podman[99133]: 2026-02-20 08:44:34.771934304 +0000 UTC m=+0.406135973 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2026-01-12T22:10:14Z, container_name=metrics_qdr, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.5, batch=17.1_20260112.1, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=) Feb 20 03:44:34 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:44:35 localhost systemd[1]: tmp-crun.YRYzL1.mount: Deactivated successfully. Feb 20 03:44:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. 
Feb 20 03:44:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:44:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:44:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:44:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:44:36 localhost podman[99231]: 2026-02-20 08:44:36.458156073 +0000 UTC m=+0.091455300 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.created=2026-01-12T22:56:19Z, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1766032510, io.openshift.expose-services=, batch=17.1_20260112.1, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Feb 20 03:44:36 localhost podman[99233]: 2026-02-20 08:44:36.516018829 +0000 UTC m=+0.141647691 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, 
config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, vcs-type=git, url=https://www.redhat.com, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, container_name=ovn_controller, batch=17.1_20260112.1, build-date=2026-01-12T22:36:40Z, version=17.1.13, io.buildah.version=1.41.5, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:36:40Z, name=rhosp-rhel9/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c) Feb 20 03:44:36 localhost podman[99231]: 
2026-02-20 08:44:36.547632137 +0000 UTC m=+0.180931344 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:56:19Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, build-date=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, config_id=tripleo_step4) Feb 20 03:44:36 localhost podman[99231]: unhealthy Feb 20 03:44:36 localhost podman[99234]: 2026-02-20 08:44:36.558230899 +0000 UTC m=+0.181686091 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, build-date=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.13, vcs-type=git, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, container_name=collectd, release=1766032510, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, config_id=tripleo_step3, name=rhosp-rhel9/openstack-collectd, 
architecture=x86_64, vendor=Red Hat, Inc.) Feb 20 03:44:36 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:44:36 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Failed with result 'exit-code'. Feb 20 03:44:36 localhost podman[99234]: 2026-02-20 08:44:36.566783023 +0000 UTC m=+0.190238235 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.13, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-collectd, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, tcib_managed=true, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:44:36 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. 
Feb 20 03:44:36 localhost podman[99233]: 2026-02-20 08:44:36.601962852 +0000 UTC m=+0.227591654 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, vcs-type=git, build-date=2026-01-12T22:36:40Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, config_id=tripleo_step4, url=https://www.redhat.com, vendor=Red Hat, Inc., org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, konflux.additional-tags=17.1.13 17.1_20260112.1, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20260112.1, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 20 03:44:36 localhost podman[99233]: unhealthy Feb 20 03:44:36 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:44:36 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Failed with result 'exit-code'. Feb 20 03:44:36 localhost podman[99241]: 2026-02-20 08:44:36.617141158 +0000 UTC m=+0.234686766 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, build-date=2026-01-12T23:32:04Z, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:32:04Z, container_name=nova_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step5, version=17.1.13, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.buildah.version=1.41.5, architecture=x86_64, release=1766032510) Feb 20 03:44:36 localhost podman[99232]: 2026-02-20 08:44:36.655314585 +0000 UTC m=+0.285191264 
container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, version=17.1.13, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, config_id=tripleo_step3, batch=17.1_20260112.1, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z) Feb 20 03:44:36 localhost podman[99232]: 2026-02-20 08:44:36.662233383 +0000 UTC m=+0.292110052 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.13, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.openshift.expose-services=, architecture=x86_64, url=https://www.redhat.com, build-date=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510) Feb 20 03:44:36 localhost podman[99241]: 2026-02-20 08:44:36.669683471 +0000 UTC m=+0.287229099 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, url=https://www.redhat.com, release=1766032510, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-nova-compute, config_id=tripleo_step5, version=17.1.13, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 
nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, vendor=Red Hat, Inc.) Feb 20 03:44:36 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. Feb 20 03:44:36 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. Feb 20 03:44:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:44:41 localhost systemd[1]: tmp-crun.hcVKyn.mount: Deactivated successfully. 
Feb 20 03:44:41 localhost podman[99334]: 2026-02-20 08:44:41.439043845 +0000 UTC m=+0.078494045 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, container_name=nova_migration_target, distribution-scope=public, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, build-date=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510) Feb 20 03:44:41 localhost podman[99334]: 2026-02-20 08:44:41.801619236 +0000 UTC m=+0.441069416 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, container_name=nova_migration_target, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, build-date=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Feb 20 03:44:41 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. 
Feb 20 03:44:43 localhost sshd[99357]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:44:53 localhost sshd[99359]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:44:54 localhost sshd[99361]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:45:00 localhost sshd[99363]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:45:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:45:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:45:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:45:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:45:05 localhost podman[99366]: 2026-02-20 08:45:05.449287787 +0000 UTC m=+0.081053333 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, name=rhosp-rhel9/openstack-ceilometer-compute, batch=17.1_20260112.1, build-date=2026-01-12T23:07:47Z, maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1766032510, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:47Z, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, url=https://www.redhat.com, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:45:05 localhost podman[99365]: 2026-02-20 08:45:05.499779245 +0000 UTC m=+0.133170539 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, 
managed_by=tripleo_ansible, vcs-type=git, container_name=metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, 
io.buildah.version=1.41.5, release=1766032510, version=17.1.13, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-qdrouterd) Feb 20 03:45:05 localhost podman[99368]: 2026-02-20 08:45:05.561464707 +0000 UTC m=+0.187275458 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:07:30Z, distribution-scope=public, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:30Z, tcib_managed=true, release=1766032510, config_id=tripleo_step4, 
io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Feb 20 03:45:05 localhost podman[99367]: 2026-02-20 08:45:05.608333023 +0000 UTC m=+0.236170141 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.expose-services=, version=17.1.13, vcs-type=git, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, build-date=2026-01-12T22:10:15Z, container_name=logrotate_crond, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., distribution-scope=public, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-cron, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, com.redhat.component=openstack-cron-container) Feb 20 03:45:05 localhost podman[99368]: 2026-02-20 08:45:05.617730486 +0000 UTC m=+0.243541207 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, 
konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_id=tripleo_step4, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.created=2026-01-12T23:07:30Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, release=1766032510, 
vcs-type=git, build-date=2026-01-12T23:07:30Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, managed_by=tripleo_ansible) Feb 20 03:45:05 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. Feb 20 03:45:05 localhost podman[99366]: 2026-02-20 08:45:05.629459162 +0000 UTC m=+0.261224668 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, config_id=tripleo_step4, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2026-01-12T23:07:47Z) Feb 20 03:45:05 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. 
Feb 20 03:45:05 localhost podman[99367]: 2026-02-20 08:45:05.643735747 +0000 UTC m=+0.271572895 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, distribution-scope=public, name=rhosp-rhel9/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, 
io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:15Z, container_name=logrotate_crond, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:45:05 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. Feb 20 03:45:05 localhost podman[99365]: 2026-02-20 08:45:05.691766448 +0000 UTC m=+0.325157762 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-qdrouterd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, vcs-type=git, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, io.buildah.version=1.41.5, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64) Feb 20 03:45:05 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:45:06 localhost systemd[1]: tmp-crun.fry2AP.mount: Deactivated successfully. Feb 20 03:45:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:45:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. 
Feb 20 03:45:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:45:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:45:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:45:07 localhost podman[99467]: 2026-02-20 08:45:07.462238274 +0000 UTC m=+0.096192648 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, tcib_managed=true, io.buildah.version=1.41.5, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, batch=17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ovn_metadata_agent, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, release=1766032510, vcs-type=git) Feb 20 03:45:07 localhost podman[99467]: 2026-02-20 08:45:07.479718412 +0000 UTC m=+0.113672826 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, url=https://www.redhat.com, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510, build-date=2026-01-12T22:56:19Z, vcs-type=git, architecture=x86_64, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., version=17.1.13, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 20 03:45:07 localhost podman[99467]: unhealthy Feb 20 03:45:07 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:45:07 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Failed with result 'exit-code'. 
Feb 20 03:45:07 localhost podman[99469]: 2026-02-20 08:45:07.528522121 +0000 UTC m=+0.151278700 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, config_id=tripleo_step4, batch=17.1_20260112.1, io.buildah.version=1.41.5, version=17.1.13, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, release=1766032510, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, name=rhosp-rhel9/openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, 
vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:36:40Z, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true) Feb 20 03:45:07 localhost podman[99469]: 2026-02-20 08:45:07.569133573 +0000 UTC m=+0.191890142 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., batch=17.1_20260112.1, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z, architecture=x86_64, container_name=ovn_controller, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, release=1766032510, url=https://www.redhat.com, managed_by=tripleo_ansible, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ovn-controller, 
konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.openshift.expose-services=) Feb 20 03:45:07 localhost podman[99469]: unhealthy Feb 20 03:45:07 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:45:07 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Failed with result 'exit-code'. Feb 20 03:45:07 localhost podman[99470]: 2026-02-20 08:45:07.61517468 +0000 UTC m=+0.238138464 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, config_id=tripleo_step3, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, container_name=collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1766032510, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.created=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, architecture=x86_64) Feb 20 03:45:07 localhost podman[99470]: 2026-02-20 08:45:07.626690772 +0000 UTC m=+0.249654586 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 
(image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, managed_by=tripleo_ansible, distribution-scope=public, build-date=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-collectd, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=collectd, tcib_managed=true, com.redhat.component=openstack-collectd-container) Feb 20 03:45:07 localhost podman[99476]: 2026-02-20 08:45:07.575150861 +0000 UTC m=+0.196331844 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, container_name=nova_compute, tcib_managed=true, io.buildah.version=1.41.5, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 
'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:45:07 
localhost podman[99468]: 2026-02-20 08:45:07.629009475 +0000 UTC m=+0.254027446 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:34:43Z, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, 
cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, release=1766032510, com.redhat.component=openstack-iscsid-container, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-iscsid, managed_by=tripleo_ansible, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=iscsid, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z) Feb 20 03:45:07 localhost podman[99468]: 2026-02-20 08:45:07.638821697 +0000 UTC m=+0.263839588 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, url=https://www.redhat.com, build-date=2026-01-12T22:34:43Z, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, release=1766032510, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, tcib_managed=true, version=17.1.13, container_name=iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public) Feb 20 03:45:07 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. 
Feb 20 03:45:07 localhost podman[99476]: 2026-02-20 08:45:07.658618217 +0000 UTC m=+0.279799120 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-compute, container_name=nova_compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, version=17.1.13, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, config_id=tripleo_step5, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, architecture=x86_64, io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public) Feb 20 03:45:07 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. Feb 20 03:45:07 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:45:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:45:12 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... 
Feb 20 03:45:12 localhost recover_tripleo_nova_virtqemud[99572]: 63703 Feb 20 03:45:12 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 20 03:45:12 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 20 03:45:12 localhost podman[99567]: 2026-02-20 08:45:12.435070903 +0000 UTC m=+0.078251669 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, build-date=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, url=https://www.redhat.com, distribution-scope=public, version=17.1.13) Feb 20 03:45:12 localhost podman[99567]: 2026-02-20 08:45:12.80575786 +0000 UTC m=+0.448938606 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, container_name=nova_migration_target, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1766032510, vcs-type=git, io.openshift.expose-services=, url=https://www.redhat.com, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 20 03:45:12 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. 
Feb 20 03:45:25 localhost sshd[99669]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:45:35 localhost sshd[99671]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:45:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:45:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:45:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:45:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:45:36 localhost podman[99675]: 2026-02-20 08:45:36.179449491 +0000 UTC m=+0.079844356 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, batch=17.1_20260112.1, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-cron-container, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp-rhel9/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Feb 20 03:45:36 localhost podman[99673]: 2026-02-20 08:45:36.214943018 +0000 UTC m=+0.117275787 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, release=1766032510, build-date=2026-01-12T22:10:14Z, 
batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:14Z, version=17.1.13, io.openshift.expose-services=, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 20 03:45:36 localhost podman[99675]: 2026-02-20 08:45:36.237971381 +0000 UTC m=+0.138366236 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, url=https://www.redhat.com, 
com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, io.openshift.expose-services=, container_name=logrotate_crond, tcib_managed=true, release=1766032510, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:45:36 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. Feb 20 03:45:36 localhost podman[99676]: 2026-02-20 08:45:36.294479296 +0000 UTC m=+0.185571349 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, version=17.1.13, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team) Feb 20 03:45:36 localhost podman[99674]: 2026-02-20 08:45:36.315156086 +0000 UTC m=+0.217592407 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, managed_by=tripleo_ansible, 
cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:07:47Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, version=17.1.13, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.created=2026-01-12T23:07:47Z, batch=17.1_20260112.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, 
vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, io.openshift.expose-services=, container_name=ceilometer_agent_compute, release=1766032510) Feb 20 03:45:36 localhost podman[99676]: 2026-02-20 08:45:36.331854946 +0000 UTC m=+0.222946979 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, vendor=Red Hat, Inc., architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, build-date=2026-01-12T23:07:30Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, io.openshift.expose-services=, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com) Feb 20 03:45:36 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. 
Feb 20 03:45:36 localhost podman[99674]: 2026-02-20 08:45:36.345457325 +0000 UTC m=+0.247893646 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2026-01-12T23:07:47Z, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, container_name=ceilometer_agent_compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:47Z) Feb 20 03:45:36 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. Feb 20 03:45:36 localhost podman[99673]: 2026-02-20 08:45:36.408851756 +0000 UTC m=+0.311184595 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:14Z, architecture=x86_64, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:14Z, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, name=rhosp-rhel9/openstack-qdrouterd, version=17.1.13, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, batch=17.1_20260112.1, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 20 03:45:36 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:45:37 localhost sshd[99775]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:45:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. 
Feb 20 03:45:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:45:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:45:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:45:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:45:38 localhost podman[99777]: 2026-02-20 08:45:38.460561964 +0000 UTC m=+0.095522063 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, version=17.1.13, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_id=tripleo_step4, tcib_managed=true, build-date=2026-01-12T22:56:19Z, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc.) 
Feb 20 03:45:38 localhost podman[99777]: 2026-02-20 08:45:38.502554108 +0000 UTC m=+0.137514207 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1766032510, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 
17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.openshift.expose-services=, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ovn_metadata_agent, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20260112.1) Feb 20 03:45:38 localhost podman[99777]: unhealthy Feb 20 03:45:38 localhost podman[99778]: 2026-02-20 08:45:38.511774357 +0000 UTC m=+0.143487922 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, build-date=2026-01-12T22:34:43Z, name=rhosp-rhel9/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, vcs-type=git, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid) Feb 20 03:45:38 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:45:38 localhost systemd[1]: 
34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Failed with result 'exit-code'. Feb 20 03:45:38 localhost podman[99778]: 2026-02-20 08:45:38.522946912 +0000 UTC m=+0.154660417 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2026-01-12T22:34:43Z, release=1766032510, container_name=iscsid, vcs-type=git, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp-rhel9/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., io.openshift.expose-services=, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, tcib_managed=true, vcs-ref=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, managed_by=tripleo_ansible, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.created=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:45:38 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. Feb 20 03:45:38 localhost podman[99779]: 2026-02-20 08:45:38.567841562 +0000 UTC m=+0.196302063 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, config_id=tripleo_step4, build-date=2026-01-12T22:36:40Z, org.opencontainers.image.created=2026-01-12T22:36:40Z, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.5, url=https://www.redhat.com, container_name=ovn_controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1, version=17.1.13, name=rhosp-rhel9/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:45:38 localhost podman[99779]: 2026-02-20 08:45:38.577422169 +0000 UTC m=+0.205882660 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, release=1766032510, container_name=ovn_controller, batch=17.1_20260112.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 
'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ovn-controller, tcib_managed=true, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:36:40Z, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.created=2026-01-12T22:36:40Z, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.buildah.version=1.41.5) Feb 20 03:45:38 localhost podman[99779]: unhealthy Feb 20 03:45:38 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:45:38 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Failed with result 'exit-code'. 
Feb 20 03:45:38 localhost podman[99780]: 2026-02-20 08:45:38.658158215 +0000 UTC m=+0.283207169 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, name=rhosp-rhel9/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, vendor=Red Hat, Inc., io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, build-date=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.created=2026-01-12T22:10:15Z) Feb 20 03:45:38 localhost podman[99780]: 2026-02-20 08:45:38.670588578 +0000 UTC m=+0.295637532 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-collectd, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, batch=17.1_20260112.1, tcib_managed=true, summary=Red Hat OpenStack 
Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, build-date=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd) Feb 20 03:45:38 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. 
Feb 20 03:45:38 localhost podman[99786]: 2026-02-20 08:45:38.713890132 +0000 UTC m=+0.337132194 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, container_name=nova_compute, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=) Feb 20 03:45:38 localhost podman[99786]: 2026-02-20 08:45:38.736875844 +0000 UTC m=+0.360117946 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 
17.1 nova-compute, container_name=nova_compute, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, version=17.1.13, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, batch=17.1_20260112.1) Feb 20 03:45:38 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. Feb 20 03:45:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:45:43 localhost systemd[1]: tmp-crun.Go7emJ.mount: Deactivated successfully. 
Feb 20 03:45:43 localhost podman[99883]: 2026-02-20 08:45:43.439233334 +0000 UTC m=+0.079585670 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, url=https://www.redhat.com, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, architecture=x86_64, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510) Feb 20 03:45:43 localhost podman[99883]: 2026-02-20 08:45:43.815877675 +0000 UTC m=+0.456230011 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, release=1766032510, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.13, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_migration_target, io.openshift.expose-services=) Feb 20 03:45:43 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:46:01 localhost sshd[99906]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:46:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:46:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. 
Feb 20 03:46:06 localhost systemd[1]: tmp-crun.gLbLQI.mount: Deactivated successfully. Feb 20 03:46:06 localhost podman[99908]: 2026-02-20 08:46:06.450832646 +0000 UTC m=+0.088606584 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:15Z, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, container_name=logrotate_crond, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:openstack:17.1::el9, 
io.openshift.expose-services=, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step4, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron) Feb 20 03:46:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:46:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:46:06 localhost podman[99909]: 2026-02-20 08:46:06.510560434 +0000 UTC m=+0.143418151 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.expose-services=, build-date=2026-01-12T23:07:30Z, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1766032510, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, vendor=Red Hat, Inc., org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, url=https://www.redhat.com) Feb 20 03:46:06 localhost podman[99938]: 2026-02-20 08:46:06.560136191 +0000 UTC m=+0.078889474 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:07:47Z, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, managed_by=tripleo_ansible, summary=Red Hat 
OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-type=git, url=https://www.redhat.com, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, build-date=2026-01-12T23:07:47Z, architecture=x86_64) Feb 20 03:46:06 localhost podman[99908]: 2026-02-20 08:46:06.585788955 +0000 UTC m=+0.223562933 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, url=https://www.redhat.com, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, vendor=Red Hat, Inc., distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:46:06 localhost podman[99937]: 2026-02-20 08:46:06.61904589 +0000 UTC m=+0.137484596 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, batch=17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., vcs-type=git, build-date=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=metrics_qdr, version=17.1.13, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:46:06 localhost podman[99938]: 2026-02-20 08:46:06.643121107 +0000 UTC m=+0.161874370 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, 
name=ceilometer_agent_compute, io.openshift.expose-services=, architecture=x86_64, version=17.1.13, release=1766032510, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ceilometer-compute, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, distribution-scope=public, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:07:47Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 20 03:46:06 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. Feb 20 03:46:06 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. Feb 20 03:46:06 localhost podman[99909]: 2026-02-20 08:46:06.691700091 +0000 UTC m=+0.324557798 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, io.buildah.version=1.41.5, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, version=17.1.13, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, build-date=2026-01-12T23:07:30Z, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:30Z, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 20 03:46:06 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. 
Feb 20 03:46:06 localhost podman[99937]: 2026-02-20 08:46:06.77256436 +0000 UTC m=+0.291003116 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, tcib_managed=true, maintainer=OpenStack TripleO Team, architecture=x86_64, 
com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, build-date=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, release=1766032510, container_name=metrics_qdr, io.openshift.expose-services=, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z) Feb 20 03:46:06 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:46:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:46:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:46:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:46:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:46:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. 
Feb 20 03:46:09 localhost podman[100010]: 2026-02-20 08:46:09.448940889 +0000 UTC m=+0.087383899 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1766032510, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, container_name=iscsid, vcs-type=git, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Feb 20 03:46:09 localhost podman[100023]: 2026-02-20 08:46:09.467442718 +0000 UTC m=+0.093650899 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, build-date=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, name=rhosp-rhel9/openstack-nova-compute, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_compute, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 
17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, url=https://www.redhat.com) Feb 
20 03:46:09 localhost podman[100023]: 2026-02-20 08:46:09.499034776 +0000 UTC m=+0.125242927 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.5, vendor=Red Hat, Inc., release=1766032510, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.13, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git) Feb 20 03:46:09 localhost podman[100010]: 2026-02-20 08:46:09.515250945 +0000 UTC m=+0.153693955 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:34:43Z, name=rhosp-rhel9/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 
'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.13, architecture=x86_64, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, com.redhat.component=openstack-iscsid-container, build-date=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., config_id=tripleo_step3, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid) Feb 20 03:46:09 localhost systemd[1]: 
d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. Feb 20 03:46:09 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. Feb 20 03:46:09 localhost podman[100009]: 2026-02-20 08:46:09.502524626 +0000 UTC m=+0.143590595 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, url=https://www.redhat.com, build-date=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_id=tripleo_step4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 20 03:46:09 localhost podman[100012]: 2026-02-20 08:46:09.610270925 +0000 UTC m=+0.243224410 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., container_name=collectd, config_id=tripleo_step3, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, build-date=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, 
io.openshift.expose-services=, vcs-type=git, tcib_managed=true, release=1766032510, version=17.1.13, architecture=x86_64, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:46:09 localhost podman[100012]: 2026-02-20 08:46:09.617153492 +0000 UTC m=+0.250107007 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, config_id=tripleo_step3, tcib_managed=true, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, vendor=Red Hat, Inc., distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, batch=17.1_20260112.1) Feb 20 03:46:09 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. 
Feb 20 03:46:09 localhost podman[100009]: 2026-02-20 08:46:09.639372617 +0000 UTC m=+0.280438576 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, build-date=2026-01-12T22:56:19Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., batch=17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, container_name=ovn_metadata_agent, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, version=17.1.13, config_id=tripleo_step4, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Feb 20 03:46:09 localhost podman[100009]: unhealthy Feb 20 03:46:09 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:46:09 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Failed with result 'exit-code'. 
Feb 20 03:46:09 localhost podman[100011]: 2026-02-20 08:46:09.661708155 +0000 UTC m=+0.294537487 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.13, managed_by=tripleo_ansible, build-date=2026-01-12T22:36:40Z, name=rhosp-rhel9/openstack-ovn-controller, release=1766032510, config_id=tripleo_step4, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.created=2026-01-12T22:36:40Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, vendor=Red Hat, 
Inc., distribution-scope=public, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:46:09 localhost podman[100011]: 2026-02-20 08:46:09.677749539 +0000 UTC m=+0.310578851 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.expose-services=, build-date=2026-01-12T22:36:40Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, container_name=ovn_controller, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, name=rhosp-rhel9/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1766032510, config_id=tripleo_step4, tcib_managed=true, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.buildah.version=1.41.5, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64) Feb 20 03:46:09 localhost podman[100011]: unhealthy Feb 20 03:46:09 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:46:09 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Failed with result 'exit-code'. Feb 20 03:46:09 localhost sshd[100114]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:46:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. 
Feb 20 03:46:14 localhost podman[100116]: 2026-02-20 08:46:14.439163422 +0000 UTC m=+0.075384074 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, build-date=2026-01-12T23:32:04Z, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, container_name=nova_migration_target, config_id=tripleo_step4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute) Feb 20 03:46:14 localhost podman[100116]: 2026-02-20 08:46:14.807692669 +0000 UTC m=+0.443913311 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., url=https://www.redhat.com, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, maintainer=OpenStack TripleO Team, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, io.buildah.version=1.41.5) Feb 20 03:46:14 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:46:32 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 20 03:46:32 localhost recover_tripleo_nova_virtqemud[100217]: 63703 Feb 20 03:46:32 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 20 03:46:32 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Feb 20 03:46:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:46:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:46:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:46:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:46:37 localhost systemd[1]: tmp-crun.kL61w0.mount: Deactivated successfully. Feb 20 03:46:37 localhost podman[100218]: 2026-02-20 08:46:37.41550926 +0000 UTC m=+0.060059597 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:14Z, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-qdrouterd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-type=git, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, config_id=tripleo_step1, version=17.1.13, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=metrics_qdr, tcib_managed=true, build-date=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-qdrouterd-container) Feb 20 03:46:37 localhost podman[100220]: 2026-02-20 08:46:37.471004472 +0000 UTC m=+0.112973820 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, version=17.1.13, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp-rhel9/openstack-cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1, tcib_managed=true, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container, config_id=tripleo_step4) Feb 20 03:46:37 localhost 
podman[100220]: 2026-02-20 08:46:37.478106553 +0000 UTC m=+0.120075901 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:15Z, build-date=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20260112.1, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, architecture=x86_64, io.openshift.expose-services=, container_name=logrotate_crond, version=17.1.13, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, vendor=Red Hat, Inc., distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5) Feb 20 03:46:37 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. Feb 20 03:46:37 localhost podman[100219]: 2026-02-20 08:46:37.52684464 +0000 UTC m=+0.169365320 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, build-date=2026-01-12T23:07:47Z, architecture=x86_64, name=rhosp-rhel9/openstack-ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.buildah.version=1.41.5, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ceilometer-compute-container, release=1766032510, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, 
org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=) Feb 20 03:46:37 localhost podman[100221]: 2026-02-20 08:46:37.571355372 +0000 UTC m=+0.213035213 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, vendor=Red Hat, 
Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:07:30Z, version=17.1.13, io.openshift.expose-services=, build-date=2026-01-12T23:07:30Z, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-ipmi, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:46:37 localhost podman[100218]: 2026-02-20 08:46:37.592619956 +0000 UTC m=+0.237170323 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp-rhel9/openstack-qdrouterd, maintainer=OpenStack TripleO Team, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, build-date=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20260112.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, url=https://www.redhat.com, container_name=metrics_qdr, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-type=git, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 20 03:46:37 localhost podman[100219]: 2026-02-20 08:46:37.602415048 +0000 UTC m=+0.244935728 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, io.openshift.expose-services=, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, vendor=Red Hat, Inc., build-date=2026-01-12T23:07:47Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, architecture=x86_64, container_name=ceilometer_agent_compute, url=https://www.redhat.com, description=Red 
Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:07:47Z, release=1766032510, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 20 03:46:37 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:46:37 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. 
Feb 20 03:46:37 localhost podman[100221]: 2026-02-20 08:46:37.648828464 +0000 UTC m=+0.290508285 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:07:30Z, build-date=2026-01-12T23:07:30Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, batch=17.1_20260112.1, url=https://www.redhat.com) Feb 20 03:46:37 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. Feb 20 03:46:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:46:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:46:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:46:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:46:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. 
Feb 20 03:46:40 localhost podman[100323]: 2026-02-20 08:46:40.452634227 +0000 UTC m=+0.082652730 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, architecture=x86_64, container_name=ovn_controller, maintainer=OpenStack TripleO Team, version=17.1.13, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:36:40Z, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, io.buildah.version=1.41.5, release=1766032510, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 20 03:46:40 localhost podman[100323]: 2026-02-20 08:46:40.466635496 +0000 UTC m=+0.096654029 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ovn-controller, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, version=17.1.13, batch=17.1_20260112.1, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:36:40Z, release=1766032510, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2026-01-12T22:36:40Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, architecture=x86_64) Feb 20 03:46:40 localhost podman[100323]: unhealthy Feb 20 03:46:40 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:46:40 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Failed with result 'exit-code'. Feb 20 03:46:40 localhost systemd[1]: tmp-crun.HyPW3W.mount: Deactivated successfully. Feb 20 03:46:40 localhost podman[100321]: 2026-02-20 08:46:40.514284369 +0000 UTC m=+0.150797729 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, distribution-scope=public, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, batch=17.1_20260112.1, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.buildah.version=1.41.5, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2026-01-12T22:56:19Z, config_id=tripleo_step4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, 
summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 20 03:46:40 localhost podman[100321]: 2026-02-20 08:46:40.556744114 +0000 UTC m=+0.193257434 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.buildah.version=1.41.5, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.created=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, build-date=2026-01-12T22:56:19Z, version=17.1.13, architecture=x86_64, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, batch=17.1_20260112.1) Feb 20 03:46:40 localhost podman[100321]: unhealthy Feb 20 03:46:40 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:46:40 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Failed with result 'exit-code'. 
Feb 20 03:46:40 localhost podman[100322]: 2026-02-20 08:46:40.558852332 +0000 UTC m=+0.192334964 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, container_name=iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1766032510, tcib_managed=true, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.buildah.version=1.41.5, 
org.opencontainers.image.created=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, architecture=x86_64, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, name=rhosp-rhel9/openstack-iscsid, managed_by=tripleo_ansible, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git) Feb 20 03:46:40 localhost podman[100329]: 2026-02-20 08:46:40.619293065 +0000 UTC m=+0.245462500 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., tcib_managed=true, config_id=tripleo_step3, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, name=rhosp-rhel9/openstack-collectd, managed_by=tripleo_ansible, io.openshift.expose-services=, io.buildah.version=1.41.5, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, com.redhat.component=openstack-collectd-container, build-date=2026-01-12T22:10:15Z, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Feb 20 03:46:40 localhost podman[100336]: 2026-02-20 08:46:40.664460813 +0000 UTC m=+0.287729312 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, container_name=nova_compute, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-compute-container, batch=17.1_20260112.1, config_id=tripleo_step5, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true) Feb 20 03:46:40 localhost podman[100329]: 2026-02-20 08:46:40.684241082 +0000 UTC m=+0.310410567 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-collectd-container, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, managed_by=tripleo_ansible, container_name=collectd, maintainer=OpenStack TripleO Team) Feb 20 03:46:40 localhost podman[100322]: 2026-02-20 08:46:40.694569416 +0000 UTC m=+0.328052008 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b 
(image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20260112.1, url=https://www.redhat.com, vcs-ref=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2026-01-12T22:34:43Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=rhosp-rhel9/openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=iscsid, release=1766032510, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, version=17.1.13, config_id=tripleo_step3, architecture=x86_64, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:46:40 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:46:40 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. Feb 20 03:46:40 localhost podman[100336]: 2026-02-20 08:46:40.751335327 +0000 UTC m=+0.374603856 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:46:40 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. Feb 20 03:46:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:46:45 localhost podman[100426]: 2026-02-20 08:46:45.424056724 +0000 UTC m=+0.066594685 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, distribution-scope=public, container_name=nova_migration_target, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, tcib_managed=true, version=17.1.13, io.buildah.version=1.41.5, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510, name=rhosp-rhel9/openstack-nova-compute) Feb 20 03:46:45 localhost podman[100426]: 2026-02-20 08:46:45.824205559 +0000 UTC m=+0.466743480 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.13, tcib_managed=true, architecture=x86_64, url=https://www.redhat.com, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, distribution-scope=public, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 20 03:46:45 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. 
Feb 20 03:46:58 localhost sshd[100448]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:47:00 localhost sshd[100450]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:47:04 localhost systemd[1]: session-28.scope: Deactivated successfully. Feb 20 03:47:04 localhost systemd[1]: session-28.scope: Consumed 7min 609ms CPU time. Feb 20 03:47:04 localhost systemd-logind[760]: Session 28 logged out. Waiting for processes to exit. Feb 20 03:47:04 localhost systemd-logind[760]: Removed session 28. Feb 20 03:47:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:47:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:47:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:47:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. 
Feb 20 03:47:08 localhost podman[100452]: 2026-02-20 08:47:08.458430432 +0000 UTC m=+0.094765045 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, io.openshift.expose-services=, io.buildah.version=1.41.5, architecture=x86_64, container_name=metrics_qdr, release=1766032510, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, url=https://www.redhat.com, build-date=2026-01-12T22:10:14Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 20 03:47:08 localhost systemd[1]: tmp-crun.tf4TXZ.mount: Deactivated successfully. Feb 20 03:47:08 localhost podman[100453]: 2026-02-20 08:47:08.521740111 +0000 UTC m=+0.155666620 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.component=openstack-ceilometer-compute-container, release=1766032510, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.buildah.version=1.41.5, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step4, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, name=rhosp-rhel9/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:47:08 localhost podman[100453]: 2026-02-20 08:47:08.556811878 +0000 UTC m=+0.190738387 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, 
konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ceilometer-compute, version=17.1.13, batch=17.1_20260112.1, managed_by=tripleo_ansible, io.buildah.version=1.41.5, distribution-scope=public, release=1766032510, org.opencontainers.image.created=2026-01-12T23:07:47Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2026-01-12T23:07:47Z) Feb 20 03:47:08 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. Feb 20 03:47:08 localhost podman[100454]: 2026-02-20 08:47:08.578290447 +0000 UTC m=+0.206259750 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, name=rhosp-rhel9/openstack-cron, io.buildah.version=1.41.5, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, tcib_managed=true, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., container_name=logrotate_crond, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 
'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1766032510) Feb 20 03:47:08 localhost podman[100455]: 2026-02-20 08:47:08.548847797 +0000 UTC m=+0.173942875 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.5, tcib_managed=true, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:07:30Z, batch=17.1_20260112.1, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_ipmi, 
konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, build-date=2026-01-12T23:07:30Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., config_id=tripleo_step4, distribution-scope=public, version=17.1.13, maintainer=OpenStack TripleO Team) Feb 20 03:47:08 localhost podman[100454]: 2026-02-20 08:47:08.609910015 +0000 UTC m=+0.237879298 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, 
name=logrotate_crond, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, release=1766032510, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, build-date=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, url=https://www.redhat.com, name=rhosp-rhel9/openstack-cron) Feb 20 03:47:08 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. Feb 20 03:47:08 localhost podman[100455]: 2026-02-20 08:47:08.626927292 +0000 UTC m=+0.252022390 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:07:30Z, name=rhosp-rhel9/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, architecture=x86_64, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, io.buildah.version=1.41.5, build-date=2026-01-12T23:07:30Z, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., vcs-type=git) Feb 20 03:47:08 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. 
Feb 20 03:47:08 localhost podman[100452]: 2026-02-20 08:47:08.644683286 +0000 UTC m=+0.281017959 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:14Z, container_name=metrics_qdr, io.buildah.version=1.41.5, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_id=tripleo_step1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:47:08 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:47:09 localhost systemd[1]: tmp-crun.AAWEZ9.mount: Deactivated successfully. Feb 20 03:47:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:47:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:47:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:47:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:47:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. 
Feb 20 03:47:11 localhost podman[100551]: 2026-02-20 08:47:11.452581832 +0000 UTC m=+0.088544754 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp-rhel9/openstack-iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-ref=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, container_name=iscsid, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, batch=17.1_20260112.1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.13, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Feb 20 03:47:11 localhost systemd[1]: tmp-crun.q7pFHL.mount: Deactivated successfully. Feb 20 03:47:11 localhost podman[100552]: 2026-02-20 08:47:11.505895874 +0000 UTC m=+0.138978200 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:36:40Z, url=https://www.redhat.com, config_id=tripleo_step4, container_name=ovn_controller, vendor=Red Hat, Inc., org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1766032510, architecture=x86_64, batch=17.1_20260112.1, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:36:40Z, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.13) Feb 20 03:47:11 localhost podman[100551]: 2026-02-20 08:47:11.515375089 +0000 UTC m=+0.151337971 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, vcs-type=git, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.buildah.version=1.41.5, io.openshift.expose-services=, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_id=tripleo_step3, name=rhosp-rhel9/openstack-iscsid, architecture=x86_64, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid) Feb 20 03:47:11 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. 
Feb 20 03:47:11 localhost podman[100550]: 2026-02-20 08:47:11.555444241 +0000 UTC m=+0.193229623 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, batch=17.1_20260112.1, tcib_managed=true, io.openshift.expose-services=, version=17.1.13, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, url=https://www.redhat.com, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:56:19Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:47:11 localhost podman[100552]: 2026-02-20 08:47:11.570212846 +0000 UTC m=+0.203295182 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 
'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, build-date=2026-01-12T22:36:40Z, vcs-type=git, container_name=ovn_controller, io.buildah.version=1.41.5, config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., release=1766032510, tcib_managed=true) Feb 20 03:47:11 localhost podman[100553]: 2026-02-20 08:47:11.60336535 +0000 UTC m=+0.236505678 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 
'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, name=rhosp-rhel9/openstack-collectd, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, vendor=Red Hat, Inc., managed_by=tripleo_ansible) Feb 20 03:47:11 localhost podman[100553]: 2026-02-20 08:47:11.637374283 +0000 UTC m=+0.270514611 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, name=rhosp-rhel9/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, build-date=2026-01-12T22:10:15Z, container_name=collectd, vendor=Red Hat, Inc., vcs-type=git, config_id=tripleo_step3, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com) Feb 20 03:47:11 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. 
Feb 20 03:47:11 localhost podman[100550]: 2026-02-20 08:47:11.67419529 +0000 UTC m=+0.311980662 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, build-date=2026-01-12T22:56:19Z, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.created=2026-01-12T22:56:19Z, url=https://www.redhat.com, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, managed_by=tripleo_ansible) Feb 20 03:47:11 localhost podman[100550]: unhealthy Feb 20 03:47:11 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:47:11 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Failed with result 'exit-code'. 
Feb 20 03:47:11 localhost podman[100559]: 2026-02-20 08:47:11.71688161 +0000 UTC m=+0.344165254 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, build-date=2026-01-12T23:32:04Z, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20260112.1, config_id=tripleo_step5, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:47:11 localhost podman[100552]: unhealthy Feb 20 03:47:11 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:47:11 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Failed with result 'exit-code'. 
Feb 20 03:47:11 localhost podman[100559]: 2026-02-20 08:47:11.745424739 +0000 UTC m=+0.372708393 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, managed_by=tripleo_ansible, container_name=nova_compute, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20260112.1, config_id=tripleo_step5, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z) Feb 20 03:47:11 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. Feb 20 03:47:12 localhost systemd[1]: tmp-crun.j5MQlM.mount: Deactivated successfully. Feb 20 03:47:13 localhost sshd[100654]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:47:14 localhost systemd[1]: Stopping User Manager for UID 1003... Feb 20 03:47:14 localhost systemd[36249]: Activating special unit Exit the Session... Feb 20 03:47:14 localhost systemd[36249]: Removed slice User Background Tasks Slice. Feb 20 03:47:14 localhost systemd[36249]: Stopped target Main User Target. Feb 20 03:47:14 localhost systemd[36249]: Stopped target Basic System. 
Feb 20 03:47:14 localhost systemd[36249]: Stopped target Paths. Feb 20 03:47:14 localhost systemd[36249]: Stopped target Sockets. Feb 20 03:47:14 localhost systemd[36249]: Stopped target Timers. Feb 20 03:47:14 localhost systemd[36249]: Stopped Mark boot as successful after the user session has run 2 minutes. Feb 20 03:47:14 localhost systemd[36249]: Stopped Daily Cleanup of User's Temporary Directories. Feb 20 03:47:14 localhost systemd[36249]: Closed D-Bus User Message Bus Socket. Feb 20 03:47:14 localhost systemd[36249]: Stopped Create User's Volatile Files and Directories. Feb 20 03:47:14 localhost systemd[36249]: Removed slice User Application Slice. Feb 20 03:47:14 localhost systemd[36249]: Reached target Shutdown. Feb 20 03:47:14 localhost systemd[36249]: Finished Exit the Session. Feb 20 03:47:14 localhost systemd[36249]: Reached target Exit the Session. Feb 20 03:47:14 localhost systemd[1]: user@1003.service: Deactivated successfully. Feb 20 03:47:14 localhost systemd[1]: Stopped User Manager for UID 1003. Feb 20 03:47:14 localhost systemd[1]: user@1003.service: Consumed 4.326s CPU time, read 0B from disk, written 7.0K to disk. Feb 20 03:47:14 localhost systemd[1]: Stopping User Runtime Directory /run/user/1003... Feb 20 03:47:14 localhost systemd[1]: run-user-1003.mount: Deactivated successfully. Feb 20 03:47:14 localhost systemd[1]: user-runtime-dir@1003.service: Deactivated successfully. Feb 20 03:47:14 localhost systemd[1]: Stopped User Runtime Directory /run/user/1003. Feb 20 03:47:14 localhost systemd[1]: Removed slice User Slice of UID 1003. Feb 20 03:47:14 localhost systemd[1]: user-1003.slice: Consumed 7min 4.956s CPU time. Feb 20 03:47:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. 
Feb 20 03:47:16 localhost podman[100657]: 2026-02-20 08:47:16.442185683 +0000 UTC m=+0.081865972 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, build-date=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, vendor=Red Hat, Inc., container_name=nova_migration_target, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:47:16 localhost podman[100657]: 2026-02-20 08:47:16.81378566 +0000 UTC m=+0.453465999 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, release=1766032510, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, url=https://www.redhat.com, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, 
io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.13, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 03:47:16 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully.
Feb 20 03:47:31 localhost sshd[100757]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 03:47:33 localhost sshd[100759]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 03:47:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.
Feb 20 03:47:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.
Feb 20 03:47:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.
Feb 20 03:47:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.
Feb 20 03:47:39 localhost podman[100762]: 2026-02-20 08:47:39.458070014 +0000 UTC m=+0.090797374 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, build-date=2026-01-12T23:07:47Z, release=1766032510, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, tcib_managed=true, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:07:47Z, version=17.1.13, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Feb 20 03:47:39 localhost podman[100762]: 2026-02-20 08:47:39.489787135 +0000 UTC m=+0.122514565 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, url=https://www.redhat.com, io.buildah.version=1.41.5, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:47Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ceilometer-compute, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, version=17.1.13) Feb 20 03:47:39 localhost podman[100761]: 2026-02-20 08:47:39.505308728 +0000 UTC m=+0.140035964 container health_status 
6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, vcs-type=git, tcib_managed=true, architecture=x86_64, config_id=tripleo_step1, release=1766032510, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, url=https://www.redhat.com, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 20 03:47:39 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. Feb 20 03:47:39 localhost podman[100764]: 2026-02-20 08:47:39.557094515 +0000 UTC m=+0.183489471 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.5, url=https://www.redhat.com, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, architecture=x86_64, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.openshift.expose-services=, release=1766032510, version=17.1.13, build-date=2026-01-12T23:07:30Z, maintainer=OpenStack TripleO Team, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ceilometer-ipmi-container) Feb 20 03:47:39 localhost podman[100764]: 2026-02-20 08:47:39.60877587 +0000 UTC m=+0.235170796 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
org.opencontainers.image.created=2026-01-12T23:07:30Z, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:30Z, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.41.5, url=https://www.redhat.com, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:47:39 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. Feb 20 03:47:39 localhost podman[100763]: 2026-02-20 08:47:39.61449041 +0000 UTC m=+0.244224532 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, release=1766032510, distribution-scope=public, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, vcs-type=git, architecture=x86_64, name=rhosp-rhel9/openstack-cron, tcib_managed=true, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, version=17.1.13, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:47:39 localhost podman[100761]: 2026-02-20 08:47:39.681334029 +0000 UTC m=+0.316061255 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, version=17.1.13, architecture=x86_64, url=https://www.redhat.com, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step1, build-date=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1766032510, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-qdrouterd) Feb 20 03:47:39 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. 
Feb 20 03:47:39 localhost podman[100763]: 2026-02-20 08:47:39.69765774 +0000 UTC m=+0.327391792 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 cron, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, distribution-scope=public, managed_by=tripleo_ansible, url=https://www.redhat.com, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp-rhel9/openstack-cron)
Feb 20 03:47:39 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully.
Feb 20 03:47:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.
Feb 20 03:47:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.
Feb 20 03:47:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.
Feb 20 03:47:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.
Feb 20 03:47:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.
Feb 20 03:47:42 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Feb 20 03:47:42 localhost recover_tripleo_nova_virtqemud[100897]: 63703
Feb 20 03:47:42 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Feb 20 03:47:42 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Feb 20 03:47:42 localhost podman[100864]: 2026-02-20 08:47:42.463224316 +0000 UTC m=+0.102677156 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, name=rhosp-rhel9/openstack-iscsid, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, version=17.1.13, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, vcs-type=git, com.redhat.component=openstack-iscsid-container, build-date=2026-01-12T22:34:43Z, io.buildah.version=1.41.5, managed_by=tripleo_ansible, release=1766032510, url=https://www.redhat.com, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:47:42 localhost podman[100864]: 2026-02-20 08:47:42.47179699 +0000 UTC m=+0.111249910 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, release=1766032510, io.buildah.version=1.41.5, config_id=tripleo_step3, name=rhosp-rhel9/openstack-iscsid, build-date=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, version=17.1.13, distribution-scope=public, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, com.redhat.component=openstack-iscsid-container, container_name=iscsid, io.openshift.expose-services=, tcib_managed=true) Feb 20 03:47:42 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. 
Feb 20 03:47:42 localhost podman[100863]: 2026-02-20 08:47:42.522949283 +0000 UTC m=+0.162797091 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, batch=17.1_20260112.1, container_name=ovn_metadata_agent, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, distribution-scope=public, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, build-date=2026-01-12T22:56:19Z, tcib_managed=true, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.5) Feb 20 03:47:42 localhost podman[100865]: 2026-02-20 08:47:42.601214292 +0000 UTC m=+0.236946237 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, distribution-scope=public, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 
'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-type=git, build-date=2026-01-12T22:36:40Z, config_id=tripleo_step4, url=https://www.redhat.com, release=1766032510, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, version=17.1.13, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.) 
Feb 20 03:47:42 localhost podman[100867]: 2026-02-20 08:47:42.55757344 +0000 UTC m=+0.189563179 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, distribution-scope=public, build-date=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp-rhel9/openstack-nova-compute, managed_by=tripleo_ansible, config_id=tripleo_step5, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, architecture=x86_64) Feb 20 03:47:42 localhost podman[100865]: 2026-02-20 08:47:42.613787817 +0000 UTC m=+0.249519772 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, 
distribution-scope=public, build-date=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ovn_controller, managed_by=tripleo_ansible, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, io.buildah.version=1.41.5, config_id=tripleo_step4, version=17.1.13, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, release=1766032510) Feb 20 03:47:42 localhost podman[100865]: unhealthy Feb 20 03:47:42 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:47:42 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Failed with result 'exit-code'. 
Feb 20 03:47:42 localhost podman[100866]: 2026-02-20 08:47:42.584267467 +0000 UTC m=+0.219046350 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, version=17.1.13, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z, config_id=tripleo_step3, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, architecture=x86_64, distribution-scope=public, io.buildah.version=1.41.5, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git) Feb 20 03:47:42 localhost podman[100867]: 2026-02-20 08:47:42.642777757 +0000 UTC m=+0.274767466 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.41.5, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, config_id=tripleo_step5, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-nova-compute, build-date=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, architecture=x86_64, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 20 03:47:42 localhost podman[100863]: 2026-02-20 08:47:42.655865354 +0000 UTC m=+0.295713152 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, release=1766032510, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, build-date=2026-01-12T22:56:19Z, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, config_id=tripleo_step4, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, batch=17.1_20260112.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 20 03:47:42 localhost podman[100863]: unhealthy Feb 20 03:47:42 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. 
Feb 20 03:47:42 localhost podman[100866]: 2026-02-20 08:47:42.667781006 +0000 UTC m=+0.302559929 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp-rhel9/openstack-collectd, vcs-type=git, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, url=https://www.redhat.com, container_name=collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.41.5, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Feb 20 03:47:42 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:47:42 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Failed with result 'exit-code'. Feb 20 03:47:42 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:47:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. 
Feb 20 03:47:47 localhost podman[100972]: 2026-02-20 08:47:47.449333746 +0000 UTC m=+0.083191102 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, release=1766032510, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, 
vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, tcib_managed=true) Feb 20 03:47:47 localhost podman[100972]: 2026-02-20 08:47:47.820980693 +0000 UTC m=+0.454838009 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, container_name=nova_migration_target, name=rhosp-rhel9/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.5, release=1766032510, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:47:47 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:48:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:48:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. 
Feb 20 03:48:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:48:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:48:10 localhost podman[100996]: 2026-02-20 08:48:10.455941324 +0000 UTC m=+0.087133752 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, build-date=2026-01-12T23:07:47Z, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-type=git, release=1766032510, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:07:47Z, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20260112.1) Feb 20 03:48:10 localhost podman[100995]: 2026-02-20 08:48:10.502561354 +0000 UTC m=+0.135246595 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, build-date=2026-01-12T22:10:14Z, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:14Z, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, container_name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git) Feb 20 03:48:10 localhost podman[100999]: 2026-02-20 08:48:10.550137625 +0000 UTC m=+0.174422455 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, batch=17.1_20260112.1, version=17.1.13, distribution-scope=public, url=https://www.redhat.com, build-date=2026-01-12T23:07:30Z, vcs-type=git, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z) Feb 20 03:48:10 localhost podman[100999]: 2026-02-20 08:48:10.569717581 +0000 UTC m=+0.194002391 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:30Z, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, vcs-type=git, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 20 03:48:10 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. 
Feb 20 03:48:10 localhost podman[100997]: 2026-02-20 08:48:10.607790805 +0000 UTC m=+0.235920123 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, version=17.1.13, maintainer=OpenStack TripleO Team, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, tcib_managed=true, 
org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20260112.1, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step4) Feb 20 03:48:10 localhost podman[100996]: 2026-02-20 08:48:10.632925107 +0000 UTC m=+0.264117585 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-compute, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:07:47Z, url=https://www.redhat.com, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, version=17.1.13, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.created=2026-01-12T23:07:47Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step4) Feb 20 03:48:10 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. 
Feb 20 03:48:10 localhost podman[100997]: 2026-02-20 08:48:10.648671255 +0000 UTC m=+0.276800553 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1766032510, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20260112.1, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z, architecture=x86_64, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.13, config_id=tripleo_step4) Feb 20 03:48:10 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. Feb 20 03:48:10 localhost podman[100995]: 2026-02-20 08:48:10.711224217 +0000 UTC m=+0.343909438 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.created=2026-01-12T22:10:14Z, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, version=17.1.13, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:14Z, url=https://www.redhat.com, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Feb 20 03:48:10 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:48:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:48:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:48:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. 
Feb 20 03:48:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:48:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:48:13 localhost podman[101089]: 2026-02-20 08:48:13.448746925 +0000 UTC m=+0.088302989 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, architecture=x86_64, container_name=ovn_metadata_agent, release=1766032510, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, managed_by=tripleo_ansible, build-date=2026-01-12T22:56:19Z, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.expose-services=, version=17.1.13, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_id=tripleo_step4, url=https://www.redhat.com) Feb 20 03:48:13 localhost podman[101089]: 2026-02-20 08:48:13.503793395 +0000 UTC m=+0.143349419 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, url=https://www.redhat.com, build-date=2026-01-12T22:56:19Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, batch=17.1_20260112.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 20 03:48:13 localhost systemd[1]: tmp-crun.rvqxDe.mount: Deactivated successfully. Feb 20 03:48:13 localhost podman[101089]: unhealthy Feb 20 03:48:13 localhost podman[101092]: 2026-02-20 08:48:13.512212517 +0000 UTC m=+0.143293418 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.expose-services=, config_id=tripleo_step3, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, vcs-type=git, container_name=collectd, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1766032510) Feb 20 03:48:13 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:48:13 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Failed with result 'exit-code'. 
Feb 20 03:48:13 localhost podman[101091]: 2026-02-20 08:48:13.559009431 +0000 UTC m=+0.191408702 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, managed_by=tripleo_ansible, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z, distribution-scope=public, name=rhosp-rhel9/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, tcib_managed=true, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, 
org.opencontainers.image.created=2026-01-12T22:36:40Z, config_id=tripleo_step4, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, container_name=ovn_controller, vendor=Red Hat, Inc.) Feb 20 03:48:13 localhost podman[101091]: 2026-02-20 08:48:13.601866185 +0000 UTC m=+0.234265416 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.13, release=1766032510, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, name=rhosp-rhel9/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, architecture=x86_64, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, build-date=2026-01-12T22:36:40Z, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c) Feb 20 03:48:13 localhost podman[101091]: unhealthy Feb 20 03:48:13 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:48:13 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Failed with result 'exit-code'. Feb 20 03:48:13 localhost podman[101092]: 2026-02-20 08:48:13.627730492 +0000 UTC m=+0.258811403 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://www.redhat.com, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-collectd, managed_by=tripleo_ansible, release=1766032510, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, container_name=collectd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Feb 20 03:48:13 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. 
Feb 20 03:48:13 localhost podman[101090]: 2026-02-20 08:48:13.605347894 +0000 UTC m=+0.243297221 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, version=17.1.13, vcs-ref=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, architecture=x86_64, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.buildah.version=1.41.5, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:34:43Z, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Feb 20 03:48:13 localhost podman[101090]: 2026-02-20 08:48:13.690911319 +0000 UTC m=+0.328860616 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-iscsid-container, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:34:43Z, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.41.5, release=1766032510, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, version=17.1.13, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:34:43Z) Feb 20 03:48:13 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. 
Feb 20 03:48:13 localhost podman[101103]: 2026-02-20 08:48:13.767280985 +0000 UTC m=+0.392766619 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, io.openshift.expose-services=, version=17.1.13, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=nova_compute) Feb 20 03:48:13 localhost podman[101103]: 2026-02-20 08:48:13.797802109 +0000 UTC m=+0.423287703 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, 
vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1766032510, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, name=rhosp-rhel9/openstack-nova-compute, container_name=nova_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com) Feb 20 03:48:13 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. Feb 20 03:48:16 localhost sshd[101190]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:48:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. 
Feb 20 03:48:18 localhost podman[101192]: 2026-02-20 08:48:18.442716273 +0000 UTC m=+0.081860292 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, build-date=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, vcs-type=git, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, vendor=Red Hat, Inc., release=1766032510, version=17.1.13, container_name=nova_migration_target, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:48:18 localhost podman[101192]: 2026-02-20 08:48:18.813697026 +0000 UTC m=+0.452841005 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, distribution-scope=public, io.buildah.version=1.41.5, io.openshift.expose-services=, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, release=1766032510, container_name=nova_migration_target, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, tcib_managed=true) Feb 20 03:48:18 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:48:25 localhost sshd[101216]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:48:32 localhost sshd[101294]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:48:39 localhost sshd[101296]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:48:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. 
Feb 20 03:48:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:48:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:48:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:48:41 localhost systemd[1]: tmp-crun.0RoAFQ.mount: Deactivated successfully. Feb 20 03:48:41 localhost podman[101301]: 2026-02-20 08:48:41.464769633 +0000 UTC m=+0.088120267 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, build-date=2026-01-12T23:07:30Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, version=17.1.13, distribution-scope=public, tcib_managed=true, 
name=rhosp-rhel9/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible) Feb 20 03:48:41 localhost podman[101299]: 2026-02-20 08:48:41.503357865 +0000 UTC m=+0.132675539 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:07:47Z, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:07:47Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, url=https://www.redhat.com, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, version=17.1.13, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) 
Feb 20 03:48:41 localhost podman[101299]: 2026-02-20 08:48:41.53083071 +0000 UTC m=+0.160148374 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, managed_by=tripleo_ansible, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_compute) Feb 20 03:48:41 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. Feb 20 03:48:41 localhost podman[101298]: 2026-02-20 08:48:41.439041285 +0000 UTC m=+0.070787244 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.5, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, 
maintainer=OpenStack TripleO Team, version=17.1.13, build-date=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp-rhel9/openstack-qdrouterd, batch=17.1_20260112.1, container_name=metrics_qdr, release=1766032510) Feb 20 03:48:41 localhost podman[101301]: 2026-02-20 08:48:41.550774544 +0000 UTC m=+0.174125138 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, io.k8s.description=Red 
Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:30Z, build-date=2026-01-12T23:07:30Z, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., url=https://www.redhat.com, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, version=17.1.13, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi) Feb 20 03:48:41 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. Feb 20 03:48:41 localhost podman[101300]: 2026-02-20 08:48:41.517803652 +0000 UTC m=+0.143046277 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://www.redhat.com, vendor=Red Hat, Inc., version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, container_name=logrotate_crond, io.buildah.version=1.41.5, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, name=rhosp-rhel9/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container) Feb 20 03:48:41 localhost podman[101300]: 2026-02-20 08:48:41.601646434 +0000 UTC m=+0.226889029 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20260112.1, vendor=Red Hat, Inc., version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, container_name=logrotate_crond, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, distribution-scope=public, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, build-date=2026-01-12T22:10:15Z) Feb 20 03:48:41 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. 
Feb 20 03:48:41 localhost podman[101298]: 2026-02-20 08:48:41.633691712 +0000 UTC m=+0.265437681 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, io.openshift.expose-services=, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, distribution-scope=public, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1) Feb 20 03:48:41 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:48:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:48:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:48:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:48:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:48:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:48:44 localhost systemd[1]: tmp-crun.9KYmqF.mount: Deactivated successfully. 
Feb 20 03:48:44 localhost podman[101394]: 2026-02-20 08:48:44.459315986 +0000 UTC m=+0.098962537 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, version=17.1.13, build-date=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, release=1766032510, 
org.opencontainers.image.created=2026-01-12T22:56:19Z, managed_by=tripleo_ansible, batch=17.1_20260112.1, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, tcib_managed=true, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:48:44 localhost podman[101394]: 2026-02-20 08:48:44.502649276 +0000 UTC m=+0.142295777 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, maintainer=OpenStack TripleO Team, version=17.1.13, managed_by=tripleo_ansible, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:56:19Z, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, 
org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, vcs-type=git, build-date=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, config_id=tripleo_step4, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:48:44 localhost podman[101394]: unhealthy Feb 20 03:48:44 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:48:44 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Failed with result 'exit-code'. Feb 20 03:48:44 localhost podman[101396]: 2026-02-20 08:48:44.50356576 +0000 UTC m=+0.137451757 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_id=tripleo_step4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, 
managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, version=17.1.13, batch=17.1_20260112.1, io.openshift.expose-services=, build-date=2026-01-12T22:36:40Z, tcib_managed=true, container_name=ovn_controller) Feb 20 03:48:44 localhost podman[101395]: 2026-02-20 08:48:44.559865566 +0000 UTC m=+0.196027354 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-iscsid-container, architecture=x86_64, vcs-type=git, container_name=iscsid, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2026-01-12T22:34:43Z, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, version=17.1.13, distribution-scope=public, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Feb 20 03:48:44 localhost podman[101403]: 2026-02-20 08:48:44.478191991 +0000 UTC m=+0.102252845 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, 
config_id=tripleo_step5, io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, managed_by=tripleo_ansible, io.openshift.expose-services=, tcib_managed=true, container_name=nova_compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:48:44 localhost podman[101396]: 2026-02-20 08:48:44.582727807 +0000 UTC m=+0.216613784 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.13, container_name=ovn_controller, batch=17.1_20260112.1, build-date=2026-01-12T22:36:40Z, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-type=git, config_data={'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 20 03:48:44 localhost podman[101396]: unhealthy Feb 20 03:48:44 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:48:44 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Failed with result 'exit-code'. 
Feb 20 03:48:44 localhost podman[101395]: 2026-02-20 08:48:44.602823304 +0000 UTC m=+0.238985092 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, container_name=iscsid, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, release=1766032510, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 iscsid, 
io.buildah.version=1.41.5, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, build-date=2026-01-12T22:34:43Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-type=git, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, architecture=x86_64, name=rhosp-rhel9/openstack-iscsid) Feb 20 03:48:44 localhost podman[101403]: 2026-02-20 08:48:44.61420232 +0000 UTC m=+0.238263134 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-type=git, architecture=x86_64, config_id=tripleo_step5, org.opencontainers.image.created=2026-01-12T23:32:04Z, build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, release=1766032510, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, version=17.1.13, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_compute, url=https://www.redhat.com, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:48:44 localhost systemd[1]: 
47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. Feb 20 03:48:44 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. Feb 20 03:48:44 localhost podman[101397]: 2026-02-20 08:48:44.668956014 +0000 UTC m=+0.299238195 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, architecture=x86_64, batch=17.1_20260112.1, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-collectd, io.openshift.expose-services=, release=1766032510, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, version=17.1.13) Feb 20 03:48:44 localhost podman[101397]: 2026-02-20 08:48:44.70434156 +0000 UTC m=+0.334623751 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.openshift.expose-services=, version=17.1.13, io.buildah.version=1.41.5, vendor=Red Hat, Inc., description=Red Hat OpenStack 
Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, name=rhosp-rhel9/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, build-date=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com) Feb 20 03:48:44 localhost 
systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:48:47 localhost sshd[101497]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:48:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:48:49 localhost podman[101499]: 2026-02-20 08:48:49.43468926 +0000 UTC m=+0.077094113 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1766032510, io.buildah.version=1.41.5, version=17.1.13, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, container_name=nova_migration_target, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64) Feb 20 03:48:49 localhost podman[101499]: 2026-02-20 08:48:49.833853086 +0000 UTC m=+0.476257929 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.13, io.openshift.expose-services=, tcib_managed=true, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, distribution-scope=public, container_name=nova_migration_target, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Feb 20 03:48:49 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. 
Feb 20 03:48:54 localhost sshd[101522]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:49:05 localhost sshd[101524]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:49:09 localhost sshd[101526]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:49:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:49:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:49:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:49:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:49:12 localhost systemd[1]: tmp-crun.cClGZY.mount: Deactivated successfully. Feb 20 03:49:12 localhost podman[101531]: 2026-02-20 08:49:12.448530083 +0000 UTC m=+0.078665765 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2026-01-12T23:07:30Z, container_name=ceilometer_agent_ipmi, version=17.1.13, managed_by=tripleo_ansible, release=1766032510, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.5, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, tcib_managed=true, name=rhosp-rhel9/openstack-ceilometer-ipmi, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.created=2026-01-12T23:07:30Z) Feb 20 03:49:12 localhost podman[101531]: 2026-02-20 08:49:12.501951513 +0000 UTC m=+0.132087165 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, build-date=2026-01-12T23:07:30Z, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., io.buildah.version=1.41.5, architecture=x86_64, release=1766032510, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, maintainer=OpenStack TripleO Team, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 20 03:49:12 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. Feb 20 03:49:12 localhost podman[101529]: 2026-02-20 08:49:12.506791552 +0000 UTC m=+0.140620612 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, container_name=ceilometer_agent_compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20260112.1, build-date=2026-01-12T23:07:47Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_id=tripleo_step4, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-compute, vendor=Red Hat, Inc., version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, release=1766032510) Feb 20 03:49:12 localhost podman[101529]: 2026-02-20 08:49:12.589805092 +0000 UTC m=+0.223634132 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, io.openshift.expose-services=, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, config_id=tripleo_step4, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, version=17.1.13, batch=17.1_20260112.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
tcib_managed=true) Feb 20 03:49:12 localhost podman[101528]: 2026-02-20 08:49:12.559246555 +0000 UTC m=+0.193388583 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., io.buildah.version=1.41.5, distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step1, 
version=17.1.13, build-date=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, release=1766032510, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 20 03:49:12 localhost podman[101530]: 2026-02-20 08:49:12.615327365 +0000 UTC m=+0.246911005 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, vcs-type=git, maintainer=OpenStack TripleO Team, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, build-date=2026-01-12T22:10:15Z, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-cron, config_id=tripleo_step4, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:49:12 localhost podman[101530]: 2026-02-20 08:49:12.62451059 +0000 UTC m=+0.256094270 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.41.5, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': 
{'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1766032510, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1, version=17.1.13, io.openshift.expose-services=, url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, architecture=x86_64) Feb 20 03:49:12 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. 
Feb 20 03:49:12 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. Feb 20 03:49:12 localhost podman[101528]: 2026-02-20 08:49:12.779940678 +0000 UTC m=+0.414087726 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, tcib_managed=true, 
konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-qdrouterd, config_id=tripleo_step1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, version=17.1.13, container_name=metrics_qdr, vcs-type=git) Feb 20 03:49:12 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:49:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:49:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:49:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:49:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:49:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:49:15 localhost systemd[1]: tmp-crun.FQgp71.mount: Deactivated successfully. 
Feb 20 03:49:15 localhost podman[101631]: 2026-02-20 08:49:15.454761389 +0000 UTC m=+0.095095245 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vcs-type=git, vendor=Red Hat, Inc., version=17.1.13, tcib_managed=true, url=https://www.redhat.com, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 
'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.5) Feb 20 03:49:15 localhost podman[101633]: 2026-02-20 08:49:15.51017982 +0000 UTC m=+0.144170626 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, name=rhosp-rhel9/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:36:40Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, build-date=2026-01-12T22:36:40Z, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible) Feb 20 03:49:15 localhost podman[101641]: 2026-02-20 08:49:15.562946132 +0000 UTC m=+0.191747609 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, 
org.opencontainers.image.created=2026-01-12T23:32:04Z, batch=17.1_20260112.1, container_name=nova_compute, name=rhosp-rhel9/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.buildah.version=1.41.5, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=) Feb 20 03:49:15 localhost podman[101633]: 2026-02-20 08:49:15.593782927 +0000 UTC m=+0.227773753 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, distribution-scope=public, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, url=https://www.redhat.com, build-date=2026-01-12T22:36:40Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., container_name=ovn_controller, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ovn-controller, release=1766032510, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:36:40Z, name=rhosp-rhel9/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.13) Feb 20 03:49:15 localhost podman[101633]: unhealthy Feb 20 03:49:15 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:49:15 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Failed with result 'exit-code'. 
Feb 20 03:49:15 localhost podman[101634]: 2026-02-20 08:49:15.61219973 +0000 UTC m=+0.242918529 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-collectd, architecture=x86_64, com.redhat.component=openstack-collectd-container, org.opencontainers.image.created=2026-01-12T22:10:15Z, build-date=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, container_name=collectd, release=1766032510, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.buildah.version=1.41.5, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:49:15 localhost podman[101634]: 2026-02-20 08:49:15.621420596 +0000 UTC m=+0.252139385 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-collectd, com.redhat.component=openstack-collectd-container, release=1766032510, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, architecture=x86_64, container_name=collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., config_id=tripleo_step3, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z) Feb 20 03:49:15 localhost podman[101632]: 2026-02-20 08:49:15.480494367 +0000 UTC m=+0.116842146 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, 
architecture=x86_64, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, managed_by=tripleo_ansible, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, vcs-type=git, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:34:43Z, io.openshift.expose-services=, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Feb 20 03:49:15 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:49:15 localhost podman[101631]: 2026-02-20 08:49:15.639712876 +0000 UTC m=+0.280046782 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ovn_metadata_agent, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, distribution-scope=public, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:56:19Z, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, 
config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, architecture=x86_64) Feb 20 03:49:15 localhost podman[101631]: unhealthy Feb 20 03:49:15 localhost podman[101641]: 2026-02-20 08:49:15.649750504 +0000 UTC m=+0.278551941 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, architecture=x86_64, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, io.buildah.version=1.41.5, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1) Feb 20 03:49:15 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:49:15 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Failed with result 'exit-code'. 
Feb 20 03:49:15 localhost podman[101632]: 2026-02-20 08:49:15.664754895 +0000 UTC m=+0.301102624 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2026-01-12T22:34:43Z, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, container_name=iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
batch=17.1_20260112.1, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible) Feb 20 03:49:15 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. Feb 20 03:49:15 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. Feb 20 03:49:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:49:20 localhost systemd[1]: tmp-crun.aDL3Bt.mount: Deactivated successfully. 
Feb 20 03:49:20 localhost podman[101736]: 2026-02-20 08:49:20.442258346 +0000 UTC m=+0.082365673 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., architecture=x86_64, release=1766032510, config_id=tripleo_step4, url=https://www.redhat.com, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, version=17.1.13, managed_by=tripleo_ansible, io.buildah.version=1.41.5) Feb 20 03:49:20 localhost podman[101736]: 2026-02-20 08:49:20.812867119 +0000 UTC m=+0.452974446 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, config_id=tripleo_step4, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, distribution-scope=public, release=1766032510, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, tcib_managed=true, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, container_name=nova_migration_target) Feb 20 03:49:20 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:49:26 localhost sshd[101759]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:49:28 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 20 03:49:28 localhost recover_tripleo_nova_virtqemud[101777]: 63703 Feb 20 03:49:28 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 20 03:49:28 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Feb 20 03:49:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:49:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:49:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:49:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:49:43 localhost podman[101839]: 2026-02-20 08:49:43.453853892 +0000 UTC m=+0.092265769 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 
qdrouterd, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd) Feb 20 03:49:43 localhost podman[101840]: 2026-02-20 08:49:43.501184408 +0000 UTC m=+0.139495642 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, config_id=tripleo_step4, url=https://www.redhat.com, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.5, managed_by=tripleo_ansible, summary=Red Hat 
OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, name=rhosp-rhel9/openstack-ceilometer-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, build-date=2026-01-12T23:07:47Z, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, 
io.openshift.expose-services=, release=1766032510) Feb 20 03:49:43 localhost podman[101840]: 2026-02-20 08:49:43.555214684 +0000 UTC m=+0.193525888 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ceilometer_agent_compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, architecture=x86_64, 
org.opencontainers.image.created=2026-01-12T23:07:47Z, release=1766032510, url=https://www.redhat.com, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, build-date=2026-01-12T23:07:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp-rhel9/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team) Feb 20 03:49:43 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. Feb 20 03:49:43 localhost podman[101842]: 2026-02-20 08:49:43.605301113 +0000 UTC m=+0.237876514 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, version=17.1.13, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, architecture=x86_64, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi) Feb 20 03:49:43 localhost podman[101842]: 2026-02-20 08:49:43.658694541 +0000 UTC m=+0.291269932 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, 
org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, config_id=tripleo_step4, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ceilometer-ipmi, batch=17.1_20260112.1, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:30Z, build-date=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO 
Team, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, url=https://www.redhat.com) Feb 20 03:49:43 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. Feb 20 03:49:43 localhost podman[101839]: 2026-02-20 08:49:43.674837253 +0000 UTC m=+0.313249080 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, tcib_managed=true, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack 
Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, build-date=2026-01-12T22:10:14Z, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., io.buildah.version=1.41.5) Feb 20 03:49:43 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. 
Feb 20 03:49:43 localhost podman[101841]: 2026-02-20 08:49:43.556771465 +0000 UTC m=+0.191514913 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, version=17.1.13, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-cron-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, build-date=2026-01-12T22:10:15Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Feb 20 03:49:43 localhost podman[101841]: 2026-02-20 08:49:43.74203396 +0000 UTC m=+0.376777358 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20260112.1, build-date=2026-01-12T22:10:15Z, distribution-scope=public, config_id=tripleo_step4, io.buildah.version=1.41.5, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://www.redhat.com, vendor=Red Hat, Inc.) Feb 20 03:49:43 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. Feb 20 03:49:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:49:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:49:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:49:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. 
Feb 20 03:49:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:49:46 localhost podman[101945]: 2026-02-20 08:49:46.460982861 +0000 UTC m=+0.092620438 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.created=2026-01-12T22:10:15Z, release=1766032510, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.5, distribution-scope=public, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Feb 20 03:49:46 localhost podman[101945]: 2026-02-20 08:49:46.47214939 +0000 UTC m=+0.103786957 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20260112.1, config_id=tripleo_step3, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, container_name=collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, tcib_managed=true, vcs-type=git, 
managed_by=tripleo_ansible, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=) Feb 20 03:49:46 localhost podman[101943]: 2026-02-20 08:49:46.508544334 +0000 UTC m=+0.146326245 container health_status 
47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, tcib_managed=true, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1766032510, description=Red Hat OpenStack Platform 17.1 iscsid, 
org.opencontainers.image.created=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., vcs-type=git, name=rhosp-rhel9/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, container_name=iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, url=https://www.redhat.com, io.buildah.version=1.41.5, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team) Feb 20 03:49:46 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:49:46 localhost podman[101954]: 2026-02-20 08:49:46.570800579 +0000 UTC m=+0.198174952 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-compute, release=1766032510, url=https://www.redhat.com, version=17.1.13, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=nova_compute, io.openshift.expose-services=, io.buildah.version=1.41.5, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, batch=17.1_20260112.1, 
vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 20 03:49:46 localhost podman[101943]: 2026-02-20 08:49:46.599940118 +0000 UTC m=+0.237722009 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat 
OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, release=1766032510, managed_by=tripleo_ansible, io.openshift.expose-services=, config_id=tripleo_step3, vcs-type=git, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, tcib_managed=true, version=17.1.13, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1) Feb 20 03:49:46 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. Feb 20 03:49:46 localhost podman[101942]: 2026-02-20 08:49:46.615769721 +0000 UTC m=+0.256155222 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20260112.1, managed_by=tripleo_ansible, build-date=2026-01-12T22:56:19Z, org.opencontainers.image.created=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=ovn_metadata_agent, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, architecture=x86_64, release=1766032510, version=17.1.13, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn) Feb 20 03:49:46 localhost 
podman[101954]: 2026-02-20 08:49:46.624753421 +0000 UTC m=+0.252127814 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=nova_compute, url=https://www.redhat.com, version=17.1.13, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, architecture=x86_64, io.buildah.version=1.41.5, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20260112.1) Feb 20 03:49:46 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. 
Feb 20 03:49:46 localhost podman[101942]: 2026-02-20 08:49:46.656801919 +0000 UTC m=+0.297187360 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.opencontainers.image.created=2026-01-12T22:56:19Z, managed_by=tripleo_ansible, vcs-type=git, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=) Feb 20 03:49:46 localhost podman[101942]: unhealthy Feb 20 03:49:46 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:49:46 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Failed with result 'exit-code'. 
Feb 20 03:49:46 localhost podman[101944]: 2026-02-20 08:49:46.709323513 +0000 UTC m=+0.343738504 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, distribution-scope=public, url=https://www.redhat.com, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:36:40Z, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:36:40Z, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510) Feb 20 03:49:46 localhost podman[101944]: 2026-02-20 08:49:46.728671351 +0000 UTC m=+0.363086262 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp-rhel9/openstack-ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, architecture=x86_64, maintainer=OpenStack TripleO Team, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, version=17.1.13, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.created=2026-01-12T22:36:40Z, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, 
build-date=2026-01-12T22:36:40Z, managed_by=tripleo_ansible, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=) Feb 20 03:49:46 localhost podman[101944]: unhealthy Feb 20 03:49:46 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:49:46 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Failed with result 'exit-code'. Feb 20 03:49:47 localhost ceph-osd[31981]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 20 03:49:47 localhost ceph-osd[31981]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.1 total, 600.0 interval#012Cumulative writes: 5073 writes, 22K keys, 5073 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5073 writes, 653 syncs, 7.77 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Feb 20 03:49:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. 
Feb 20 03:49:51 localhost podman[102047]: 2026-02-20 08:49:51.441309917 +0000 UTC m=+0.081385128 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 
nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, build-date=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., container_name=nova_migration_target, config_id=tripleo_step4, vcs-type=git, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute) Feb 20 03:49:51 localhost ceph-osd[32921]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 20 03:49:51 localhost ceph-osd[32921]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.1 total, 600.0 interval#012Cumulative writes: 5513 writes, 24K keys, 5513 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5513 writes, 750 syncs, 7.35 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Feb 20 03:49:51 localhost podman[102047]: 2026-02-20 08:49:51.79699168 +0000 UTC m=+0.437066881 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, release=1766032510, batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 
17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-nova-compute, url=https://www.redhat.com, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, tcib_managed=true, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Feb 20 03:49:51 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:50:12 localhost sshd[102070]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:50:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:50:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:50:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:50:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:50:14 localhost podman[102072]: 2026-02-20 08:50:14.453678525 +0000 UTC m=+0.091316112 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_id=tripleo_step1, name=rhosp-rhel9/openstack-qdrouterd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.5) Feb 20 03:50:14 localhost podman[102074]: 2026-02-20 08:50:14.499956884 +0000 UTC m=+0.132650020 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, 
vcs-type=git, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, version=17.1.13, managed_by=tripleo_ansible, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, 
release=1766032510, build-date=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-cron-container, config_id=tripleo_step4) Feb 20 03:50:14 localhost podman[102074]: 2026-02-20 08:50:14.538819243 +0000 UTC m=+0.171512339 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vendor=Red Hat, Inc., tcib_managed=true, vcs-type=git, name=rhosp-rhel9/openstack-cron, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.41.5, architecture=x86_64, version=17.1.13, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 
'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, batch=17.1_20260112.1) Feb 20 03:50:14 localhost systemd[1]: tmp-crun.eGaGte.mount: Deactivated successfully. Feb 20 03:50:14 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. 
Feb 20 03:50:14 localhost podman[102073]: 2026-02-20 08:50:14.567513581 +0000 UTC m=+0.203536526 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, architecture=x86_64, container_name=ceilometer_agent_compute, tcib_managed=true, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ceilometer-compute, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:47Z, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510) Feb 20 03:50:14 localhost podman[102073]: 2026-02-20 08:50:14.600881703 +0000 UTC m=+0.236904648 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2026-01-12T23:07:47Z, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ceilometer_agent_compute, io.openshift.expose-services=, release=1766032510, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.created=2026-01-12T23:07:47Z, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-compute, architecture=x86_64, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, tcib_managed=true, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git) Feb 20 03:50:14 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. 
Feb 20 03:50:14 localhost podman[102075]: 2026-02-20 08:50:14.610622063 +0000 UTC m=+0.240046111 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1766032510, architecture=x86_64, build-date=2026-01-12T23:07:30Z, org.opencontainers.image.created=2026-01-12T23:07:30Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, config_id=tripleo_step4, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com) Feb 20 03:50:14 localhost podman[102075]: 2026-02-20 08:50:14.637814681 +0000 UTC m=+0.267238709 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, build-date=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, distribution-scope=public, batch=17.1_20260112.1, config_id=tripleo_step4, managed_by=tripleo_ansible) Feb 20 03:50:14 localhost podman[102072]: 2026-02-20 08:50:14.64900757 +0000 UTC m=+0.286645157 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, build-date=2026-01-12T22:10:14Z, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, batch=17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, version=17.1.13, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, release=1766032510) Feb 20 03:50:14 
localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. Feb 20 03:50:14 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:50:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:50:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:50:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:50:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:50:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:50:17 localhost podman[102174]: 2026-02-20 08:50:17.445620078 +0000 UTC m=+0.078150751 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1766032510, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, 
name=rhosp-rhel9/openstack-iscsid, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20260112.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, container_name=iscsid, distribution-scope=public, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, managed_by=tripleo_ansible) Feb 20 03:50:17 localhost systemd[1]: tmp-crun.vFnWcA.mount: Deactivated successfully. 
Feb 20 03:50:17 localhost podman[102175]: 2026-02-20 08:50:17.468329955 +0000 UTC m=+0.091079156 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ovn-controller, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, container_name=ovn_controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.13) Feb 20 03:50:17 localhost podman[102175]: 2026-02-20 08:50:17.502171711 +0000 UTC m=+0.124920892 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, version=17.1.13, io.buildah.version=1.41.5, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, batch=17.1_20260112.1, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, managed_by=tripleo_ansible, release=1766032510, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Feb 20 03:50:17 localhost podman[102175]: unhealthy Feb 20 03:50:17 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:50:17 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Failed with result 'exit-code'. Feb 20 03:50:17 localhost podman[102181]: 2026-02-20 08:50:17.5144699 +0000 UTC m=+0.133863021 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, container_name=collectd, build-date=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1766032510, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, architecture=x86_64, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:50:17 localhost podman[102174]: 2026-02-20 08:50:17.545661244 +0000 UTC m=+0.178191927 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b 
(image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, build-date=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, version=17.1.13, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, batch=17.1_20260112.1, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, release=1766032510, io.buildah.version=1.41.5, container_name=iscsid, config_id=tripleo_step3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:34:43Z, url=https://www.redhat.com, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Feb 20 03:50:17 localhost podman[102181]: 2026-02-20 08:50:17.552857537 +0000 UTC m=+0.172250658 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-collectd, batch=17.1_20260112.1, vendor=Red Hat, Inc., release=1766032510, tcib_managed=true, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=) Feb 20 03:50:17 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. Feb 20 03:50:17 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. 
Feb 20 03:50:17 localhost podman[102173]: 2026-02-20 08:50:17.554884371 +0000 UTC m=+0.184927338 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1766032510, batch=17.1_20260112.1, container_name=ovn_metadata_agent, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:56:19Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, io.openshift.expose-services=, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 20 03:50:17 localhost podman[102186]: 2026-02-20 08:50:17.614528437 +0000 UTC m=+0.229870691 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, url=https://www.redhat.com, config_id=tripleo_step5, vcs-type=git, container_name=nova_compute, batch=17.1_20260112.1, 
com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, release=1766032510, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.openshift.expose-services=, version=17.1.13, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 20 03:50:17 localhost podman[102173]: 2026-02-20 08:50:17.636653517 +0000 UTC m=+0.266696494 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, release=1766032510, vcs-type=git, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.buildah.version=1.41.5, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=) Feb 20 03:50:17 localhost podman[102173]: unhealthy Feb 20 03:50:17 localhost podman[102186]: 2026-02-20 08:50:17.646738268 +0000 UTC m=+0.262080502 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-compute, container_name=nova_compute, build-date=2026-01-12T23:32:04Z, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 20 03:50:17 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:50:17 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Failed with result 'exit-code'. Feb 20 03:50:17 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. Feb 20 03:50:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. 
Feb 20 03:50:22 localhost podman[102274]: 2026-02-20 08:50:22.44043043 +0000 UTC m=+0.078797548 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, org.opencontainers.image.created=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, release=1766032510, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, vendor=Red Hat, Inc., version=17.1.13, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 20 03:50:22 localhost podman[102274]: 2026-02-20 08:50:22.805198456 +0000 UTC m=+0.443565584 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z, tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, version=17.1.13, container_name=nova_migration_target, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., architecture=x86_64) Feb 20 03:50:22 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:50:34 localhost sshd[102374]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:50:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:50:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. 
Feb 20 03:50:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:50:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:50:45 localhost podman[102376]: 2026-02-20 08:50:45.4521269 +0000 UTC m=+0.091358525 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp-rhel9/openstack-qdrouterd, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, release=1766032510, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step1, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, version=17.1.13, build-date=2026-01-12T22:10:14Z, url=https://www.redhat.com, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 20 03:50:45 localhost podman[102378]: 2026-02-20 08:50:45.494610206 +0000 UTC m=+0.128658812 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, batch=17.1_20260112.1, config_id=tripleo_step4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.5, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red 
Hat OpenStack Platform 17.1 cron, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, url=https://www.redhat.com, name=rhosp-rhel9/openstack-cron, version=17.1.13, vcs-type=git) Feb 20 03:50:45 localhost podman[102378]: 2026-02-20 08:50:45.533839445 +0000 UTC m=+0.167888081 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 cron, 
name=rhosp-rhel9/openstack-cron, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, org.opencontainers.image.created=2026-01-12T22:10:15Z, 
distribution-scope=public, vcs-type=git, io.openshift.expose-services=, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:50:45 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. Feb 20 03:50:45 localhost podman[102377]: 2026-02-20 08:50:45.554885928 +0000 UTC m=+0.190959919 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:47Z, version=17.1.13, distribution-scope=public, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp-rhel9/openstack-ceilometer-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:47Z, vendor=Red Hat, Inc., architecture=x86_64, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, io.openshift.expose-services=, release=1766032510) Feb 20 03:50:45 localhost podman[102379]: 2026-02-20 08:50:45.598991777 +0000 UTC m=+0.229245332 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, build-date=2026-01-12T23:07:30Z, vendor=Red Hat, Inc., 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:30Z, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi, distribution-scope=public, version=17.1.13, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-type=git, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:50:45 localhost podman[102377]: 2026-02-20 08:50:45.607898446 +0000 UTC m=+0.243972407 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, distribution-scope=public, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1766032510, url=https://www.redhat.com, build-date=2026-01-12T23:07:47Z, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, 
managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 20 03:50:45 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. Feb 20 03:50:45 localhost podman[102379]: 2026-02-20 08:50:45.650683711 +0000 UTC m=+0.280937246 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, org.opencontainers.image.created=2026-01-12T23:07:30Z, architecture=x86_64, batch=17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-type=git, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.5, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 20 03:50:45 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. 
Feb 20 03:50:45 localhost podman[102376]: 2026-02-20 08:50:45.70112151 +0000 UTC m=+0.340353195 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.13, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:14Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, config_id=tripleo_step1, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, vendor=Red Hat, Inc., io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible) Feb 20 03:50:45 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:50:46 localhost systemd[1]: tmp-crun.rPNgv7.mount: Deactivated successfully. Feb 20 03:50:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:50:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:50:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:50:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:50:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. 
Feb 20 03:50:48 localhost podman[102475]: 2026-02-20 08:50:48.433160231 +0000 UTC m=+0.073122266 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.13, io.buildah.version=1.41.5, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., batch=17.1_20260112.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:56:19Z, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f) Feb 20 03:50:48 localhost podman[102476]: 2026-02-20 08:50:48.448043349 +0000 UTC m=+0.085496437 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.13, config_id=tripleo_step3, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=iscsid, batch=17.1_20260112.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, vcs-ref=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.expose-services=, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:34:43Z, tcib_managed=true, release=1766032510) Feb 20 03:50:48 localhost podman[102476]: 2026-02-20 08:50:48.461677504 +0000 UTC m=+0.099130612 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b 
(image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:34:43Z, version=17.1.13, batch=17.1_20260112.1, io.openshift.expose-services=, name=rhosp-rhel9/openstack-iscsid, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=iscsid, tcib_managed=true, io.buildah.version=1.41.5, vendor=Red Hat, Inc., org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, vcs-type=git, release=1766032510, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Feb 20 03:50:48 localhost systemd[1]: tmp-crun.ry9XNE.mount: Deactivated successfully. Feb 20 03:50:48 localhost podman[102484]: 2026-02-20 08:50:48.512452972 +0000 UTC m=+0.141960798 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, release=1766032510, config_id=tripleo_step5, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute, tcib_managed=true, distribution-scope=public, build-date=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 20 03:50:48 localhost 
systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. Feb 20 03:50:48 localhost podman[102475]: 2026-02-20 08:50:48.529710673 +0000 UTC m=+0.169672708 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, maintainer=OpenStack TripleO Team, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, distribution-scope=public, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 
'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:50:48 localhost podman[102475]: unhealthy Feb 20 03:50:48 localhost podman[102484]: 2026-02-20 08:50:48.537322847 +0000 UTC m=+0.166830693 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, version=17.1.13, tcib_managed=true, io.openshift.expose-services=, container_name=nova_compute, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, release=1766032510, architecture=x86_64, url=https://www.redhat.com, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public) Feb 20 03:50:48 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:50:48 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Failed with result 'exit-code'. Feb 20 03:50:48 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. 
Feb 20 03:50:48 localhost podman[102477]: 2026-02-20 08:50:48.602795558 +0000 UTC m=+0.237221675 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, build-date=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, io.buildah.version=1.41.5, managed_by=tripleo_ansible, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ovn-controller, config_id=tripleo_step4, tcib_managed=true, vendor=Red Hat, Inc.) Feb 20 03:50:48 localhost podman[102478]: 2026-02-20 08:50:48.646540208 +0000 UTC m=+0.278433077 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-collectd, tcib_managed=true, batch=17.1_20260112.1, build-date=2026-01-12T22:10:15Z, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13) Feb 20 03:50:48 localhost podman[102478]: 2026-02-20 08:50:48.659147425 +0000 UTC m=+0.291040384 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, url=https://www.redhat.com, vendor=Red Hat, Inc., container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-collectd, tcib_managed=true, 
vcs-type=git, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public) Feb 20 
03:50:48 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:50:48 localhost podman[102477]: 2026-02-20 08:50:48.671996639 +0000 UTC m=+0.306422776 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, io.buildah.version=1.41.5, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:36:40Z, managed_by=tripleo_ansible, release=1766032510, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, vcs-type=git, config_id=tripleo_step4, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:36:40Z, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.expose-services=, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c) Feb 20 03:50:48 localhost podman[102477]: unhealthy Feb 20 03:50:48 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:50:48 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Failed with result 'exit-code'. Feb 20 03:50:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:50:53 localhost podman[102578]: 2026-02-20 08:50:53.442658299 +0000 UTC m=+0.081691786 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, build-date=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, batch=17.1_20260112.1, release=1766032510, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, io.openshift.expose-services=, version=17.1.13, config_id=tripleo_step4, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, container_name=nova_migration_target) Feb 20 03:50:53 localhost podman[102578]: 2026-02-20 08:50:53.833649807 +0000 UTC m=+0.472683244 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 
nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2026-01-12T23:32:04Z, batch=17.1_20260112.1, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:32:04Z, distribution-scope=public, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, release=1766032510, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, version=17.1.13, vendor=Red Hat, Inc., tcib_managed=true, io.buildah.version=1.41.5) Feb 20 03:50:53 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:51:12 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 20 03:51:12 localhost recover_tripleo_nova_virtqemud[102602]: 63703 Feb 20 03:51:12 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 20 03:51:12 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 20 03:51:15 localhost sshd[102603]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:51:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:51:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:51:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:51:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. 
Feb 20 03:51:16 localhost podman[102606]: 2026-02-20 08:51:16.008634187 +0000 UTC m=+0.093136192 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, release=1766032510, build-date=2026-01-12T23:07:47Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ceilometer-compute, io.buildah.version=1.41.5, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.13, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_compute, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, maintainer=OpenStack TripleO Team) Feb 20 03:51:16 localhost podman[102606]: 2026-02-20 08:51:16.0379043 +0000 UTC m=+0.122406325 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-compute, release=1766032510, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:07:47Z, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.buildah.version=1.41.5, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git) Feb 20 03:51:16 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. 
Feb 20 03:51:16 localhost podman[102608]: 2026-02-20 08:51:16.058062199 +0000 UTC m=+0.135824874 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:07:30Z, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-ipmi, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, config_id=tripleo_step4, version=17.1.13, release=1766032510, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=) Feb 20 03:51:16 localhost podman[102605]: 2026-02-20 08:51:16.116188294 +0000 UTC m=+0.202188129 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, release=1766032510, vcs-type=git, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:14Z, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=metrics_qdr, version=17.1.13, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 20 03:51:16 localhost podman[102607]: 2026-02-20 08:51:16.152589158 +0000 UTC m=+0.233691112 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_id=tripleo_step4, url=https://www.redhat.com, vcs-type=git, container_name=logrotate_crond, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2026-01-12T22:10:15Z, distribution-scope=public, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1766032510, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-cron, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:51:16 localhost podman[102607]: 2026-02-20 08:51:16.156129322 +0000 UTC m=+0.237231286 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, architecture=x86_64, name=rhosp-rhel9/openstack-cron, build-date=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, batch=17.1_20260112.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, container_name=logrotate_crond, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com) Feb 20 03:51:16 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. Feb 20 03:51:16 localhost podman[102608]: 2026-02-20 08:51:16.18930659 +0000 UTC m=+0.267069445 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:07:30Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, vcs-type=git, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1766032510, distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container) Feb 20 03:51:16 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. 
Feb 20 03:51:16 localhost podman[102605]: 2026-02-20 08:51:16.307792998 +0000 UTC m=+0.393792833 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.13, config_id=tripleo_step1, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, url=https://www.redhat.com, io.buildah.version=1.41.5, io.openshift.expose-services=, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=metrics_qdr, managed_by=tripleo_ansible, release=1766032510, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 20 03:51:16 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:51:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:51:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:51:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:51:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:51:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. 
Feb 20 03:51:19 localhost podman[102707]: 2026-02-20 08:51:19.475809421 +0000 UTC m=+0.109740656 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1766032510, vcs-ref=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, 
architecture=x86_64, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, name=rhosp-rhel9/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, url=https://www.redhat.com, io.buildah.version=1.41.5, build-date=2026-01-12T22:34:43Z, version=17.1.13, com.redhat.component=openstack-iscsid-container, batch=17.1_20260112.1, vcs-type=git) Feb 20 03:51:19 localhost podman[102707]: 2026-02-20 08:51:19.518738969 +0000 UTC m=+0.152670184 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, tcib_managed=true, name=rhosp-rhel9/openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.created=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., build-date=2026-01-12T22:34:43Z, container_name=iscsid, architecture=x86_64, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.5, config_id=tripleo_step3, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team) Feb 20 03:51:19 localhost systemd[1]: tmp-crun.UIXmMn.mount: Deactivated successfully. Feb 20 03:51:19 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. 
Feb 20 03:51:19 localhost podman[102716]: 2026-02-20 08:51:19.561952945 +0000 UTC m=+0.186751296 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', 
'/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, url=https://www.redhat.com, config_id=tripleo_step5, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=nova_compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, release=1766032510, distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., managed_by=tripleo_ansible) Feb 20 03:51:19 localhost podman[102716]: 2026-02-20 08:51:19.585660109 +0000 UTC m=+0.210458460 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, container_name=nova_compute, release=1766032510, 
com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, vendor=Red Hat, Inc., batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:51:19 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. Feb 20 03:51:19 localhost podman[102709]: 2026-02-20 08:51:19.623266145 +0000 UTC m=+0.248153798 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.41.5, batch=17.1_20260112.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=rhosp-rhel9/openstack-collectd, konflux.additional-tags=17.1.13 
17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, release=1766032510, build-date=2026-01-12T22:10:15Z, url=https://www.redhat.com, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:15Z) Feb 20 03:51:19 localhost podman[102709]: 2026-02-20 08:51:19.640699131 +0000 UTC m=+0.265586754 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, 
com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, vendor=Red Hat, Inc., version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, io.openshift.expose-services=, tcib_managed=true, container_name=collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1766032510, build-date=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, architecture=x86_64, config_id=tripleo_step3, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 20 03:51:19 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:51:19 localhost podman[102708]: 2026-02-20 08:51:19.658743544 +0000 UTC m=+0.289621767 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ovn-controller, url=https://www.redhat.com, io.openshift.expose-services=, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.5, build-date=2026-01-12T22:36:40Z, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, container_name=ovn_controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:51:19 localhost podman[102708]: 2026-02-20 08:51:19.671804513 +0000 UTC m=+0.302682786 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, release=1766032510, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20260112.1, build-date=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:36:40Z, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, vendor=Red Hat, Inc., container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vcs-type=git, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=) Feb 20 03:51:19 localhost podman[102708]: unhealthy Feb 20 03:51:19 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:51:19 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Failed with result 'exit-code'. 
Feb 20 03:51:19 localhost podman[102706]: 2026-02-20 08:51:19.536777392 +0000 UTC m=+0.172666590 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vcs-type=git, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2026-01-12T22:56:19Z, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.5, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, managed_by=tripleo_ansible, batch=17.1_20260112.1, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 20 03:51:19 localhost podman[102706]: 2026-02-20 08:51:19.718819251 +0000 UTC m=+0.354708449 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, container_name=ovn_metadata_agent, build-date=2026-01-12T22:56:19Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.13 17.1_20260112.1, 
managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-type=git, version=17.1.13, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, release=1766032510, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc.) Feb 20 03:51:19 localhost podman[102706]: unhealthy Feb 20 03:51:19 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:51:19 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Failed with result 'exit-code'. Feb 20 03:51:22 localhost sshd[102807]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:51:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:51:24 localhost systemd[1]: tmp-crun.12hMsl.mount: Deactivated successfully. Feb 20 03:51:24 localhost podman[102809]: 2026-02-20 08:51:24.431195549 +0000 UTC m=+0.074177055 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, version=17.1.13, name=rhosp-rhel9/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2026-01-12T23:32:04Z, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, vcs-type=git, tcib_managed=true, release=1766032510, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container) Feb 20 03:51:24 localhost podman[102809]: 2026-02-20 08:51:24.824789056 +0000 UTC m=+0.467770562 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.41.5, distribution-scope=public, config_id=tripleo_step4, architecture=x86_64, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, maintainer=OpenStack TripleO Team, 
org.opencontainers.image.created=2026-01-12T23:32:04Z, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, release=1766032510, io.openshift.expose-services=, version=17.1.13, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute) Feb 20 03:51:24 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:51:43 localhost sshd[102961]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:51:43 localhost sshd[102962]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:51:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:51:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:51:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:51:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. 
Feb 20 03:51:46 localhost podman[102966]: 2026-02-20 08:51:46.461802317 +0000 UTC m=+0.094029166 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
build-date=2026-01-12T23:07:47Z, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, maintainer=OpenStack TripleO Team, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5) Feb 20 03:51:46 localhost podman[102965]: 2026-02-20 08:51:46.493916797 +0000 UTC m=+0.131404656 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.buildah.version=1.41.5, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20260112.1, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:14Z, url=https://www.redhat.com, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1766032510) Feb 20 03:51:46 localhost podman[102967]: 2026-02-20 08:51:46.51161731 +0000 UTC m=+0.140622212 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, batch=17.1_20260112.1, release=1766032510, config_id=tripleo_step4, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, name=rhosp-rhel9/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, io.openshift.expose-services=, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:15Z, 
konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Feb 20 03:51:46 localhost podman[102967]: 2026-02-20 08:51:46.522643085 +0000 UTC m=+0.151648007 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, name=rhosp-rhel9/openstack-cron, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, architecture=x86_64, url=https://www.redhat.com, vcs-type=git, container_name=logrotate_crond, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
build-date=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, managed_by=tripleo_ansible, release=1766032510, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Feb 20 03:51:46 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. Feb 20 03:51:46 localhost podman[102966]: 2026-02-20 08:51:46.564352131 +0000 UTC m=+0.196579050 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, distribution-scope=public, release=1766032510, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ceilometer-compute, io.buildah.version=1.41.5, build-date=2026-01-12T23:07:47Z, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com) Feb 20 03:51:46 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. 
Feb 20 03:51:46 localhost podman[102973]: 2026-02-20 08:51:46.564148625 +0000 UTC m=+0.188205175 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, distribution-scope=public, config_id=tripleo_step4, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-type=git, version=17.1.13, tcib_managed=true, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 20 03:51:46 localhost podman[102973]: 2026-02-20 08:51:46.647694349 +0000 UTC m=+0.271750839 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, io.buildah.version=1.41.5, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:07:30Z, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 20 03:51:46 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. 
Feb 20 03:51:46 localhost podman[102965]: 2026-02-20 08:51:46.724611707 +0000 UTC m=+0.362099526 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp-rhel9/openstack-qdrouterd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, build-date=2026-01-12T22:10:14Z, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, release=1766032510, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1) Feb 20 03:51:46 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:51:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:51:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:51:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:51:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:51:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. 
Feb 20 03:51:50 localhost podman[103063]: 2026-02-20 08:51:50.445550579 +0000 UTC m=+0.085637511 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ovn_metadata_agent, release=1766032510, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.buildah.version=1.41.5, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, distribution-scope=public, tcib_managed=true, io.openshift.expose-services=, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 20 03:51:50 localhost podman[103064]: 2026-02-20 08:51:50.490630935 +0000 UTC m=+0.127153532 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, distribution-scope=public, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, release=1766032510, io.buildah.version=1.41.5, build-date=2026-01-12T22:34:43Z, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, version=17.1.13, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:34:43Z, container_name=iscsid, name=rhosp-rhel9/openstack-iscsid) Feb 20 03:51:50 localhost podman[103064]: 2026-02-20 08:51:50.502155143 +0000 UTC m=+0.138677820 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b 
(image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:34:43Z, container_name=iscsid, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, io.buildah.version=1.41.5, batch=17.1_20260112.1, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, architecture=x86_64, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.13) Feb 20 03:51:50 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. Feb 20 03:51:50 localhost podman[103066]: 2026-02-20 08:51:50.462938954 +0000 UTC m=+0.093109051 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, build-date=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, container_name=collectd, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-collectd, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, version=17.1.13, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd) Feb 20 03:51:50 localhost podman[103063]: 2026-02-20 08:51:50.5371878 +0000 UTC m=+0.177274762 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, architecture=x86_64, container_name=ovn_metadata_agent, batch=17.1_20260112.1, vcs-type=git, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, 
tcib_managed=true, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Feb 20 03:51:50 localhost podman[103063]: unhealthy Feb 20 03:51:50 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:51:50 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Failed with result 'exit-code'. Feb 20 03:51:50 localhost podman[103071]: 2026-02-20 08:51:50.584841865 +0000 UTC m=+0.213283616 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, version=17.1.13, container_name=nova_compute, batch=17.1_20260112.1, release=1766032510, distribution-scope=public, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:32:04Z) Feb 20 03:51:50 localhost podman[103066]: 2026-02-20 08:51:50.600116623 +0000 UTC m=+0.230286730 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., batch=17.1_20260112.1, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, tcib_managed=true, version=17.1.13, distribution-scope=public, url=https://www.redhat.com, name=rhosp-rhel9/openstack-collectd, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:51:50 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. 
Feb 20 03:51:50 localhost podman[103065]: 2026-02-20 08:51:50.616722477 +0000 UTC m=+0.250044178 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ovn-controller, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, container_name=ovn_controller, vendor=Red Hat, Inc., vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z, architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z) Feb 20 03:51:50 localhost podman[103065]: 2026-02-20 08:51:50.63664784 +0000 UTC m=+0.269969571 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, container_name=ovn_controller, tcib_managed=true, version=17.1.13, maintainer=OpenStack TripleO Team, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:36:40Z, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, name=rhosp-rhel9/openstack-ovn-controller, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5) Feb 20 03:51:50 localhost podman[103065]: unhealthy Feb 20 03:51:50 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:51:50 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Failed with result 'exit-code'. Feb 20 03:51:50 localhost podman[103071]: 2026-02-20 08:51:50.688542398 +0000 UTC m=+0.316984149 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_id=tripleo_step5, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, io.buildah.version=1.41.5, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:32:04Z, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:51:50 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. Feb 20 03:51:51 localhost sshd[103169]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:51:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:51:55 localhost podman[103171]: 2026-02-20 08:51:55.441161314 +0000 UTC m=+0.080508774 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1766032510, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.5, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public) Feb 20 03:51:55 localhost podman[103171]: 2026-02-20 08:51:55.840934027 +0000 UTC m=+0.480281457 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.5, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, vcs-type=git, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, batch=17.1_20260112.1, distribution-scope=public) Feb 20 03:51:55 localhost 
systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:52:14 localhost sshd[103194]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:52:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:52:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:52:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:52:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:52:17 localhost podman[103199]: 2026-02-20 08:52:17.443209445 +0000 UTC m=+0.074801951 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20260112.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, release=1766032510, build-date=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, distribution-scope=public, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-type=git, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5) Feb 20 03:52:17 localhost podman[103199]: 2026-02-20 08:52:17.499586563 +0000 UTC m=+0.131179069 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20260112.1, 
vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2026-01-12T23:07:30Z, io.openshift.expose-services=, io.buildah.version=1.41.5, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, config_id=tripleo_step4, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, managed_by=tripleo_ansible, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, 
org.opencontainers.image.created=2026-01-12T23:07:30Z, maintainer=OpenStack TripleO Team) Feb 20 03:52:17 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. Feb 20 03:52:17 localhost podman[103198]: 2026-02-20 08:52:17.501022532 +0000 UTC m=+0.132775192 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, io.buildah.version=1.41.5, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, 
name=rhosp-rhel9/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:15Z, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, vendor=Red Hat, Inc.) Feb 20 03:52:17 localhost podman[103197]: 2026-02-20 08:52:17.560349729 +0000 UTC m=+0.195382837 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.13, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, name=rhosp-rhel9/openstack-ceilometer-compute, distribution-scope=public, io.buildah.version=1.41.5, release=1766032510, url=https://www.redhat.com, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp 
osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:07:47Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 20 03:52:17 localhost podman[103198]: 2026-02-20 08:52:17.582177543 +0000 UTC m=+0.213930203 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20260112.1, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 cron, 
container_name=logrotate_crond, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-cron, io.buildah.version=1.41.5, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Feb 20 03:52:17 localhost podman[103197]: 2026-02-20 08:52:17.593234599 +0000 UTC m=+0.228267677 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:07:47Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:07:47Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, version=17.1.13, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Feb 20 03:52:17 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. Feb 20 03:52:17 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. 
Feb 20 03:52:17 localhost podman[103196]: 2026-02-20 08:52:17.65797898 +0000 UTC m=+0.295546165 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step1, architecture=x86_64, 
build-date=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, tcib_managed=true, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1) Feb 20 03:52:17 localhost podman[103196]: 2026-02-20 08:52:17.883458181 +0000 UTC m=+0.521025356 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vcs-type=git, tcib_managed=true, maintainer=OpenStack TripleO Team, release=1766032510, config_id=tripleo_step1, org.opencontainers.image.created=2026-01-12T22:10:14Z, architecture=x86_64, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:52:17 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:52:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:52:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:52:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. 
Feb 20 03:52:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:52:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:52:21 localhost podman[103297]: 2026-02-20 08:52:21.460075882 +0000 UTC m=+0.096426280 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, config_id=tripleo_step3, vendor=Red Hat, Inc., release=1766032510, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp-rhel9/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, batch=17.1_20260112.1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:34:43Z, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, container_name=iscsid, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true) Feb 20 03:52:21 localhost podman[103297]: 2026-02-20 08:52:21.475684449 +0000 UTC m=+0.112034837 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, build-date=2026-01-12T22:34:43Z, io.openshift.expose-services=, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, vcs-type=git, name=rhosp-rhel9/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step3, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:52:21 localhost podman[103296]: 2026-02-20 08:52:21.518475124 +0000 UTC m=+0.154932505 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.created=2026-01-12T22:56:19Z, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.5, io.openshift.expose-services=, release=1766032510, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, vcs-type=git, build-date=2026-01-12T22:56:19Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:52:21 localhost podman[103298]: 2026-02-20 08:52:21.478025072 +0000 UTC m=+0.108584045 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, io.buildah.version=1.41.5, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 ovn-controller, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, release=1766032510, vendor=Red Hat, Inc., org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, name=rhosp-rhel9/openstack-ovn-controller, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:36:40Z, tcib_managed=true, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, maintainer=OpenStack TripleO Team) Feb 20 03:52:21 localhost podman[103298]: 2026-02-20 08:52:21.561205516 +0000 UTC m=+0.191764539 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, release=1766032510, container_name=ovn_controller, architecture=x86_64, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, version=17.1.13, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:36:40Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, build-date=2026-01-12T22:36:40Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc.) 
Feb 20 03:52:21 localhost podman[103298]: unhealthy Feb 20 03:52:21 localhost podman[103303]: 2026-02-20 08:52:21.572094058 +0000 UTC m=+0.197209336 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, build-date=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, name=rhosp-rhel9/openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd) Feb 20 03:52:21 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:52:21 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Failed with result 'exit-code'. Feb 20 03:52:21 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. 
Feb 20 03:52:21 localhost podman[103303]: 2026-02-20 08:52:21.610861055 +0000 UTC m=+0.235976293 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_id=tripleo_step3, name=rhosp-rhel9/openstack-collectd, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.5, batch=17.1_20260112.1, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, release=1766032510, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Feb 20 03:52:21 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. 
Feb 20 03:52:21 localhost podman[103310]: 2026-02-20 08:52:21.629136734 +0000 UTC m=+0.248982761 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, batch=17.1_20260112.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 
'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute, io.openshift.expose-services=, architecture=x86_64) Feb 20 03:52:21 localhost podman[103310]: 2026-02-20 08:52:21.656667809 +0000 UTC m=+0.276513866 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1766032510, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, 
org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, config_id=tripleo_step5, container_name=nova_compute, version=17.1.13, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:52:21 localhost podman[103296]: 2026-02-20 08:52:21.666744079 +0000 UTC m=+0.303201440 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, architecture=x86_64, tcib_managed=true, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, version=17.1.13, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:52:21 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. 
Feb 20 03:52:21 localhost podman[103296]: unhealthy Feb 20 03:52:21 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:52:21 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Failed with result 'exit-code'. Feb 20 03:52:22 localhost systemd[1]: tmp-crun.hxBuTb.mount: Deactivated successfully. Feb 20 03:52:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:52:26 localhost podman[103400]: 2026-02-20 08:52:26.437743266 +0000 UTC m=+0.077770531 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, org.opencontainers.image.created=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:32:04Z, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1766032510, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, container_name=nova_migration_target) Feb 20 03:52:26 localhost podman[103400]: 2026-02-20 08:52:26.814966816 +0000 UTC m=+0.454994081 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, name=rhosp-rhel9/openstack-nova-compute, build-date=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, architecture=x86_64, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, distribution-scope=public, release=1766032510, batch=17.1_20260112.1, container_name=nova_migration_target, config_id=tripleo_step4, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 
03:52:26 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:52:29 localhost sshd[103424]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:52:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:52:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:52:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:52:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:52:48 localhost systemd[1]: tmp-crun.Mnf94f.mount: Deactivated successfully. Feb 20 03:52:48 localhost podman[103505]: 2026-02-20 08:52:48.448214035 +0000 UTC m=+0.079486471 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, url=https://www.redhat.com, tcib_managed=true, config_id=tripleo_step4, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2026-01-12T23:07:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:07:47Z) Feb 20 03:52:48 localhost podman[103512]: 2026-02-20 08:52:48.507946138 +0000 UTC m=+0.132442074 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, 
distribution-scope=public, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.13, config_id=tripleo_step4, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-ipmi, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi) Feb 20 03:52:48 localhost podman[103504]: 2026-02-20 08:52:48.477150207 +0000 UTC m=+0.112481062 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.created=2026-01-12T22:10:14Z, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, version=17.1.13, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:52:48 localhost podman[103512]: 2026-02-20 08:52:48.561063235 +0000 UTC m=+0.185559201 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, version=17.1.13, io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:07:30Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 20 03:52:48 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. 
Feb 20 03:52:48 localhost podman[103506]: 2026-02-20 08:52:48.615747035 +0000 UTC m=+0.242683116 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=logrotate_crond, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_id=tripleo_step4, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, release=1766032510, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, managed_by=tripleo_ansible, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Feb 20 03:52:48 localhost podman[103505]: 2026-02-20 08:52:48.641643115 +0000 UTC m=+0.272915561 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, name=rhosp-rhel9/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.buildah.version=1.41.5, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:07:47Z, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2026-01-12T23:07:47Z, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ceilometer_agent_compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4) Feb 20 03:52:48 localhost podman[103506]: 2026-02-20 08:52:48.648956721 +0000 UTC m=+0.275892782 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-type=git, io.buildah.version=1.41.5, url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, 
org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vendor=Red Hat, Inc., container_name=logrotate_crond, com.redhat.component=openstack-cron-container, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20260112.1) Feb 20 03:52:48 
localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. Feb 20 03:52:48 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. Feb 20 03:52:48 localhost podman[103504]: 2026-02-20 08:52:48.675858048 +0000 UTC m=+0.311188943 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vcs-type=git, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, version=17.1.13, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:52:48 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:52:48 localhost sshd[103604]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:52:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:52:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:52:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:52:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:52:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. 
Feb 20 03:52:52 localhost podman[103608]: 2026-02-20 08:52:52.4632697 +0000 UTC m=+0.096605248 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, org.opencontainers.image.created=2026-01-12T22:36:40Z, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, managed_by=tripleo_ansible, batch=17.1_20260112.1, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, io.buildah.version=1.41.5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, container_name=ovn_controller, 
version=17.1.13, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.component=openstack-ovn-controller-container, build-date=2026-01-12T22:36:40Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, tcib_managed=true) Feb 20 03:52:52 localhost podman[103608]: 2026-02-20 08:52:52.505769665 +0000 UTC m=+0.139105163 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:36:40Z, release=1766032510, distribution-scope=public, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.buildah.version=1.41.5, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, url=https://www.redhat.com, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ovn-controller, batch=17.1_20260112.1, io.k8s.display-name=Red 
Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, build-date=2026-01-12T22:36:40Z) Feb 20 03:52:52 localhost podman[103608]: unhealthy Feb 20 03:52:52 localhost podman[103606]: 2026-02-20 08:52:52.514002693 +0000 UTC m=+0.152691704 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-type=git, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.created=2026-01-12T22:56:19Z, distribution-scope=public, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, batch=17.1_20260112.1, container_name=ovn_metadata_agent, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Feb 20 03:52:52 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:52:52 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Failed with result 'exit-code'. 
Feb 20 03:52:52 localhost systemd[1]: tmp-crun.wLsivd.mount: Deactivated successfully. Feb 20 03:52:52 localhost podman[103607]: 2026-02-20 08:52:52.563651498 +0000 UTC m=+0.201980939 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, vcs-type=git, build-date=2026-01-12T22:34:43Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, release=1766032510, com.redhat.component=openstack-iscsid-container, distribution-scope=public, tcib_managed=true, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=iscsid) Feb 20 03:52:52 localhost podman[103606]: 2026-02-20 08:52:52.566901145 +0000 UTC m=+0.205590166 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, batch=17.1_20260112.1, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, build-date=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step4, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:56:19Z, 
config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, distribution-scope=public) Feb 20 03:52:52 localhost podman[103606]: unhealthy Feb 20 03:52:52 localhost podman[103607]: 2026-02-20 08:52:52.57684647 +0000 
UTC m=+0.215175872 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, build-date=2026-01-12T22:34:43Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.buildah.version=1.41.5, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, managed_by=tripleo_ansible, batch=17.1_20260112.1, container_name=iscsid, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, vendor=Red Hat, Inc., vcs-type=git, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Feb 20 03:52:52 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:52:52 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Failed with result 'exit-code'. Feb 20 03:52:52 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. 
Feb 20 03:52:52 localhost podman[103609]: 2026-02-20 08:52:52.671072275 +0000 UTC m=+0.300502599 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, distribution-scope=public, config_id=tripleo_step3, tcib_managed=true, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, vcs-type=git, release=1766032510, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, container_name=collectd, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5) Feb 20 03:52:52 localhost podman[103609]: 2026-02-20 08:52:52.706956781 +0000 UTC m=+0.336387105 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.13, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, container_name=collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., config_id=tripleo_step3, url=https://www.redhat.com, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-collectd, release=1766032510, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team) Feb 20 03:52:52 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. 
Feb 20 03:52:52 localhost podman[103620]: 2026-02-20 08:52:52.725898157 +0000 UTC m=+0.352532736 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, build-date=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.13, url=https://www.redhat.com, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, vendor=Red Hat, Inc., batch=17.1_20260112.1, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Feb 20 03:52:52 localhost podman[103620]: 2026-02-20 08:52:52.75677236 +0000 UTC m=+0.383406949 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 
17.1 nova-compute, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510) Feb 20 03:52:52 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. Feb 20 03:52:54 localhost sshd[103709]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:52:54 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 20 03:52:54 localhost recover_tripleo_nova_virtqemud[103712]: 63703 Feb 20 03:52:54 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 20 03:52:54 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 20 03:52:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. 
Feb 20 03:52:57 localhost podman[103713]: 2026-02-20 08:52:57.441533362 +0000 UTC m=+0.081612019 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, distribution-scope=public, summary=Red Hat OpenStack 
Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, config_id=tripleo_step4, vcs-type=git, tcib_managed=true, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, name=rhosp-rhel9/openstack-nova-compute) Feb 20 03:52:57 localhost podman[103713]: 2026-02-20 08:52:57.84291135 +0000 UTC m=+0.482990007 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_migration_target, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp-rhel9/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.41.5, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, vendor=Red Hat, Inc.) Feb 20 03:52:57 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:53:07 localhost sshd[103736]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:53:14 localhost sshd[103738]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:53:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:53:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. 
Feb 20 03:53:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:53:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:53:19 localhost podman[103740]: 2026-02-20 08:53:19.450516532 +0000 UTC m=+0.083070137 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, release=1766032510, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, url=https://www.redhat.com, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 20 03:53:19 localhost podman[103742]: 2026-02-20 08:53:19.502272833 +0000 UTC m=+0.129626929 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.expose-services=, 
build-date=2026-01-12T22:10:15Z, version=17.1.13, url=https://www.redhat.com, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_id=tripleo_step4, vendor=Red Hat, Inc., container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible) Feb 20 03:53:19 localhost podman[103742]: 2026-02-20 08:53:19.514714985 +0000 UTC m=+0.142069041 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, container_name=logrotate_crond, distribution-scope=public, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, batch=17.1_20260112.1, tcib_managed=true, io.openshift.expose-services=, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., config_id=tripleo_step4, version=17.1.13) Feb 20 03:53:19 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. Feb 20 03:53:19 localhost podman[103741]: 2026-02-20 08:53:19.554316321 +0000 UTC m=+0.182829209 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, build-date=2026-01-12T23:07:47Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:47Z, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.openshift.expose-services=) Feb 20 03:53:19 localhost podman[103741]: 2026-02-20 08:53:19.582624097 +0000 UTC m=+0.211136954 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, tcib_managed=true, managed_by=tripleo_ansible, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, 
vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, batch=17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:47Z, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-compute, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Feb 20 03:53:19 localhost systemd[1]: 
8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. Feb 20 03:53:19 localhost podman[103743]: 2026-02-20 08:53:19.660626418 +0000 UTC m=+0.283569186 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2026-01-12T23:07:30Z, io.openshift.expose-services=, architecture=x86_64, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, distribution-scope=public) Feb 20 03:53:19 localhost podman[103740]: 2026-02-20 08:53:19.691010569 +0000 UTC m=+0.323564204 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, version=17.1.13, batch=17.1_20260112.1, tcib_managed=true, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, vcs-type=git, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 20 03:53:19 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. 
Feb 20 03:53:19 localhost podman[103743]: 2026-02-20 08:53:19.713779226 +0000 UTC m=+0.336722004 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, release=1766032510, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, tcib_managed=true, build-date=2026-01-12T23:07:30Z, architecture=x86_64, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:30Z, vendor=Red Hat, Inc., config_id=tripleo_step4, vcs-type=git, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, url=https://www.redhat.com) Feb 20 03:53:19 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. Feb 20 03:53:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:53:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:53:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:53:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:53:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:53:23 localhost systemd[1]: tmp-crun.OAJQYE.mount: Deactivated successfully. 
Feb 20 03:53:23 localhost podman[103838]: 2026-02-20 08:53:23.463863992 +0000 UTC m=+0.093212518 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:36:40Z, batch=17.1_20260112.1, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, distribution-scope=public, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:36:40Z, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, 
com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, release=1766032510, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git) Feb 20 03:53:23 localhost systemd[1]: tmp-crun.QVvOmg.mount: Deactivated successfully. Feb 20 03:53:23 localhost podman[103838]: 2026-02-20 08:53:23.506561701 +0000 UTC m=+0.135910187 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, architecture=x86_64, batch=17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_id=tripleo_step4, release=1766032510, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, container_name=ovn_controller, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.5, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.13) Feb 20 03:53:23 localhost podman[103836]: 2026-02-20 08:53:23.506578371 +0000 UTC m=+0.141447995 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, url=https://www.redhat.com, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, tcib_managed=true, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:56:19Z, version=17.1.13) Feb 20 03:53:23 localhost podman[103838]: unhealthy Feb 20 03:53:23 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:53:23 localhost systemd[1]: 
5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Failed with result 'exit-code'. Feb 20 03:53:23 localhost podman[103839]: 2026-02-20 08:53:23.524617852 +0000 UTC m=+0.147719422 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2026-01-12T22:10:15Z, architecture=x86_64, url=https://www.redhat.com, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, tcib_managed=true, config_id=tripleo_step3, managed_by=tripleo_ansible, container_name=collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 20 03:53:23 localhost podman[103845]: 2026-02-20 08:53:23.563673114 +0000 UTC m=+0.186637259 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, batch=17.1_20260112.1, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, org.opencontainers.image.created=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, 
managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, vcs-type=git, architecture=x86_64, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, distribution-scope=public) Feb 20 03:53:23 localhost podman[103836]: 2026-02-20 08:53:23.591122507 +0000 UTC m=+0.225992081 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, managed_by=tripleo_ansible, tcib_managed=true, build-date=2026-01-12T22:56:19Z, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.expose-services=, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:53:23 localhost podman[103836]: unhealthy Feb 20 03:53:23 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:53:23 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Failed with result 'exit-code'. 
Feb 20 03:53:23 localhost podman[103839]: 2026-02-20 08:53:23.608769267 +0000 UTC m=+0.231870797 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, name=rhosp-rhel9/openstack-collectd, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2026-01-12T22:10:15Z, url=https://www.redhat.com, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=collectd) Feb 20 03:53:23 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. 
Feb 20 03:53:23 localhost podman[103845]: 2026-02-20 08:53:23.642445836 +0000 UTC m=+0.265409941 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, distribution-scope=public, config_id=tripleo_step5, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, container_name=nova_compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, version=17.1.13, batch=17.1_20260112.1, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 
'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Feb 20 03:53:23 localhost podman[103837]: 2026-02-20 08:53:23.666343424 +0000 UTC m=+0.297268512 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., build-date=2026-01-12T22:34:43Z, io.openshift.expose-services=, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, batch=17.1_20260112.1, version=17.1.13, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, distribution-scope=public, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid) Feb 20 03:53:23 
localhost podman[103837]: 2026-02-20 08:53:23.703820803 +0000 UTC m=+0.334745921 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, config_id=tripleo_step3, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, name=rhosp-rhel9/openstack-iscsid, architecture=x86_64, io.buildah.version=1.41.5, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, container_name=iscsid, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Feb 20 03:53:23 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. Feb 20 03:53:23 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. Feb 20 03:53:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:53:28 localhost systemd[1]: tmp-crun.JlbgpU.mount: Deactivated successfully. 
Feb 20 03:53:28 localhost podman[103942]: 2026-02-20 08:53:28.446048228 +0000 UTC m=+0.085449181 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, config_id=tripleo_step4, tcib_managed=true, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, container_name=nova_migration_target, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:32:04Z) Feb 20 03:53:28 localhost podman[103942]: 2026-02-20 08:53:28.810263175 +0000 UTC m=+0.449664128 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, version=17.1.13, batch=17.1_20260112.1, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_migration_target, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_id=tripleo_step4, release=1766032510, build-date=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., 
managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 20 03:53:28 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. 
Feb 20 03:53:32 localhost sshd[103963]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:53:38 localhost podman[104064]: 2026-02-20 08:53:38.673105169 +0000 UTC m=+0.085741919 container exec 17bcd3d9a6436ea04160ed4c4e04d839e51c8678402dc4e38301f79a98b3dc30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.42.2, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , name=rhceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, version=7, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Feb 20 03:53:38 localhost podman[104064]: 2026-02-20 08:53:38.761452135 +0000 UTC m=+0.174088955 container exec_died 17bcd3d9a6436ea04160ed4c4e04d839e51c8678402dc4e38301f79a98b3dc30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202, io.openshift.tags=rhceph ceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, io.buildah.version=1.42.2, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, version=7, distribution-scope=public, GIT_CLEAN=True, maintainer=Guillaume Abrioux , name=rhceph, build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers) Feb 20 03:53:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:53:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:53:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. 
Feb 20 03:53:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:53:50 localhost systemd[1]: tmp-crun.15rIve.mount: Deactivated successfully. Feb 20 03:53:50 localhost podman[104205]: 2026-02-20 08:53:50.45056508 +0000 UTC m=+0.085264596 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., tcib_managed=true, url=https://www.redhat.com, release=1766032510, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, maintainer=OpenStack TripleO Team) Feb 20 03:53:50 localhost podman[104206]: 2026-02-20 08:53:50.513635222 +0000 UTC m=+0.143935360 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., vcs-type=git, managed_by=tripleo_ansible, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, url=https://www.redhat.com, config_id=tripleo_step4, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ceilometer-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2026-01-12T23:07:47Z, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, version=17.1.13, release=1766032510, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64) Feb 20 03:53:50 localhost podman[104207]: 2026-02-20 08:53:50.562597109 +0000 UTC m=+0.191716536 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, 
name=logrotate_crond, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, release=1766032510, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, managed_by=tripleo_ansible, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=logrotate_crond, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.13, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, tcib_managed=true) Feb 20 03:53:50 localhost podman[104208]: 2026-02-20 08:53:50.475537757 +0000 UTC m=+0.100850912 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, config_id=tripleo_step4, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ceilometer-ipmi, io.buildah.version=1.41.5, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, version=17.1.13) Feb 20 03:53:50 localhost podman[104206]: 2026-02-20 08:53:50.588529571 +0000 UTC m=+0.218829669 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, version=17.1.13, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, org.opencontainers.image.created=2026-01-12T23:07:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
build-date=2026-01-12T23:07:47Z, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20260112.1, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, io.openshift.expose-services=) Feb 20 03:53:50 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. 
Feb 20 03:53:50 localhost podman[104207]: 2026-02-20 08:53:50.601693782 +0000 UTC m=+0.230813209 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, container_name=logrotate_crond, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, batch=17.1_20260112.1, config_id=tripleo_step4, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp-rhel9/openstack-cron, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, architecture=x86_64, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron) Feb 20 03:53:50 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. Feb 20 03:53:50 localhost podman[104208]: 2026-02-20 08:53:50.657501841 +0000 UTC m=+0.282814966 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.13, build-date=2026-01-12T23:07:30Z, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, tcib_managed=true, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:53:50 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. 
Feb 20 03:53:50 localhost podman[104205]: 2026-02-20 08:53:50.710536466 +0000 UTC m=+0.345235972 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, release=1766032510, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.41.5, managed_by=tripleo_ansible, distribution-scope=public, container_name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20260112.1, architecture=x86_64, 
io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vcs-type=git) Feb 20 03:53:50 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:53:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:53:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:53:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:53:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:53:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. 
Feb 20 03:53:54 localhost podman[104326]: 2026-02-20 08:53:54.456119731 +0000 UTC m=+0.078801113 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, managed_by=tripleo_ansible, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, container_name=nova_compute, release=1766032510, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 20 03:53:54 localhost podman[104326]: 2026-02-20 08:53:54.499902809 +0000 UTC m=+0.122584241 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 
'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, container_name=nova_compute, distribution-scope=public, release=1766032510, name=rhosp-rhel9/openstack-nova-compute, io.openshift.expose-services=, build-date=2026-01-12T23:32:04Z, tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, batch=17.1_20260112.1, 
io.buildah.version=1.41.5, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 20 03:53:54 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. Feb 20 03:53:54 localhost podman[104313]: 2026-02-20 08:53:54.502119958 +0000 UTC m=+0.135442224 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, container_name=iscsid, build-date=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, url=https://www.redhat.com, version=17.1.13, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, name=rhosp-rhel9/openstack-iscsid, config_id=tripleo_step3, io.buildah.version=1.41.5, managed_by=tripleo_ansible, 
architecture=x86_64, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.created=2026-01-12T22:34:43Z) Feb 20 03:53:54 localhost podman[104318]: 2026-02-20 08:53:54.558076771 +0000 UTC m=+0.182945631 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 
17.1 collectd, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, tcib_managed=true, container_name=collectd, build-date=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-collectd, io.buildah.version=1.41.5, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, release=1766032510, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 20 03:53:54 localhost podman[104318]: 2026-02-20 08:53:54.567077441 +0000 UTC m=+0.191946311 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, architecture=x86_64, managed_by=tripleo_ansible, release=1766032510, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20260112.1, distribution-scope=public, container_name=collectd, name=rhosp-rhel9/openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5) Feb 20 03:53:54 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. 
Feb 20 03:53:54 localhost podman[104312]: 2026-02-20 08:53:54.609573884 +0000 UTC m=+0.243957999 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:56:19Z, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20260112.1, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public) Feb 20 03:53:54 localhost podman[104312]: 2026-02-20 08:53:54.627908954 +0000 UTC m=+0.262293069 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2026-01-12T22:56:19Z, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:56:19Z, architecture=x86_64, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.13, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, 
maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Feb 20 03:53:54 localhost podman[104313]: 2026-02-20 08:53:54.632171058 +0000 UTC m=+0.265493344 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://www.redhat.com, managed_by=tripleo_ansible, container_name=iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2026-01-12T22:34:43Z, release=1766032510, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, distribution-scope=public, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, architecture=x86_64) Feb 20 03:53:54 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. Feb 20 03:53:54 localhost podman[104312]: unhealthy Feb 20 03:53:54 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:53:54 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Failed with result 'exit-code'. 
Feb 20 03:53:54 localhost podman[104314]: 2026-02-20 08:53:54.766663296 +0000 UTC m=+0.394481855 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, distribution-scope=public, io.buildah.version=1.41.5, managed_by=tripleo_ansible, release=1766032510, build-date=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack 
TripleO Team, org.opencontainers.image.created=2026-01-12T22:36:40Z, vendor=Red Hat, Inc., vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:53:54 localhost podman[104314]: 2026-02-20 08:53:54.783262268 +0000 UTC m=+0.411080807 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, architecture=x86_64, version=17.1.13, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ovn-controller, 
org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, release=1766032510) Feb 20 03:53:54 localhost podman[104314]: unhealthy Feb 20 03:53:54 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:53:54 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Failed with result 'exit-code'. Feb 20 03:53:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. 
Feb 20 03:53:59 localhost podman[104420]: 2026-02-20 08:53:59.441657016 +0000 UTC m=+0.082140482 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.component=openstack-nova-compute-container, vcs-type=git, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, tcib_managed=true, container_name=nova_migration_target, url=https://www.redhat.com, distribution-scope=public, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:32:04Z) Feb 20 03:53:59 localhost podman[104420]: 2026-02-20 08:53:59.81100894 +0000 UTC m=+0.451492366 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, config_id=tripleo_step4, vendor=Red Hat, Inc., architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, build-date=2026-01-12T23:32:04Z, container_name=nova_migration_target, version=17.1.13, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:53:59 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:54:02 localhost sshd[104443]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:54:02 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 20 03:54:03 localhost recover_tripleo_nova_virtqemud[104445]: 63703 Feb 20 03:54:03 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. 
Feb 20 03:54:03 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 20 03:54:14 localhost sshd[104447]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:54:20 localhost sshd[104449]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:54:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:54:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:54:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:54:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:54:21 localhost systemd[1]: tmp-crun.KAvUR2.mount: Deactivated successfully. Feb 20 03:54:21 localhost podman[104454]: 2026-02-20 08:54:21.490588622 +0000 UTC m=+0.120924566 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-type=git, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20260112.1, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_id=tripleo_step4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, 
org.opencontainers.image.created=2026-01-12T23:07:30Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, build-date=2026-01-12T23:07:30Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, architecture=x86_64) Feb 20 03:54:21 localhost podman[104453]: 2026-02-20 08:54:21.457467789 +0000 UTC m=+0.085404399 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 cron, 
distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-cron, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, release=1766032510, config_id=tripleo_step4, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=logrotate_crond, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, architecture=x86_64, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, version=17.1.13, batch=17.1_20260112.1, vcs-type=git, io.buildah.version=1.41.5, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Feb 20 03:54:21 localhost podman[104454]: 2026-02-20 08:54:21.536665952 +0000 UTC m=+0.167001906 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, config_id=tripleo_step4, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:07:30Z, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red 
Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, release=1766032510, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.openshift.expose-services=, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:07:30Z, container_name=ceilometer_agent_ipmi, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.5, managed_by=tripleo_ansible) Feb 20 03:54:21 localhost podman[104452]: 2026-02-20 08:54:21.493170351 +0000 UTC m=+0.127817241 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2026-01-12T23:07:47Z, version=17.1.13, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, url=https://www.redhat.com, container_name=ceilometer_agent_compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, io.buildah.version=1.41.5, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute) Feb 20 03:54:21 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. 
Feb 20 03:54:21 localhost podman[104453]: 2026-02-20 08:54:21.592251564 +0000 UTC m=+0.220188204 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-cron, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:15Z, version=17.1.13, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.expose-services=, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron) Feb 20 03:54:21 localhost podman[104451]: 2026-02-20 08:54:21.594077613 +0000 UTC m=+0.228531078 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, release=1766032510, io.buildah.version=1.41.5, tcib_managed=true, managed_by=tripleo_ansible, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-type=git, distribution-scope=public, name=rhosp-rhel9/openstack-qdrouterd, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, version=17.1.13, container_name=metrics_qdr) Feb 20 03:54:21 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. 
Feb 20 03:54:21 localhost podman[104452]: 2026-02-20 08:54:21.626055146 +0000 UTC m=+0.260702106 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, version=17.1.13, architecture=x86_64, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_compute, config_id=tripleo_step4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., build-date=2026-01-12T23:07:47Z, io.buildah.version=1.41.5) Feb 20 03:54:21 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. Feb 20 03:54:21 localhost podman[104451]: 2026-02-20 08:54:21.774936999 +0000 UTC m=+0.409390474 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-qdrouterd, distribution-scope=public, config_id=tripleo_step1, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, vcs-type=git, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, vendor=Red Hat, Inc., io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:54:21 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:54:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:54:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. 
Feb 20 03:54:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:54:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:54:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:54:25 localhost systemd[1]: tmp-crun.cwHFOJ.mount: Deactivated successfully. Feb 20 03:54:25 localhost podman[104552]: 2026-02-20 08:54:25.464463029 +0000 UTC m=+0.100902593 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., vcs-ref=705339545363fec600102567c4e923938e0f43b3, batch=17.1_20260112.1, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.buildah.version=1.41.5, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, distribution-scope=public, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=iscsid, tcib_managed=true, name=rhosp-rhel9/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, architecture=x86_64, build-date=2026-01-12T22:34:43Z, config_id=tripleo_step3, vcs-type=git, maintainer=OpenStack TripleO Team) Feb 20 03:54:25 localhost podman[104551]: 2026-02-20 08:54:25.507917458 +0000 UTC m=+0.145724109 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_id=tripleo_step4, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
container_name=ovn_metadata_agent, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f) Feb 20 03:54:25 localhost podman[104551]: 2026-02-20 08:54:25.526869104 +0000 UTC m=+0.164675785 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, vendor=Red Hat, Inc., io.buildah.version=1.41.5, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:56:19Z, build-date=2026-01-12T22:56:19Z, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, tcib_managed=true, distribution-scope=public, 
managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4) Feb 20 03:54:25 localhost podman[104551]: unhealthy Feb 20 03:54:25 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:54:25 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Failed with result 'exit-code'. 
Feb 20 03:54:25 localhost podman[104552]: 2026-02-20 08:54:25.560017078 +0000 UTC m=+0.196456632 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, io.buildah.version=1.41.5, batch=17.1_20260112.1, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.created=2026-01-12T22:34:43Z, release=1766032510, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=705339545363fec600102567c4e923938e0f43b3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-iscsid, distribution-scope=public, build-date=2026-01-12T22:34:43Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, com.redhat.component=openstack-iscsid-container, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible) Feb 20 03:54:25 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. Feb 20 03:54:26 localhost podman[104564]: 2026-02-20 08:54:26.01702823 +0000 UTC m=+0.643716664 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, container_name=nova_compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, architecture=x86_64, batch=17.1_20260112.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, name=rhosp-rhel9/openstack-nova-compute, managed_by=tripleo_ansible, distribution-scope=public, version=17.1.13, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, 
build-date=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 20 03:54:26 localhost podman[104554]: 2026-02-20 08:54:26.064477146 +0000 UTC m=+0.694308384 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, name=rhosp-rhel9/openstack-collectd, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, tcib_managed=true, release=1766032510, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.expose-services=, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:54:26 localhost podman[104564]: 2026-02-20 08:54:26.072796648 +0000 UTC m=+0.699485032 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, config_id=tripleo_step5, vcs-type=git, build-date=2026-01-12T23:32:04Z, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, version=17.1.13, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, 
org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible) Feb 20 03:54:26 localhost podman[104554]: 2026-02-20 08:54:26.101711239 +0000 UTC m=+0.731542497 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', 
'/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, tcib_managed=true, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-collectd, build-date=2026-01-12T22:10:15Z, config_id=tripleo_step3, version=17.1.13, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, com.redhat.component=openstack-collectd-container, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, io.openshift.expose-services=) Feb 20 03:54:26 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. Feb 20 03:54:26 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. 
Feb 20 03:54:26 localhost podman[104553]: 2026-02-20 08:54:26.120011958 +0000 UTC m=+0.753993566 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, version=17.1.13, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, tcib_managed=true, managed_by=tripleo_ansible, release=1766032510, vcs-type=git, distribution-scope=public, batch=17.1_20260112.1, config_id=tripleo_step4, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.component=openstack-ovn-controller-container, build-date=2026-01-12T22:36:40Z, org.opencontainers.image.created=2026-01-12T22:36:40Z, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:54:26 localhost podman[104553]: 2026-02-20 08:54:26.138823429 +0000 UTC m=+0.772805047 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, url=https://www.redhat.com, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ovn-controller, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:36:40Z, 
batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, container_name=ovn_controller, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:36:40Z) Feb 20 03:54:26 localhost podman[104553]: unhealthy Feb 20 03:54:26 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:54:26 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Failed with result 'exit-code'. Feb 20 03:54:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. 
Feb 20 03:54:30 localhost podman[104654]: 2026-02-20 08:54:30.437787377 +0000 UTC m=+0.077068247 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-compute, managed_by=tripleo_ansible, vcs-type=git, version=17.1.13, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, 
com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1766032510, batch=17.1_20260112.1, io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4) Feb 20 03:54:30 localhost podman[104654]: 2026-02-20 08:54:30.831779159 +0000 UTC m=+0.471059949 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., version=17.1.13, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, release=1766032510, io.buildah.version=1.41.5, distribution-scope=public, vcs-type=git, config_id=tripleo_step4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, build-date=2026-01-12T23:32:04Z) Feb 20 03:54:30 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:54:48 localhost sshd[104754]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:54:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:54:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. 
Feb 20 03:54:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:54:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:54:52 localhost podman[104756]: 2026-02-20 08:54:52.427836944 +0000 UTC m=+0.065937571 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.created=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, managed_by=tripleo_ansible, release=1766032510, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp 
osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.openshift.expose-services=, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, tcib_managed=true, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, container_name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container) Feb 20 03:54:52 localhost podman[104757]: 2026-02-20 08:54:52.440342367 +0000 UTC m=+0.070880792 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, release=1766032510, version=17.1.13, maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, build-date=2026-01-12T23:07:47Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, tcib_managed=true, io.buildah.version=1.41.5, architecture=x86_64, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Feb 20 03:54:52 localhost podman[104757]: 2026-02-20 08:54:52.464670156 +0000 UTC m=+0.095208601 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, batch=17.1_20260112.1, distribution-scope=public, architecture=x86_64, name=rhosp-rhel9/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, tcib_managed=true, io.buildah.version=1.41.5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:07:47Z, description=Red 
Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2026-01-12T23:07:47Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ceilometer-compute-container) Feb 20 03:54:52 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Deactivated successfully. Feb 20 03:54:52 localhost podman[104763]: 2026-02-20 08:54:52.554118592 +0000 UTC m=+0.181434491 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.5, 
version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:07:30Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, build-date=2026-01-12T23:07:30Z, managed_by=tripleo_ansible) Feb 20 03:54:52 localhost podman[104758]: 2026-02-20 08:54:52.603359206 +0000 UTC m=+0.231590829 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, vcs-type=git, maintainer=OpenStack TripleO Team, architecture=x86_64, url=https://www.redhat.com, 
build-date=2026-01-12T22:10:15Z, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, managed_by=tripleo_ansible, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public) Feb 20 03:54:52 localhost 
podman[104758]: 2026-02-20 08:54:52.610916517 +0000 UTC m=+0.239148210 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20260112.1, tcib_managed=true, url=https://www.redhat.com, name=rhosp-rhel9/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack 
Platform 17.1 cron, io.openshift.expose-services=, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, vcs-type=git, config_id=tripleo_step4, build-date=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:54:52 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. Feb 20 03:54:52 localhost podman[104756]: 2026-02-20 08:54:52.625826945 +0000 UTC m=+0.263927572 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp-rhel9/openstack-qdrouterd, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, architecture=x86_64, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com) Feb 20 03:54:52 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. 
Feb 20 03:54:52 localhost podman[104763]: 2026-02-20 08:54:52.661592769 +0000 UTC m=+0.288908688 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1766032510, url=https://www.redhat.com, version=17.1.13, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, tcib_managed=true, 
vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.buildah.version=1.41.5, batch=17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc.) Feb 20 03:54:52 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. Feb 20 03:54:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:54:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:54:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:54:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:54:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. 
Feb 20 03:54:56 localhost podman[104854]: 2026-02-20 08:54:56.441292935 +0000 UTC m=+0.078339771 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, release=1766032510, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.buildah.version=1.41.5, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:56:19Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:56:19Z, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, architecture=x86_64, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, batch=17.1_20260112.1, config_id=tripleo_step4, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:54:56 localhost podman[104854]: 2026-02-20 08:54:56.453010828 +0000 UTC m=+0.090057724 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.41.5, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, architecture=x86_64, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, 
cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, build-date=2026-01-12T22:56:19Z, vendor=Red Hat, Inc.) Feb 20 03:54:56 localhost podman[104854]: unhealthy Feb 20 03:54:56 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:54:56 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Failed with result 'exit-code'. Feb 20 03:54:56 localhost podman[104857]: 2026-02-20 08:54:56.509262839 +0000 UTC m=+0.138483606 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=rhosp-rhel9/openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, container_name=collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step3, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20260112.1, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, release=1766032510, managed_by=tripleo_ansible, version=17.1.13) Feb 20 03:54:56 localhost podman[104855]: 2026-02-20 08:54:56.556434297 +0000 UTC m=+0.190788761 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, version=17.1.13, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, vcs-ref=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-iscsid, config_id=tripleo_step3, managed_by=tripleo_ansible, build-date=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, 
vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.expose-services=) Feb 20 03:54:56 localhost podman[104855]: 2026-02-20 08:54:56.569767743 +0000 UTC m=+0.204122207 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, vcs-ref=705339545363fec600102567c4e923938e0f43b3, container_name=iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, version=17.1.13, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, managed_by=tripleo_ansible, build-date=2026-01-12T22:34:43Z, io.openshift.expose-services=, name=rhosp-rhel9/openstack-iscsid) Feb 20 03:54:56 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. Feb 20 03:54:56 localhost podman[104856]: 2026-02-20 08:54:56.654943506 +0000 UTC m=+0.286569247 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, distribution-scope=public, container_name=ovn_controller, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ovn-controller, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, release=1766032510, architecture=x86_64, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team) Feb 20 03:54:56 localhost podman[104856]: 2026-02-20 08:54:56.669240947 +0000 UTC m=+0.300866688 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, build-date=2026-01-12T22:36:40Z, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, 
org.opencontainers.image.created=2026-01-12T22:36:40Z, io.buildah.version=1.41.5, managed_by=tripleo_ansible, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, name=rhosp-rhel9/openstack-ovn-controller, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git) Feb 20 03:54:56 localhost podman[104856]: unhealthy Feb 20 03:54:56 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:54:56 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Failed with result 'exit-code'. 
Feb 20 03:54:56 localhost podman[104863]: 2026-02-20 08:54:56.709939492 +0000 UTC m=+0.336018855 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=nova_compute, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, io.buildah.version=1.41.5, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-compute, release=1766032510, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1) Feb 20 03:54:56 localhost podman[104857]: 2026-02-20 08:54:56.728876278 +0000 UTC m=+0.358097055 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, io.buildah.version=1.41.5, release=1766032510, com.redhat.component=openstack-collectd-container, build-date=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-collectd, architecture=x86_64, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., tcib_managed=true, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd) Feb 20 03:54:56 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:54:56 localhost podman[104863]: 2026-02-20 08:54:56.766062789 +0000 UTC m=+0.392142152 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, managed_by=tripleo_ansible, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1766032510, name=rhosp-rhel9/openstack-nova-compute, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, vcs-type=git, architecture=x86_64, config_id=tripleo_step5, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Feb 20 03:54:56 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Deactivated successfully. Feb 20 03:55:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. 
Feb 20 03:55:01 localhost podman[104958]: 2026-02-20 08:55:01.437350422 +0000 UTC m=+0.076480263 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.13, vcs-type=git, build-date=2026-01-12T23:32:04Z, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, 
vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-nova-compute-container) Feb 20 03:55:01 localhost podman[104958]: 2026-02-20 08:55:01.83274419 +0000 UTC m=+0.471874011 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, managed_by=tripleo_ansible, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, container_name=nova_migration_target, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, build-date=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, 
io.buildah.version=1.41.5, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, io.openshift.expose-services=, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:55:01 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:55:03 localhost sshd[104982]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:55:12 localhost sshd[104984]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:55:12 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 20 03:55:13 localhost recover_tripleo_nova_virtqemud[104987]: 63703 Feb 20 03:55:13 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. 
Feb 20 03:55:13 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 20 03:55:20 localhost sshd[104988]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:55:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:55:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:55:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:55:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:55:23 localhost systemd[1]: tmp-crun.RK0fPx.mount: Deactivated successfully. Feb 20 03:55:23 localhost podman[104991]: 2026-02-20 08:55:23.50247137 +0000 UTC m=+0.136428570 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, batch=17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 
'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, release=1766032510, architecture=x86_64, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., build-date=2026-01-12T23:07:47Z, org.opencontainers.image.created=2026-01-12T23:07:47Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13) Feb 20 03:55:23 localhost podman[104992]: 2026-02-20 08:55:23.55420598 +0000 UTC m=+0.185219582 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, 
config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, managed_by=tripleo_ansible, version=17.1.13, 
distribution-scope=public, release=1766032510, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, container_name=logrotate_crond) Feb 20 03:55:23 localhost podman[104991]: 2026-02-20 08:55:23.561779342 +0000 UTC m=+0.195736562 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20260112.1, version=17.1.13, tcib_managed=true, name=rhosp-rhel9/openstack-ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, release=1766032510, managed_by=tripleo_ansible, io.openshift.expose-services=, build-date=2026-01-12T23:07:47Z, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 20 03:55:23 localhost podman[104991]: unhealthy Feb 20 03:55:23 localhost podman[104990]: 2026-02-20 08:55:23.468514474 +0000 UTC m=+0.103592965 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2026-01-12T22:10:14Z, version=17.1.13, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1) Feb 20 03:55:23 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:55:23 localhost systemd[1]: 
8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Failed with result 'exit-code'. Feb 20 03:55:23 localhost podman[104992]: 2026-02-20 08:55:23.593832448 +0000 UTC m=+0.224846090 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, io.openshift.expose-services=, url=https://www.redhat.com, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, build-date=2026-01-12T22:10:15Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1766032510, batch=17.1_20260112.1, distribution-scope=public, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true) Feb 20 03:55:23 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. Feb 20 03:55:23 localhost podman[104993]: 2026-02-20 08:55:23.66252455 +0000 UTC m=+0.290653405 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., version=17.1.13, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, config_id=tripleo_step4, release=1766032510, org.opencontainers.image.created=2026-01-12T23:07:30Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2026-01-12T23:07:30Z, name=rhosp-rhel9/openstack-ceilometer-ipmi, architecture=x86_64, distribution-scope=public, container_name=ceilometer_agent_ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, com.redhat.component=openstack-ceilometer-ipmi-container) Feb 20 03:55:23 localhost podman[104990]: 2026-02-20 08:55:23.691869863 +0000 UTC m=+0.326948314 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, container_name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, 
architecture=x86_64, distribution-scope=public, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, version=17.1.13, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp-rhel9/openstack-qdrouterd, release=1766032510, config_id=tripleo_step1, 
io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git) Feb 20 03:55:23 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:55:23 localhost podman[104993]: 2026-02-20 08:55:23.713692965 +0000 UTC m=+0.341821850 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, vendor=Red Hat, Inc., vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:07:30Z, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, version=17.1.13, tcib_managed=true, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com) Feb 20 03:55:23 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. Feb 20 03:55:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:55:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:55:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:55:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:55:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. 
Feb 20 03:55:27 localhost podman[105091]: 2026-02-20 08:55:27.464934131 +0000 UTC m=+0.090933427 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, architecture=x86_64, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ovn-controller, build-date=2026-01-12T22:36:40Z, io.buildah.version=1.41.5) Feb 20 03:55:27 localhost podman[105091]: 2026-02-20 08:55:27.510849986 +0000 UTC m=+0.136849252 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, url=https://www.redhat.com, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ovn-controller, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1766032510, container_name=ovn_controller, 
org.opencontainers.image.created=2026-01-12T22:36:40Z, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20260112.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team) Feb 20 03:55:27 localhost podman[105091]: unhealthy Feb 20 03:55:27 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:55:27 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Failed with result 'exit-code'. Feb 20 03:55:27 localhost podman[105089]: 2026-02-20 08:55:27.520948096 +0000 UTC m=+0.150823595 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.openshift.expose-services=, container_name=ovn_metadata_agent, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, config_id=tripleo_step4, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, 
cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, io.buildah.version=1.41.5, release=1766032510, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, version=17.1.13, 
konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:55:27 localhost podman[105089]: 2026-02-20 08:55:27.532237067 +0000 UTC m=+0.162112526 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, build-date=2026-01-12T22:56:19Z, version=17.1.13, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.5, io.openshift.expose-services=, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, release=1766032510, architecture=x86_64, distribution-scope=public, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 20 03:55:27 localhost podman[105089]: unhealthy Feb 20 03:55:27 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:55:27 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Failed with result 'exit-code'. 
Feb 20 03:55:27 localhost podman[105090]: 2026-02-20 08:55:27.572027868 +0000 UTC m=+0.198550218 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.buildah.version=1.41.5, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 iscsid, 
maintainer=OpenStack TripleO Team, container_name=iscsid, version=17.1.13, vcs-type=git, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:34:43Z, url=https://www.redhat.com, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, release=1766032510, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container) Feb 20 03:55:27 localhost podman[105092]: 2026-02-20 08:55:27.623258605 +0000 UTC m=+0.247186816 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, io.buildah.version=1.41.5, container_name=collectd, com.redhat.component=openstack-collectd-container, architecture=x86_64, config_id=tripleo_step3, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:15Z, url=https://www.redhat.com) Feb 20 03:55:27 localhost podman[105094]: 2026-02-20 08:55:27.632120711 +0000 UTC m=+0.248754457 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, build-date=2026-01-12T23:32:04Z, summary=Red 
Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:32:04Z, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, release=1766032510, managed_by=tripleo_ansible, vcs-type=git, name=rhosp-rhel9/openstack-nova-compute, container_name=nova_compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:55:27 localhost podman[105092]: 2026-02-20 08:55:27.63281845 +0000 UTC m=+0.256746621 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.41.5, architecture=x86_64, container_name=collectd, url=https://www.redhat.com, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-collectd, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'cap_add': 
['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13) Feb 20 03:55:27 localhost podman[105094]: 2026-02-20 08:55:27.650784799 +0000 UTC m=+0.267418515 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.buildah.version=1.41.5, 
batch=17.1_20260112.1, tcib_managed=true, build-date=2026-01-12T23:32:04Z, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 20 03:55:27 localhost podman[105094]: unhealthy Feb 20 03:55:27 localhost podman[105090]: 2026-02-20 08:55:27.663988952 +0000 UTC m=+0.290511272 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, tcib_managed=true, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:34:43Z, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-iscsid, build-date=2026-01-12T22:34:43Z, distribution-scope=public, managed_by=tripleo_ansible, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=705339545363fec600102567c4e923938e0f43b3, version=17.1.13) Feb 20 03:55:27 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:55:27 localhost systemd[1]: 
d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Failed with result 'exit-code'. Feb 20 03:55:27 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. Feb 20 03:55:27 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:55:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:55:32 localhost podman[105185]: 2026-02-20 08:55:32.430706819 +0000 UTC m=+0.071033556 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.buildah.version=1.41.5, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 
'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:55:32 localhost systemd[1]: tmp-crun.gK3asz.mount: Deactivated successfully. 
Feb 20 03:55:32 localhost podman[105185]: 2026-02-20 08:55:32.815879345 +0000 UTC m=+0.456206112 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, managed_by=tripleo_ansible, container_name=nova_migration_target, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, io.openshift.expose-services=, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_id=tripleo_step4, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-type=git, batch=17.1_20260112.1, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:55:32 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:55:49 localhost sshd[105284]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:55:54 localhost sshd[105286]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:55:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:55:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:55:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:55:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. 
Feb 20 03:55:54 localhost podman[105287]: 2026-02-20 08:55:54.458370454 +0000 UTC m=+0.091608094 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:14Z, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, container_name=metrics_qdr, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, release=1766032510, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Feb 20 03:55:54 localhost systemd[1]: tmp-crun.lwUqVw.mount: Deactivated successfully. Feb 20 03:55:54 localhost podman[105290]: 2026-02-20 08:55:54.516032593 +0000 UTC m=+0.146373796 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.buildah.version=1.41.5, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-cron, architecture=x86_64, vcs-type=git, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, batch=17.1_20260112.1, release=1766032510, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2026-01-12T22:10:15Z, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 
'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 cron) Feb 20 03:55:54 localhost podman[105290]: 2026-02-20 08:55:54.552118336 +0000 UTC m=+0.182459579 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2026-01-12T22:10:15Z, release=1766032510, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.13, container_name=logrotate_crond, url=https://www.redhat.com, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, batch=17.1_20260112.1, distribution-scope=public, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.expose-services=) Feb 20 03:55:54 localhost podman[105291]: 2026-02-20 
08:55:54.56278375 +0000 UTC m=+0.188070648 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, org.opencontainers.image.created=2026-01-12T23:07:30Z, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-ipmi, tcib_managed=true, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., build-date=2026-01-12T23:07:30Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, version=17.1.13, batch=17.1_20260112.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 20 03:55:54 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. Feb 20 03:55:54 localhost podman[105291]: 2026-02-20 08:55:54.593354366 +0000 UTC m=+0.218641204 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, release=1766032510, config_id=tripleo_step4, io.buildah.version=1.41.5, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20260112.1, build-date=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 20 03:55:54 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. 
Feb 20 03:55:54 localhost podman[105289]: 2026-02-20 08:55:54.617255583 +0000 UTC m=+0.249544628 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, org.opencontainers.image.created=2026-01-12T23:07:47Z, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-compute, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container, release=1766032510, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, config_id=tripleo_step4, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, url=https://www.redhat.com, build-date=2026-01-12T23:07:47Z, batch=17.1_20260112.1) Feb 20 03:55:54 localhost podman[105289]: 2026-02-20 08:55:54.644899711 +0000 UTC m=+0.277188746 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, release=1766032510, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., vcs-type=git, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20260112.1, build-date=2026-01-12T23:07:47Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.buildah.version=1.41.5, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com) Feb 20 03:55:54 localhost podman[105289]: unhealthy Feb 20 03:55:54 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:55:54 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Failed with result 'exit-code'. 
Feb 20 03:55:54 localhost podman[105287]: 2026-02-20 08:55:54.657840246 +0000 UTC m=+0.291077846 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, build-date=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, io.buildah.version=1.41.5, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=metrics_qdr, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, release=1766032510, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-qdrouterd) Feb 20 03:55:54 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:55:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10377 DF PROTO=TCP SPT=55636 DPT=9100 SEQ=3421364201 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE6ED430000000001030307) Feb 20 03:55:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56133 DF PROTO=TCP SPT=51720 DPT=9105 SEQ=2428621512 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE6EE610000000001030307) Feb 20 03:55:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:55:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:55:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. 
Feb 20 03:55:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:55:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:55:58 localhost podman[105385]: 2026-02-20 08:55:58.466074123 +0000 UTC m=+0.099470895 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, config_id=tripleo_step4, distribution-scope=public, build-date=2026-01-12T22:56:19Z, vcs-type=git, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.5, container_name=ovn_metadata_agent, io.openshift.expose-services=, release=1766032510, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, batch=17.1_20260112.1, architecture=x86_64, url=https://www.redhat.com, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:55:58 localhost podman[105386]: 2026-02-20 08:55:58.477682342 +0000 UTC m=+0.107449087 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, url=https://www.redhat.com, 
vcs-ref=705339545363fec600102567c4e923938e0f43b3, release=1766032510, name=rhosp-rhel9/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, build-date=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:34:43Z, version=17.1.13, container_name=iscsid, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, 
com.redhat.component=openstack-iscsid-container, distribution-scope=public, batch=17.1_20260112.1) Feb 20 03:55:58 localhost podman[105385]: 2026-02-20 08:55:58.503984285 +0000 UTC m=+0.137381097 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, io.openshift.expose-services=, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, build-date=2026-01-12T22:56:19Z, release=1766032510, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.buildah.version=1.41.5, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, version=17.1.13) Feb 20 03:55:58 localhost podman[105385]: unhealthy Feb 20 03:55:58 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:55:58 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Failed with result 'exit-code'. 
Feb 20 03:55:58 localhost podman[105387]: 2026-02-20 08:55:58.520180636 +0000 UTC m=+0.146324425 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_id=tripleo_step4, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.5, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, url=https://www.redhat.com, container_name=ovn_controller, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, name=rhosp-rhel9/openstack-ovn-controller, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2026-01-12T22:36:40Z, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true) Feb 20 03:55:58 localhost podman[105393]: 2026-02-20 08:55:58.607131735 +0000 UTC m=+0.229638367 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-collectd, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:15Z, container_name=collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd) Feb 20 03:55:58 localhost podman[105399]: 2026-02-20 08:55:58.613286341 +0000 UTC m=+0.230276716 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, tcib_managed=true, url=https://www.redhat.com, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.openshift.expose-services=, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, container_name=nova_compute) Feb 20 03:55:58 localhost podman[105386]: 2026-02-20 08:55:58.614209825 +0000 UTC m=+0.243976530 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, version=17.1.13, container_name=iscsid, vendor=Red Hat, Inc., architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.created=2026-01-12T22:34:43Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5) Feb 20 03:55:58 localhost podman[105393]: 2026-02-20 08:55:58.645810128 +0000 UTC m=+0.268316760 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, container_name=collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, 
vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, tcib_managed=true, io.buildah.version=1.41.5, url=https://www.redhat.com, release=1766032510, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 
collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64) Feb 20 03:55:58 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:55:58 localhost podman[105387]: 2026-02-20 08:55:58.661483556 +0000 UTC m=+0.287627345 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp-rhel9/openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, vcs-type=git, io.buildah.version=1.41.5, managed_by=tripleo_ansible, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:36:40Z, release=1766032510, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:55:58 localhost podman[105399]: 2026-02-20 08:55:58.661900097 +0000 UTC m=+0.278890502 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, container_name=nova_compute, batch=17.1_20260112.1, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, version=17.1.13, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute) Feb 20 03:55:58 localhost podman[105399]: unhealthy Feb 20 03:55:58 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:55:58 localhost 
systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Failed with result 'exit-code'. Feb 20 03:55:58 localhost podman[105387]: unhealthy Feb 20 03:55:58 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:55:58 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Failed with result 'exit-code'. Feb 20 03:55:58 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. Feb 20 03:55:58 localhost sshd[105486]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:55:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10378 DF PROTO=TCP SPT=55636 DPT=9100 SEQ=3421364201 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE6F14D0000000001030307) Feb 20 03:55:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56134 DF PROTO=TCP SPT=51720 DPT=9105 SEQ=2428621512 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE6F24D0000000001030307) Feb 20 03:56:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10379 DF PROTO=TCP SPT=55636 DPT=9100 SEQ=3421364201 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE6F94D0000000001030307) Feb 20 03:56:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56135 DF PROTO=TCP SPT=51720 DPT=9105 SEQ=2428621512 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A3AE6FA4D0000000001030307) Feb 20 03:56:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:56:03 localhost podman[105488]: 2026-02-20 08:56:03.436150965 +0000 UTC m=+0.075108085 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, batch=17.1_20260112.1, tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, managed_by=tripleo_ansible, build-date=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., container_name=nova_migration_target, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team) Feb 20 03:56:03 localhost podman[105488]: 2026-02-20 08:56:03.808507639 +0000 UTC m=+0.447464799 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://www.redhat.com, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-nova-compute-container, build-date=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, batch=17.1_20260112.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., version=17.1.13, org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, vcs-type=git) Feb 20 03:56:03 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. 
Feb 20 03:56:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10380 DF PROTO=TCP SPT=55636 DPT=9100 SEQ=3421364201 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE7090D0000000001030307) Feb 20 03:56:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56136 DF PROTO=TCP SPT=51720 DPT=9105 SEQ=2428621512 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE70A0D0000000001030307) Feb 20 03:56:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2863 DF PROTO=TCP SPT=41558 DPT=9101 SEQ=2276836052 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE70BD80000000001030307) Feb 20 03:56:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2864 DF PROTO=TCP SPT=41558 DPT=9101 SEQ=2276836052 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE70FCD0000000001030307) Feb 20 03:56:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2865 DF PROTO=TCP SPT=41558 DPT=9101 SEQ=2276836052 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE717CE0000000001030307) Feb 20 03:56:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19549 DF PROTO=TCP SPT=52480 DPT=9102 SEQ=1166132358 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A3AE7187D0000000001030307) Feb 20 03:56:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19550 DF PROTO=TCP SPT=52480 DPT=9102 SEQ=1166132358 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE71C8E0000000001030307) Feb 20 03:56:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19551 DF PROTO=TCP SPT=52480 DPT=9102 SEQ=1166132358 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE7248E0000000001030307) Feb 20 03:56:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2866 DF PROTO=TCP SPT=41558 DPT=9101 SEQ=2276836052 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE7278D0000000001030307) Feb 20 03:56:13 localhost sshd[105512]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:56:13 localhost systemd-logind[760]: New session 35 of user zuul. Feb 20 03:56:13 localhost systemd[1]: Started Session 35 of User zuul. 
Feb 20 03:56:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10381 DF PROTO=TCP SPT=55636 DPT=9100 SEQ=3421364201 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE72A0D0000000001030307) Feb 20 03:56:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56137 DF PROTO=TCP SPT=51720 DPT=9105 SEQ=2428621512 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE72A0D0000000001030307) Feb 20 03:56:13 localhost python3.9[105607]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 20 03:56:14 localhost python3.9[105701]: ansible-ansible.legacy.command Invoked with cmd=python3 -c "import configparser as c; p = c.ConfigParser(strict=False); p.read('/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf'); print(p['DEFAULT']['host'])"#012 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 03:56:15 localhost python3.9[105794]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 20 03:56:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19552 DF PROTO=TCP SPT=52480 DPT=9102 SEQ=1166132358 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A3AE7344D0000000001030307) Feb 20 03:56:16 localhost python3.9[105888]: ansible-ansible.legacy.command Invoked with cmd=python3 -c "import configparser as c; p = c.ConfigParser(strict=False); p.read('/var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf'); print(p['DEFAULT']['host'])"#012 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 03:56:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58397 DF PROTO=TCP SPT=49154 DPT=9882 SEQ=3451874296 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE734900000000001030307) Feb 20 03:56:16 localhost python3.9[105981]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 03:56:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58398 DF PROTO=TCP SPT=49154 DPT=9882 SEQ=3451874296 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE7388D0000000001030307) Feb 20 03:56:17 localhost python3.9[106072]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline Feb 20 03:56:19 localhost python3.9[106162]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 20 03:56:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 
TTL=62 ID=58399 DF PROTO=TCP SPT=49154 DPT=9882 SEQ=3451874296 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE7408D0000000001030307) Feb 20 03:56:19 localhost python3.9[106254]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile Feb 20 03:56:20 localhost python3.9[106344]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Feb 20 03:56:21 localhost sshd[106349]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:56:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2867 DF PROTO=TCP SPT=41558 DPT=9101 SEQ=2276836052 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE7480D0000000001030307) Feb 20 03:56:21 localhost python3.9[106393]: ansible-ansible.legacy.dnf Invoked with name=['systemd-container'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Feb 20 03:56:22 localhost systemd[1]: session-35.scope: Deactivated successfully. Feb 20 03:56:22 localhost systemd[1]: session-35.scope: Consumed 4.784s CPU time. Feb 20 03:56:22 localhost systemd-logind[760]: Session 35 logged out. Waiting for processes to exit. Feb 20 03:56:22 localhost systemd-logind[760]: Removed session 35. 
Feb 20 03:56:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58400 DF PROTO=TCP SPT=49154 DPT=9882 SEQ=3451874296 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE7504E0000000001030307) Feb 20 03:56:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:56:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:56:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:56:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:56:26 localhost systemd[1]: tmp-crun.rjpSKE.mount: Deactivated successfully. Feb 20 03:56:26 localhost podman[106412]: 2026-02-20 08:56:26.356442798 +0000 UTC m=+0.990383263 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, name=rhosp-rhel9/openstack-cron, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:15Z, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:56:26 localhost podman[106412]: 2026-02-20 08:56:26.369860066 +0000 UTC m=+1.003800541 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.created=2026-01-12T22:10:15Z, maintainer=OpenStack 
TripleO Team, architecture=x86_64, tcib_managed=true, release=1766032510, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, version=17.1.13, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.openshift.expose-services=, name=rhosp-rhel9/openstack-cron, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, io.buildah.version=1.41.5, 
cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:56:26 localhost systemd[1]: tmp-crun.cTOrOi.mount: Deactivated successfully. Feb 20 03:56:26 localhost podman[106410]: 2026-02-20 08:56:26.378939378 +0000 UTC m=+1.015215274 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, version=17.1.13, name=rhosp-rhel9/openstack-qdrouterd, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, batch=17.1_20260112.1, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:14Z, build-date=2026-01-12T22:10:14Z, container_name=metrics_qdr, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Feb 20 03:56:26 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. 
Feb 20 03:56:26 localhost podman[106413]: 2026-02-20 08:56:26.42360959 +0000 UTC m=+1.055378427 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.13, architecture=x86_64, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:07:30Z, name=rhosp-rhel9/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi, batch=17.1_20260112.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:07:30Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 20 03:56:26 localhost podman[106413]: 2026-02-20 08:56:26.45321332 +0000 UTC m=+1.084982217 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ceilometer-ipmi, tcib_managed=true, build-date=2026-01-12T23:07:30Z, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, batch=17.1_20260112.1, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 20 03:56:26 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. 
Feb 20 03:56:26 localhost podman[106411]: 2026-02-20 08:56:26.501283312 +0000 UTC m=+1.138747700 container health_status 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, batch=17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp-rhel9/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, vcs-type=git, tcib_managed=true, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, version=17.1.13, maintainer=OpenStack TripleO Team, release=1766032510, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:07:47Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:47Z, config_id=tripleo_step4, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:56:26 localhost podman[106411]: 2026-02-20 08:56:26.531142358 +0000 UTC m=+1.168606836 container exec_died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:07:47Z, batch=17.1_20260112.1, url=https://www.redhat.com, release=1766032510, vcs-type=git, tcib_managed=true, config_id=tripleo_step4, container_name=ceilometer_agent_compute, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., 
io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:07:47Z, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, io.buildah.version=1.41.5, architecture=x86_64) Feb 20 03:56:26 localhost podman[106411]: unhealthy Feb 20 03:56:26 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:56:26 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Failed with result 'exit-code'. 
Feb 20 03:56:26 localhost podman[106410]: 2026-02-20 08:56:26.609803897 +0000 UTC m=+1.246079843 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., managed_by=tripleo_ansible, version=17.1.13, architecture=x86_64, vcs-type=git, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Feb 20 03:56:26 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:56:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=78 DF PROTO=TCP SPT=60568 DPT=9100 SEQ=3460179968 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE762740000000001030307) Feb 20 03:56:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28912 DF PROTO=TCP SPT=45008 DPT=9105 SEQ=2221543692 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE763910000000001030307) Feb 20 03:56:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:56:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:56:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. 
Feb 20 03:56:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:56:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:56:29 localhost systemd[1]: tmp-crun.Fh8lrr.mount: Deactivated successfully. Feb 20 03:56:29 localhost podman[106515]: 2026-02-20 08:56:29.431130806 +0000 UTC m=+0.060002012 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, architecture=x86_64, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step3, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-collectd, container_name=collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Feb 20 03:56:29 localhost podman[106521]: 2026-02-20 08:56:29.485155157 +0000 UTC m=+0.103798080 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., config_id=tripleo_step5, release=1766032510, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 
'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20260112.1, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-compute, 
container_name=nova_compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, architecture=x86_64, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true) Feb 20 03:56:29 localhost podman[106508]: 2026-02-20 08:56:29.457976361 +0000 UTC m=+0.089488768 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, batch=17.1_20260112.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, container_name=iscsid, io.openshift.expose-services=, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, vcs-type=git) Feb 20 03:56:29 localhost podman[106521]: 2026-02-20 08:56:29.529175482 +0000 UTC m=+0.147818375 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, 
com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, vendor=Red Hat, Inc., config_id=tripleo_step5, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, build-date=2026-01-12T23:32:04Z, version=17.1.13, io.openshift.expose-services=) Feb 20 03:56:29 localhost podman[106521]: unhealthy Feb 20 03:56:29 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:56:29 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Failed with result 'exit-code'. 
Feb 20 03:56:29 localhost podman[106507]: 2026-02-20 08:56:29.539730602 +0000 UTC m=+0.176169711 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, version=17.1.13, vcs-type=git, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:56:19Z, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, batch=17.1_20260112.1, architecture=x86_64, io.buildah.version=1.41.5, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, release=1766032510, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:56:29 localhost podman[106508]: 2026-02-20 08:56:29.540533944 +0000 UTC m=+0.172046411 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vcs-ref=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid, release=1766032510, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, vcs-type=git, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, build-date=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team) Feb 20 03:56:29 localhost podman[106515]: 2026-02-20 08:56:29.566089096 +0000 UTC m=+0.194960352 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, 
name=collectd, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-collectd, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, container_name=collectd, vcs-type=git, io.openshift.expose-services=, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, tcib_managed=true) Feb 20 03:56:29 localhost podman[106507]: 2026-02-20 08:56:29.575519497 +0000 UTC m=+0.211958576 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, io.buildah.version=1.41.5, config_id=tripleo_step4, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., managed_by=tripleo_ansible, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:56:19Z, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 20 03:56:29 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. 
Feb 20 03:56:29 localhost podman[106507]: unhealthy Feb 20 03:56:29 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:56:29 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Failed with result 'exit-code'. Feb 20 03:56:29 localhost podman[106509]: 2026-02-20 08:56:29.614844737 +0000 UTC m=+0.242864291 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., version=17.1.13, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, 
release=1766032510, build-date=2026-01-12T22:36:40Z, config_id=tripleo_step4, tcib_managed=true, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, name=rhosp-rhel9/openstack-ovn-controller, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:36:40Z, batch=17.1_20260112.1, io.openshift.expose-services=, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:56:29 localhost podman[106509]: 2026-02-20 08:56:29.633777922 +0000 UTC m=+0.261797486 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, tcib_managed=true, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, config_id=tripleo_step4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 
6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, build-date=2026-01-12T22:36:40Z, batch=17.1_20260112.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., vcs-type=git) Feb 20 03:56:29 localhost podman[106509]: unhealthy Feb 20 03:56:29 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:56:29 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Failed with result 'exit-code'. Feb 20 03:56:29 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. Feb 20 03:56:30 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 20 03:56:30 localhost recover_tripleo_nova_virtqemud[106604]: 63703 Feb 20 03:56:30 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 20 03:56:30 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Feb 20 03:56:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=80 DF PROTO=TCP SPT=60568 DPT=9100 SEQ=3460179968 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE76E8D0000000001030307) Feb 20 03:56:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:56:34 localhost podman[106605]: 2026-02-20 08:56:34.441191345 +0000 UTC m=+0.079636565 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, build-date=2026-01-12T23:32:04Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, name=rhosp-rhel9/openstack-nova-compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:56:34 localhost podman[106605]: 2026-02-20 08:56:34.814677379 +0000 UTC m=+0.453122539 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, distribution-scope=public, vcs-type=git, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13, io.buildah.version=1.41.5, 
batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:56:34 localhost systemd[1]: 
cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:56:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=81 DF PROTO=TCP SPT=60568 DPT=9100 SEQ=3460179968 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE77E4D0000000001030307) Feb 20 03:56:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2868 DF PROTO=TCP SPT=41558 DPT=9101 SEQ=2276836052 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE7880E0000000001030307) Feb 20 03:56:39 localhost sshd[106629]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:56:39 localhost systemd-logind[760]: New session 36 of user zuul. Feb 20 03:56:39 localhost systemd[1]: Started Session 36 of User zuul. Feb 20 03:56:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6315 DF PROTO=TCP SPT=59052 DPT=9102 SEQ=284783856 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE791CD0000000001030307) Feb 20 03:56:40 localhost python3.9[106724]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Feb 20 03:56:40 localhost systemd[1]: Reloading. Feb 20 03:56:40 localhost systemd-rc-local-generator[106742]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 03:56:40 localhost systemd-sysv-generator[106752]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 03:56:40 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 03:56:41 localhost python3.9[106850]: ansible-ansible.builtin.service_facts Invoked Feb 20 03:56:41 localhost network[106867]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Feb 20 03:56:41 localhost network[106868]: 'network-scripts' will be removed from distribution in near future. Feb 20 03:56:41 localhost network[106869]: It is advised to switch to 'NetworkManager' instead for network management. Feb 20 03:56:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=82 DF PROTO=TCP SPT=60568 DPT=9100 SEQ=3460179968 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE79E0D0000000001030307) Feb 20 03:56:44 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 03:56:44 localhost sshd[106966]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:56:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6317 DF PROTO=TCP SPT=59052 DPT=9102 SEQ=284783856 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE7A98D0000000001030307) Feb 20 03:56:47 localhost sshd[107070]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:56:48 localhost python3.9[107147]: ansible-ansible.builtin.service_facts Invoked Feb 20 03:56:48 localhost network[107164]: You are using 'network' service provided by 'network-scripts', which are now deprecated. 
Feb 20 03:56:48 localhost network[107165]: 'network-scripts' will be removed from distribution in near future. Feb 20 03:56:48 localhost network[107166]: It is advised to switch to 'NetworkManager' instead for network management. Feb 20 03:56:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40514 DF PROTO=TCP SPT=59536 DPT=9882 SEQ=2076538764 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE7B5CD0000000001030307) Feb 20 03:56:49 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 03:56:52 localhost python3.9[107366]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 03:56:52 localhost systemd[1]: Reloading. Feb 20 03:56:52 localhost systemd-sysv-generator[107397]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 03:56:52 localhost systemd-rc-local-generator[107390]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 03:56:52 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 03:56:52 localhost systemd[1]: Stopping ceilometer_agent_compute container... Feb 20 03:56:52 localhost systemd[1]: tmp-crun.LbPU8x.mount: Deactivated successfully. 
Feb 20 03:56:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40515 DF PROTO=TCP SPT=59536 DPT=9882 SEQ=2076538764 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE7C58E0000000001030307) Feb 20 03:56:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:56:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:56:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:56:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:56:56 localhost podman[107423]: Error: container 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 is not running Feb 20 03:56:56 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Main process exited, code=exited, status=125/n/a Feb 20 03:56:56 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Failed with result 'exit-code'. 
Feb 20 03:56:57 localhost podman[107430]: 2026-02-20 08:56:57.024528228 +0000 UTC m=+0.147064665 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, config_id=tripleo_step4, url=https://www.redhat.com, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, architecture=x86_64, version=17.1.13, batch=17.1_20260112.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.created=2026-01-12T23:07:30Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 20 03:56:57 localhost podman[107430]: 2026-02-20 08:56:57.089727667 +0000 UTC m=+0.212264094 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:30Z, version=17.1.13, io.openshift.expose-services=, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:07:30Z, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, release=1766032510, name=rhosp-rhel9/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 20 03:56:57 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. 
Feb 20 03:56:57 localhost podman[107424]: 2026-02-20 08:56:57.10446483 +0000 UTC m=+0.232159094 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, tcib_managed=true, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, architecture=x86_64, build-date=2026-01-12T22:10:15Z, config_id=tripleo_step4, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp-rhel9/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.buildah.version=1.41.5, 
io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc.) Feb 20 03:56:57 localhost podman[107424]: 2026-02-20 08:56:57.138750555 +0000 UTC m=+0.266444769 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, container_name=logrotate_crond, url=https://www.redhat.com, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, config_id=tripleo_step4, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20260112.1, release=1766032510, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, name=rhosp-rhel9/openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 cron) Feb 20 03:56:57 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. 
Feb 20 03:56:57 localhost podman[107422]: 2026-02-20 08:56:57.217583898 +0000 UTC m=+0.353557713 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, container_name=metrics_qdr, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20260112.1, build-date=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, tcib_managed=true, name=rhosp-rhel9/openstack-qdrouterd, io.buildah.version=1.41.5, 
managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13) Feb 20 03:56:57 localhost podman[107422]: 2026-02-20 08:56:57.401625298 +0000 UTC m=+0.537599143 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, build-date=2026-01-12T22:10:14Z, container_name=metrics_qdr, name=rhosp-rhel9/openstack-qdrouterd, vcs-type=git, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-qdrouterd, version=17.1.13, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Feb 20 03:56:57 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. 
Feb 20 03:56:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2666 DF PROTO=TCP SPT=54256 DPT=9100 SEQ=1627644873 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE7D7A30000000001030307) Feb 20 03:56:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47540 DF PROTO=TCP SPT=59596 DPT=9105 SEQ=3841464930 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE7D8C10000000001030307) Feb 20 03:56:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:56:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:56:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:56:59 localhost systemd[1]: tmp-crun.6PH1Ea.mount: Deactivated successfully. 
Feb 20 03:56:59 localhost podman[107512]: 2026-02-20 08:56:59.713427742 +0000 UTC m=+0.094351927 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, architecture=x86_64, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20260112.1, build-date=2026-01-12T22:10:15Z, distribution-scope=public) Feb 20 03:56:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:56:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. 
Feb 20 03:56:59 localhost podman[107511]: 2026-02-20 08:56:59.767954127 +0000 UTC m=+0.150904107 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.expose-services=, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.buildah.version=1.41.5) Feb 20 03:56:59 localhost podman[107511]: 2026-02-20 08:56:59.783788769 +0000 UTC m=+0.166738749 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, architecture=x86_64, io.buildah.version=1.41.5, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step4, build-date=2026-01-12T22:56:19Z, release=1766032510, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, 
container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn) Feb 20 03:56:59 localhost podman[107511]: unhealthy Feb 20 03:56:59 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:56:59 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Failed with result 'exit-code'. Feb 20 03:56:59 localhost podman[107558]: 2026-02-20 08:56:59.829014876 +0000 UTC m=+0.087966688 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, container_name=iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.13, io.buildah.version=1.41.5, managed_by=tripleo_ansible, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1766032510, url=https://www.redhat.com, architecture=x86_64, build-date=2026-01-12T22:34:43Z, io.openshift.expose-services=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid) Feb 20 03:56:59 localhost podman[107512]: 2026-02-20 08:56:59.835875619 +0000 UTC m=+0.216799744 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, managed_by=tripleo_ansible, architecture=x86_64, vcs-type=git, release=1766032510, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, tcib_managed=true, container_name=collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, 
io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:15Z) Feb 20 03:56:59 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:56:59 localhost podman[107558]: 2026-02-20 08:56:59.862244552 +0000 UTC m=+0.121196384 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp-rhel9/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, build-date=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, version=17.1.13, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, vcs-type=git, container_name=iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, com.redhat.component=openstack-iscsid-container, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, release=1766032510, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 
'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc.) Feb 20 03:56:59 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. Feb 20 03:56:59 localhost podman[107559]: 2026-02-20 08:56:59.882651747 +0000 UTC m=+0.136868252 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, managed_by=tripleo_ansible, build-date=2026-01-12T22:36:40Z, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, 
distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:36:40Z, architecture=x86_64, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ovn-controller, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13) Feb 20 03:56:59 localhost podman[107513]: 2026-02-20 08:56:59.735556843 +0000 UTC m=+0.111391033 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, build-date=2026-01-12T23:32:04Z, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, version=17.1.13, config_id=tripleo_step5, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.created=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-compute, 
distribution-scope=public, release=1766032510, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Feb 20 03:56:59 localhost sshd[107607]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:56:59 localhost podman[107513]: 2026-02-20 08:56:59.920796004 +0000 UTC m=+0.296630184 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_compute, version=17.1.13, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, tcib_managed=true, io.openshift.expose-services=, 
vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, release=1766032510, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2026-01-12T23:32:04Z) Feb 20 03:56:59 localhost podman[107513]: 
unhealthy Feb 20 03:56:59 localhost podman[107559]: 2026-02-20 08:56:59.929845066 +0000 UTC m=+0.184061582 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.13, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, url=https://www.redhat.com, build-date=2026-01-12T22:36:40Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, container_name=ovn_controller, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ovn-controller, batch=17.1_20260112.1, vcs-type=git, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c) Feb 20 03:56:59 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:56:59 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Failed with result 'exit-code'. Feb 20 03:56:59 localhost podman[107559]: unhealthy Feb 20 03:56:59 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:56:59 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Failed with result 'exit-code'. Feb 20 03:57:00 localhost systemd[1]: tmp-crun.XVYU8y.mount: Deactivated successfully. Feb 20 03:57:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2668 DF PROTO=TCP SPT=54256 DPT=9100 SEQ=1627644873 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE7E38D0000000001030307) Feb 20 03:57:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. 
Feb 20 03:57:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2669 DF PROTO=TCP SPT=54256 DPT=9100 SEQ=1627644873 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE7F34D0000000001030307) Feb 20 03:57:04 localhost podman[107611]: 2026-02-20 08:57:04.948804662 +0000 UTC m=+0.081342821 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, batch=17.1_20260112.1, release=1766032510, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, vcs-type=git, name=rhosp-rhel9/openstack-nova-compute, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, url=https://www.redhat.com, vendor=Red Hat, Inc.) 
Feb 20 03:57:05 localhost podman[107611]: 2026-02-20 08:57:05.326837507 +0000 UTC m=+0.459375626 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, url=https://www.redhat.com, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1766032510, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:57:05 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:57:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3903 DF PROTO=TCP SPT=49258 DPT=9101 SEQ=273746524 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE7FE0D0000000001030307) Feb 20 03:57:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43204 DF PROTO=TCP SPT=41094 DPT=9102 SEQ=2104415393 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE806CD0000000001030307) Feb 20 03:57:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2670 DF PROTO=TCP SPT=54256 DPT=9100 SEQ=1627644873 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE8140D0000000001030307) Feb 20 03:57:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 
SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43206 DF PROTO=TCP SPT=41094 DPT=9102 SEQ=2104415393 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE81E8D0000000001030307) Feb 20 03:57:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31797 DF PROTO=TCP SPT=39218 DPT=9882 SEQ=892474076 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE82B0D0000000001030307) Feb 20 03:57:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31798 DF PROTO=TCP SPT=39218 DPT=9882 SEQ=892474076 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE83ACD0000000001030307) Feb 20 03:57:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:57:27 localhost podman[107634]: Error: container 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 is not running Feb 20 03:57:27 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Main process exited, code=exited, status=125/n/a Feb 20 03:57:27 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Failed with result 'exit-code'. Feb 20 03:57:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:57:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. 
Feb 20 03:57:27 localhost podman[107647]: 2026-02-20 08:57:27.279004414 +0000 UTC m=+0.083648203 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, container_name=logrotate_crond, release=1766032510, com.redhat.component=openstack-cron-container, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, vcs-type=git, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-cron, architecture=x86_64, build-date=2026-01-12T22:10:15Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5) Feb 20 03:57:27 localhost systemd[1]: tmp-crun.B6YSRn.mount: Deactivated successfully. Feb 20 03:57:27 localhost podman[107648]: 2026-02-20 08:57:27.341842281 +0000 UTC m=+0.143216442 container health_status e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-type=git, name=rhosp-rhel9/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, release=1766032510, config_id=tripleo_step4, managed_by=tripleo_ansible, architecture=x86_64, tcib_managed=true, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:30Z, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, build-date=2026-01-12T23:07:30Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Feb 20 03:57:27 localhost podman[107647]: 2026-02-20 08:57:27.361993448 +0000 UTC m=+0.166637237 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-cron, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:15Z, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, tcib_managed=true, container_name=logrotate_crond, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
vcs-type=git) Feb 20 03:57:27 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. Feb 20 03:57:27 localhost podman[107648]: 2026-02-20 08:57:27.39503538 +0000 UTC m=+0.196409571 container exec_died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-type=git, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, build-date=2026-01-12T23:07:30Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:30Z, url=https://www.redhat.com, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi) Feb 20 03:57:27 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Deactivated successfully. Feb 20 03:57:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58973 DF PROTO=TCP SPT=35014 DPT=9100 SEQ=2020832681 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE84CD30000000001030307) Feb 20 03:57:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48117 DF PROTO=TCP SPT=45856 DPT=9105 SEQ=3441387409 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE84DF30000000001030307) Feb 20 03:57:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. 
Feb 20 03:57:28 localhost podman[107693]: 2026-02-20 08:57:28.442772822 +0000 UTC m=+0.081881975 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, name=rhosp-rhel9/openstack-qdrouterd, url=https://www.redhat.com, distribution-scope=public, config_id=tripleo_step1, io.buildah.version=1.41.5, tcib_managed=true, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1766032510, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, version=17.1.13) Feb 20 03:57:28 localhost podman[107693]: 2026-02-20 08:57:28.679776994 +0000 UTC m=+0.318886097 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.41.5, container_name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com) Feb 20 03:57:28 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:57:30 localhost sshd[107722]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:57:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:57:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:57:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. 
Feb 20 03:57:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:57:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:57:30 localhost podman[107724]: 2026-02-20 08:57:30.447138855 +0000 UTC m=+0.087571347 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, tcib_managed=true, io.openshift.expose-services=, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.buildah.version=1.41.5, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, url=https://www.redhat.com, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, architecture=x86_64, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, release=1766032510, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Feb 20 03:57:30 localhost podman[107724]: 2026-02-20 08:57:30.488676873 +0000 UTC m=+0.129109335 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., build-date=2026-01-12T22:56:19Z, release=1766032510, batch=17.1_20260112.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.created=2026-01-12T22:56:19Z, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.expose-services=, url=https://www.redhat.com, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vcs-type=git, container_name=ovn_metadata_agent, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 20 03:57:30 localhost podman[107724]: unhealthy Feb 20 03:57:30 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:57:30 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Failed with result 'exit-code'. Feb 20 03:57:30 localhost podman[107728]: 2026-02-20 08:57:30.50844686 +0000 UTC m=+0.140932870 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, batch=17.1_20260112.1, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.13, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-compute, 
konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, container_name=nova_compute, release=1766032510, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-type=git, name=rhosp-rhel9/openstack-nova-compute) Feb 20 03:57:30 localhost podman[107726]: 2026-02-20 08:57:30.551219561 +0000 UTC m=+0.187583725 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:36:40Z, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, org.opencontainers.image.created=2026-01-12T22:36:40Z, tcib_managed=true, container_name=ovn_controller, name=rhosp-rhel9/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, version=17.1.13, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com) Feb 20 03:57:30 localhost podman[107728]: 2026-02-20 08:57:30.55564586 +0000 UTC m=+0.188131860 container exec_died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, name=rhosp-rhel9/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, version=17.1.13, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, batch=17.1_20260112.1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_id=tripleo_step5, io.buildah.version=1.41.5, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true) Feb 20 03:57:30 localhost podman[107728]: unhealthy Feb 20 03:57:30 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Main process exited, 
code=exited, status=1/FAILURE Feb 20 03:57:30 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Failed with result 'exit-code'. Feb 20 03:57:30 localhost podman[107727]: 2026-02-20 08:57:30.613306468 +0000 UTC m=+0.246383214 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp-rhel9/openstack-collectd, vendor=Red Hat, Inc., io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, version=17.1.13, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, release=1766032510, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:15Z) Feb 20 03:57:30 localhost podman[107727]: 2026-02-20 08:57:30.621461485 +0000 UTC m=+0.254538271 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://www.redhat.com, config_id=tripleo_step3, name=rhosp-rhel9/openstack-collectd, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, distribution-scope=public, vcs-type=git, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=collectd, architecture=x86_64, build-date=2026-01-12T22:10:15Z) Feb 20 03:57:30 localhost systemd[1]: 
ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. Feb 20 03:57:30 localhost podman[107726]: 2026-02-20 08:57:30.635517971 +0000 UTC m=+0.271882145 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ovn-controller, distribution-scope=public, build-date=2026-01-12T22:36:40Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.13, maintainer=OpenStack TripleO Team, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.buildah.version=1.41.5, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, vcs-type=git, batch=17.1_20260112.1, vendor=Red Hat, Inc., 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=) Feb 20 03:57:30 localhost podman[107726]: unhealthy Feb 20 03:57:30 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:57:30 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Failed with result 'exit-code'. Feb 20 03:57:30 localhost podman[107725]: 2026-02-20 08:57:30.707973753 +0000 UTC m=+0.345877128 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., io.buildah.version=1.41.5, com.redhat.component=openstack-iscsid-container, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, managed_by=tripleo_ansible, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, distribution-scope=public, batch=17.1_20260112.1, build-date=2026-01-12T22:34:43Z, version=17.1.13, name=rhosp-rhel9/openstack-iscsid) Feb 20 03:57:30 localhost podman[107725]: 2026-02-20 08:57:30.719151771 +0000 UTC m=+0.357055146 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., batch=17.1_20260112.1, tcib_managed=true, container_name=iscsid, distribution-scope=public, 
io.openshift.expose-services=, vcs-type=git, release=1766032510, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, vcs-ref=705339545363fec600102567c4e923938e0f43b3, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, io.buildah.version=1.41.5, version=17.1.13, architecture=x86_64, config_id=tripleo_step3, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, 
com.redhat.component=openstack-iscsid-container) Feb 20 03:57:30 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. Feb 20 03:57:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58975 DF PROTO=TCP SPT=35014 DPT=9100 SEQ=2020832681 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE858CE0000000001030307) Feb 20 03:57:34 localhost podman[107407]: time="2026-02-20T08:57:34Z" level=warning msg="StopSignal SIGTERM failed to stop container ceilometer_agent_compute in 42 seconds, resorting to SIGKILL" Feb 20 03:57:34 localhost systemd[1]: tmp-crun.tGeIiy.mount: Deactivated successfully. Feb 20 03:57:34 localhost systemd[1]: libpod-8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.scope: Deactivated successfully. Feb 20 03:57:34 localhost systemd[1]: libpod-8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.scope: Consumed 5.639s CPU time. 
Feb 20 03:57:34 localhost podman[107407]: 2026-02-20 08:57:34.865113998 +0000 UTC m=+42.091660242 container died 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, name=rhosp-rhel9/openstack-ceilometer-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:07:47Z, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20260112.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, build-date=2026-01-12T23:07:47Z, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, release=1766032510, url=https://www.redhat.com) Feb 20 03:57:34 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.timer: Deactivated successfully. Feb 20 03:57:34 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2. Feb 20 03:57:34 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Failed to open /run/systemd/transient/8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: No such file or directory Feb 20 03:57:34 localhost systemd[1]: tmp-crun.j6kmxf.mount: Deactivated successfully. 
Feb 20 03:57:34 localhost podman[107407]: 2026-02-20 08:57:34.924937245 +0000 UTC m=+42.151483439 container cleanup 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:47Z, name=rhosp-rhel9/openstack-ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:47Z, distribution-scope=public, tcib_managed=true, vcs-type=git, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:57:34 localhost podman[107407]: ceilometer_agent_compute Feb 20 03:57:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58976 DF PROTO=TCP SPT=35014 DPT=9100 SEQ=2020832681 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE8688D0000000001030307) Feb 20 03:57:34 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.timer: Failed to open /run/systemd/transient/8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.timer: No such file or directory Feb 20 03:57:34 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Failed to open /run/systemd/transient/8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: No such file or directory Feb 20 03:57:34 localhost podman[107821]: 2026-02-20 08:57:34.951697828 +0000 UTC m=+0.069028211 container cleanup 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, 
build-date=2026-01-12T23:07:47Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, architecture=x86_64, version=17.1.13, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.) Feb 20 03:57:34 localhost systemd[1]: libpod-conmon-8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.scope: Deactivated successfully. Feb 20 03:57:35 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.timer: Failed to open /run/systemd/transient/8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.timer: No such file or directory Feb 20 03:57:35 localhost systemd[1]: 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: Failed to open /run/systemd/transient/8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2.service: No such file or directory Feb 20 03:57:35 localhost podman[107833]: 2026-02-20 08:57:35.047467523 +0000 UTC m=+0.067898062 container cleanup 8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, release=1766032510, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 
ceilometer-compute, url=https://www.redhat.com, architecture=x86_64, distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.buildah.version=1.41.5, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, tcib_managed=true, name=rhosp-rhel9/openstack-ceilometer-compute, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:07:47Z) Feb 20 03:57:35 localhost podman[107833]: ceilometer_agent_compute Feb 20 
03:57:35 localhost systemd[1]: tripleo_ceilometer_agent_compute.service: Deactivated successfully. Feb 20 03:57:35 localhost systemd[1]: Stopped ceilometer_agent_compute container. Feb 20 03:57:35 localhost systemd[1]: tripleo_ceilometer_agent_compute.service: Consumed 1.058s CPU time, no IO. Feb 20 03:57:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:57:35 localhost podman[107938]: 2026-02-20 08:57:35.577306489 +0000 UTC m=+0.091406590 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat 
OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, url=https://www.redhat.com, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, batch=17.1_20260112.1, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 20 03:57:35 localhost python3.9[107937]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_ipmi.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 03:57:35 localhost systemd[1]: var-lib-containers-storage-overlay-1f49cf732328da45c284af0adbec71fb49c656a125ef86cd2b520f315e6df9bd-merged.mount: Deactivated successfully. Feb 20 03:57:35 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8a4c4d7817e25d416a93f7a94c8b3df033b3ff3196dd486681529ef5e63eeaa2-userdata-shm.mount: Deactivated successfully. Feb 20 03:57:35 localhost systemd[1]: Reloading. 
Feb 20 03:57:35 localhost podman[107938]: 2026-02-20 08:57:35.942026309 +0000 UTC m=+0.456126340 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, config_id=tripleo_step4, container_name=nova_migration_target, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 
cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, url=https://www.redhat.com, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-nova-compute, build-date=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13) Feb 20 03:57:35 localhost systemd-rc-local-generator[107985]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 03:57:35 localhost systemd-sysv-generator[107990]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 03:57:36 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 03:57:36 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:57:36 localhost systemd[1]: Stopping ceilometer_agent_ipmi container... 
Feb 20 03:57:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6207 DF PROTO=TCP SPT=57940 DPT=9101 SEQ=965167571 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE8720D0000000001030307) Feb 20 03:57:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3904 DF PROTO=TCP SPT=49258 DPT=9101 SEQ=273746524 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE87C0D0000000001030307) Feb 20 03:57:40 localhost sshd[108016]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:57:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58977 DF PROTO=TCP SPT=35014 DPT=9100 SEQ=2020832681 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE8880E0000000001030307) Feb 20 03:57:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42718 DF PROTO=TCP SPT=34544 DPT=9102 SEQ=2893241465 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE893CE0000000001030307) Feb 20 03:57:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19200 DF PROTO=TCP SPT=48518 DPT=9882 SEQ=1411565276 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE8A00D0000000001030307) Feb 20 03:57:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19201 DF PROTO=TCP SPT=48518 DPT=9882 
SEQ=1411565276 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE8AFCD0000000001030307) Feb 20 03:57:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40454 DF PROTO=TCP SPT=48516 DPT=9100 SEQ=528486910 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE8C2030000000001030307) Feb 20 03:57:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:57:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:57:57 localhost podman[108094]: 2026-02-20 08:57:57.963212608 +0000 UTC m=+0.093103136 container health_status df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, container_name=logrotate_crond, release=1766032510, version=17.1.13, name=rhosp-rhel9/openstack-cron, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, io.buildah.version=1.41.5, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, config_id=tripleo_step4, distribution-scope=public) Feb 20 03:57:57 localhost podman[108095]: Error: container e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 is not running Feb 20 03:57:57 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Main process exited, code=exited, status=125/n/a Feb 20 03:57:57 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Failed with result 'exit-code'. 
Feb 20 03:57:58 localhost podman[108094]: 2026-02-20 08:57:58.002860925 +0000 UTC m=+0.132751473 container exec_died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, vcs-type=git, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 cron, 
cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, managed_by=tripleo_ansible, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Feb 20 03:57:58 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Deactivated successfully. Feb 20 03:57:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47377 DF PROTO=TCP SPT=38200 DPT=9105 SEQ=3021014556 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE8C3220000000001030307) Feb 20 03:57:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. 
Feb 20 03:57:59 localhost podman[108125]: 2026-02-20 08:57:59.442296736 +0000 UTC m=+0.084305051 container health_status 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1766032510, version=17.1.13, distribution-scope=public, tcib_managed=true, url=https://www.redhat.com, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
qdrouterd, maintainer=OpenStack TripleO Team, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:14Z, build-date=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible) Feb 20 03:57:59 localhost podman[108125]: 2026-02-20 08:57:59.662319535 +0000 UTC m=+0.304327790 container exec_died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, container_name=metrics_qdr, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, build-date=2026-01-12T22:10:14Z, config_id=tripleo_step1, version=17.1.13) Feb 20 03:57:59 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Deactivated successfully. Feb 20 03:58:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40456 DF PROTO=TCP SPT=48516 DPT=9100 SEQ=528486910 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE8CE0D0000000001030307) Feb 20 03:58:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. 
Feb 20 03:58:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:58:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:58:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:58:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:58:01 localhost podman[108158]: 2026-02-20 08:58:01.452539495 +0000 UTC m=+0.071985262 container health_status ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-type=git, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, build-date=2026-01-12T22:10:15Z, batch=17.1_20260112.1, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, maintainer=OpenStack TripleO Team, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, release=1766032510, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc.) 
Feb 20 03:58:01 localhost podman[108158]: 2026-02-20 08:58:01.463842456 +0000 UTC m=+0.083288303 container exec_died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-collectd, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, vendor=Red Hat, Inc., tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, container_name=collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Feb 20 03:58:01 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Deactivated successfully. 
Feb 20 03:58:01 localhost podman[108155]: 2026-02-20 08:58:01.515597767 +0000 UTC m=+0.145782030 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, config_id=tripleo_step4, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.expose-services=, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.5, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:56:19Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 
'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn) Feb 20 03:58:01 localhost podman[108155]: 2026-02-20 08:58:01.527054193 +0000 UTC m=+0.157238456 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.41.5, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, batch=17.1_20260112.1, version=17.1.13, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, build-date=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, distribution-scope=public, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ovn_metadata_agent, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:56:19Z, 
cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:58:01 localhost podman[108155]: unhealthy Feb 20 03:58:01 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:58:01 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Failed with result 'exit-code'. Feb 20 03:58:01 localhost podman[108169]: 2026-02-20 08:58:01.557079073 +0000 UTC m=+0.170769897 container health_status d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, config_id=tripleo_step5, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, architecture=x86_64, build-date=2026-01-12T23:32:04Z, vcs-type=git, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, version=17.1.13, tcib_managed=true, name=rhosp-rhel9/openstack-nova-compute) Feb 20 03:58:01 localhost podman[108169]: 2026-02-20 08:58:01.576698557 +0000 UTC m=+0.190389391 container exec_died 
d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, build-date=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, container_name=nova_compute, config_id=tripleo_step5, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:58:01 localhost podman[108169]: unhealthy Feb 20 03:58:01 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:58:01 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Failed with result 'exit-code'. 
Feb 20 03:58:01 localhost podman[108156]: 2026-02-20 08:58:01.632187637 +0000 UTC m=+0.260373827 container health_status 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_id=tripleo_step3, architecture=x86_64, vendor=Red Hat, Inc., release=1766032510, url=https://www.redhat.com, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:34:43Z, version=17.1.13, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, batch=17.1_20260112.1, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, vcs-ref=705339545363fec600102567c4e923938e0f43b3, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, container_name=iscsid) Feb 20 03:58:01 localhost podman[108156]: 2026-02-20 08:58:01.694816959 +0000 UTC m=+0.323003179 container exec_died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, release=1766032510, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, container_name=iscsid, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, build-date=2026-01-12T22:34:43Z, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, distribution-scope=public, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, 
io.buildah.version=1.41.5, architecture=x86_64, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:58:01 localhost podman[108157]: 2026-02-20 08:58:01.702209745 +0000 UTC m=+0.324917169 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, 
url=https://www.redhat.com, io.buildah.version=1.41.5, managed_by=tripleo_ansible, batch=17.1_20260112.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, distribution-scope=public, name=rhosp-rhel9/openstack-ovn-controller, version=17.1.13, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., release=1766032510, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, build-date=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Feb 20 03:58:01 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Deactivated successfully. 
Feb 20 03:58:01 localhost podman[108157]: 2026-02-20 08:58:01.746533108 +0000 UTC m=+0.369240482 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, tcib_managed=true, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step4, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, version=17.1.13, url=https://www.redhat.com, release=1766032510, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, distribution-scope=public, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, 
vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=) Feb 20 03:58:01 localhost podman[108157]: unhealthy Feb 20 03:58:01 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:58:01 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Failed with result 'exit-code'. Feb 20 03:58:04 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 20 03:58:04 localhost recover_tripleo_nova_virtqemud[108255]: 63703 Feb 20 03:58:04 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 20 03:58:04 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Feb 20 03:58:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40457 DF PROTO=TCP SPT=48516 DPT=9100 SEQ=528486910 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE8DDCD0000000001030307) Feb 20 03:58:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:58:06 localhost systemd[1]: tmp-crun.gP7qEy.mount: Deactivated successfully. 
Feb 20 03:58:06 localhost podman[108256]: 2026-02-20 08:58:06.449764763 +0000 UTC m=+0.089122899 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, 
config_id=tripleo_step4, vcs-type=git, version=17.1.13, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., batch=17.1_20260112.1, release=1766032510, maintainer=OpenStack TripleO Team) Feb 20 03:58:06 localhost podman[108256]: 2026-02-20 08:58:06.806763666 +0000 UTC m=+0.446121812 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, container_name=nova_migration_target, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1766032510, name=rhosp-rhel9/openstack-nova-compute, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, architecture=x86_64, version=17.1.13, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git) Feb 20 03:58:06 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. 
Feb 20 03:58:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22132 DF PROTO=TCP SPT=47088 DPT=9101 SEQ=1798588607 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE8E80F0000000001030307) Feb 20 03:58:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5185 DF PROTO=TCP SPT=42124 DPT=9102 SEQ=2342621429 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE8F14E0000000001030307) Feb 20 03:58:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40458 DF PROTO=TCP SPT=48516 DPT=9100 SEQ=528486910 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE8FE0E0000000001030307) Feb 20 03:58:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5187 DF PROTO=TCP SPT=42124 DPT=9102 SEQ=2342621429 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE9090D0000000001030307) Feb 20 03:58:18 localhost podman[108001]: time="2026-02-20T08:58:18Z" level=warning msg="StopSignal SIGTERM failed to stop container ceilometer_agent_ipmi in 42 seconds, resorting to SIGKILL" Feb 20 03:58:18 localhost systemd[1]: libpod-e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.scope: Deactivated successfully. Feb 20 03:58:18 localhost systemd[1]: libpod-e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.scope: Consumed 6.274s CPU time. 
Feb 20 03:58:18 localhost podman[108001]: 2026-02-20 08:58:18.310902175 +0000 UTC m=+42.105427568 container died e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, release=1766032510, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.13, io.openshift.expose-services=, build-date=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, io.buildah.version=1.41.5, url=https://www.redhat.com, distribution-scope=public, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
org.opencontainers.image.created=2026-01-12T23:07:30Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 20 03:58:18 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.timer: Deactivated successfully. Feb 20 03:58:18 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5. Feb 20 03:58:18 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Failed to open /run/systemd/transient/e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: No such file or directory Feb 20 03:58:18 localhost systemd[1]: tmp-crun.hFWgmD.mount: Deactivated successfully. Feb 20 03:58:18 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5-userdata-shm.mount: Deactivated successfully. 
Feb 20 03:58:18 localhost podman[108001]: 2026-02-20 08:58:18.437476771 +0000 UTC m=+42.232002144 container cleanup e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, version=17.1.13, name=rhosp-rhel9/openstack-ceilometer-ipmi, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:30Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, vcs-type=git, managed_by=tripleo_ansible, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_ipmi, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Feb 20 03:58:18 localhost podman[108001]: ceilometer_agent_ipmi Feb 20 03:58:18 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.timer: Failed to open /run/systemd/transient/e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.timer: No such file or directory Feb 20 03:58:18 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Failed to open /run/systemd/transient/e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: No such file or directory Feb 20 03:58:18 localhost podman[108281]: 2026-02-20 08:58:18.452857562 +0000 UTC m=+0.126173817 container cleanup e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, release=1766032510, batch=17.1_20260112.1, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, name=rhosp-rhel9/openstack-ceilometer-ipmi, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.openshift.expose-services=, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step4, vcs-type=git, build-date=2026-01-12T23:07:30Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06) Feb 20 03:58:18 localhost systemd[1]: libpod-conmon-e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.scope: Deactivated successfully. 
Feb 20 03:58:18 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.timer: Failed to open /run/systemd/transient/e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.timer: No such file or directory Feb 20 03:58:18 localhost systemd[1]: e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: Failed to open /run/systemd/transient/e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5.service: No such file or directory Feb 20 03:58:18 localhost podman[108295]: 2026-02-20 08:58:18.557907404 +0000 UTC m=+0.069817403 container cleanup e263016110153c954be4a8dc6d3d7da3fade4f702b43e517578821f4e0e029d5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8cdce88e823976bbaa6aae3526d6d0ab'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:30Z, release=1766032510, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-type=git, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-ipmi) Feb 20 03:58:18 localhost podman[108295]: ceilometer_agent_ipmi Feb 20 03:58:18 localhost systemd[1]: tripleo_ceilometer_agent_ipmi.service: Deactivated successfully. Feb 20 03:58:18 localhost systemd[1]: Stopped ceilometer_agent_ipmi container. Feb 20 03:58:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:d7:b4:4a MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.110 DST=192.168.122.106 LEN=40 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=TCP SPT=6379 DPT=38494 SEQ=1127653612 ACK=0 WINDOW=0 RES=0x00 RST URGP=0 Feb 20 03:58:19 localhost systemd[1]: var-lib-containers-storage-overlay-635be08eaf6b0b44e0a79953079e91b6976bdd54582ce4f8e556143cd0e1a390-merged.mount: Deactivated successfully. 
Feb 20 03:58:19 localhost python3.9[108399]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_collectd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 03:58:19 localhost systemd[1]: Reloading. Feb 20 03:58:19 localhost systemd-rc-local-generator[108424]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 03:58:19 localhost systemd-sysv-generator[108428]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 03:58:19 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 03:58:19 localhost systemd[1]: Stopping collectd container... Feb 20 03:58:20 localhost sshd[108452]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:58:21 localhost systemd[1]: libpod-ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.scope: Deactivated successfully. Feb 20 03:58:21 localhost systemd[1]: libpod-ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.scope: Consumed 1.933s CPU time. 
Feb 20 03:58:21 localhost podman[108440]: 2026-02-20 08:58:21.176188986 +0000 UTC m=+1.406066883 container died ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, config_id=tripleo_step3, release=1766032510, distribution-scope=public, vcs-type=git, version=17.1.13, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-collectd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, tcib_managed=true) Feb 20 03:58:21 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.timer: Deactivated successfully. Feb 20 03:58:21 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7. Feb 20 03:58:21 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Failed to open /run/systemd/transient/ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: No such file or directory Feb 20 03:58:21 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7-userdata-shm.mount: Deactivated successfully. Feb 20 03:58:21 localhost systemd[1]: var-lib-containers-storage-overlay-b1cf061bfb4d6aba3459a5aa3c06a6d132a0f8d24850ac6b70fc836ee5031ed8-merged.mount: Deactivated successfully. 
Feb 20 03:58:21 localhost podman[108440]: 2026-02-20 08:58:21.236087334 +0000 UTC m=+1.465965211 container cleanup ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, architecture=x86_64, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-collectd, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, build-date=2026-01-12T22:10:15Z, container_name=collectd, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://www.redhat.com) Feb 20 03:58:21 localhost podman[108440]: collectd Feb 20 03:58:21 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.timer: Failed to open /run/systemd/transient/ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.timer: No such file or directory Feb 20 03:58:21 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Failed to open /run/systemd/transient/ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: No such file or directory Feb 20 03:58:21 localhost podman[108454]: 2026-02-20 08:58:21.283503039 +0000 UTC m=+0.097442281 container cleanup ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp-rhel9/openstack-collectd, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.41.5, 
cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, release=1766032510, batch=17.1_20260112.1, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 20 03:58:21 localhost systemd[1]: tripleo_collectd.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:58:21 localhost systemd[1]: libpod-conmon-ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.scope: Deactivated successfully. Feb 20 03:58:21 localhost sshd[108476]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:58:21 localhost podman[108486]: error opening file `/run/crun/ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7/status`: No such file or directory Feb 20 03:58:21 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.timer: Failed to open /run/systemd/transient/ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.timer: No such file or directory Feb 20 03:58:21 localhost systemd[1]: ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: Failed to open /run/systemd/transient/ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7.service: No such file or directory Feb 20 03:58:21 localhost podman[108470]: 2026-02-20 08:58:21.396604927 +0000 UTC m=+0.085294407 container cleanup ceedeef22fab742d6c781524d02e3416707ac846e41f9dc224e06948caa450d7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:15Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, vendor=Red Hat, Inc., distribution-scope=public, 
architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 03:58:21 localhost podman[108470]: collectd Feb 20 03:58:21 localhost systemd[1]: tripleo_collectd.service: Failed with result 'exit-code'. Feb 20 03:58:21 localhost systemd[1]: Stopped collectd container. Feb 20 03:58:22 localhost python3.9[108579]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_iscsid.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 03:58:22 localhost systemd[1]: Reloading. Feb 20 03:58:22 localhost systemd-sysv-generator[108611]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 03:58:22 localhost systemd-rc-local-generator[108606]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 03:58:22 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 03:58:22 localhost systemd[1]: Stopping iscsid container... Feb 20 03:58:22 localhost systemd[1]: libpod-47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.scope: Deactivated successfully. Feb 20 03:58:22 localhost systemd[1]: libpod-47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.scope: Consumed 1.053s CPU time. 
Feb 20 03:58:22 localhost podman[108620]: 2026-02-20 08:58:22.66068865 +0000 UTC m=+0.077468608 container died 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp-rhel9/openstack-iscsid, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., container_name=iscsid, url=https://www.redhat.com, io.buildah.version=1.41.5, tcib_managed=true, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, distribution-scope=public, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, architecture=x86_64, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:34:43Z, managed_by=tripleo_ansible) Feb 20 03:58:22 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.timer: Deactivated successfully. Feb 20 03:58:22 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b. Feb 20 03:58:22 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Failed to open /run/systemd/transient/47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: No such file or directory Feb 20 03:58:22 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b-userdata-shm.mount: Deactivated successfully. 
Feb 20 03:58:22 localhost podman[108620]: 2026-02-20 08:58:22.71581699 +0000 UTC m=+0.132596918 container cleanup 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, batch=17.1_20260112.1, container_name=iscsid, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-iscsid-container, name=rhosp-rhel9/openstack-iscsid, release=1766032510, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, build-date=2026-01-12T22:34:43Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, url=https://www.redhat.com, distribution-scope=public, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc.) Feb 20 03:58:22 localhost podman[108620]: iscsid Feb 20 03:58:22 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.timer: Failed to open /run/systemd/transient/47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.timer: No such file or directory Feb 20 03:58:22 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Failed to open /run/systemd/transient/47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: No such file or directory Feb 20 03:58:22 localhost podman[108633]: 2026-02-20 08:58:22.746856348 +0000 UTC m=+0.072752771 container cleanup 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, name=rhosp-rhel9/openstack-iscsid, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, batch=17.1_20260112.1, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, distribution-scope=public, release=1766032510, build-date=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.openshift.expose-services=, version=17.1.13) Feb 20 03:58:22 localhost systemd[1]: libpod-conmon-47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.scope: Deactivated successfully. 
Feb 20 03:58:22 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.timer: Failed to open /run/systemd/transient/47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.timer: No such file or directory Feb 20 03:58:22 localhost systemd[1]: 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: Failed to open /run/systemd/transient/47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b.service: No such file or directory Feb 20 03:58:22 localhost podman[108648]: 2026-02-20 08:58:22.848763827 +0000 UTC m=+0.071952380 container cleanup 47cb53cfe92891f600d1006ed20a3c97c5f6f650ebde1107967053e2ff26681b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, vcs-ref=705339545363fec600102567c4e923938e0f43b3, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, distribution-scope=public, vcs-type=git, build-date=2026-01-12T22:34:43Z, io.openshift.expose-services=, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp-rhel9/openstack-iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container, container_name=iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, batch=17.1_20260112.1, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 iscsid) Feb 20 03:58:22 localhost podman[108648]: iscsid Feb 20 03:58:22 localhost systemd[1]: tripleo_iscsid.service: Deactivated successfully. Feb 20 03:58:22 localhost systemd[1]: Stopped iscsid container. Feb 20 03:58:23 localhost systemd[1]: var-lib-containers-storage-overlay-87199cb72dc7cf4a94d875d451b802efef1cfd046c063c517c6aeb21861ed940-merged.mount: Deactivated successfully. 
Feb 20 03:58:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40083 DF PROTO=TCP SPT=58658 DPT=9882 SEQ=3541965097 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE9250D0000000001030307) Feb 20 03:58:23 localhost python3.9[108752]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_logrotate_crond.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 03:58:23 localhost systemd[1]: Reloading. Feb 20 03:58:23 localhost systemd-sysv-generator[108780]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 03:58:23 localhost systemd-rc-local-generator[108775]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 03:58:23 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 03:58:24 localhost systemd[1]: Stopping logrotate_crond container... Feb 20 03:58:24 localhost systemd[1]: libpod-df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.scope: Deactivated successfully. 
Feb 20 03:58:24 localhost podman[108793]: 2026-02-20 08:58:24.158077757 +0000 UTC m=+0.085452430 container died df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.buildah.version=1.41.5, batch=17.1_20260112.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.created=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, config_id=tripleo_step4) Feb 20 03:58:24 localhost systemd[1]: tmp-crun.c0Shvo.mount: Deactivated successfully. Feb 20 03:58:24 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.timer: Deactivated successfully. Feb 20 03:58:24 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5. Feb 20 03:58:24 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Failed to open /run/systemd/transient/df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: No such file or directory Feb 20 03:58:24 localhost systemd[1]: tmp-crun.aPoMlH.mount: Deactivated successfully. Feb 20 03:58:24 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5-userdata-shm.mount: Deactivated successfully. Feb 20 03:58:24 localhost systemd[1]: var-lib-containers-storage-overlay-a37e8585e5e1c303b8917b31d26f97b6e36b93752231bf9ec3eb7d712b5a3738-merged.mount: Deactivated successfully. 
Feb 20 03:58:24 localhost podman[108793]: 2026-02-20 08:58:24.223637027 +0000 UTC m=+0.151011660 container cleanup df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=logrotate_crond, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, name=rhosp-rhel9/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, release=1766032510, config_id=tripleo_step4, distribution-scope=public, 
io.buildah.version=1.41.5, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team) Feb 20 03:58:24 localhost podman[108793]: logrotate_crond Feb 20 03:58:24 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.timer: Failed to open /run/systemd/transient/df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.timer: No such file or directory Feb 20 03:58:24 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Failed to open /run/systemd/transient/df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: No such file or directory Feb 20 03:58:24 localhost podman[108807]: 2026-02-20 08:58:24.307958406 +0000 UTC m=+0.135274620 container cleanup df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:15Z, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, io.openshift.expose-services=, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, 
container_name=logrotate_crond, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-cron, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, release=1766032510, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git) Feb 20 03:58:24 localhost systemd[1]: libpod-conmon-df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.scope: Deactivated successfully. 
Feb 20 03:58:24 localhost podman[108834]: error opening file `/run/crun/df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5/status`: No such file or directory Feb 20 03:58:24 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.timer: Failed to open /run/systemd/transient/df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.timer: No such file or directory Feb 20 03:58:24 localhost systemd[1]: df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: Failed to open /run/systemd/transient/df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5.service: No such file or directory Feb 20 03:58:24 localhost podman[108823]: 2026-02-20 08:58:24.441608851 +0000 UTC m=+0.082993484 container cleanup df3582b2f2dd76137c4bb09ee4b311c03d4a2a5876a712b43c56d086fb62bce5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 cron, release=1766032510, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron, version=17.1.13, container_name=logrotate_crond, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20260112.1, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee) Feb 20 03:58:24 localhost podman[108823]: logrotate_crond Feb 20 03:58:24 localhost systemd[1]: tripleo_logrotate_crond.service: Deactivated successfully. Feb 20 03:58:24 localhost systemd[1]: Stopped logrotate_crond container. 
Feb 20 03:58:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:d7:b4:4a MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.110 DST=192.168.122.106 LEN=40 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=TCP SPT=6379 DPT=38494 SEQ=1127653612 ACK=0 WINDOW=0 RES=0x00 RST URGP=0 Feb 20 03:58:25 localhost python3.9[108929]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_metrics_qdr.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 03:58:25 localhost systemd[1]: Reloading. Feb 20 03:58:25 localhost systemd-sysv-generator[108962]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 03:58:25 localhost systemd-rc-local-generator[108956]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 03:58:25 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 03:58:25 localhost systemd[1]: Stopping metrics_qdr container... Feb 20 03:58:25 localhost kernel: qdrouterd[56313]: segfault at 0 ip 00007f6c1137a7cb sp 00007ffea986c290 error 4 in libc.so.6[7f6c11317000+175000] Feb 20 03:58:25 localhost kernel: Code: 0b 00 64 44 89 23 85 c0 75 d4 e9 2b ff ff ff e8 db a5 00 00 e9 fd fe ff ff e8 41 1d 0d 00 90 f3 0f 1e fa 41 54 55 48 89 fd 53 <8b> 07 f6 c4 20 0f 85 aa 00 00 00 89 c2 81 e2 00 80 00 00 0f 84 a9 Feb 20 03:58:25 localhost systemd[1]: Created slice Slice /system/systemd-coredump. Feb 20 03:58:25 localhost systemd[1]: Started Process Core Dump (PID 108983/UID 0). Feb 20 03:58:25 localhost systemd-coredump[108984]: Resource limits disable core dumping for process 56313 (qdrouterd). 
Feb 20 03:58:25 localhost systemd-coredump[108984]: Process 56313 (qdrouterd) of user 42465 dumped core. Feb 20 03:58:25 localhost systemd[1]: systemd-coredump@0-108983-0.service: Deactivated successfully. Feb 20 03:58:25 localhost podman[108970]: 2026-02-20 08:58:25.854512366 +0000 UTC m=+0.236221243 container died 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp-rhel9/openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step1, io.openshift.expose-services=, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, build-date=2026-01-12T22:10:14Z, distribution-scope=public, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:14Z, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.5, tcib_managed=true) Feb 20 03:58:25 localhost systemd[1]: libpod-6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.scope: Deactivated successfully. Feb 20 03:58:25 localhost systemd[1]: libpod-6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.scope: Consumed 28.045s CPU time. Feb 20 03:58:25 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.timer: Deactivated successfully. Feb 20 03:58:25 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb. Feb 20 03:58:25 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Failed to open /run/systemd/transient/6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: No such file or directory Feb 20 03:58:25 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb-userdata-shm.mount: Deactivated successfully. 
Feb 20 03:58:25 localhost podman[108970]: 2026-02-20 08:58:25.908972418 +0000 UTC m=+0.290681315 container cleanup 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., tcib_managed=true, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, version=17.1.13, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, 
release=1766032510, container_name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.buildah.version=1.41.5, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, vcs-type=git) Feb 20 03:58:25 localhost podman[108970]: metrics_qdr Feb 20 03:58:25 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.timer: Failed to open /run/systemd/transient/6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.timer: No such file or directory Feb 20 03:58:25 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Failed to open /run/systemd/transient/6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: No such file or directory Feb 20 03:58:25 localhost podman[108988]: 2026-02-20 08:58:25.922807127 +0000 UTC m=+0.061002728 container cleanup 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, config_id=tripleo_step1, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, build-date=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 03:58:25 localhost systemd[1]: tripleo_metrics_qdr.service: Main process exited, code=exited, status=139/n/a Feb 20 03:58:25 
localhost systemd[1]: libpod-conmon-6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.scope: Deactivated successfully. Feb 20 03:58:26 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.timer: Failed to open /run/systemd/transient/6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.timer: No such file or directory Feb 20 03:58:26 localhost systemd[1]: 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: Failed to open /run/systemd/transient/6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb.service: No such file or directory Feb 20 03:58:26 localhost podman[109002]: 2026-02-20 08:58:26.015463089 +0000 UTC m=+0.062672993 container cleanup 6c18e2146c811bfccd3ea2436e43a62ac059d8c8f0e3c30bc12998ff7ff912bb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-qdrouterd, container_name=metrics_qdr, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:14Z, version=17.1.13, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, batch=17.1_20260112.1, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., release=1766032510, build-date=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5f5f3be2be3c6541e811126095b44bf3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Feb 20 03:58:26 localhost podman[109002]: metrics_qdr Feb 20 03:58:26 localhost systemd[1]: tripleo_metrics_qdr.service: Failed with result 'exit-code'. Feb 20 03:58:26 localhost systemd[1]: Stopped metrics_qdr container. Feb 20 03:58:26 localhost systemd[1]: var-lib-containers-storage-overlay-2f91179ec110573ad291e73cde7d5952eb6aeb23b95367e17a51a5370c062b6c-merged.mount: Deactivated successfully. 
Feb 20 03:58:26 localhost python3.9[109107]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_neutron_dhcp.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 03:58:27 localhost python3.9[109200]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_neutron_l3_agent.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 03:58:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48864 DF PROTO=TCP SPT=60562 DPT=9100 SEQ=2523199629 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE937340000000001030307) Feb 20 03:58:28 localhost python3.9[109293]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_neutron_ovs_agent.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 03:58:28 localhost python3.9[109386]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 03:58:28 localhost systemd[1]: Reloading. Feb 20 03:58:29 localhost systemd-sysv-generator[109414]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 03:58:29 localhost systemd-rc-local-generator[109410]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 03:58:29 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Feb 20 03:58:29 localhost systemd[1]: Stopping nova_compute container... Feb 20 03:58:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48866 DF PROTO=TCP SPT=60562 DPT=9100 SEQ=2523199629 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE9434E0000000001030307) Feb 20 03:58:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:58:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:58:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:58:31 localhost podman[109443]: Error: container d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 is not running Feb 20 03:58:31 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Main process exited, code=exited, status=125/n/a Feb 20 03:58:31 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Failed with result 'exit-code'. 
Feb 20 03:58:32 localhost podman[109442]: 2026-02-20 08:58:32.027312974 +0000 UTC m=+0.159830555 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ovn-controller, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ovn_controller, batch=17.1_20260112.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 
org.opencontainers.image.created=2026-01-12T22:36:40Z, release=1766032510, version=17.1.13, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, config_id=tripleo_step4, architecture=x86_64) Feb 20 03:58:32 localhost podman[109441]: 2026-02-20 08:58:32.078053888 +0000 UTC m=+0.213332103 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, distribution-scope=public, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-type=git, config_id=tripleo_step4, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, managed_by=tripleo_ansible) Feb 20 03:58:32 localhost podman[109441]: 2026-02-20 08:58:32.09572468 +0000 UTC m=+0.231002835 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1766032510, config_id=tripleo_step4, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, io.buildah.version=1.41.5, batch=17.1_20260112.1, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:56:19Z, tcib_managed=true, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, url=https://www.redhat.com, build-date=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Feb 20 03:58:32 localhost podman[109441]: unhealthy Feb 20 03:58:32 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:58:32 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Failed with result 'exit-code'. Feb 20 03:58:32 localhost podman[109442]: 2026-02-20 08:58:32.130835266 +0000 UTC m=+0.263352947 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:36:40Z, build-date=2026-01-12T22:36:40Z, vcs-type=git, vendor=Red Hat, Inc., version=17.1.13, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, distribution-scope=public, architecture=x86_64, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true) Feb 20 03:58:32 localhost podman[109442]: unhealthy Feb 20 03:58:32 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:58:32 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Failed with result 'exit-code'. 
Feb 20 03:58:33 localhost sshd[109494]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:58:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48867 DF PROTO=TCP SPT=60562 DPT=9100 SEQ=2523199629 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE9530D0000000001030307) Feb 20 03:58:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:58:37 localhost podman[109496]: 2026-02-20 08:58:37.182449515 +0000 UTC m=+0.072027583 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, url=https://www.redhat.com, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, release=1766032510, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_migration_target, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, tcib_managed=true, architecture=x86_64, 
vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, batch=17.1_20260112.1) Feb 20 03:58:37 localhost sshd[109518]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:58:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14301 DF PROTO=TCP SPT=45184 DPT=9101 SEQ=3521039179 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE95C0E0000000001030307) Feb 20 03:58:37 localhost podman[109496]: 2026-02-20 08:58:37.580772512 +0000 UTC m=+0.470350590 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, cpe=cpe:/a:redhat:openstack:17.1::el9, 
org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.13, maintainer=OpenStack TripleO Team, release=1766032510, build-date=2026-01-12T23:32:04Z, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, batch=17.1_20260112.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, vcs-type=git, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 20 03:58:37 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. Feb 20 03:58:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30116 DF PROTO=TCP SPT=47122 DPT=9102 SEQ=357476445 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE9668E0000000001030307) Feb 20 03:58:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42721 DF PROTO=TCP SPT=34544 DPT=9102 SEQ=2893241465 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE9720D0000000001030307) Feb 20 03:58:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30118 DF PROTO=TCP SPT=47122 DPT=9102 SEQ=357476445 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE97E4E0000000001030307) Feb 20 03:58:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12150 DF PROTO=TCP SPT=57716 DPT=9882 SEQ=2673844781 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE98A8D0000000001030307) Feb 20 03:58:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 
DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12151 DF PROTO=TCP SPT=57716 DPT=9882 SEQ=2673844781 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE99A4D0000000001030307) Feb 20 03:58:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49534 DF PROTO=TCP SPT=52058 DPT=9100 SEQ=4111442742 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE9AC640000000001030307) Feb 20 03:58:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58809 DF PROTO=TCP SPT=39142 DPT=9105 SEQ=2652522652 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE9AD810000000001030307) Feb 20 03:59:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49536 DF PROTO=TCP SPT=52058 DPT=9100 SEQ=4111442742 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE9B84D0000000001030307) Feb 20 03:59:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:59:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:59:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. 
Feb 20 03:59:02 localhost podman[109599]: 2026-02-20 08:59:02.463853367 +0000 UTC m=+0.094375619 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2026-01-12T22:36:40Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:openstack:17.1::el9, 
vendor=Red Hat, Inc., tcib_managed=true, container_name=ovn_controller, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 03:59:02 localhost podman[109600]: Error: container d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 is not running Feb 20 03:59:02 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Main process exited, code=exited, status=125/n/a Feb 20 03:59:02 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Failed with result 'exit-code'. Feb 20 03:59:02 localhost podman[109599]: 2026-02-20 08:59:02.5033476 +0000 UTC m=+0.133869832 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., build-date=2026-01-12T22:36:40Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, distribution-scope=public, name=rhosp-rhel9/openstack-ovn-controller, 
url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:59:02 localhost podman[109599]: unhealthy Feb 20 03:59:02 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:59:02 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Failed with result 'exit-code'. 
Feb 20 03:59:02 localhost podman[109598]: 2026-02-20 08:59:02.55765987 +0000 UTC m=+0.188752957 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, architecture=x86_64, container_name=ovn_metadata_agent, io.openshift.expose-services=, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.5, version=17.1.13, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510, org.opencontainers.image.created=2026-01-12T22:56:19Z, maintainer=OpenStack TripleO Team) Feb 20 03:59:02 localhost podman[109598]: 2026-02-20 08:59:02.599793574 +0000 UTC m=+0.230886611 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=17.1.13, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, vendor=Red 
Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, release=1766032510, io.buildah.version=1.41.5) Feb 20 03:59:02 localhost podman[109598]: unhealthy Feb 20 03:59:02 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:59:02 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Failed with result 'exit-code'. Feb 20 03:59:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49537 DF PROTO=TCP SPT=52058 DPT=9100 SEQ=4111442742 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE9C80D0000000001030307) Feb 20 03:59:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53369 DF PROTO=TCP SPT=37256 DPT=9101 SEQ=202541359 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE9D20E0000000001030307) Feb 20 03:59:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:59:08 localhost systemd[1]: tmp-crun.hhhQSD.mount: Deactivated successfully. 
Feb 20 03:59:08 localhost podman[109650]: 2026-02-20 08:59:08.196890585 +0000 UTC m=+0.082197085 container health_status cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, build-date=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, batch=17.1_20260112.1, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.buildah.version=1.41.5) Feb 20 03:59:08 localhost podman[109650]: 2026-02-20 08:59:08.568215201 +0000 UTC m=+0.453521711 container exec_died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=17.1.13, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4) Feb 20 03:59:08 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Deactivated successfully. 
Feb 20 03:59:08 localhost sshd[109673]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:59:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26095 DF PROTO=TCP SPT=60410 DPT=9102 SEQ=1083570156 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE9DB8D0000000001030307) Feb 20 03:59:12 localhost podman[109427]: time="2026-02-20T08:59:12Z" level=warning msg="StopSignal SIGTERM failed to stop container nova_compute in 42 seconds, resorting to SIGKILL" Feb 20 03:59:12 localhost systemd[1]: tmp-crun.vnFFlz.mount: Deactivated successfully. Feb 20 03:59:12 localhost systemd[1]: libpod-d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.scope: Deactivated successfully. Feb 20 03:59:12 localhost systemd[1]: libpod-d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.scope: Consumed 27.311s CPU time. Feb 20 03:59:12 localhost podman[109427]: 2026-02-20 08:59:12.147326266 +0000 UTC m=+42.835334672 container died d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, container_name=nova_compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 
'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20260112.1, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, version=17.1.13, architecture=x86_64, build-date=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:59:12 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.timer: Deactivated successfully. Feb 20 03:59:12 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628. Feb 20 03:59:12 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Failed to open /run/systemd/transient/d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: No such file or directory Feb 20 03:59:12 localhost systemd[1]: var-lib-containers-storage-overlay-a4e52a870f1c03a40a0dd0950b6a82c92b30d867e7a1a54958bb9761473a110d-merged.mount: Deactivated successfully. 
Feb 20 03:59:12 localhost podman[109427]: 2026-02-20 08:59:12.199769785 +0000 UTC m=+42.887778181 container cleanup d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, name=rhosp-rhel9/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, architecture=x86_64, build-date=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20260112.1, release=1766032510, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute) Feb 20 03:59:12 localhost podman[109427]: nova_compute Feb 20 03:59:12 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.timer: Failed to open /run/systemd/transient/d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.timer: No such file or directory Feb 20 03:59:12 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Failed to open /run/systemd/transient/d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: No such file or directory Feb 20 03:59:12 localhost podman[109676]: 2026-02-20 08:59:12.223242241 +0000 UTC m=+0.067266606 container cleanup 
d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:32:04Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, managed_by=tripleo_ansible, io.buildah.version=1.41.5, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z, tcib_managed=true, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 20 03:59:12 localhost systemd[1]: libpod-conmon-d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.scope: Deactivated successfully. 
Feb 20 03:59:12 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.timer: Failed to open /run/systemd/transient/d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.timer: No such file or directory Feb 20 03:59:12 localhost systemd[1]: d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: Failed to open /run/systemd/transient/d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628.service: No such file or directory Feb 20 03:59:12 localhost podman[109687]: 2026-02-20 08:59:12.318510362 +0000 UTC m=+0.066664549 container cleanup d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, build-date=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': 
'/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, org.opencontainers.image.created=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, container_name=nova_compute, release=1766032510) Feb 20 03:59:12 localhost podman[109687]: nova_compute Feb 20 03:59:12 localhost systemd[1]: tripleo_nova_compute.service: Deactivated successfully. 
Feb 20 03:59:12 localhost systemd[1]: Stopped nova_compute container. Feb 20 03:59:12 localhost systemd[1]: tripleo_nova_compute.service: Consumed 1.111s CPU time, no IO. Feb 20 03:59:13 localhost python3.9[109791]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 03:59:13 localhost systemd[1]: Reloading. Feb 20 03:59:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5190 DF PROTO=TCP SPT=42124 DPT=9102 SEQ=2342621429 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE9E80D0000000001030307) Feb 20 03:59:13 localhost systemd-sysv-generator[109820]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 03:59:13 localhost systemd-rc-local-generator[109814]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 03:59:13 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 03:59:13 localhost systemd[1]: Stopping nova_migration_target container... Feb 20 03:59:13 localhost systemd[1]: libpod-cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.scope: Deactivated successfully. Feb 20 03:59:13 localhost systemd[1]: libpod-cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.scope: Consumed 34.386s CPU time. 
Feb 20 03:59:13 localhost podman[109832]: 2026-02-20 08:59:13.48719038 +0000 UTC m=+0.077506899 container died cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, config_id=tripleo_step4, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
build-date=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510) Feb 20 03:59:13 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.timer: Deactivated successfully. Feb 20 03:59:13 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018. Feb 20 03:59:13 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Failed to open /run/systemd/transient/cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: No such file or directory Feb 20 03:59:13 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018-userdata-shm.mount: Deactivated successfully. Feb 20 03:59:13 localhost systemd[1]: var-lib-containers-storage-overlay-46380c3a391236151e6a3f7a86710e438476b0174e52a2400f0bf907fb1c5d80-merged.mount: Deactivated successfully. 
Feb 20 03:59:13 localhost podman[109832]: 2026-02-20 08:59:13.534979525 +0000 UTC m=+0.125296014 container cleanup cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, config_id=tripleo_step4, container_name=nova_migration_target, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, url=https://www.redhat.com, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, build-date=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Feb 20 03:59:13 localhost podman[109832]: nova_migration_target Feb 20 03:59:13 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.timer: Failed to open /run/systemd/transient/cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.timer: No such file or directory Feb 20 03:59:13 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Failed to open /run/systemd/transient/cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: No such file or directory Feb 20 03:59:13 localhost podman[109846]: 2026-02-20 08:59:13.567263027 +0000 UTC m=+0.068144660 container cleanup cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, vcs-type=git, managed_by=tripleo_ansible, io.buildah.version=1.41.5, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack 
Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=nova_migration_target, config_id=tripleo_step4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com) Feb 20 03:59:13 localhost systemd[1]: 
libpod-conmon-cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.scope: Deactivated successfully. Feb 20 03:59:13 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.timer: Failed to open /run/systemd/transient/cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.timer: No such file or directory Feb 20 03:59:13 localhost systemd[1]: cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: Failed to open /run/systemd/transient/cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018.service: No such file or directory Feb 20 03:59:13 localhost podman[109858]: 2026-02-20 08:59:13.661730337 +0000 UTC m=+0.066667389 container cleanup cf1f6a7aac6639ade90ab03e19f16ee2dfa659641377392c1b7ecf460646c018 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, release=1766032510, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, vcs-type=git, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, container_name=nova_migration_target, config_id=tripleo_step4) Feb 20 03:59:13 localhost podman[109858]: nova_migration_target Feb 20 03:59:13 localhost systemd[1]: tripleo_nova_migration_target.service: Deactivated successfully. Feb 20 03:59:13 localhost systemd[1]: Stopped nova_migration_target container. Feb 20 03:59:14 localhost python3.9[109962]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 03:59:14 localhost systemd[1]: Reloading. Feb 20 03:59:14 localhost systemd-sysv-generator[109991]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. 
Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 03:59:14 localhost systemd-rc-local-generator[109988]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 03:59:14 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 03:59:14 localhost systemd[1]: Stopping nova_virtlogd_wrapper container... Feb 20 03:59:14 localhost systemd[1]: tmp-crun.WaS1Qj.mount: Deactivated successfully. Feb 20 03:59:14 localhost systemd[1]: libpod-e882aec8ac81713226433fb446c87955abd8068bc0df9ee27ea66e7dbdebfa2e.scope: Deactivated successfully. Feb 20 03:59:14 localhost podman[110003]: 2026-02-20 08:59:14.907367308 +0000 UTC m=+0.079627275 container died e882aec8ac81713226433fb446c87955abd8068bc0df9ee27ea66e7dbdebfa2e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, name=rhosp-rhel9/openstack-nova-libvirt, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., build-date=2026-01-12T23:31:49Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:31:49Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, release=1766032510, com.redhat.component=openstack-nova-libvirt-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-nova-libvirt, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=nova_virtlogd_wrapper) Feb 20 03:59:14 localhost podman[110003]: 2026-02-20 08:59:14.948742622 +0000 UTC m=+0.121002589 container cleanup e882aec8ac81713226433fb446c87955abd8068bc0df9ee27ea66e7dbdebfa2e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-libvirt, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, managed_by=tripleo_ansible, build-date=2026-01-12T23:31:49Z, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=nova_virtlogd_wrapper, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, architecture=x86_64, config_id=tripleo_step3, vendor=Red Hat, Inc., version=17.1.13, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-libvirt-container) Feb 20 03:59:14 localhost podman[110003]: nova_virtlogd_wrapper Feb 20 03:59:14 localhost podman[110018]: 2026-02-20 08:59:14.98240255 +0000 UTC m=+0.065105757 container cleanup e882aec8ac81713226433fb446c87955abd8068bc0df9ee27ea66e7dbdebfa2e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, architecture=x86_64, 
org.opencontainers.image.created=2026-01-12T23:31:49Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:31:49Z, com.redhat.component=openstack-nova-libvirt-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, config_id=tripleo_step3, container_name=nova_virtlogd_wrapper, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, version=17.1.13, vcs-type=git, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, io.buildah.version=1.41.5) Feb 20 03:59:15 localhost systemd[1]: var-lib-containers-storage-overlay-341ab3ab9b7393cc993bae384b639c3b914f3b8ebf8717f3f9f4cc78c6224645-merged.mount: Deactivated successfully. Feb 20 03:59:15 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e882aec8ac81713226433fb446c87955abd8068bc0df9ee27ea66e7dbdebfa2e-userdata-shm.mount: Deactivated successfully. 
Feb 20 03:59:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26097 DF PROTO=TCP SPT=60410 DPT=9102 SEQ=1083570156 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE9F34D0000000001030307) Feb 20 03:59:18 localhost sshd[110036]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:59:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62300 DF PROTO=TCP SPT=56180 DPT=9882 SEQ=3847453345 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AE9FFCD0000000001030307) Feb 20 03:59:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62301 DF PROTO=TCP SPT=56180 DPT=9882 SEQ=3847453345 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEA0F8E0000000001030307) Feb 20 03:59:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55933 DF PROTO=TCP SPT=49286 DPT=9100 SEQ=1963328997 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEA21930000000001030307) Feb 20 03:59:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31849 DF PROTO=TCP SPT=50104 DPT=9105 SEQ=1667551992 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEA22B10000000001030307) Feb 20 03:59:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55935 DF PROTO=TCP SPT=49286 DPT=9100 
SEQ=1963328997 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEA2D8E0000000001030307) Feb 20 03:59:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 03:59:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 03:59:32 localhost systemd[1]: tmp-crun.eMCyTw.mount: Deactivated successfully. Feb 20 03:59:32 localhost podman[110038]: 2026-02-20 08:59:32.961496979 +0000 UTC m=+0.102614089 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, batch=17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.buildah.version=1.41.5, url=https://www.redhat.com, build-date=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z) Feb 20 03:59:33 localhost podman[110038]: 2026-02-20 08:59:33.001495566 +0000 UTC m=+0.142612596 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, config_data={'cgroupns': 
'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20260112.1, managed_by=tripleo_ansible, build-date=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, io.openshift.expose-services=, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 03:59:33 localhost podman[110038]: unhealthy Feb 20 03:59:33 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:59:33 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Failed with result 'exit-code'. Feb 20 03:59:33 localhost systemd[1]: tmp-crun.tjY5Al.mount: Deactivated successfully. 
Feb 20 03:59:33 localhost podman[110039]: 2026-02-20 08:59:33.058374023 +0000 UTC m=+0.194567091 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2026-01-12T22:36:40Z, version=17.1.13, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:36:40Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ovn-controller, vendor=Red Hat, Inc., batch=17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, config_id=tripleo_step4, architecture=x86_64, release=1766032510, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Feb 20 03:59:33 localhost podman[110039]: 2026-02-20 08:59:33.071481243 +0000 UTC m=+0.207674301 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ovn-controller, container_name=ovn_controller, build-date=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:36:40Z, release=1766032510, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, batch=17.1_20260112.1, architecture=x86_64, config_id=tripleo_step4, 
konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true) Feb 20 03:59:33 localhost podman[110039]: unhealthy Feb 20 03:59:33 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Main process exited, code=exited, status=1/FAILURE Feb 20 03:59:33 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Failed with result 'exit-code'. Feb 20 03:59:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55936 DF PROTO=TCP SPT=49286 DPT=9100 SEQ=1963328997 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEA3D4D0000000001030307) Feb 20 03:59:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20830 DF PROTO=TCP SPT=59752 DPT=9101 SEQ=573272328 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEA480D0000000001030307) Feb 20 03:59:39 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Feb 20 03:59:39 localhost recover_tripleo_nova_virtqemud[110080]: 63703 Feb 20 03:59:39 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Feb 20 03:59:39 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Feb 20 03:59:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42671 DF PROTO=TCP SPT=43184 DPT=9102 SEQ=3880303107 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEA50CE0000000001030307) Feb 20 03:59:40 localhost sshd[110081]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:59:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55937 DF PROTO=TCP SPT=49286 DPT=9100 SEQ=1963328997 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEA5E0E0000000001030307) Feb 20 03:59:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42673 DF PROTO=TCP SPT=43184 DPT=9102 SEQ=3880303107 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEA688D0000000001030307) Feb 20 03:59:46 localhost sshd[110083]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:59:47 localhost ceph-osd[31981]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 20 03:59:47 localhost ceph-osd[31981]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.1 total, 600.0 interval#012Cumulative writes: 5073 writes, 22K keys, 5073 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5073 writes, 653 syncs, 7.77 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Feb 20 03:59:49 localhost kernel: DROPPING: IN=br-ex OUT= 
MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25602 DF PROTO=TCP SPT=59630 DPT=9882 SEQ=1308286487 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEA74CE0000000001030307) Feb 20 03:59:51 localhost sshd[110162]: main: sshd: ssh-rsa algorithm is disabled Feb 20 03:59:51 localhost ceph-osd[32921]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 20 03:59:51 localhost ceph-osd[32921]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.1 total, 600.0 interval#012Cumulative writes: 5513 writes, 24K keys, 5513 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5513 writes, 750 syncs, 7.35 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Feb 20 03:59:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25603 DF PROTO=TCP SPT=59630 DPT=9882 SEQ=1308286487 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEA848D0000000001030307) Feb 20 03:59:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36811 DF PROTO=TCP SPT=38706 DPT=9100 SEQ=207162474 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEA96C30000000001030307) Feb 20 03:59:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 
ID=23750 DF PROTO=TCP SPT=52382 DPT=9105 SEQ=1171051423 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEA97E20000000001030307) Feb 20 04:00:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36813 DF PROTO=TCP SPT=38706 DPT=9100 SEQ=207162474 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEAA2CE0000000001030307) Feb 20 04:00:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 04:00:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 04:00:03 localhost systemd[1]: tmp-crun.dDF2zw.mount: Deactivated successfully. Feb 20 04:00:03 localhost podman[110165]: 2026-02-20 09:00:03.457956452 +0000 UTC m=+0.085867462 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:36:40Z, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, build-date=2026-01-12T22:36:40Z, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp-rhel9/openstack-ovn-controller, release=1766032510, managed_by=tripleo_ansible, container_name=ovn_controller, vendor=Red Hat, Inc., batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, url=https://www.redhat.com) Feb 20 04:00:03 localhost podman[110165]: 2026-02-20 09:00:03.469786367 +0000 UTC m=+0.097697417 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, tcib_managed=true, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, maintainer=OpenStack 
TripleO Team, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, io.buildah.version=1.41.5, build-date=2026-01-12T22:36:40Z, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ovn-controller, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, config_id=tripleo_step4, batch=17.1_20260112.1) Feb 20 04:00:03 localhost podman[110165]: unhealthy Feb 20 04:00:03 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Main process exited, code=exited, status=1/FAILURE Feb 20 04:00:03 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Failed with result 'exit-code'. 
Feb 20 04:00:03 localhost podman[110164]: 2026-02-20 09:00:03.550413538 +0000 UTC m=+0.178511323 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, build-date=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:56:19Z, distribution-scope=public, io.buildah.version=1.41.5, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1766032510, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Feb 20 04:00:03 localhost podman[110164]: 2026-02-20 09:00:03.592801319 +0000 UTC m=+0.220899094 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, release=1766032510, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:56:19Z, description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., version=17.1.13, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, build-date=2026-01-12T22:56:19Z) Feb 20 04:00:03 localhost podman[110164]: unhealthy Feb 20 04:00:03 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Main process exited, code=exited, status=1/FAILURE Feb 20 04:00:03 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Failed with result 'exit-code'. Feb 20 04:00:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36814 DF PROTO=TCP SPT=38706 DPT=9100 SEQ=207162474 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEAB28D0000000001030307) Feb 20 04:00:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5332 DF PROTO=TCP SPT=37828 DPT=9101 SEQ=3401477242 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEABC0D0000000001030307) Feb 20 04:00:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17379 DF PROTO=TCP SPT=49040 DPT=9102 SEQ=1086917298 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEAC60D0000000001030307) Feb 20 04:00:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18996 DF PROTO=TCP SPT=35224 DPT=9101 SEQ=4103322221 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEAD18E0000000001030307) Feb 20 04:00:16 localhost kernel: DROPPING: 
IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17381 DF PROTO=TCP SPT=49040 DPT=9102 SEQ=1086917298 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEADDCD0000000001030307) Feb 20 04:00:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46475 DF PROTO=TCP SPT=47750 DPT=9882 SEQ=1953502734 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEAEA0D0000000001030307) Feb 20 04:00:22 localhost sshd[110204]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:00:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46476 DF PROTO=TCP SPT=47750 DPT=9882 SEQ=1953502734 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEAF9CD0000000001030307) Feb 20 04:00:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21002 DF PROTO=TCP SPT=34964 DPT=9100 SEQ=1350928582 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEB0BF30000000001030307) Feb 20 04:00:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52567 DF PROTO=TCP SPT=39178 DPT=9105 SEQ=97326350 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEB0D110000000001030307) Feb 20 04:00:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21004 DF PROTO=TCP SPT=34964 DPT=9100 SEQ=1350928582 ACK=0 WINDOW=32640 RES=0x00 SYN 
URGP=0 OPT (020405500402080A3AEB180D0000000001030307) Feb 20 04:00:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 04:00:33 localhost podman[110206]: 2026-02-20 09:00:33.693061713 +0000 UTC m=+0.080766906 container health_status 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, build-date=2026-01-12T22:36:40Z, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ovn-controller, container_name=ovn_controller, io.buildah.version=1.41.5, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:36:40Z, version=17.1.13, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9) Feb 20 04:00:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. Feb 20 04:00:33 localhost podman[110206]: 2026-02-20 09:00:33.733558063 +0000 UTC m=+0.121263246 container exec_died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, vendor=Red Hat, Inc., url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:36:40Z, name=rhosp-rhel9/openstack-ovn-controller, container_name=ovn_controller, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, managed_by=tripleo_ansible, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z) Feb 20 04:00:33 localhost podman[110206]: unhealthy Feb 20 04:00:33 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Main process exited, code=exited, status=1/FAILURE Feb 20 04:00:33 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Failed with result 'exit-code'. 
Feb 20 04:00:33 localhost podman[110226]: 2026-02-20 09:00:33.791457848 +0000 UTC m=+0.077780036 container health_status 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:56:19Z, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, io.buildah.version=1.41.5, release=1766032510, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f) Feb 20 04:00:33 localhost podman[110226]: 2026-02-20 09:00:33.835832172 +0000 UTC m=+0.122154360 container exec_died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.expose-services=, io.buildah.version=1.41.5, url=https://www.redhat.com, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, release=1766032510, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, architecture=x86_64, batch=17.1_20260112.1, build-date=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack 
Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public) Feb 20 04:00:33 localhost podman[110226]: unhealthy Feb 20 04:00:33 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Main process exited, code=exited, status=1/FAILURE Feb 20 04:00:33 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Failed with result 'exit-code'. Feb 20 04:00:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21005 DF PROTO=TCP SPT=34964 DPT=9100 SEQ=1350928582 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEB27CD0000000001030307) Feb 20 04:00:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18998 DF PROTO=TCP SPT=35224 DPT=9101 SEQ=4103322221 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEB320D0000000001030307) Feb 20 04:00:39 localhost systemd[1]: tripleo_nova_virtlogd_wrapper.service: State 'stop-sigterm' timed out. Killing. Feb 20 04:00:39 localhost systemd[1]: tripleo_nova_virtlogd_wrapper.service: Killing process 62938 (conmon) with signal SIGKILL. Feb 20 04:00:39 localhost systemd[1]: tripleo_nova_virtlogd_wrapper.service: Main process exited, code=killed, status=9/KILL Feb 20 04:00:39 localhost systemd[1]: libpod-conmon-e882aec8ac81713226433fb446c87955abd8068bc0df9ee27ea66e7dbdebfa2e.scope: Deactivated successfully. 
Feb 20 04:00:39 localhost podman[110259]: error opening file `/run/crun/e882aec8ac81713226433fb446c87955abd8068bc0df9ee27ea66e7dbdebfa2e/status`: No such file or directory Feb 20 04:00:39 localhost podman[110246]: 2026-02-20 09:00:39.176962653 +0000 UTC m=+0.064438611 container cleanup e882aec8ac81713226433fb446c87955abd8068bc0df9ee27ea66e7dbdebfa2e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:31:49Z, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, com.redhat.component=openstack-nova-libvirt-container, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_virtlogd_wrapper, build-date=2026-01-12T23:31:49Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-libvirt, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, vcs-type=git) Feb 20 04:00:39 localhost podman[110246]: nova_virtlogd_wrapper Feb 20 04:00:39 localhost systemd[1]: tripleo_nova_virtlogd_wrapper.service: Failed with result 'timeout'. Feb 20 04:00:39 localhost systemd[1]: Stopped nova_virtlogd_wrapper container. 
Feb 20 04:00:39 localhost python3.9[110352]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 04:00:39 localhost systemd[1]: Reloading. Feb 20 04:00:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6671 DF PROTO=TCP SPT=41266 DPT=9102 SEQ=1416401547 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEB3B4D0000000001030307) Feb 20 04:00:40 localhost systemd-sysv-generator[110386]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 04:00:40 localhost systemd-rc-local-generator[110382]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 04:00:40 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 04:00:40 localhost systemd[1]: Stopping nova_virtnodedevd container... Feb 20 04:00:40 localhost systemd[1]: tmp-crun.1S3ks7.mount: Deactivated successfully. Feb 20 04:00:40 localhost systemd[1]: libpod-9763b9177aec190d0435bd854c04dfec35cfa0536c0811dafb312c6fb09b2e69.scope: Deactivated successfully. Feb 20 04:00:40 localhost systemd[1]: libpod-9763b9177aec190d0435bd854c04dfec35cfa0536c0811dafb312c6fb09b2e69.scope: Consumed 1.383s CPU time. 
Feb 20 04:00:40 localhost podman[110393]: 2026-02-20 09:00:40.338706586 +0000 UTC m=+0.081124495 container died 9763b9177aec190d0435bd854c04dfec35cfa0536c0811dafb312c6fb09b2e69 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, managed_by=tripleo_ansible, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.buildah.version=1.41.5, batch=17.1_20260112.1, build-date=2026-01-12T23:31:49Z, name=rhosp-rhel9/openstack-nova-libvirt, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, release=1766032510, com.redhat.component=openstack-nova-libvirt-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, container_name=nova_virtnodedevd, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 04:00:40 localhost systemd[1]: tmp-crun.euQgDF.mount: Deactivated successfully. 
Feb 20 04:00:40 localhost podman[110393]: 2026-02-20 09:00:40.386253725 +0000 UTC m=+0.128671624 container cleanup 9763b9177aec190d0435bd854c04dfec35cfa0536c0811dafb312c6fb09b2e69 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, maintainer=OpenStack TripleO Team, container_name=nova_virtnodedevd, config_id=tripleo_step3, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:31:49Z, name=rhosp-rhel9/openstack-nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', 
'/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, vcs-type=git, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, architecture=x86_64, com.redhat.component=openstack-nova-libvirt-container, batch=17.1_20260112.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public) Feb 20 04:00:40 localhost podman[110393]: nova_virtnodedevd Feb 20 04:00:40 localhost podman[110407]: 2026-02-20 09:00:40.422963164 +0000 UTC m=+0.074328323 container cleanup 9763b9177aec190d0435bd854c04dfec35cfa0536c0811dafb312c6fb09b2e69 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, vcs-type=git, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red 
Hat, Inc., org.opencontainers.image.created=2026-01-12T23:31:49Z, url=https://www.redhat.com, managed_by=tripleo_ansible, build-date=2026-01-12T23:31:49Z, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, 
org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-libvirt-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.13, io.buildah.version=1.41.5, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-libvirt, container_name=nova_virtnodedevd, io.openshift.expose-services=) Feb 20 04:00:40 localhost systemd[1]: libpod-conmon-9763b9177aec190d0435bd854c04dfec35cfa0536c0811dafb312c6fb09b2e69.scope: Deactivated successfully. Feb 20 04:00:40 localhost podman[110435]: error opening file `/run/crun/9763b9177aec190d0435bd854c04dfec35cfa0536c0811dafb312c6fb09b2e69/status`: No such file or directory Feb 20 04:00:40 localhost podman[110424]: 2026-02-20 09:00:40.523230539 +0000 UTC m=+0.070791809 container cleanup 9763b9177aec190d0435bd854c04dfec35cfa0536c0811dafb312c6fb09b2e69 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 
'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, org.opencontainers.image.created=2026-01-12T23:31:49Z, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, container_name=nova_virtnodedevd, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, 
build-date=2026-01-12T23:31:49Z, architecture=x86_64, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, io.buildah.version=1.41.5, com.redhat.component=openstack-nova-libvirt-container, release=1766032510, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public) Feb 20 04:00:40 localhost podman[110424]: nova_virtnodedevd Feb 20 04:00:40 localhost systemd[1]: tripleo_nova_virtnodedevd.service: Deactivated successfully. Feb 20 04:00:40 localhost systemd[1]: Stopped nova_virtnodedevd container. Feb 20 04:00:40 localhost sshd[110460]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:00:41 localhost python3.9[110530]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 04:00:41 localhost systemd[1]: var-lib-containers-storage-overlay-9e6d93ebb3c8cd5a1b45cdafd934ae5970ce3fd4a24326de36bd3ca078a4ea52-merged.mount: Deactivated successfully. Feb 20 04:00:41 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9763b9177aec190d0435bd854c04dfec35cfa0536c0811dafb312c6fb09b2e69-userdata-shm.mount: Deactivated successfully. Feb 20 04:00:41 localhost systemd[1]: Reloading. Feb 20 04:00:41 localhost systemd-sysv-generator[110561]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 04:00:41 localhost systemd-rc-local-generator[110556]: /etc/rc.d/rc.local is not marked executable, skipping. 
Feb 20 04:00:41 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 04:00:41 localhost systemd[1]: Stopping nova_virtproxyd container... Feb 20 04:00:41 localhost systemd[1]: tmp-crun.FpmuFk.mount: Deactivated successfully. Feb 20 04:00:41 localhost systemd[1]: libpod-2decee24d7d9fda277830ce28e4e7d013f11cfa722d110fbb5f7089f6a275edb.scope: Deactivated successfully. Feb 20 04:00:41 localhost podman[110571]: 2026-02-20 09:00:41.775910398 +0000 UTC m=+0.072696120 container died 2decee24d7d9fda277830ce28e4e7d013f11cfa722d110fbb5f7089f6a275edb (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-libvirt-container, distribution-scope=public, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, managed_by=tripleo_ansible, version=17.1.13, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:31:49Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, architecture=x86_64, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2026-01-12T23:31:49Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 
'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, vendor=Red Hat, Inc., io.openshift.expose-services=, config_id=tripleo_step3, vcs-type=git, container_name=nova_virtproxyd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt) Feb 20 04:00:41 localhost podman[110571]: 2026-02-20 09:00:41.813359068 +0000 UTC m=+0.110144760 container cleanup 
2decee24d7d9fda277830ce28e4e7d013f11cfa722d110fbb5f7089f6a275edb (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.buildah.version=1.41.5, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:31:49Z, config_id=tripleo_step3, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:31:49Z, com.redhat.component=openstack-nova-libvirt-container, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-libvirt, architecture=x86_64, url=https://www.redhat.com, batch=17.1_20260112.1, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, version=17.1.13, container_name=nova_virtproxyd, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc.) 
Feb 20 04:00:41 localhost podman[110571]: nova_virtproxyd Feb 20 04:00:41 localhost podman[110586]: 2026-02-20 09:00:41.855248726 +0000 UTC m=+0.063790303 container cleanup 2decee24d7d9fda277830ce28e4e7d013f11cfa722d110fbb5f7089f6a275edb (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, org.opencontainers.image.created=2026-01-12T23:31:49Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., batch=17.1_20260112.1, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.5, container_name=nova_virtproxyd, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step3, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, com.redhat.component=openstack-nova-libvirt-container, vcs-type=git, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:31:49Z, name=rhosp-rhel9/openstack-nova-libvirt) Feb 20 04:00:41 localhost systemd[1]: libpod-conmon-2decee24d7d9fda277830ce28e4e7d013f11cfa722d110fbb5f7089f6a275edb.scope: Deactivated successfully. 
Feb 20 04:00:41 localhost podman[110613]: error opening file `/run/crun/2decee24d7d9fda277830ce28e4e7d013f11cfa722d110fbb5f7089f6a275edb/status`: No such file or directory Feb 20 04:00:41 localhost podman[110602]: 2026-02-20 09:00:41.944822124 +0000 UTC m=+0.063035772 container cleanup 2decee24d7d9fda277830ce28e4e7d013f11cfa722d110fbb5f7089f6a275edb (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=nova_virtproxyd, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', 
'/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, org.opencontainers.image.created=2026-01-12T23:31:49Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, managed_by=tripleo_ansible, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-libvirt, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:31:49Z, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=openstack-nova-libvirt-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20260112.1, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com) Feb 20 04:00:41 localhost podman[110602]: nova_virtproxyd Feb 20 04:00:41 localhost systemd[1]: tripleo_nova_virtproxyd.service: Deactivated successfully. Feb 20 04:00:41 localhost systemd[1]: Stopped nova_virtproxyd container. Feb 20 04:00:42 localhost systemd[1]: var-lib-containers-storage-overlay-2cbf2ebd7b526307bfea390cad44d9007884856a39422ce430dc5caa1d4f7547-merged.mount: Deactivated successfully. 
Feb 20 04:00:42 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2decee24d7d9fda277830ce28e4e7d013f11cfa722d110fbb5f7089f6a275edb-userdata-shm.mount: Deactivated successfully. Feb 20 04:00:42 localhost python3.9[110706]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 04:00:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21006 DF PROTO=TCP SPT=34964 DPT=9100 SEQ=1350928582 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEB480E0000000001030307) Feb 20 04:00:43 localhost systemd[1]: Reloading. Feb 20 04:00:43 localhost systemd-rc-local-generator[110731]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 04:00:43 localhost systemd-sysv-generator[110736]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 04:00:43 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 04:00:43 localhost systemd[1]: tripleo_nova_virtqemud_recover.timer: Deactivated successfully. Feb 20 04:00:43 localhost systemd[1]: Stopped Check and recover tripleo_nova_virtqemud every 10m. Feb 20 04:00:44 localhost systemd[1]: Stopping nova_virtqemud container... Feb 20 04:00:44 localhost systemd[1]: libpod-9dfd8c219b5ddf413e0605dc63956512085c88468de14739ea124927bda57eb3.scope: Deactivated successfully. 
Feb 20 04:00:44 localhost systemd[1]: libpod-9dfd8c219b5ddf413e0605dc63956512085c88468de14739ea124927bda57eb3.scope: Consumed 2.035s CPU time. Feb 20 04:00:44 localhost podman[110747]: 2026-02-20 09:00:44.082916395 +0000 UTC m=+0.065218342 container died 9dfd8c219b5ddf413e0605dc63956512085c88468de14739ea124927bda57eb3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:31:49Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1766032510, io.buildah.version=1.41.5, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, org.opencontainers.image.created=2026-01-12T23:31:49Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, batch=17.1_20260112.1, container_name=nova_virtqemud, url=https://www.redhat.com, config_id=tripleo_step3) Feb 20 04:00:44 localhost systemd[1]: tmp-crun.AfmPSO.mount: Deactivated successfully. 
Feb 20 04:00:44 localhost podman[110747]: 2026-02-20 09:00:44.117448336 +0000 UTC m=+0.099750253 container cleanup 9dfd8c219b5ddf413e0605dc63956512085c88468de14739ea124927bda57eb3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step3, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=nova_virtqemud, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, com.redhat.component=openstack-nova-libvirt-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://www.redhat.com, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, vcs-type=git, build-date=2026-01-12T23:31:49Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-libvirt, batch=17.1_20260112.1, version=17.1.13, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 04:00:44 localhost podman[110747]: nova_virtqemud Feb 20 04:00:44 localhost podman[110760]: 2026-02-20 09:00:44.156345654 +0000 UTC m=+0.062545589 container cleanup 9dfd8c219b5ddf413e0605dc63956512085c88468de14739ea124927bda57eb3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, url=https://www.redhat.com, architecture=x86_64, com.redhat.component=openstack-nova-libvirt-container, io.openshift.expose-services=, build-date=2026-01-12T23:31:49Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:31:49Z, batch=17.1_20260112.1, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, name=rhosp-rhel9/openstack-nova-libvirt, release=1766032510, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_virtqemud, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 20 04:00:44 localhost systemd[1]: libpod-conmon-9dfd8c219b5ddf413e0605dc63956512085c88468de14739ea124927bda57eb3.scope: Deactivated successfully. 
Feb 20 04:00:44 localhost podman[110791]: error opening file `/run/crun/9dfd8c219b5ddf413e0605dc63956512085c88468de14739ea124927bda57eb3/status`: No such file or directory Feb 20 04:00:44 localhost podman[110780]: 2026-02-20 09:00:44.248516053 +0000 UTC m=+0.065735105 container cleanup 9dfd8c219b5ddf413e0605dc63956512085c88468de14739ea124927bda57eb3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, org.opencontainers.image.created=2026-01-12T23:31:49Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., vcs-type=git, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', 
'/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-libvirt-container, config_id=tripleo_step3, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, architecture=x86_64, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtqemud, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-libvirt, release=1766032510, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:31:49Z, description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, version=17.1.13, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-libvirt) Feb 20 04:00:44 localhost podman[110780]: nova_virtqemud Feb 20 04:00:44 localhost systemd[1]: tripleo_nova_virtqemud.service: Deactivated successfully. Feb 20 04:00:44 localhost systemd[1]: Stopped nova_virtqemud container. 
Feb 20 04:00:44 localhost python3.9[110884]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud_recover.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 04:00:45 localhost systemd[1]: Reloading. Feb 20 04:00:45 localhost systemd-rc-local-generator[110911]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 04:00:45 localhost systemd-sysv-generator[110917]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 04:00:45 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 04:00:45 localhost systemd[1]: var-lib-containers-storage-overlay-cc92e44651e4d71ee3774f533944e42a9d6ec5c2ed40704387b0dcc01f749328-merged.mount: Deactivated successfully. Feb 20 04:00:45 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9dfd8c219b5ddf413e0605dc63956512085c88468de14739ea124927bda57eb3-userdata-shm.mount: Deactivated successfully. Feb 20 04:00:46 localhost python3.9[111014]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 04:00:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6673 DF PROTO=TCP SPT=41266 DPT=9102 SEQ=1416401547 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEB530D0000000001030307) Feb 20 04:00:46 localhost systemd[1]: Reloading. 
Feb 20 04:00:46 localhost systemd-rc-local-generator[111040]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 04:00:46 localhost systemd-sysv-generator[111043]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 04:00:46 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 04:00:46 localhost systemd[1]: Stopping nova_virtsecretd container... Feb 20 04:00:46 localhost systemd[1]: libpod-b23b5eebadef0ba319e1cd31d2de5e8c48225ad3018982ae9de2c20fa5a72c24.scope: Deactivated successfully. Feb 20 04:00:46 localhost podman[111054]: 2026-02-20 09:00:46.459274072 +0000 UTC m=+0.083504879 container died b23b5eebadef0ba319e1cd31d2de5e8c48225ad3018982ae9de2c20fa5a72c24 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, io.openshift.expose-services=, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=nova_virtsecretd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, distribution-scope=public, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, tcib_managed=true, config_id=tripleo_step3, build-date=2026-01-12T23:31:49Z, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, 
com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, url=https://www.redhat.com, batch=17.1_20260112.1) Feb 20 04:00:46 localhost podman[111054]: 2026-02-20 09:00:46.493672889 +0000 UTC m=+0.117903696 container cleanup b23b5eebadef0ba319e1cd31d2de5e8c48225ad3018982ae9de2c20fa5a72c24 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, build-date=2026-01-12T23:31:49Z, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-nova-libvirt, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, release=1766032510, batch=17.1_20260112.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-libvirt-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=nova_virtsecretd, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5) Feb 20 04:00:46 localhost podman[111054]: nova_virtsecretd Feb 20 04:00:46 localhost podman[111069]: 2026-02-20 09:00:46.53868088 +0000 UTC m=+0.067244435 container cleanup b23b5eebadef0ba319e1cd31d2de5e8c48225ad3018982ae9de2c20fa5a72c24 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, distribution-scope=public, 
vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2026-01-12T23:31:49Z, url=https://www.redhat.com, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:31:49Z, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-libvirt-container, tcib_managed=true, container_name=nova_virtsecretd, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, name=rhosp-rhel9/openstack-nova-libvirt, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.5) Feb 20 04:00:46 localhost systemd[1]: libpod-conmon-b23b5eebadef0ba319e1cd31d2de5e8c48225ad3018982ae9de2c20fa5a72c24.scope: Deactivated successfully. 
Feb 20 04:00:46 localhost podman[111097]: error opening file `/run/crun/b23b5eebadef0ba319e1cd31d2de5e8c48225ad3018982ae9de2c20fa5a72c24/status`: No such file or directory Feb 20 04:00:46 localhost podman[111085]: 2026-02-20 09:00:46.633992173 +0000 UTC m=+0.068897289 container cleanup b23b5eebadef0ba319e1cd31d2de5e8c48225ad3018982ae9de2c20fa5a72c24 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, com.redhat.component=openstack-nova-libvirt-container, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_virtsecretd, vcs-type=git, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:31:49Z, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.13, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, build-date=2026-01-12T23:31:49Z, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, io.openshift.expose-services=, tcib_managed=true) Feb 20 04:00:46 localhost podman[111085]: nova_virtsecretd Feb 20 04:00:46 localhost systemd[1]: tripleo_nova_virtsecretd.service: Deactivated successfully. Feb 20 04:00:46 localhost systemd[1]: Stopped nova_virtsecretd container. 
Feb 20 04:00:47 localhost python3.9[111190]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 04:00:47 localhost systemd[1]: tmp-crun.L9XP2b.mount: Deactivated successfully. Feb 20 04:00:47 localhost systemd[1]: var-lib-containers-storage-overlay-cdc0bde398443eb2e66321c13dbbf9e41b2982bc548dca05f2f16736d23fdffa-merged.mount: Deactivated successfully. Feb 20 04:00:47 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b23b5eebadef0ba319e1cd31d2de5e8c48225ad3018982ae9de2c20fa5a72c24-userdata-shm.mount: Deactivated successfully. Feb 20 04:00:47 localhost systemd[1]: Reloading. Feb 20 04:00:47 localhost systemd-sysv-generator[111216]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 04:00:47 localhost systemd-rc-local-generator[111210]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 04:00:47 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 04:00:47 localhost systemd[1]: Stopping nova_virtstoraged container... Feb 20 04:00:47 localhost systemd[1]: libpod-2f49240a2b9dab4d27f2910ba564a47977d8787509944c6c0e1de333d33dc4ad.scope: Deactivated successfully. 
Feb 20 04:00:47 localhost podman[111231]: 2026-02-20 09:00:47.811283671 +0000 UTC m=+0.076334918 container died 2f49240a2b9dab4d27f2910ba564a47977d8787509944c6c0e1de333d33dc4ad (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, vcs-type=git, version=17.1.13, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:31:49Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, build-date=2026-01-12T23:31:49Z, container_name=nova_virtstoraged, io.buildah.version=1.41.5, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-libvirt-container, batch=17.1_20260112.1, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible) Feb 20 04:00:47 localhost podman[111231]: 2026-02-20 09:00:47.8468609 +0000 UTC m=+0.111912117 container cleanup 2f49240a2b9dab4d27f2910ba564a47977d8787509944c6c0e1de333d33dc4ad (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, distribution-scope=public, io.buildah.version=1.41.5, com.redhat.component=openstack-nova-libvirt-container, io.openshift.expose-services=, managed_by=tripleo_ansible, 
vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_virtstoraged, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, build-date=2026-01-12T23:31:49Z, config_id=tripleo_step3, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, architecture=x86_64, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, url=https://www.redhat.com, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp-rhel9/openstack-nova-libvirt, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 20 04:00:47 localhost podman[111231]: nova_virtstoraged Feb 20 04:00:47 localhost podman[111245]: 2026-02-20 09:00:47.937465348 +0000 UTC m=+0.114455465 container cleanup 2f49240a2b9dab4d27f2910ba564a47977d8787509944c6c0e1de333d33dc4ad (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, name=rhosp-rhel9/openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2026-01-12T23:31:49Z, vendor=Red Hat, Inc., tcib_managed=true, config_id=tripleo_step3, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, version=17.1.13, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, container_name=nova_virtstoraged, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, batch=17.1_20260112.1, com.redhat.component=openstack-nova-libvirt-container, release=1766032510, org.opencontainers.image.created=2026-01-12T23:31:49Z, vcs-type=git, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1) Feb 20 04:00:47 localhost systemd[1]: libpod-conmon-2f49240a2b9dab4d27f2910ba564a47977d8787509944c6c0e1de333d33dc4ad.scope: Deactivated successfully. Feb 20 04:00:48 localhost podman[111275]: error opening file `/run/crun/2f49240a2b9dab4d27f2910ba564a47977d8787509944c6c0e1de333d33dc4ad/status`: No such file or directory Feb 20 04:00:48 localhost podman[111263]: 2026-02-20 09:00:48.031755733 +0000 UTC m=+0.065923040 container cleanup 2f49240a2b9dab4d27f2910ba564a47977d8787509944c6c0e1de333d33dc4ad (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, vcs-type=git, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, version=17.1.13, distribution-scope=public, release=1766032510, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ca9e756af36a4b8ed088db0b68d5c381'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, config_id=tripleo_step3, managed_by=tripleo_ansible, container_name=nova_virtstoraged, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:31:49Z, description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:31:49Z, tcib_managed=true, architecture=x86_64, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-nova-libvirt) Feb 20 04:00:48 localhost 
podman[111263]: nova_virtstoraged Feb 20 04:00:48 localhost systemd[1]: tripleo_nova_virtstoraged.service: Deactivated successfully. Feb 20 04:00:48 localhost systemd[1]: Stopped nova_virtstoraged container. Feb 20 04:00:48 localhost systemd[1]: var-lib-containers-storage-overlay-628013e23f23f7919f5bdb2ab66333f8efec1d8edc4c51b2f985707fbf7e0352-merged.mount: Deactivated successfully. Feb 20 04:00:48 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2f49240a2b9dab4d27f2910ba564a47977d8787509944c6c0e1de333d33dc4ad-userdata-shm.mount: Deactivated successfully. Feb 20 04:00:48 localhost python3.9[111368]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ovn_controller.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 04:00:48 localhost systemd[1]: Reloading. Feb 20 04:00:48 localhost systemd-rc-local-generator[111391]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 04:00:48 localhost systemd-sysv-generator[111398]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 04:00:49 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 04:00:49 localhost systemd[1]: Stopping ovn_controller container... 
Feb 20 04:00:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65390 DF PROTO=TCP SPT=49180 DPT=9882 SEQ=1847910549 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEB5F4D0000000001030307) Feb 20 04:00:49 localhost systemd[1]: libpod-5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.scope: Deactivated successfully. Feb 20 04:00:49 localhost systemd[1]: libpod-5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.scope: Consumed 2.462s CPU time. Feb 20 04:00:49 localhost podman[111409]: 2026-02-20 09:00:49.306761207 +0000 UTC m=+0.126795273 container died 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, architecture=x86_64, build-date=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, 
org.opencontainers.image.created=2026-01-12T22:36:40Z, url=https://www.redhat.com, config_id=tripleo_step4, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, container_name=ovn_controller, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1) Feb 20 04:00:49 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.timer: Deactivated successfully. Feb 20 04:00:49 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367. Feb 20 04:00:49 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Failed to open /run/systemd/transient/5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: No such file or directory Feb 20 04:00:49 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367-userdata-shm.mount: Deactivated successfully. 
Feb 20 04:00:49 localhost podman[111409]: 2026-02-20 09:00:49.343727924 +0000 UTC m=+0.163761970 container cleanup 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://www.redhat.com, version=17.1.13, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:36:40Z, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1, build-date=2026-01-12T22:36:40Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 
distribution-scope=public, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, release=1766032510, config_id=tripleo_step4, container_name=ovn_controller) Feb 20 04:00:49 localhost podman[111409]: ovn_controller Feb 20 04:00:49 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.timer: Failed to open /run/systemd/transient/5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.timer: No such file or directory Feb 20 04:00:49 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Failed to open /run/systemd/transient/5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: No such file or directory Feb 20 04:00:49 localhost podman[111424]: 2026-02-20 09:00:49.38255567 +0000 UTC m=+0.067376669 container cleanup 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.13, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, container_name=ovn_controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:36:40Z, 
org.opencontainers.image.created=2026-01-12T22:36:40Z, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, name=rhosp-rhel9/openstack-ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, batch=17.1_20260112.1) Feb 20 04:00:49 localhost systemd[1]: libpod-conmon-5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.scope: Deactivated successfully. Feb 20 04:00:49 localhost systemd[1]: var-lib-containers-storage-overlay-e301de541839455328f58c15918525438472e11d40dc3ec7722a3fc836c44350-merged.mount: Deactivated successfully. 
Feb 20 04:00:49 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.timer: Failed to open /run/systemd/transient/5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.timer: No such file or directory Feb 20 04:00:49 localhost systemd[1]: 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: Failed to open /run/systemd/transient/5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367.service: No such file or directory Feb 20 04:00:49 localhost podman[111438]: 2026-02-20 09:00:49.477694988 +0000 UTC m=+0.064100961 container cleanup 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, io.buildah.version=1.41.5, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, managed_by=tripleo_ansible, release=1766032510, io.openshift.expose-services=, container_name=ovn_controller, architecture=x86_64, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.created=2026-01-12T22:36:40Z, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2026-01-12T22:36:40Z) Feb 20 04:00:49 localhost podman[111438]: ovn_controller Feb 20 04:00:49 localhost systemd[1]: tripleo_ovn_controller.service: Deactivated successfully. Feb 20 04:00:49 localhost systemd[1]: Stopped ovn_controller container. Feb 20 04:00:50 localhost python3.9[111543]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ovn_metadata_agent.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 04:00:50 localhost systemd[1]: Reloading. Feb 20 04:00:50 localhost systemd-rc-local-generator[111572]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 04:00:50 localhost systemd-sysv-generator[111577]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 04:00:50 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 20 04:00:50 localhost systemd[1]: Stopping ovn_metadata_agent container... Feb 20 04:00:51 localhost systemd[1]: libpod-34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.scope: Deactivated successfully. Feb 20 04:00:51 localhost systemd[1]: libpod-34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.scope: Consumed 9.150s CPU time. Feb 20 04:00:51 localhost podman[111598]: 2026-02-20 09:00:51.449770719 +0000 UTC m=+0.915130574 container died 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:56:19Z, build-date=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.5, tcib_managed=true, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1) Feb 20 04:00:51 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.timer: Deactivated successfully. Feb 20 04:00:51 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7. 
Feb 20 04:00:51 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Failed to open /run/systemd/transient/34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: No such file or directory Feb 20 04:00:51 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7-userdata-shm.mount: Deactivated successfully. Feb 20 04:00:51 localhost podman[111598]: 2026-02-20 09:00:51.513030387 +0000 UTC m=+0.978390192 container cleanup 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.expose-services=, io.buildah.version=1.41.5, container_name=ovn_metadata_agent, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1766032510, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, version=17.1.13, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, batch=17.1_20260112.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2026-01-12T22:56:19Z, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0) Feb 20 04:00:51 localhost podman[111598]: ovn_metadata_agent Feb 20 04:00:51 localhost systemd[1]: var-lib-containers-storage-overlay-9f3f94cec83e204d6816769a4a7f242e660c4b352d77cc80230032ff7b68a014-merged.mount: Deactivated successfully. 
Feb 20 04:00:51 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.timer: Failed to open /run/systemd/transient/34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.timer: No such file or directory Feb 20 04:00:51 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Failed to open /run/systemd/transient/34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: No such file or directory Feb 20 04:00:51 localhost podman[111659]: 2026-02-20 09:00:51.597273514 +0000 UTC m=+0.138828154 container cleanup 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.5, distribution-scope=public, architecture=x86_64, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, version=17.1.13, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn) Feb 20 04:00:51 localhost systemd[1]: libpod-conmon-34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.scope: Deactivated successfully. 
Feb 20 04:00:51 localhost podman[111687]: error opening file `/run/crun/34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7/status`: No such file or directory Feb 20 04:00:51 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.timer: Failed to open /run/systemd/transient/34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.timer: No such file or directory Feb 20 04:00:51 localhost systemd[1]: 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: Failed to open /run/systemd/transient/34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7.service: No such file or directory Feb 20 04:00:51 localhost podman[111675]: 2026-02-20 09:00:51.715615531 +0000 UTC m=+0.079673186 container cleanup 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, distribution-scope=public, tcib_managed=true, build-date=2026-01-12T22:56:19Z, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1766032510, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:56:19Z, url=https://www.redhat.com, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, batch=17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn) Feb 20 04:00:51 localhost podman[111675]: ovn_metadata_agent Feb 20 04:00:51 localhost systemd[1]: tripleo_ovn_metadata_agent.service: Deactivated successfully. 
Feb 20 04:00:51 localhost systemd[1]: Stopped ovn_metadata_agent container. Feb 20 04:00:52 localhost python3.9[111780]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_rsyslog.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 04:00:52 localhost systemd[1]: Reloading. Feb 20 04:00:52 localhost systemd-sysv-generator[111813]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 04:00:52 localhost systemd-rc-local-generator[111808]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 04:00:52 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 04:00:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65391 DF PROTO=TCP SPT=49180 DPT=9882 SEQ=1847910549 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEB6F0D0000000001030307) Feb 20 04:00:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56322 DF PROTO=TCP SPT=50986 DPT=9100 SEQ=984355842 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEB81240000000001030307) Feb 20 04:00:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10770 DF PROTO=TCP SPT=52384 DPT=9105 SEQ=2163886401 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEB82410000000001030307) Feb 20 04:01:00 
localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56324 DF PROTO=TCP SPT=50986 DPT=9100 SEQ=984355842 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEB8D0E0000000001030307) Feb 20 04:01:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56325 DF PROTO=TCP SPT=50986 DPT=9100 SEQ=984355842 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEB9CCE0000000001030307) Feb 20 04:01:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10754 DF PROTO=TCP SPT=52168 DPT=9101 SEQ=1803440756 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEBA60D0000000001030307) Feb 20 04:01:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18999 DF PROTO=TCP SPT=35224 DPT=9101 SEQ=4103322221 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEBB00D0000000001030307) Feb 20 04:01:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17384 DF PROTO=TCP SPT=49040 DPT=9102 SEQ=1086917298 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEBBC0D0000000001030307) Feb 20 04:01:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1744 DF PROTO=TCP SPT=56434 DPT=9102 SEQ=1616005800 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEBC80D0000000001030307) 
Feb 20 04:01:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53119 DF PROTO=TCP SPT=37792 DPT=9882 SEQ=3271976021 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEBD48D0000000001030307) Feb 20 04:01:21 localhost sshd[111859]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:01:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53120 DF PROTO=TCP SPT=37792 DPT=9882 SEQ=3271976021 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEBE44E0000000001030307) Feb 20 04:01:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37374 DF PROTO=TCP SPT=37296 DPT=9100 SEQ=884177755 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEBF6540000000001030307) Feb 20 04:01:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21911 DF PROTO=TCP SPT=49342 DPT=9105 SEQ=2156602511 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEBF7720000000001030307) Feb 20 04:01:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37376 DF PROTO=TCP SPT=37296 DPT=9100 SEQ=884177755 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEC024D0000000001030307) Feb 20 04:01:34 localhost sshd[111861]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:01:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 
DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37377 DF PROTO=TCP SPT=37296 DPT=9100 SEQ=884177755 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEC120E0000000001030307) Feb 20 04:01:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39439 DF PROTO=TCP SPT=33422 DPT=9101 SEQ=287251404 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEC1C0D0000000001030307) Feb 20 04:01:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53974 DF PROTO=TCP SPT=36580 DPT=9102 SEQ=4160782473 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEC258D0000000001030307) Feb 20 04:01:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37378 DF PROTO=TCP SPT=37296 DPT=9100 SEQ=884177755 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEC320D0000000001030307) Feb 20 04:01:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53976 DF PROTO=TCP SPT=36580 DPT=9102 SEQ=4160782473 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEC3D4D0000000001030307) Feb 20 04:01:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24430 DF PROTO=TCP SPT=34116 DPT=9882 SEQ=1662161269 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEC498D0000000001030307) Feb 20 04:01:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 
SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24431 DF PROTO=TCP SPT=34116 DPT=9882 SEQ=1662161269 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEC594D0000000001030307) Feb 20 04:01:57 localhost sshd[111991]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:01:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53345 DF PROTO=TCP SPT=33014 DPT=9100 SEQ=1380222940 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEC6B830000000001030307) Feb 20 04:01:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=880 DF PROTO=TCP SPT=56762 DPT=9105 SEQ=1908108993 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEC6CA20000000001030307) Feb 20 04:02:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53347 DF PROTO=TCP SPT=33014 DPT=9100 SEQ=1380222940 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEC778D0000000001030307) Feb 20 04:02:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53348 DF PROTO=TCP SPT=33014 DPT=9100 SEQ=1380222940 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEC874D0000000001030307) Feb 20 04:02:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27890 DF PROTO=TCP SPT=42560 DPT=9101 SEQ=2184530540 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEC920E0000000001030307) Feb 20 04:02:09 localhost 
kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6337 DF PROTO=TCP SPT=37188 DPT=9102 SEQ=3932129645 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEC9ACE0000000001030307) Feb 20 04:02:12 localhost systemd[1]: session-36.scope: Deactivated successfully. Feb 20 04:02:12 localhost systemd[1]: session-36.scope: Consumed 19.033s CPU time. Feb 20 04:02:12 localhost systemd-logind[760]: Session 36 logged out. Waiting for processes to exit. Feb 20 04:02:12 localhost systemd-logind[760]: Removed session 36. Feb 20 04:02:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53349 DF PROTO=TCP SPT=33014 DPT=9100 SEQ=1380222940 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AECA80D0000000001030307) Feb 20 04:02:13 localhost sshd[111993]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:02:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6339 DF PROTO=TCP SPT=37188 DPT=9102 SEQ=3932129645 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AECB28E0000000001030307) Feb 20 04:02:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51422 DF PROTO=TCP SPT=37872 DPT=9882 SEQ=99810236 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AECBECD0000000001030307) Feb 20 04:02:20 localhost sshd[111995]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:02:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 
TTL=62 ID=51423 DF PROTO=TCP SPT=37872 DPT=9882 SEQ=99810236 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AECCE8D0000000001030307) Feb 20 04:02:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19192 DF PROTO=TCP SPT=40844 DPT=9100 SEQ=674800386 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AECE0B40000000001030307) Feb 20 04:02:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65293 DF PROTO=TCP SPT=50680 DPT=9105 SEQ=2100726976 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AECE1D20000000001030307) Feb 20 04:02:28 localhost sshd[111998]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:02:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19194 DF PROTO=TCP SPT=40844 DPT=9100 SEQ=674800386 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AECECCE0000000001030307) Feb 20 04:02:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19195 DF PROTO=TCP SPT=40844 DPT=9100 SEQ=674800386 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AECFC8D0000000001030307) Feb 20 04:02:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37306 DF PROTO=TCP SPT=56842 DPT=9101 SEQ=2209385878 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AED060D0000000001030307) Feb 20 04:02:38 localhost sshd[112000]: main: sshd: ssh-rsa algorithm is disabled Feb 20 
04:02:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20906 DF PROTO=TCP SPT=35680 DPT=9102 SEQ=1453231648 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AED100D0000000001030307) Feb 20 04:02:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53979 DF PROTO=TCP SPT=36580 DPT=9102 SEQ=4160782473 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AED1C0D0000000001030307) Feb 20 04:02:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20908 DF PROTO=TCP SPT=35680 DPT=9102 SEQ=1453231648 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AED27CD0000000001030307) Feb 20 04:02:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25500 DF PROTO=TCP SPT=47946 DPT=9882 SEQ=388619254 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AED344D0000000001030307) Feb 20 04:02:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25501 DF PROTO=TCP SPT=47946 DPT=9882 SEQ=388619254 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AED440E0000000001030307) Feb 20 04:02:53 localhost sshd[112002]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:02:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=537 DF PROTO=TCP SPT=33420 DPT=9100 
SEQ=1444655034 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AED55E30000000001030307) Feb 20 04:02:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8431 DF PROTO=TCP SPT=45324 DPT=9105 SEQ=2056375811 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AED57010000000001030307) Feb 20 04:03:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=539 DF PROTO=TCP SPT=33420 DPT=9100 SEQ=1444655034 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AED61CD0000000001030307) Feb 20 04:03:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=540 DF PROTO=TCP SPT=33420 DPT=9100 SEQ=1444655034 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AED718D0000000001030307) Feb 20 04:03:06 localhost sshd[112082]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:03:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56056 DF PROTO=TCP SPT=38920 DPT=9101 SEQ=177858483 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AED7C0D0000000001030307) Feb 20 04:03:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24359 DF PROTO=TCP SPT=33584 DPT=9102 SEQ=4036193676 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AED850D0000000001030307) Feb 20 04:03:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 
DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=541 DF PROTO=TCP SPT=33420 DPT=9100 SEQ=1444655034 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AED920E0000000001030307) Feb 20 04:03:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24361 DF PROTO=TCP SPT=33584 DPT=9102 SEQ=4036193676 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AED9CCD0000000001030307) Feb 20 04:03:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19115 DF PROTO=TCP SPT=35304 DPT=9882 SEQ=4089438904 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEDA94D0000000001030307) Feb 20 04:03:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19116 DF PROTO=TCP SPT=35304 DPT=9882 SEQ=4089438904 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEDB90E0000000001030307) Feb 20 04:03:24 localhost sshd[112084]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:03:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25704 DF PROTO=TCP SPT=49312 DPT=9100 SEQ=1844910942 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEDCB140000000001030307) Feb 20 04:03:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52318 DF PROTO=TCP SPT=48148 DPT=9105 SEQ=1888978077 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEDCC310000000001030307) Feb 20 04:03:30 localhost kernel: DROPPING: 
IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25706 DF PROTO=TCP SPT=49312 DPT=9100 SEQ=1844910942 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEDD70D0000000001030307) Feb 20 04:03:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25707 DF PROTO=TCP SPT=49312 DPT=9100 SEQ=1844910942 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEDE6CD0000000001030307) Feb 20 04:03:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56516 DF PROTO=TCP SPT=60346 DPT=9101 SEQ=2134119730 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEDF00D0000000001030307) Feb 20 04:03:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56057 DF PROTO=TCP SPT=38920 DPT=9101 SEQ=177858483 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEDFA0D0000000001030307) Feb 20 04:03:41 localhost sshd[112086]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:03:41 localhost systemd-logind[760]: New session 37 of user zuul. Feb 20 04:03:41 localhost systemd[1]: Started Session 37 of User zuul. 
Feb 20 04:03:42 localhost python3.9[112167]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:03:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20911 DF PROTO=TCP SPT=35680 DPT=9102 SEQ=1453231648 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEE060D0000000001030307) Feb 20 04:03:43 localhost python3.9[112259]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:03:43 localhost python3.9[112351]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_collectd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:03:44 localhost python3.9[112443]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_iscsid.service state=absent recurse=False force=False follow=True 
modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:03:44 localhost python3.9[112535]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_logrotate_crond.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:03:45 localhost python3.9[112627]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_metrics_qdr.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:03:45 localhost python3.9[112719]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_neutron_dhcp.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:03:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48724 DF PROTO=TCP SPT=56512 DPT=9102 SEQ=2728144405 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A3AEE120E0000000001030307) Feb 20 04:03:46 localhost python3.9[112811]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_neutron_l3_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:03:46 localhost python3.9[112903]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_neutron_ovs_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:03:47 localhost python3.9[112995]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:03:48 localhost python3.9[113087]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:03:48 localhost 
python3.9[113179]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:03:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22417 DF PROTO=TCP SPT=42686 DPT=9882 SEQ=3645215084 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEE1E4E0000000001030307) Feb 20 04:03:49 localhost python3.9[113271]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:03:49 localhost python3.9[113363]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:03:50 localhost python3.9[113455]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:03:50 localhost python3.9[113547]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud_recover.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:03:51 localhost python3.9[113639]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:03:52 localhost python3.9[113731]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:03:52 localhost python3.9[113823]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ovn_controller.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:03:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22418 DF PROTO=TCP SPT=42686 DPT=9882 SEQ=3645215084 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEE2E0D0000000001030307)
Feb 20 04:03:53 localhost python3.9[113915]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ovn_metadata_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:03:53 localhost python3.9[114007]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_rsyslog.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:03:54 localhost python3.9[114099]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:03:55 localhost python3.9[114191]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:03:56 localhost python3.9[114283]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_collectd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:03:56 localhost python3.9[114375]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_iscsid.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:03:57 localhost python3.9[114467]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_logrotate_crond.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:03:57 localhost python3.9[114559]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_metrics_qdr.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:03:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53596 DF PROTO=TCP SPT=55534 DPT=9100 SEQ=1923172067 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEE40430000000001030307)
Feb 20 04:03:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6196 DF PROTO=TCP SPT=33302 DPT=9105 SEQ=2706578609 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEE41620000000001030307)
Feb 20 04:03:58 localhost python3.9[114681]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_neutron_dhcp.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:03:58 localhost podman[114846]: 2026-02-20 09:03:58.821365887 +0000 UTC m=+0.087478414 container exec 17bcd3d9a6436ea04160ed4c4e04d839e51c8678402dc4e38301f79a98b3dc30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202, build-date=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, com.redhat.component=rhceph-container, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, RELEASE=main, io.openshift.expose-services=, release=1770267347, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, CEPH_POINT_RELEASE=, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64)
Feb 20 04:03:58 localhost podman[114846]: 2026-02-20 09:03:58.954645437 +0000 UTC m=+0.220757934 container exec_died 17bcd3d9a6436ea04160ed4c4e04d839e51c8678402dc4e38301f79a98b3dc30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202, RELEASE=main, vcs-type=git, io.openshift.tags=rhceph ceph, name=rhceph, release=1770267347, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, build-date=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, io.buildah.version=1.42.2, GIT_CLEAN=True, distribution-scope=public, vendor=Red Hat, Inc., GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7)
Feb 20 04:03:58 localhost python3.9[114859]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_neutron_l3_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:03:59 localhost python3.9[115034]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_neutron_ovs_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:04:00 localhost python3.9[115157]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:04:00 localhost sshd[115196]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:04:00 localhost python3.9[115266]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:04:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53598 DF PROTO=TCP SPT=55534 DPT=9100 SEQ=1923172067 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEE4C4D0000000001030307)
Feb 20 04:04:01 localhost python3.9[115358]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:04:01 localhost python3.9[115450]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:04:02 localhost python3.9[115542]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:04:02 localhost python3.9[115634]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:04:03 localhost python3.9[115726]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_recover.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:04:03 localhost sshd[115727]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:04:04 localhost python3.9[115820]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:04:04 localhost python3.9[115912]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:04:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53599 DF PROTO=TCP SPT=55534 DPT=9100 SEQ=1923172067 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEE5C0D0000000001030307)
Feb 20 04:04:05 localhost python3.9[116004]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ovn_controller.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:04:05 localhost python3.9[116096]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ovn_metadata_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:04:06 localhost python3.9[116188]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_rsyslog.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:04:07 localhost python3.9[116280]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 04:04:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15166 DF PROTO=TCP SPT=41040 DPT=9101 SEQ=398290623 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEE660D0000000001030307)
Feb 20 04:04:08 localhost python3.9[116372]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 20 04:04:09 localhost python3.9[116464]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 20 04:04:09 localhost systemd[1]: Reloading.
Feb 20 04:04:09 localhost systemd-rc-local-generator[116489]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 04:04:09 localhost systemd-sysv-generator[116492]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 04:04:09 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 04:04:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45722 DF PROTO=TCP SPT=47710 DPT=9102 SEQ=3475241532 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEE6F8D0000000001030307)
Feb 20 04:04:11 localhost python3.9[116592]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 04:04:12 localhost python3.9[116685]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_ipmi.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 04:04:13 localhost python3.9[116778]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_collectd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 04:04:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53600 DF PROTO=TCP SPT=55534 DPT=9100 SEQ=1923172067 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEE7C0D0000000001030307)
Feb 20 04:04:13 localhost python3.9[116871]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_iscsid.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 04:04:14 localhost python3.9[116964]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_logrotate_crond.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 04:04:15 localhost python3.9[117057]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_metrics_qdr.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 04:04:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45724 DF PROTO=TCP SPT=47710 DPT=9102 SEQ=3475241532 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEE874D0000000001030307)
Feb 20 04:04:16 localhost python3.9[117150]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_neutron_dhcp.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 04:04:17 localhost python3.9[117243]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_neutron_l3_agent.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 04:04:17 localhost python3.9[117336]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_neutron_ovs_agent.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 04:04:18 localhost python3.9[117429]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 04:04:18 localhost python3.9[117522]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 04:04:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=614 DF PROTO=TCP SPT=36168 DPT=9882 SEQ=3110148295 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEE938F0000000001030307)
Feb 20 04:04:19 localhost python3.9[117615]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 04:04:20 localhost python3.9[117708]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 04:04:20 localhost sshd[117795]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:04:20 localhost python3.9[117802]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 04:04:21 localhost sshd[117819]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:04:21 localhost python3.9[117897]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 04:04:22 localhost python3.9[117991]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud_recover.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 04:04:22 localhost python3.9[118084]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 04:04:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=615 DF PROTO=TCP SPT=36168 DPT=9882 SEQ=3110148295 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEEA34E0000000001030307)
Feb 20 04:04:23 localhost python3.9[118177]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 04:04:24 localhost python3.9[118270]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ovn_controller.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 04:04:25 localhost python3.9[118363]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ovn_metadata_agent.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 04:04:26 localhost python3.9[118456]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_rsyslog.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 04:04:26 localhost systemd[1]: session-37.scope: Deactivated successfully.
Feb 20 04:04:26 localhost systemd[1]: session-37.scope: Consumed 30.709s CPU time.
Feb 20 04:04:26 localhost systemd-logind[760]: Session 37 logged out. Waiting for processes to exit.
Feb 20 04:04:26 localhost systemd-logind[760]: Removed session 37.
Feb 20 04:04:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26735 DF PROTO=TCP SPT=32938 DPT=9100 SEQ=681487195 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEEB5750000000001030307)
Feb 20 04:04:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42835 DF PROTO=TCP SPT=41370 DPT=9105 SEQ=4113860683 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEEB6910000000001030307)
Feb 20 04:04:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26737 DF PROTO=TCP SPT=32938 DPT=9100 SEQ=681487195 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEEC18D0000000001030307)
Feb 20 04:04:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26738 DF PROTO=TCP SPT=32938 DPT=9100 SEQ=681487195 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEED14D0000000001030307)
Feb 20 04:04:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45480 DF PROTO=TCP SPT=47686 DPT=9101 SEQ=1350362334 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEEDA0D0000000001030307)
Feb 20 04:04:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28260 DF PROTO=TCP SPT=44602 DPT=9102 SEQ=2037423509 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEEE4CD0000000001030307)
Feb 20 04:04:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48727 DF PROTO=TCP SPT=56512 DPT=9102 SEQ=2728144405 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEEF00D0000000001030307)
Feb 20 04:04:44 localhost sshd[118472]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:04:44 localhost systemd-logind[760]: New session 38 of user zuul.
Feb 20 04:04:44 localhost systemd[1]: Started Session 38 of User zuul.
Feb 20 04:04:45 localhost python3.9[118565]: ansible-ansible.legacy.ping Invoked with data=pong
Feb 20 04:04:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28262 DF PROTO=TCP SPT=44602 DPT=9102 SEQ=2037423509 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEEFC8D0000000001030307)
Feb 20 04:04:46 localhost python3.9[118669]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 04:04:47 localhost python3.9[118761]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 04:04:48 localhost python3.9[118854]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 04:04:49 localhost python3.9[118946]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:04:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13650 DF PROTO=TCP SPT=36408 DPT=9882 SEQ=71831261 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEF08CD0000000001030307)
Feb 20 04:04:49 localhost python3.9[119038]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 04:04:50 localhost python3.9[119111]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1771578289.2050688-172-139378286422343/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:04:51 localhost python3.9[119203]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 04:04:52 localhost python3.9[119299]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 04:04:52 localhost python3.9[119391]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 04:04:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13651 DF PROTO=TCP SPT=36408 DPT=9882 SEQ=71831261 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEF188D0000000001030307)
Feb 20 04:04:53 localhost python3.9[119481]: ansible-ansible.builtin.service_facts Invoked
Feb 20 04:04:53 localhost network[119498]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 20 04:04:53 localhost network[119499]: 'network-scripts' will be removed from distribution in near future.
Feb 20 04:04:53 localhost network[119500]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 20 04:04:54 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 04:04:57 localhost python3.9[119698]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:04:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40308 DF PROTO=TCP SPT=38164 DPT=9100 SEQ=2674634058 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEF2AA30000000001030307) Feb 20 04:04:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61783 DF PROTO=TCP SPT=49804 DPT=9105 SEQ=3943703495 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEF2BC20000000001030307) Feb 20 04:04:58 localhost python3.9[119788]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 20 04:04:59 localhost python3.9[119884]: ansible-ansible.legacy.command Invoked with _raw_params=# This is a hack to deploy RDO Delorean repos to RHEL as if it were Centos 9 Stream#012set -euxo pipefail#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./repo-setup-main#012# This is required for FIPS enabled until trunk.rdoproject.org#012# is not being served from a centos7 host, tracked by#012# https://issues.redhat.com/browse/RHOSZUUL-1517#012dnf -y install crypto-policies#012update-crypto-policies --set FIPS:NO-ENFORCE-EMS#012./venv/bin/repo-setup current-podified -b antelope -d 
centos9 --stream#012#012# Exclude ceph-common-18.2.7 as it's pulling newer openssl not compatible#012# with rhel 9.2 openssh#012dnf config-manager --setopt centos9-storage.exclude="ceph-common-18.2.7" --save#012dnf -y upgrade openstack-selinux#012rm -f /run/virtlogd.pid#012#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 04:05:00 localhost sshd[119925]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:05:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40310 DF PROTO=TCP SPT=38164 DPT=9100 SEQ=2674634058 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEF368D0000000001030307) Feb 20 04:05:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40311 DF PROTO=TCP SPT=38164 DPT=9100 SEQ=2674634058 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEF464D0000000001030307) Feb 20 04:05:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27343 DF PROTO=TCP SPT=36332 DPT=9101 SEQ=1347718426 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEF500D0000000001030307) Feb 20 04:05:09 localhost systemd[1]: Stopping OpenSSH server daemon... Feb 20 04:05:09 localhost systemd[1]: sshd.service: Deactivated successfully. Feb 20 04:05:09 localhost systemd[1]: Stopped OpenSSH server daemon. Feb 20 04:05:09 localhost systemd[1]: sshd.service: Consumed 11.353s CPU time. Feb 20 04:05:09 localhost systemd[1]: Stopped target sshd-keygen.target. 
Feb 20 04:05:09 localhost systemd[1]: Stopping sshd-keygen.target... Feb 20 04:05:09 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Feb 20 04:05:09 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Feb 20 04:05:09 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Feb 20 04:05:09 localhost systemd[1]: Reached target sshd-keygen.target. Feb 20 04:05:09 localhost systemd[1]: Starting OpenSSH server daemon... Feb 20 04:05:09 localhost sshd[120006]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:05:09 localhost systemd[1]: Started OpenSSH server daemon. Feb 20 04:05:09 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Feb 20 04:05:09 localhost systemd[1]: Starting man-db-cache-update.service... Feb 20 04:05:09 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Feb 20 04:05:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45493 DF PROTO=TCP SPT=59964 DPT=9102 SEQ=3929253849 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEF59CD0000000001030307) Feb 20 04:05:10 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Feb 20 04:05:10 localhost systemd[1]: Finished man-db-cache-update.service. Feb 20 04:05:10 localhost systemd[1]: run-r539f572d9b004f7caed8f3c114cabb42.service: Deactivated successfully. 
Feb 20 04:05:10 localhost systemd[1]: run-rcb7b3d66d0604517bc5a6e8e5e0fee17.service: Deactivated successfully. Feb 20 04:05:10 localhost systemd[1]: Stopping OpenSSH server daemon... Feb 20 04:05:10 localhost systemd[1]: sshd.service: Deactivated successfully. Feb 20 04:05:10 localhost systemd[1]: Stopped OpenSSH server daemon. Feb 20 04:05:10 localhost systemd[1]: Stopped target sshd-keygen.target. Feb 20 04:05:10 localhost systemd[1]: Stopping sshd-keygen.target... Feb 20 04:05:10 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Feb 20 04:05:10 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Feb 20 04:05:10 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Feb 20 04:05:10 localhost systemd[1]: Reached target sshd-keygen.target. Feb 20 04:05:10 localhost systemd[1]: Starting OpenSSH server daemon... Feb 20 04:05:10 localhost sshd[120181]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:05:10 localhost systemd[1]: Started OpenSSH server daemon. 
Feb 20 04:05:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45727 DF PROTO=TCP SPT=47710 DPT=9102 SEQ=3475241532 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEF660D0000000001030307) Feb 20 04:05:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45495 DF PROTO=TCP SPT=59964 DPT=9102 SEQ=3929253849 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEF718D0000000001030307) Feb 20 04:05:18 localhost sshd[120221]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:05:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45620 DF PROTO=TCP SPT=45576 DPT=9882 SEQ=2635295760 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEF7E0D0000000001030307) Feb 20 04:05:21 localhost sshd[120277]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:05:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45621 DF PROTO=TCP SPT=45576 DPT=9882 SEQ=2635295760 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEF8DCE0000000001030307) Feb 20 04:05:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44477 DF PROTO=TCP SPT=41152 DPT=9100 SEQ=1940275319 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEF9FD40000000001030307) Feb 20 04:05:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 
DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37206 DF PROTO=TCP SPT=37502 DPT=9105 SEQ=583412195 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEFA0F20000000001030307) Feb 20 04:05:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44479 DF PROTO=TCP SPT=41152 DPT=9100 SEQ=1940275319 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEFABCD0000000001030307) Feb 20 04:05:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44480 DF PROTO=TCP SPT=41152 DPT=9100 SEQ=1940275319 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEFBB8E0000000001030307) Feb 20 04:05:36 localhost sshd[120311]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:05:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40231 DF PROTO=TCP SPT=53964 DPT=9101 SEQ=681835345 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEFC60E0000000001030307) Feb 20 04:05:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52297 DF PROTO=TCP SPT=41060 DPT=9102 SEQ=1619576610 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEFCF0D0000000001030307) Feb 20 04:05:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44481 DF PROTO=TCP SPT=41152 DPT=9100 SEQ=1940275319 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEFDC0D0000000001030307) Feb 20 04:05:46 localhost kernel: DROPPING: 
IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52299 DF PROTO=TCP SPT=41060 DPT=9102 SEQ=1619576610 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEFE6CD0000000001030307) Feb 20 04:05:47 localhost sshd[120325]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:05:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34396 DF PROTO=TCP SPT=56646 DPT=9882 SEQ=2911302442 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AEFF30D0000000001030307) Feb 20 04:05:51 localhost sshd[120327]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:05:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34397 DF PROTO=TCP SPT=56646 DPT=9882 SEQ=2911302442 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF002CD0000000001030307) Feb 20 04:05:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8445 DF PROTO=TCP SPT=59854 DPT=9100 SEQ=2284647313 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF015040000000001030307) Feb 20 04:05:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8285 DF PROTO=TCP SPT=37496 DPT=9105 SEQ=4255652673 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF016220000000001030307) Feb 20 04:06:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8447 
DF PROTO=TCP SPT=59854 DPT=9100 SEQ=2284647313 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF0210D0000000001030307) Feb 20 04:06:04 localhost sshd[120598]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:06:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8448 DF PROTO=TCP SPT=59854 DPT=9100 SEQ=2284647313 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF030CD0000000001030307) Feb 20 04:06:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33336 DF PROTO=TCP SPT=36620 DPT=9101 SEQ=2587310498 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF03A0D0000000001030307) Feb 20 04:06:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40232 DF PROTO=TCP SPT=53964 DPT=9101 SEQ=681835345 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF0440D0000000001030307) Feb 20 04:06:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45498 DF PROTO=TCP SPT=59964 DPT=9102 SEQ=3929253849 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF0500E0000000001030307) Feb 20 04:06:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44031 DF PROTO=TCP SPT=51072 DPT=9102 SEQ=2502694803 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF05C0D0000000001030307) Feb 20 04:06:16 localhost sshd[120633]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:06:19 
localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5067 DF PROTO=TCP SPT=49934 DPT=9882 SEQ=3446497692 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF0684E0000000001030307) Feb 20 04:06:21 localhost kernel: SELinux: Converting 2741 SID table entries... Feb 20 04:06:21 localhost kernel: SELinux: policy capability network_peer_controls=1 Feb 20 04:06:21 localhost kernel: SELinux: policy capability open_perms=1 Feb 20 04:06:21 localhost kernel: SELinux: policy capability extended_socket_class=1 Feb 20 04:06:21 localhost kernel: SELinux: policy capability always_check_network=0 Feb 20 04:06:21 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Feb 20 04:06:21 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 20 04:06:21 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Feb 20 04:06:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5068 DF PROTO=TCP SPT=49934 DPT=9882 SEQ=3446497692 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF0780D0000000001030307) Feb 20 04:06:23 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=17 res=1 Feb 20 04:06:23 localhost python3.9[120833]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:06:24 localhost python3.9[120925]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/edpm.fact follow=False 
get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:06:24 localhost python3.9[120998]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/edpm.fact mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771578383.9108403-421-205416038255104/.source.fact _original_basename=.qvn584cb follow=False checksum=d686dccd4d8cd0883f3e3bc0a6f664c73290ba68 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:06:25 localhost python3.9[121088]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 20 04:06:26 localhost python3.9[121186]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Feb 20 04:06:27 localhost python3.9[121240]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Feb 20 04:06:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac 
MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40936 DF PROTO=TCP SPT=53318 DPT=9100 SEQ=3338147308 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF08A360000000001030307) Feb 20 04:06:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7931 DF PROTO=TCP SPT=36094 DPT=9105 SEQ=3654570705 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF08B520000000001030307) Feb 20 04:06:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40938 DF PROTO=TCP SPT=53318 DPT=9100 SEQ=3338147308 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF0964D0000000001030307) Feb 20 04:06:31 localhost systemd[1]: Reloading. Feb 20 04:06:31 localhost systemd-sysv-generator[121281]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 04:06:31 localhost systemd-rc-local-generator[121276]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 04:06:31 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 20 04:06:31 localhost systemd[1]: Queuing reload/restart jobs for marked units… Feb 20 04:06:33 localhost python3.9[121380]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 04:06:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40939 DF PROTO=TCP SPT=53318 DPT=9100 SEQ=3338147308 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF0A60E0000000001030307) Feb 20 04:06:35 localhost python3.9[121619]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False Feb 20 04:06:36 localhost python3.9[121711]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None Feb 20 04:06:37 localhost python3.9[121804]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:06:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 
PREC=0x00 TTL=62 ID=58937 DF PROTO=TCP SPT=40902 DPT=9101 SEQ=1886149764 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF0B00D0000000001030307) Feb 20 04:06:38 localhost python3.9[121896]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None Feb 20 04:06:39 localhost python3.9[121988]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 20 04:06:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41037 DF PROTO=TCP SPT=39390 DPT=9102 SEQ=2999599742 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF0B98D0000000001030307) Feb 20 04:06:40 localhost python3.9[122080]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:06:40 localhost python3.9[122153]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771578399.72994-745-241018482440901/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=cb143134597e8d09980d1dcc1949f9a4232e36a1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None 
attributes=None Feb 20 04:06:41 localhost python3.9[122245]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 20 04:06:43 localhost python3.9[122339]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None Feb 20 04:06:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40940 DF PROTO=TCP SPT=53318 DPT=9100 SEQ=3338147308 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF0C60D0000000001030307) Feb 20 04:06:44 localhost python3.9[122432]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None Feb 20 04:06:45 localhost python3.9[122525]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None Feb 20 04:06:45 localhost python3.9[122623]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None Feb 20 04:06:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41039 DF PROTO=TCP SPT=39390 DPT=9102 SEQ=2999599742 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF0D14D0000000001030307) Feb 20 04:06:46 localhost python3.9[122715]: ansible-ansible.legacy.dnf Invoked with 
name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Feb 20 04:06:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41464 DF PROTO=TCP SPT=59274 DPT=9882 SEQ=1683564841 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF0DD8E0000000001030307) Feb 20 04:06:51 localhost python3.9[122809]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 20 04:06:52 localhost python3.9[122902]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:06:52 localhost python3.9[122975]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771578411.7034051-1018-172030053114873/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False 
content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Feb 20 04:06:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41465 DF PROTO=TCP SPT=59274 DPT=9882 SEQ=1683564841 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF0ED4D0000000001030307) Feb 20 04:06:53 localhost python3.9[123067]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 20 04:06:53 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 20 04:06:53 localhost systemd[1]: Stopped Load Kernel Modules. Feb 20 04:06:53 localhost systemd[1]: Stopping Load Kernel Modules... Feb 20 04:06:53 localhost systemd[1]: Starting Load Kernel Modules... Feb 20 04:06:53 localhost systemd-modules-load[123071]: Module 'msr' is built in Feb 20 04:06:53 localhost systemd[1]: Finished Load Kernel Modules. 
Feb 20 04:06:54 localhost python3.9[123164]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:06:54 localhost python3.9[123237]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771578413.9859602-1087-278486044417736/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Feb 20 04:06:56 localhost python3.9[123329]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Feb 20 04:06:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44489 DF PROTO=TCP SPT=44506 DPT=9100 SEQ=2402341865 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF0FF630000000001030307) Feb 20 04:06:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43077 DF PROTO=TCP SPT=51340 
DPT=9105 SEQ=3467821456 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF100820000000001030307) Feb 20 04:07:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44491 DF PROTO=TCP SPT=44506 DPT=9100 SEQ=2402341865 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF10B4D0000000001030307) Feb 20 04:07:03 localhost python3.9[123437]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 20 04:07:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44492 DF PROTO=TCP SPT=44506 DPT=9100 SEQ=2402341865 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF11B0D0000000001030307) Feb 20 04:07:05 localhost python3.9[123576]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile Feb 20 04:07:05 localhost python3.9[123681]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 20 04:07:06 localhost python3.9[123773]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 04:07:06 localhost systemd[1]: Stopping Dynamic System Tuning Daemon... Feb 20 04:07:06 localhost systemd[1]: tuned.service: Deactivated successfully. Feb 20 04:07:06 localhost systemd[1]: Stopped Dynamic System Tuning Daemon. Feb 20 04:07:06 localhost systemd[1]: tuned.service: Consumed 1.708s CPU time, no IO. 
Feb 20 04:07:06 localhost systemd[1]: Starting Dynamic System Tuning Daemon...
Feb 20 04:07:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13547 DF PROTO=TCP SPT=41780 DPT=9101 SEQ=197993020 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF1240E0000000001030307)
Feb 20 04:07:07 localhost systemd[1]: Started Dynamic System Tuning Daemon.
Feb 20 04:07:09 localhost python3.9[123875]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Feb 20 04:07:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53397 DF PROTO=TCP SPT=59044 DPT=9102 SEQ=4047376429 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF12E8D0000000001030307)
Feb 20 04:07:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44034 DF PROTO=TCP SPT=51072 DPT=9102 SEQ=2502694803 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF13A0D0000000001030307)
Feb 20 04:07:13 localhost python3.9[123967]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 04:07:13 localhost sshd[123970]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:07:14 localhost sshd[123972]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:07:14 localhost systemd[1]: Reloading.
Feb 20 04:07:14 localhost systemd-rc-local-generator[123999]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 04:07:14 localhost systemd-sysv-generator[124003]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 04:07:14 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 04:07:15 localhost python3.9[124100]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 04:07:15 localhost systemd[1]: Reloading.
Feb 20 04:07:15 localhost systemd-sysv-generator[124129]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 04:07:15 localhost systemd-rc-local-generator[124125]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 04:07:15 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
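The systemd warning above flags `MemoryLimit=` (the cgroup v1 name) in `insights-client-boot.service` as deprecated in favor of `MemoryMax=`. The usual fix is a one-line rename in a drop-in override; a sketch of that textual rewrite, using a hypothetical unit-file fragment rather than the real service file:

```python
# Hypothetical [Service] fragment; the real file is
# /usr/lib/systemd/system/insights-client-boot.service (line 24 per the log).
unit_text = """[Service]
ExecStart=/usr/bin/insights-client --boot
MemoryLimit=1G
"""

# Rename the deprecated cgroup v1 directive to its cgroup v2 equivalent.
fixed = "\n".join(
    line.replace("MemoryLimit=", "MemoryMax=", 1)
    if line.startswith("MemoryLimit=") else line
    for line in unit_text.splitlines()
)
print(fixed)
```

In practice this would go into a drop-in (e.g. a `.d/override.conf`) followed by `systemctl daemon-reload`, rather than editing the packaged unit in place.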
Feb 20 04:07:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53399 DF PROTO=TCP SPT=59044 DPT=9102 SEQ=4047376429 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF1464D0000000001030307)
Feb 20 04:07:16 localhost python3.9[124230]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 04:07:17 localhost python3.9[124323]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 04:07:17 localhost kernel: Adding 1048572k swap on /swap. Priority:-2 extents:1 across:1048572k FS
Feb 20 04:07:18 localhost python3.9[124416]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 04:07:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9557 DF PROTO=TCP SPT=34776 DPT=9882 SEQ=1853246014 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF152CD0000000001030307)
Feb 20 04:07:19 localhost python3.9[124515]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 04:07:20 localhost python3.9[124608]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 20 04:07:20 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 20 04:07:20 localhost systemd[1]: Stopped Apply Kernel Variables.
Feb 20 04:07:20 localhost systemd[1]: Stopping Apply Kernel Variables...
Feb 20 04:07:20 localhost systemd[1]: Starting Apply Kernel Variables...
Feb 20 04:07:20 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 20 04:07:20 localhost systemd[1]: Finished Apply Kernel Variables.
Feb 20 04:07:21 localhost systemd[1]: session-38.scope: Deactivated successfully.
Feb 20 04:07:21 localhost systemd[1]: session-38.scope: Consumed 1min 56.457s CPU time.
Feb 20 04:07:21 localhost systemd-logind[760]: Session 38 logged out. Waiting for processes to exit.
Feb 20 04:07:21 localhost systemd-logind[760]: Removed session 38.
Feb 20 04:07:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9558 DF PROTO=TCP SPT=34776 DPT=9882 SEQ=1853246014 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF1628D0000000001030307)
Feb 20 04:07:26 localhost sshd[124629]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:07:26 localhost systemd-logind[760]: New session 39 of user zuul.
Feb 20 04:07:26 localhost systemd[1]: Started Session 39 of User zuul.
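Earlier in the run the play installed `/etc/sysctl.d/99-edpm.conf` and, above, restarts `systemd-sysctl.service` ("Apply Kernel Variables") to apply it. Such fragments are `key = value` lines with `#`/`;` comments, and each key maps to a path under `/proc/sys` (dots become slashes). A minimal parsing sketch; the settings shown are hypothetical examples, not the contents of the file from the log:

```python
# Hypothetical sysctl.d-style fragment (NOT the real 99-edpm.conf content).
sample = """
# EDPM-managed kernel settings (hypothetical values)
fs.inotify.max_user_instances = 1024
net.ipv4.ip_forward = 1
"""

def parse_sysctl(text: str) -> dict:
    """Parse key = value lines, skipping blanks and comments like sysctl(8)."""
    settings = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith(("#", ";")):
            continue
        key, _, value = line.partition("=")
        settings[key.strip()] = value.strip()
    return settings

settings = parse_sysctl(sample)
# systemd-sysctl would then write each value to /proc/sys/<key with . -> />,
# e.g. /proc/sys/net/ipv4/ip_forward.
print(settings)
```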
Feb 20 04:07:27 localhost python3.9[124722]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 04:07:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25502 DF PROTO=TCP SPT=52412 DPT=9100 SEQ=4041705378 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF174930000000001030307)
Feb 20 04:07:28 localhost sshd[124754]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:07:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28777 DF PROTO=TCP SPT=40726 DPT=9105 SEQ=1407043045 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF175B20000000001030307)
Feb 20 04:07:28 localhost python3.9[124818]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 04:07:30 localhost python3.9[124914]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 04:07:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25504 DF PROTO=TCP SPT=52412 DPT=9100 SEQ=4041705378 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF1808E0000000001030307)
Feb 20 04:07:31 localhost python3.9[125005]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
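The recurring kernel `DROPPING:` entries are netfilter LOG output (a firewall rule logging dropped SYNs to the exporter ports 9100-9105 and 9882): mostly space-separated `KEY=VALUE` tokens plus bare flag tokens such as `DF` and `SYN`. A minimal parsing sketch over a sample trimmed from one of the logged entries:

```python
# Trimmed sample from one of the kernel "DROPPING:" entries above.
sample = ("DROPPING: IN=br-ex OUT= SRC=192.168.122.10 DST=192.168.122.106 "
          "LEN=60 TTL=62 ID=25502 DF PROTO=TCP SPT=52412 DPT=9100 SYN")

def parse_drop(entry: str) -> dict:
    """Split a netfilter LOG line into KEY=VALUE fields and bare flags."""
    fields, flags = {}, []
    for token in entry.removeprefix("DROPPING:").split():
        if "=" in token:
            key, _, value = token.partition("=")
            fields[key] = value      # e.g. OUT= keeps an empty value
        else:
            flags.append(token)      # bare tokens: DF, SYN, ...
    fields["FLAGS"] = flags
    return fields

drop = parse_drop(sample)
print(drop["SRC"], drop["DPT"], drop["FLAGS"])
```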
Feb 20 04:07:32 localhost python3.9[125101]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 20 04:07:33 localhost python3.9[125155]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 20 04:07:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25505 DF PROTO=TCP SPT=52412 DPT=9100 SEQ=4041705378 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF1904D0000000001030307)
Feb 20 04:07:37 localhost sshd[125172]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:07:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3238 DF PROTO=TCP SPT=35058 DPT=9101 SEQ=372521958 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF19A0D0000000001030307)
Feb 20 04:07:37 localhost python3.9[125251]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 20 04:07:39 localhost python3.9[125398]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:07:39 localhost python3.9[125490]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 04:07:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60525 DF PROTO=TCP SPT=58116 DPT=9102 SEQ=3805701767 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF1A3CE0000000001030307)
Feb 20 04:07:40 localhost python3.9[125594]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 04:07:41 localhost python3.9[125642]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:07:41 localhost python3.9[125734]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 04:07:42 localhost python3.9[125807]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771578461.391374-318-251345394797802/.source.conf follow=False _original_basename=registries.conf.j2 checksum=804a0d01b832e60d20f779a331306df708c87b02 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 20 04:07:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41042 DF PROTO=TCP SPT=39390 DPT=9102 SEQ=2999599742 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF1B00D0000000001030307)
Feb 20 04:07:43 localhost python3.9[125899]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 20 04:07:43 localhost python3.9[125991]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 20 04:07:44 localhost python3.9[126083]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 20 04:07:45 localhost python3.9[126175]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 20 04:07:45 localhost sshd[126176]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:07:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60527 DF PROTO=TCP SPT=58116 DPT=9102 SEQ=3805701767 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF1BB8D0000000001030307)
Feb 20 04:07:46 localhost python3.9[126267]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 04:07:46 localhost python3.9[126361]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 20 04:07:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33125 DF PROTO=TCP SPT=43266 DPT=9882 SEQ=552695990 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF1C7CD0000000001030307)
Feb 20 04:07:50 localhost python3.9[126455]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openstack-network-scripts'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 20 04:07:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33126 DF PROTO=TCP SPT=43266 DPT=9882 SEQ=552695990 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF1D78D0000000001030307)
Feb 20 04:07:55 localhost python3.9[126549]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['podman', 'buildah'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 20 04:07:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46160 DF PROTO=TCP SPT=46300 DPT=9100 SEQ=3253818432 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF1E9C30000000001030307)
Feb 20 04:07:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21673 DF PROTO=TCP SPT=41096 DPT=9105 SEQ=2425340417 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF1EAE20000000001030307)
Feb 20 04:07:58 localhost auditd[726]: Audit daemon rotating log files
Feb 20 04:07:59 localhost python3.9[126649]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['tuned', 'tuned-profiles-cpu-partitioning'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 20 04:08:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46162 DF PROTO=TCP SPT=46300 DPT=9100 SEQ=3253818432 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF1F5CD0000000001030307)
Feb 20 04:08:04 localhost python3.9[126743]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['os-net-config'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 20 04:08:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46163 DF PROTO=TCP SPT=46300 DPT=9100 SEQ=3253818432 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF2058D0000000001030307)
Feb 20 04:08:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51637 DF PROTO=TCP SPT=36376 DPT=9101 SEQ=2610865891 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF2100D0000000001030307)
Feb 20 04:08:08 localhost python3.9[126899]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openssh-server'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 20 04:08:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14206 DF PROTO=TCP SPT=44774 DPT=9102 SEQ=4223709314 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF2190F0000000001030307)
Feb 20 04:08:11 localhost sshd[126931]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:08:12 localhost python3.9[127010]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 20 04:08:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46164 DF PROTO=TCP SPT=46300 DPT=9100 SEQ=3253818432 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF2260D0000000001030307)
Feb 20 04:08:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14208 DF PROTO=TCP SPT=44774 DPT=9102 SEQ=4223709314 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF230CD0000000001030307)
Feb 20 04:08:16 localhost sshd[127022]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:08:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32585 DF PROTO=TCP SPT=49782 DPT=9882 SEQ=2307890930 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF23D0E0000000001030307)
Feb 20 04:08:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32586 DF PROTO=TCP SPT=49782 DPT=9882 SEQ=2307890930 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF24CCD0000000001030307)
Feb 20 04:08:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56549 DF PROTO=TCP SPT=50692 DPT=9100 SEQ=1791395887 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF25EF30000000001030307)
Feb 20 04:08:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48118 DF PROTO=TCP SPT=53914 DPT=9105 SEQ=901655932 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF260120000000001030307)
Feb 20 04:08:29 localhost sshd[127104]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:08:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56551 DF PROTO=TCP SPT=50692 DPT=9100 SEQ=1791395887 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF26B0D0000000001030307)
Feb 20 04:08:32 localhost python3.9[127183]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['iscsi-initiator-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 20 04:08:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56552 DF PROTO=TCP SPT=50692 DPT=9100 SEQ=1791395887 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF27ACE0000000001030307)
Feb 20 04:08:36 localhost python3.9[127278]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['device-mapper-multipath'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 20 04:08:37 localhost sshd[127281]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:08:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15362 DF PROTO=TCP SPT=39400 DPT=9101 SEQ=2902945978 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF2840D0000000001030307)
Feb 20 04:08:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51638 DF PROTO=TCP SPT=36376 DPT=9101 SEQ=2610865891 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF28E0D0000000001030307)
Feb 20 04:08:40 localhost python3.9[127378]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:08:41 localhost python3.9[127483]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 04:08:41 localhost python3.9[127556]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1771578520.768023-771-161207394008508/.source.json _original_basename=.st9egmou follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:08:42 localhost python3.9[127648]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Feb 20 04:08:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60530 DF PROTO=TCP SPT=58116 DPT=9102 SEQ=3805701767 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF29A0D0000000001030307)
Feb 20 04:08:42 localhost systemd-journald[48906]: Field hash table of /run/log/journal/01f46965e72fd8a157841feaa66c8d52/system.journal has a fill level at 77.5 (258 of 333 items), suggesting rotation.
Feb 20 04:08:42 localhost systemd-journald[48906]: /run/log/journal/01f46965e72fd8a157841feaa66c8d52/system.journal: Journal header limits reached or header out-of-date, rotating.
Feb 20 04:08:42 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Feb 20 04:08:43 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Feb 20 04:08:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3336 DF PROTO=TCP SPT=44228 DPT=9102 SEQ=2012226303 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF2A60E0000000001030307)
Feb 20 04:08:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32628 DF PROTO=TCP SPT=39906 DPT=9882 SEQ=2910142523 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF2B24D0000000001030307)
Feb 20 04:08:49 localhost podman[127662]: 2026-02-20 09:08:42.984872352 +0000 UTC m=+0.034995436 image pull quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Feb 20 04:08:50 localhost python3.9[127863]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Feb 20 04:08:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32629 DF PROTO=TCP SPT=39906 DPT=9882 SEQ=2910142523 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF2C20E0000000001030307)
Feb 20 04:08:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13059 DF PROTO=TCP SPT=35546 DPT=9100 SEQ=1842478593 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF2D4240000000001030307)
Feb 20 04:08:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38730 DF PROTO=TCP SPT=55952 DPT=9105 SEQ=3501162826 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF2D5420000000001030307)
Feb 20 04:08:58 localhost podman[127876]: 2026-02-20 09:08:51.043014776 +0000 UTC m=+0.045246003 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 20 04:09:00 localhost python3.9[128074]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Feb 20 04:09:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13061 DF PROTO=TCP SPT=35546 DPT=9100 SEQ=1842478593 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF2E00D0000000001030307)
Feb 20 04:09:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13062 DF PROTO=TCP SPT=35546 DPT=9100 SEQ=1842478593 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF2EFCD0000000001030307)
Feb 20 04:09:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3289 DF PROTO=TCP SPT=57072 DPT=9101 SEQ=2604091546 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF2FA0D0000000001030307)
Feb 20 04:09:09 localhost sshd[128256]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:09:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10
DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54572 DF PROTO=TCP SPT=44286 DPT=9102 SEQ=1305115232 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF3034E0000000001030307) Feb 20 04:09:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13063 DF PROTO=TCP SPT=35546 DPT=9100 SEQ=1842478593 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF3100D0000000001030307) Feb 20 04:09:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54574 DF PROTO=TCP SPT=44286 DPT=9102 SEQ=1305115232 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF31B0D0000000001030307) Feb 20 04:09:16 localhost podman[128088]: 2026-02-20 09:09:00.139224002 +0000 UTC m=+0.046463125 image pull quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified Feb 20 04:09:17 localhost python3.9[129482]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Feb 20 04:09:19 localhost podman[129496]: 2026-02-20 09:09:17.449746379 +0000 UTC m=+0.045661114 image 
pull quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified Feb 20 04:09:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48169 DF PROTO=TCP SPT=60266 DPT=9882 SEQ=3216506262 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF3278D0000000001030307) Feb 20 04:09:20 localhost python3.9[129673]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Feb 20 04:09:21 localhost podman[129685]: 2026-02-20 09:09:20.257349028 +0000 UTC m=+0.050185066 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 20 04:09:22 localhost sshd[129832]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:09:22 localhost python3.9[129853]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': 
None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Feb 20 04:09:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48170 DF PROTO=TCP SPT=60266 DPT=9882 SEQ=3216506262 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF3374D0000000001030307) Feb 20 04:09:24 localhost sshd[129879]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:09:26 localhost podman[129866]: 2026-02-20 09:09:22.827968455 +0000 UTC m=+0.044713029 image pull quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified Feb 20 04:09:27 localhost python3.9[130043]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Feb 20 04:09:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 
DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44522 DF PROTO=TCP SPT=32988 DPT=9100 SEQ=4286601174 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF349540000000001030307) Feb 20 04:09:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44677 DF PROTO=TCP SPT=50922 DPT=9105 SEQ=2651418325 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF34A710000000001030307) Feb 20 04:09:29 localhost podman[130056]: 2026-02-20 09:09:27.409687785 +0000 UTC m=+0.043024614 image pull quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c Feb 20 04:09:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44524 DF PROTO=TCP SPT=32988 DPT=9100 SEQ=4286601174 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF3554D0000000001030307) Feb 20 04:09:32 localhost systemd[1]: session-39.scope: Deactivated successfully. Feb 20 04:09:32 localhost systemd[1]: session-39.scope: Consumed 2min 5.287s CPU time. Feb 20 04:09:32 localhost systemd-logind[760]: Session 39 logged out. Waiting for processes to exit. Feb 20 04:09:32 localhost systemd-logind[760]: Removed session 39. 
Feb 20 04:09:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44525 DF PROTO=TCP SPT=32988 DPT=9100 SEQ=4286601174 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF3650E0000000001030307) Feb 20 04:09:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57290 DF PROTO=TCP SPT=46768 DPT=9101 SEQ=2914692799 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF36E0E0000000001030307) Feb 20 04:09:37 localhost sshd[130167]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:09:37 localhost systemd-logind[760]: New session 40 of user zuul. Feb 20 04:09:37 localhost systemd[1]: Started Session 40 of User zuul. Feb 20 04:09:38 localhost python3.9[130260]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 20 04:09:39 localhost python3.9[130356]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None Feb 20 04:09:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59810 DF PROTO=TCP SPT=43898 DPT=9102 SEQ=1437170250 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF3788D0000000001030307) Feb 20 04:09:40 localhost python3.9[130449]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Feb 20 04:09:41 localhost python3.9[130503]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch3.3'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False 
disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Feb 20 04:09:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3339 DF PROTO=TCP SPT=44228 DPT=9102 SEQ=2012226303 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF3840E0000000001030307) Feb 20 04:09:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59812 DF PROTO=TCP SPT=43898 DPT=9102 SEQ=1437170250 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF3904E0000000001030307) Feb 20 04:09:47 localhost python3.9[130597]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Feb 20 04:09:47 localhost ceph-osd[31981]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 20 04:09:47 localhost ceph-osd[31981]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.1 total, 600.0 interval#012Cumulative writes: 5073 writes, 22K keys, 5073 commit groups, 1.0 writes 
per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5073 writes, 653 syncs, 7.77 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Feb 20 04:09:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42180 DF PROTO=TCP SPT=51686 DPT=9882 SEQ=4000140105 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF39C8D0000000001030307) Feb 20 04:09:51 localhost ceph-osd[32921]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 20 04:09:51 localhost ceph-osd[32921]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.1 total, 600.0 interval#012Cumulative writes: 5513 writes, 24K keys, 5513 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5513 writes, 750 syncs, 7.35 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Feb 20 04:09:51 localhost python3.9[130691]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Feb 20 04:09:52 localhost python3.9[130784]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 20 04:09:53 localhost kernel: 
DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42181 DF PROTO=TCP SPT=51686 DPT=9882 SEQ=4000140105 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF3AC4D0000000001030307) Feb 20 04:09:53 localhost python3.9[130876]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None Feb 20 04:09:55 localhost kernel: SELinux: Converting 2743 SID table entries... Feb 20 04:09:55 localhost kernel: SELinux: policy capability network_peer_controls=1 Feb 20 04:09:55 localhost kernel: SELinux: policy capability open_perms=1 Feb 20 04:09:55 localhost kernel: SELinux: policy capability extended_socket_class=1 Feb 20 04:09:55 localhost kernel: SELinux: policy capability always_check_network=0 Feb 20 04:09:55 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Feb 20 04:09:55 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 20 04:09:55 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Feb 20 04:09:56 localhost python3.9[130971]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 20 04:09:57 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=18 res=1 Feb 20 04:09:57 localhost python3.9[131069]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False 
disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Feb 20 04:09:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60869 DF PROTO=TCP SPT=35570 DPT=9100 SEQ=3172695538 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF3BE840000000001030307) Feb 20 04:09:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7426 DF PROTO=TCP SPT=42520 DPT=9105 SEQ=1007743964 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF3BFA20000000001030307) Feb 20 04:09:59 localhost sshd[131072]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:10:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60871 DF PROTO=TCP SPT=35570 DPT=9100 SEQ=3172695538 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF3CA8D0000000001030307) Feb 20 04:10:01 localhost python3.9[131165]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 04:10:03 localhost python3.9[131410]: 
ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None Feb 20 04:10:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60872 DF PROTO=TCP SPT=35570 DPT=9100 SEQ=3172695538 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF3DA4D0000000001030307) Feb 20 04:10:04 localhost python3.9[131500]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 20 04:10:05 localhost python3.9[131594]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Feb 20 04:10:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13576 DF PROTO=TCP SPT=40406 DPT=9101 SEQ=1098302315 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF3E40D0000000001030307) Feb 20 04:10:09 localhost kernel: DROPPING: IN=br-ex OUT= 
MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49248 DF PROTO=TCP SPT=45090 DPT=9102 SEQ=1352189396 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF3EDCE0000000001030307) Feb 20 04:10:10 localhost python3.9[131688]: ansible-ansible.legacy.dnf Invoked with name=['openstack-network-scripts'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Feb 20 04:10:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60873 DF PROTO=TCP SPT=35570 DPT=9100 SEQ=3172695538 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF3FA0D0000000001030307) Feb 20 04:10:14 localhost python3.9[131782]: ansible-ansible.builtin.systemd Invoked with enabled=True name=network daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None Feb 20 04:10:14 localhost systemd[1]: Reloading. Feb 20 04:10:15 localhost systemd-sysv-generator[131812]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 04:10:15 localhost systemd-rc-local-generator[131809]: /etc/rc.d/rc.local is not marked executable, skipping. 
Feb 20 04:10:15 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 04:10:15 localhost sshd[131915]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:10:16 localhost python3.9[131914]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 20 04:10:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49250 DF PROTO=TCP SPT=45090 DPT=9102 SEQ=1352189396 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF4058D0000000001030307) Feb 20 04:10:16 localhost python3.9[132008]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:10:17 localhost python3.9[132102]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:10:18 localhost python3.9[132224]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager 
path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:10:18 localhost python3.9[132357]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:10:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19510 DF PROTO=TCP SPT=47440 DPT=9882 SEQ=4082483990 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF411CD0000000001030307) Feb 20 04:10:19 localhost podman[132481]: Feb 20 04:10:19 localhost podman[132481]: 2026-02-20 09:10:19.403844885 +0000 UTC m=+0.052041827 container create a30ab2890089a1652ba135aadd1da0e6f97d23c931ceba7a78cef4b952aeb0e7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hopeful_nobel, vcs-type=git, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, version=7, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1770267347, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, name=rhceph, build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, RELEASE=main, ceph=True, CEPH_POINT_RELEASE=, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 
on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7) Feb 20 04:10:19 localhost systemd[1]: Started libpod-conmon-a30ab2890089a1652ba135aadd1da0e6f97d23c931ceba7a78cef4b952aeb0e7.scope. Feb 20 04:10:19 localhost systemd[1]: Started libcrun container. Feb 20 04:10:19 localhost podman[132481]: 2026-02-20 09:10:19.473541891 +0000 UTC m=+0.121738863 container init a30ab2890089a1652ba135aadd1da0e6f97d23c931ceba7a78cef4b952aeb0e7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hopeful_nobel, GIT_BRANCH=main, distribution-scope=public, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., GIT_CLEAN=True, vcs-type=git, architecture=x86_64, RELEASE=main, io.buildah.version=1.42.2, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2026-02-09T10:25:24Z, name=rhceph, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True) Feb 20 04:10:19 localhost podman[132481]: 
2026-02-20 09:10:19.381357433 +0000 UTC m=+0.029554385 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 04:10:19 localhost podman[132481]: 2026-02-20 09:10:19.484687324 +0000 UTC m=+0.132884276 container start a30ab2890089a1652ba135aadd1da0e6f97d23c931ceba7a78cef4b952aeb0e7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hopeful_nobel, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., ceph=True, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , distribution-scope=public, version=7, vcs-type=git, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2026-02-09T10:25:24Z, release=1770267347, org.opencontainers.image.created=2026-02-09T10:25:24Z) Feb 20 04:10:19 localhost podman[132481]: 2026-02-20 09:10:19.485146347 +0000 UTC m=+0.133343319 container attach a30ab2890089a1652ba135aadd1da0e6f97d23c931ceba7a78cef4b952aeb0e7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hopeful_nobel, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, 
architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., GIT_BRANCH=main, build-date=2026-02-09T10:25:24Z, RELEASE=main, version=7, ceph=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, name=rhceph, io.buildah.version=1.42.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, GIT_CLEAN=True, release=1770267347, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14) Feb 20 04:10:19 localhost hopeful_nobel[132507]: 167 167 Feb 20 04:10:19 localhost systemd[1]: libpod-a30ab2890089a1652ba135aadd1da0e6f97d23c931ceba7a78cef4b952aeb0e7.scope: Deactivated successfully. 
Feb 20 04:10:19 localhost podman[132481]: 2026-02-20 09:10:19.488923979 +0000 UTC m=+0.137120951 container died a30ab2890089a1652ba135aadd1da0e6f97d23c931ceba7a78cef4b952aeb0e7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hopeful_nobel, vcs-type=git, maintainer=Guillaume Abrioux , io.openshift.expose-services=, com.redhat.component=rhceph-container, io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.openshift.tags=rhceph ceph, build-date=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, architecture=x86_64, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, distribution-scope=public, vendor=Red Hat, Inc., GIT_CLEAN=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, release=1770267347, GIT_REPO=https://github.com/ceph/ceph-container.git) Feb 20 04:10:19 localhost podman[132512]: 2026-02-20 09:10:19.599606901 +0000 UTC m=+0.097251747 container remove a30ab2890089a1652ba135aadd1da0e6f97d23c931ceba7a78cef4b952aeb0e7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hopeful_nobel, io.buildah.version=1.42.2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, ceph=True, io.openshift.expose-services=, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph 
ceph, GIT_BRANCH=main, vendor=Red Hat, Inc., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, RELEASE=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1770267347, CEPH_POINT_RELEASE=, version=7, build-date=2026-02-09T10:25:24Z, name=rhceph, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14) Feb 20 04:10:19 localhost python3.9[132505]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1771578618.4857948-564-194564214941378/.source _original_basename=.87xqcfd6 follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:10:19 localhost systemd[1]: libpod-conmon-a30ab2890089a1652ba135aadd1da0e6f97d23c931ceba7a78cef4b952aeb0e7.scope: Deactivated successfully. 
Feb 20 04:10:19 localhost podman[132548]: Feb 20 04:10:19 localhost podman[132548]: 2026-02-20 09:10:19.832586459 +0000 UTC m=+0.080529302 container create 5b923ce0044f54e0ac6dc84b601de692cefc431ef346f6a5ceefc691cae04514 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_turing, vcs-type=git, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, release=1770267347, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2026-02-09T10:25:24Z, architecture=x86_64, ceph=True, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, version=7, build-date=2026-02-09T10:25:24Z, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, GIT_CLEAN=True) Feb 20 04:10:19 localhost systemd[1]: Started libpod-conmon-5b923ce0044f54e0ac6dc84b601de692cefc431ef346f6a5ceefc691cae04514.scope. Feb 20 04:10:19 localhost podman[132548]: 2026-02-20 09:10:19.79991367 +0000 UTC m=+0.047856573 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 04:10:19 localhost systemd[1]: Started libcrun container. 
Feb 20 04:10:19 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21f0224467706e5ccc279d45217b0eb44f25a1e21dc8385cb8ec0bc67f46c8bc/merged/rootfs supports timestamps until 2038 (0x7fffffff) Feb 20 04:10:19 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21f0224467706e5ccc279d45217b0eb44f25a1e21dc8385cb8ec0bc67f46c8bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Feb 20 04:10:19 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21f0224467706e5ccc279d45217b0eb44f25a1e21dc8385cb8ec0bc67f46c8bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Feb 20 04:10:19 localhost podman[132548]: 2026-02-20 09:10:19.912346998 +0000 UTC m=+0.160289841 container init 5b923ce0044f54e0ac6dc84b601de692cefc431ef346f6a5ceefc691cae04514 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_turing, vcs-type=git, description=Red Hat Ceph Storage 7, ceph=True, release=1770267347, version=7, vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, io.openshift.expose-services=, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, io.buildah.version=1.42.2, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, 
org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, GIT_CLEAN=True, GIT_BRANCH=main) Feb 20 04:10:19 localhost podman[132548]: 2026-02-20 09:10:19.923636875 +0000 UTC m=+0.171579708 container start 5b923ce0044f54e0ac6dc84b601de692cefc431ef346f6a5ceefc691cae04514 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_turing, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, architecture=x86_64, name=rhceph, CEPH_POINT_RELEASE=, io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1770267347, io.openshift.tags=rhceph ceph, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, GIT_CLEAN=True, maintainer=Guillaume Abrioux , vcs-type=git, com.redhat.component=rhceph-container, build-date=2026-02-09T10:25:24Z) Feb 20 04:10:19 localhost podman[132548]: 2026-02-20 09:10:19.923943235 +0000 UTC m=+0.171886108 container attach 5b923ce0044f54e0ac6dc84b601de692cefc431ef346f6a5ceefc691cae04514 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_turing, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1770267347, CEPH_POINT_RELEASE=, architecture=x86_64, org.opencontainers.image.created=2026-02-09T10:25:24Z, 
url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, name=rhceph, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , io.buildah.version=1.42.2, ceph=True) Feb 20 04:10:20 localhost python3.9[132646]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:10:20 localhost systemd[1]: var-lib-containers-storage-overlay-00db9c1e9420aaeb2bd95202042876accebabc43a30e9f895d9bad3be92e4e6f-merged.mount: Deactivated successfully. 
Feb 20 04:10:20 localhost jolly_turing[132596]: [ Feb 20 04:10:20 localhost jolly_turing[132596]: { Feb 20 04:10:20 localhost jolly_turing[132596]: "available": false, Feb 20 04:10:20 localhost jolly_turing[132596]: "ceph_device": false, Feb 20 04:10:20 localhost jolly_turing[132596]: "device_id": "QEMU_DVD-ROM_QM00001", Feb 20 04:10:20 localhost jolly_turing[132596]: "lsm_data": {}, Feb 20 04:10:20 localhost jolly_turing[132596]: "lvs": [], Feb 20 04:10:20 localhost jolly_turing[132596]: "path": "/dev/sr0", Feb 20 04:10:20 localhost jolly_turing[132596]: "rejected_reasons": [ Feb 20 04:10:20 localhost jolly_turing[132596]: "Has a FileSystem", Feb 20 04:10:20 localhost jolly_turing[132596]: "Insufficient space (<5GB)" Feb 20 04:10:20 localhost jolly_turing[132596]: ], Feb 20 04:10:20 localhost jolly_turing[132596]: "sys_api": { Feb 20 04:10:20 localhost jolly_turing[132596]: "actuators": null, Feb 20 04:10:20 localhost jolly_turing[132596]: "device_nodes": "sr0", Feb 20 04:10:20 localhost jolly_turing[132596]: "human_readable_size": "482.00 KB", Feb 20 04:10:20 localhost jolly_turing[132596]: "id_bus": "ata", Feb 20 04:10:20 localhost jolly_turing[132596]: "model": "QEMU DVD-ROM", Feb 20 04:10:20 localhost jolly_turing[132596]: "nr_requests": "2", Feb 20 04:10:20 localhost jolly_turing[132596]: "partitions": {}, Feb 20 04:10:20 localhost jolly_turing[132596]: "path": "/dev/sr0", Feb 20 04:10:20 localhost jolly_turing[132596]: "removable": "1", Feb 20 04:10:20 localhost jolly_turing[132596]: "rev": "2.5+", Feb 20 04:10:20 localhost jolly_turing[132596]: "ro": "0", Feb 20 04:10:20 localhost jolly_turing[132596]: "rotational": "1", Feb 20 04:10:20 localhost jolly_turing[132596]: "sas_address": "", Feb 20 04:10:20 localhost jolly_turing[132596]: "sas_device_handle": "", Feb 20 04:10:20 localhost jolly_turing[132596]: "scheduler_mode": "mq-deadline", Feb 20 04:10:20 localhost jolly_turing[132596]: "sectors": 0, Feb 20 04:10:20 localhost jolly_turing[132596]: 
"sectorsize": "2048", Feb 20 04:10:20 localhost jolly_turing[132596]: "size": 493568.0, Feb 20 04:10:20 localhost jolly_turing[132596]: "support_discard": "0", Feb 20 04:10:20 localhost jolly_turing[132596]: "type": "disk", Feb 20 04:10:20 localhost jolly_turing[132596]: "vendor": "QEMU" Feb 20 04:10:20 localhost jolly_turing[132596]: } Feb 20 04:10:20 localhost jolly_turing[132596]: } Feb 20 04:10:20 localhost jolly_turing[132596]: ] Feb 20 04:10:20 localhost systemd[1]: libpod-5b923ce0044f54e0ac6dc84b601de692cefc431ef346f6a5ceefc691cae04514.scope: Deactivated successfully. Feb 20 04:10:20 localhost podman[132548]: 2026-02-20 09:10:20.829567502 +0000 UTC m=+1.077510325 container died 5b923ce0044f54e0ac6dc84b601de692cefc431ef346f6a5ceefc691cae04514 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_turing, release=1770267347, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, org.opencontainers.image.created=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, RELEASE=main, io.openshift.tags=rhceph ceph, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., GIT_BRANCH=main, build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, ceph=True, name=rhceph, GIT_CLEAN=True, io.buildah.version=1.42.2, vcs-type=git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Feb 20 04:10:20 localhost 
systemd[1]: tmp-crun.DxxYGv.mount: Deactivated successfully. Feb 20 04:10:20 localhost systemd[1]: var-lib-containers-storage-overlay-21f0224467706e5ccc279d45217b0eb44f25a1e21dc8385cb8ec0bc67f46c8bc-merged.mount: Deactivated successfully. Feb 20 04:10:20 localhost podman[134222]: 2026-02-20 09:10:20.932612885 +0000 UTC m=+0.088523529 container remove 5b923ce0044f54e0ac6dc84b601de692cefc431ef346f6a5ceefc691cae04514 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_turing, version=7, ceph=True, description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, vcs-type=git, release=1770267347, RELEASE=main, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, GIT_BRANCH=main, GIT_CLEAN=True, com.redhat.component=rhceph-container, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 20 04:10:20 localhost systemd[1]: libpod-conmon-5b923ce0044f54e0ac6dc84b601de692cefc431ef346f6a5ceefc691cae04514.scope: Deactivated successfully. 
Feb 20 04:10:21 localhost python3.9[134269]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={} Feb 20 04:10:21 localhost python3.9[134376]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:10:22 localhost python3.9[134468]: ansible-ansible.legacy.stat Invoked with path=/etc/os-net-config/config.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:10:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19511 DF PROTO=TCP SPT=47440 DPT=9882 SEQ=4082483990 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF4218D0000000001030307) Feb 20 04:10:23 localhost python3.9[134541]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/os-net-config/config.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771578622.315307-690-140559726314450/.source.yaml _original_basename=.hfsnb4rh follow=False checksum=0cadac3cfc033a4e07cfac59b43f6459e787700a force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:10:24 localhost python3.9[134633]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml Feb 20 04:10:25 localhost ansible-async_wrapper.py[134738]: Invoked with j492707042546 300 
/home/zuul/.ansible/tmp/ansible-tmp-1771578624.5442834-762-10015530373019/AnsiballZ_edpm_os_net_config.py _ Feb 20 04:10:25 localhost ansible-async_wrapper.py[134741]: Starting module and watcher Feb 20 04:10:25 localhost ansible-async_wrapper.py[134741]: Start watching 134742 (300) Feb 20 04:10:25 localhost ansible-async_wrapper.py[134742]: Start module (134742) Feb 20 04:10:25 localhost ansible-async_wrapper.py[134738]: Return async_wrapper task started. Feb 20 04:10:25 localhost python3.9[134743]: ansible-edpm_os_net_config Invoked with cleanup=False config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True remove_config=False safe_defaults=False use_nmstate=False purge_provider= Feb 20 04:10:26 localhost ansible-async_wrapper.py[134742]: Module complete (134742) Feb 20 04:10:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6555 DF PROTO=TCP SPT=47934 DPT=9100 SEQ=523945597 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF433B40000000001030307) Feb 20 04:10:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64092 DF PROTO=TCP SPT=48166 DPT=9105 SEQ=56808186 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF434D20000000001030307) Feb 20 04:10:29 localhost python3.9[134847]: ansible-ansible.legacy.async_status Invoked with jid=j492707042546.134738 mode=status _async_dir=/root/.ansible_async Feb 20 04:10:30 localhost python3.9[134906]: ansible-ansible.legacy.async_status Invoked with jid=j492707042546.134738 mode=cleanup _async_dir=/root/.ansible_async Feb 20 04:10:30 localhost ansible-async_wrapper.py[134741]: Done in kid B. 
Feb 20 04:10:30 localhost python3.9[134998]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:10:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6557 DF PROTO=TCP SPT=47934 DPT=9100 SEQ=523945597 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF43FCD0000000001030307) Feb 20 04:10:31 localhost python3.9[135071]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771578630.4393399-828-109589151630436/.source.returncode _original_basename=.38plqu78 follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:10:32 localhost python3.9[135163]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:10:32 localhost python3.9[135236]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771578631.7265477-876-95002494691645/.source.cfg _original_basename=.0_6vfw2y follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None 
setype=None attributes=None Feb 20 04:10:33 localhost python3.9[135328]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 20 04:10:33 localhost systemd[1]: Reloading Network Manager... Feb 20 04:10:33 localhost NetworkManager[5967]: [1771578633.6456] audit: op="reload" arg="0" pid=135332 uid=0 result="success" Feb 20 04:10:33 localhost NetworkManager[5967]: [1771578633.6467] config: signal: SIGHUP (no changes from disk) Feb 20 04:10:33 localhost systemd[1]: Reloaded Network Manager. Feb 20 04:10:34 localhost systemd[1]: session-40.scope: Deactivated successfully. Feb 20 04:10:34 localhost systemd[1]: session-40.scope: Consumed 35.580s CPU time. Feb 20 04:10:34 localhost systemd-logind[760]: Session 40 logged out. Waiting for processes to exit. Feb 20 04:10:34 localhost systemd-logind[760]: Removed session 40. Feb 20 04:10:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6558 DF PROTO=TCP SPT=47934 DPT=9100 SEQ=523945597 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF44F8D0000000001030307) Feb 20 04:10:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13051 DF PROTO=TCP SPT=52470 DPT=9101 SEQ=558820471 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF45A0D0000000001030307) Feb 20 04:10:39 localhost sshd[135347]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:10:39 localhost systemd-logind[760]: New session 41 of user zuul. Feb 20 04:10:39 localhost systemd[1]: Started Session 41 of User zuul. 
Feb 20 04:10:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13559 DF PROTO=TCP SPT=48206 DPT=9102 SEQ=2967347095 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF4630D0000000001030307) Feb 20 04:10:40 localhost python3.9[135440]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 20 04:10:41 localhost python3.9[135534]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d Feb 20 04:10:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64096 DF PROTO=TCP SPT=48166 DPT=9105 SEQ=56808186 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF4700E0000000001030307) Feb 20 04:10:44 localhost python3.9[135679]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 04:10:45 localhost systemd-logind[760]: Session 41 logged out. Waiting for processes to exit. Feb 20 04:10:45 localhost systemd[1]: session-41.scope: Deactivated successfully. Feb 20 04:10:45 localhost systemd[1]: session-41.scope: Consumed 1.990s CPU time. Feb 20 04:10:45 localhost systemd-logind[760]: Removed session 41. 
Feb 20 04:10:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13561 DF PROTO=TCP SPT=48206 DPT=9102 SEQ=2967347095 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF47ACD0000000001030307) Feb 20 04:10:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18067 DF PROTO=TCP SPT=36948 DPT=9882 SEQ=4216660137 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF4870D0000000001030307) Feb 20 04:10:51 localhost sshd[135695]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:10:51 localhost systemd-logind[760]: New session 42 of user zuul. Feb 20 04:10:51 localhost systemd[1]: Started Session 42 of User zuul. Feb 20 04:10:52 localhost python3.9[135788]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 20 04:10:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18068 DF PROTO=TCP SPT=36948 DPT=9882 SEQ=4216660137 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF496CD0000000001030307) Feb 20 04:10:53 localhost python3.9[135882]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 20 04:10:54 localhost python3.9[135978]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Feb 20 04:10:55 localhost python3.9[136032]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False 
disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Feb 20 04:10:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32768 DF PROTO=TCP SPT=41226 DPT=9100 SEQ=660927837 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF4A8E30000000001030307) Feb 20 04:10:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42241 DF PROTO=TCP SPT=36592 DPT=9105 SEQ=2264134363 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF4AA010000000001030307) Feb 20 04:10:59 localhost python3.9[136126]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d Feb 20 04:11:00 localhost python3.9[136273]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:11:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32770 DF PROTO=TCP SPT=41226 DPT=9100 SEQ=660927837 ACK=0 WINDOW=32640 
RES=0x00 SYN URGP=0 OPT (020405500402080A3AF4B4CD0000000001030307) Feb 20 04:11:01 localhost python3.9[136365]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 04:11:02 localhost python3.9[136468]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:11:03 localhost python3.9[136516]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:11:03 localhost sshd[136531]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:11:03 localhost python3.9[136609]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:11:04 localhost python3.9[136657]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None 
modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 20 04:11:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32771 DF PROTO=TCP SPT=41226 DPT=9100 SEQ=660927837 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF4C48D0000000001030307) Feb 20 04:11:05 localhost python3.9[136750]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Feb 20 04:11:06 localhost python3.9[136842]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Feb 20 04:11:06 localhost python3.9[136934]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Feb 20 04:11:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac 
MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42046 DF PROTO=TCP SPT=53466 DPT=9101 SEQ=3717398088 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF4CE0E0000000001030307) Feb 20 04:11:07 localhost python3.9[137026]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Feb 20 04:11:08 localhost python3.9[137118]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Feb 20 04:11:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29347 DF PROTO=TCP SPT=46060 DPT=9102 SEQ=2663724655 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF4D80D0000000001030307) Feb 20 04:11:12 localhost python3.9[137212]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 20 04:11:12 localhost kernel: DROPPING: IN=br-ex OUT= 
MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32772 DF PROTO=TCP SPT=41226 DPT=9100 SEQ=660927837 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF4E40D0000000001030307) Feb 20 04:11:13 localhost python3.9[137306]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 20 04:11:14 localhost python3.9[137398]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 20 04:11:15 localhost python3.9[137490]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 04:11:15 localhost python3.9[137583]: ansible-service_facts Invoked Feb 20 04:11:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29349 DF PROTO=TCP SPT=46060 DPT=9102 SEQ=2663724655 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF4EFCD0000000001030307) Feb 20 04:11:16 localhost network[137600]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Feb 20 04:11:16 localhost network[137601]: 'network-scripts' will be removed from distribution in near future. Feb 20 04:11:16 localhost network[137602]: It is advised to switch to 'NetworkManager' instead for network management. Feb 20 04:11:17 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 20 04:11:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47489 DF PROTO=TCP SPT=32866 DPT=9882 SEQ=2853552320 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF4FC4D0000000001030307) Feb 20 04:11:19 localhost sshd[137727]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:11:22 localhost python3.9[137989]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Feb 20 04:11:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47490 DF PROTO=TCP SPT=32866 DPT=9882 SEQ=2853552320 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF50C0D0000000001030307) Feb 20 04:11:24 localhost sshd[137992]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:11:25 localhost sshd[137994]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:11:26 localhost sshd[138025]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:11:27 localhost python3.9[138104]: ansible-package_facts Invoked with manager=['auto'] strategy=first Feb 20 04:11:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61706 DF PROTO=TCP SPT=40962 DPT=9100 SEQ=742361316 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 
OPT (020405500402080A3AF51E130000000001030307) Feb 20 04:11:27 localhost sshd[138119]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:11:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38335 DF PROTO=TCP SPT=46528 DPT=9105 SEQ=2423797748 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF51F320000000001030307) Feb 20 04:11:28 localhost python3.9[138198]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:11:29 localhost python3.9[138273]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771578688.1951969-651-62514705912884/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:11:30 localhost python3.9[138367]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:11:30 localhost python3.9[138442]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771578689.7088504-696-212652623597866/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None 
setype=None attributes=None Feb 20 04:11:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61708 DF PROTO=TCP SPT=40962 DPT=9100 SEQ=742361316 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF52A0D0000000001030307) Feb 20 04:11:32 localhost python3.9[138536]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:11:33 localhost python3.9[138630]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d Feb 20 04:11:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61709 DF PROTO=TCP SPT=40962 DPT=9100 SEQ=742361316 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF539CD0000000001030307) Feb 20 04:11:35 localhost python3.9[138684]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 04:11:37 localhost python3.9[138778]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d Feb 20 04:11:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13626 DF PROTO=TCP SPT=35708 DPT=9101 SEQ=2441916399 ACK=0 
WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF5440E0000000001030307) Feb 20 04:11:38 localhost python3.9[138832]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 20 04:11:38 localhost chronyd[26327]: chronyd exiting Feb 20 04:11:38 localhost systemd[1]: Stopping NTP client/server... Feb 20 04:11:38 localhost systemd[1]: chronyd.service: Deactivated successfully. Feb 20 04:11:38 localhost systemd[1]: Stopped NTP client/server. Feb 20 04:11:38 localhost systemd[1]: Starting NTP client/server... Feb 20 04:11:38 localhost chronyd[138840]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG) Feb 20 04:11:38 localhost chronyd[138840]: Frequency -26.386 +/- 0.501 ppm read from /var/lib/chrony/drift Feb 20 04:11:38 localhost chronyd[138840]: Loaded seccomp filter (level 2) Feb 20 04:11:38 localhost systemd[1]: Started NTP client/server. Feb 20 04:11:38 localhost systemd-logind[760]: Session 42 logged out. Waiting for processes to exit. Feb 20 04:11:38 localhost systemd[1]: session-42.scope: Deactivated successfully. Feb 20 04:11:38 localhost systemd[1]: session-42.scope: Consumed 27.975s CPU time. Feb 20 04:11:38 localhost systemd-logind[760]: Removed session 42. 
Feb 20 04:11:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38577 DF PROTO=TCP SPT=55110 DPT=9102 SEQ=2831377248 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF54D4D0000000001030307) Feb 20 04:11:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13564 DF PROTO=TCP SPT=48206 DPT=9102 SEQ=2967347095 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF55A0D0000000001030307) Feb 20 04:11:43 localhost sshd[138856]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:11:44 localhost sshd[138858]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:11:44 localhost systemd-logind[760]: New session 43 of user zuul. Feb 20 04:11:44 localhost systemd[1]: Started Session 43 of User zuul. Feb 20 04:11:45 localhost python3.9[138951]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 20 04:11:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38579 DF PROTO=TCP SPT=55110 DPT=9102 SEQ=2831377248 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF5650D0000000001030307) Feb 20 04:11:46 localhost python3.9[139047]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:11:47 localhost python3.9[139152]: 
ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:11:47 localhost python3.9[139200]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.qx4_69fw recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:11:48 localhost python3.9[139292]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:11:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48893 DF PROTO=TCP SPT=50148 DPT=9882 SEQ=1212579448 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF5714E0000000001030307) Feb 20 04:11:49 localhost python3.9[139367]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771578708.446646-138-10526080852958/.source _original_basename=.sbxftb2g follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:11:50 localhost python3.9[139459]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory 
force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 20 04:11:50 localhost python3.9[139551]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:11:51 localhost python3.9[139624]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771578710.50283-210-6388025242416/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Feb 20 04:11:52 localhost python3.9[139716]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:11:52 localhost python3.9[139789]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771578711.5926008-210-118693453200213/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Feb 20 04:11:53 
localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48894 DF PROTO=TCP SPT=50148 DPT=9882 SEQ=1212579448 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF5810E0000000001030307) Feb 20 04:11:53 localhost python3.9[139881]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:11:53 localhost python3.9[139973]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:11:54 localhost python3.9[140046]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771578713.5289435-321-75967563278419/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:11:55 localhost python3.9[140138]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:11:55 localhost python3.9[140211]: ansible-ansible.legacy.copy Invoked 
with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771578714.7641094-366-256401809183482/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:11:56 localhost python3.9[140303]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 04:11:56 localhost systemd[1]: Reloading. Feb 20 04:11:57 localhost systemd-sysv-generator[140329]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 04:11:57 localhost systemd-rc-local-generator[140324]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 04:11:57 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 04:11:57 localhost systemd[1]: Reloading. Feb 20 04:11:57 localhost systemd-rc-local-generator[140365]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 04:11:57 localhost systemd-sysv-generator[140370]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Feb 20 04:11:57 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 04:11:57 localhost systemd[1]: Starting EDPM Container Shutdown... Feb 20 04:11:57 localhost systemd[1]: Finished EDPM Container Shutdown. Feb 20 04:11:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43712 DF PROTO=TCP SPT=56874 DPT=9100 SEQ=2002253094 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF593430000000001030307) Feb 20 04:11:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16261 DF PROTO=TCP SPT=33906 DPT=9105 SEQ=2524537037 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF594620000000001030307) Feb 20 04:11:58 localhost python3.9[140471]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:11:58 localhost python3.9[140544]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771578717.7810276-435-109228864118030/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:11:59 localhost python3.9[140636]: ansible-ansible.legacy.stat Invoked with 
path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:11:59 localhost python3.9[140709]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771578719.0495694-480-166075519548683/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:12:00 localhost python3.9[140801]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 04:12:00 localhost systemd[1]: Reloading. Feb 20 04:12:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43714 DF PROTO=TCP SPT=56874 DPT=9100 SEQ=2002253094 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF59F4E0000000001030307) Feb 20 04:12:00 localhost systemd-sysv-generator[140827]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 04:12:00 localhost systemd-rc-local-generator[140823]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 04:12:01 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 20 04:12:01 localhost systemd[1]: Starting Create netns directory... Feb 20 04:12:01 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Feb 20 04:12:01 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Feb 20 04:12:01 localhost systemd[1]: Finished Create netns directory. Feb 20 04:12:02 localhost python3.9[140932]: ansible-ansible.builtin.service_facts Invoked Feb 20 04:12:03 localhost network[140949]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Feb 20 04:12:03 localhost network[140950]: 'network-scripts' will be removed from distribution in near future. Feb 20 04:12:03 localhost network[140951]: It is advised to switch to 'NetworkManager' instead for network management. Feb 20 04:12:04 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 04:12:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43715 DF PROTO=TCP SPT=56874 DPT=9100 SEQ=2002253094 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF5AF0D0000000001030307) Feb 20 04:12:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22878 DF PROTO=TCP SPT=47302 DPT=9101 SEQ=3492610890 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF5B80E0000000001030307) Feb 20 04:12:07 localhost python3.9[141153]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:12:08 localhost python3.9[141228]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config 
mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1771578727.5442483-603-218509059583993/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:12:09 localhost python3.9[141321]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 20 04:12:09 localhost systemd[1]: Reloading OpenSSH server daemon...
Feb 20 04:12:09 localhost systemd[1]: Reloaded OpenSSH server daemon.
Feb 20 04:12:09 localhost sshd[120181]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:12:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16535 DF PROTO=TCP SPT=40622 DPT=9102 SEQ=1498887237 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF5C28D0000000001030307)
Feb 20 04:12:10 localhost python3.9[141417]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:12:10 localhost python3.9[141509]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 04:12:11 localhost python3.9[141582]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771578730.4198747-696-247686081397112/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:12:12 localhost python3.9[141674]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Feb 20 04:12:12 localhost systemd[1]: Starting Time & Date Service...
Feb 20 04:12:12 localhost systemd[1]: Started Time & Date Service.
Feb 20 04:12:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29352 DF PROTO=TCP SPT=46060 DPT=9102 SEQ=2663724655 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF5CE0E0000000001030307)
Feb 20 04:12:13 localhost python3.9[141770]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:12:13 localhost python3.9[141862]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 04:12:14 localhost python3.9[141935]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771578733.4717956-801-200000248766216/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:12:15 localhost python3.9[142027]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 04:12:15 localhost python3.9[142100]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771578734.6387737-846-43096833878988/.source.yaml _original_basename=.cogkjmvc follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:12:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16537 DF PROTO=TCP SPT=40622 DPT=9102 SEQ=1498887237 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF5DA4D0000000001030307)
Feb 20 04:12:16 localhost python3.9[142192]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 04:12:16 localhost python3.9[142267]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771578735.8774672-891-174619197969913/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:12:17 localhost python3.9[142359]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 04:12:18 localhost python3.9[142452]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 04:12:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58556 DF PROTO=TCP SPT=45980 DPT=9882 SEQ=703448345 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF5E68D0000000001030307)
Feb 20 04:12:19 localhost python3[142545]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb 20 04:12:20 localhost python3.9[142637]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 04:12:20 localhost python3.9[142710]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771578739.9758713-1008-160857806574836/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:12:21 localhost python3.9[142802]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 04:12:22 localhost python3.9[142875]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771578741.500219-1053-227891412993695/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:12:23 localhost python3.9[142967]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 04:12:23 localhost python3.9[143040]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771578742.7097704-1098-280012356305692/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:12:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16538 DF PROTO=TCP SPT=40622 DPT=9102 SEQ=1498887237 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF5FA0D0000000001030307)
Feb 20 04:12:24 localhost python3.9[143132]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 04:12:24 localhost python3.9[143205]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771578743.8548894-1143-48072966081804/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:12:25 localhost python3.9[143297]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 04:12:26 localhost python3.9[143370]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771578745.0623248-1188-2195228714657/.source.nft follow=False _original_basename=ruleset.j2 checksum=15a82a0dc61abfd6aa593407582b5b950437eb80 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:12:26 localhost python3.9[143537]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:12:27 localhost python3.9[143667]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 04:12:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14810 DF PROTO=TCP SPT=35412 DPT=9100 SEQ=177945587 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF608730000000001030307)
Feb 20 04:12:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33346 DF PROTO=TCP SPT=60734 DPT=9105 SEQ=4081428205 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF609910000000001030307)
Feb 20 04:12:28 localhost python3.9[143762]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:12:29 localhost python3.9[143870]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:12:29 localhost python3.9[143962]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:12:30 localhost python3.9[144054]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Feb 20 04:12:31 localhost python3.9[144147]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Feb 20 04:12:31 localhost systemd[1]: session-43.scope: Deactivated successfully.
Feb 20 04:12:31 localhost systemd[1]: session-43.scope: Consumed 27.600s CPU time.
Feb 20 04:12:31 localhost systemd-logind[760]: Session 43 logged out. Waiting for processes to exit.
Feb 20 04:12:31 localhost systemd-logind[760]: Removed session 43.
Feb 20 04:12:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61712 DF PROTO=TCP SPT=40962 DPT=9100 SEQ=742361316 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF6180D0000000001030307)
Feb 20 04:12:33 localhost sshd[144163]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:12:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48774 DF PROTO=TCP SPT=37422 DPT=9101 SEQ=4289449140 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF627090000000001030307)
Feb 20 04:12:36 localhost sshd[144165]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:12:37 localhost systemd-logind[760]: New session 44 of user zuul.
Feb 20 04:12:37 localhost systemd[1]: Started Session 44 of User zuul.
Feb 20 04:12:37 localhost python3.9[144260]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Feb 20 04:12:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52864 DF PROTO=TCP SPT=48302 DPT=9102 SEQ=1905163355 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF633AD0000000001030307)
Feb 20 04:12:39 localhost python3.9[144352]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 04:12:40 localhost python3.9[144446]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Feb 20 04:12:41 localhost python3.9[144538]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.gi1id5l1 follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 04:12:42 localhost python3.9[144613]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.gi1id5l1 mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771578761.4123328-189-275124798014854/.source.gi1id5l1 _original_basename=.rupnkh5i follow=False checksum=831757da1f03f9732785943fa2a05c0d9424aa2f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:12:42 localhost systemd[1]: systemd-timedated.service: Deactivated successfully.
Feb 20 04:12:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38582 DF PROTO=TCP SPT=55110 DPT=9102 SEQ=2831377248 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF6440D0000000001030307)
Feb 20 04:12:44 localhost python3.9[144707]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 04:12:45 localhost sshd[144754]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:12:45 localhost sshd[144801]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:12:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18104 DF PROTO=TCP SPT=50238 DPT=9882 SEQ=1411262527 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF64FC10000000001030307)
Feb 20 04:12:46 localhost python3.9[144802]: ansible-ansible.builtin.blockinfile Invoked with block=np0005625201.localdomain,192.168.122.105,np0005625201* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCyGkX26ECIsvqnvJegedSF6KicDAAqjaifawEd//OuK9zdHIWqO3XmlEszZqWPsdQhPFkelfzXR+sy3gbPNv+yjT7phsw1sq7zHXeogQFlP5iOQZrf6hCnfXxVk2ckIXMT0UJVZ8FCTwsQi+HKkR/IEj08pR7EjrXGWxHkjv5wNj76spF3FJxtwycS4+KzY3UFy7gYWVn2jB0ha966YgjHMPhzQnT33W9myxGH33M1L5ZCGlfH19hLnqTUNMfzIfw3afxHkL5BFZbhthUPmIfLdLtKmZEkpSTBO/CrNA6CmMfY6xnT78hmwXytEQ+jeiRdKXdr9xQ2j6wVmPzckFKBsBYRe4DprKGt93fnKS9Z6A3Sv626DyZgDa8/NXbtAaBxtyix5Vdt872hYvCzYyB/OuSV6PR5DOq8z3fquOwgtka3rA6qL5gxhFJcO5TqtBM76DzOLd9OLM9bIO1yK9sCmbYynMojkXylzhDfcI8kytS5xs9FJEfwTElZRHkEIQE=#012np0005625201.localdomain,192.168.122.105,np0005625201* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINiFV2XLGVf9PGXF0NE4rbupw+vH23sDv10vB3wGrrmN#012np0005625201.localdomain,192.168.122.105,np0005625201* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM/mxytSzwSYcezRRSD4AjPi1j6Bxso/MLXC/NAewzvKThRznoUobc02vzGaO4FrwuZIZ/YHJyAHrQRbtdSPUTU=#012np0005625202.localdomain,192.168.122.106,np0005625202* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDDr8sejencX7nSCX6AegGtTuiZL3yclu/L7ZVN4B6dKPdmHqVr33QJD40sEk28GHpx8BrkPU2Qj1de9H6mGtrlwhmJr7Pccg/YqzKoTCQD5rZQ4youU8H70As6YX5ZlXyulwI1SH70XjMm37x4ptKALFOjRnHg0WIXah/tAmzrY/orh+/eCcns7APVjN9B1o+MqP4r47WrWrGU/KxtsHc6dflWxZW7BWUCCNS0e3C4yWLRjy8Hhj7Qkpssv/UBcj+olVHadUUOYiaQZ5Y33MjxwIg8o1MuC7C1dNIn8eXOXXiA8jd/lJd9kImrCGUtkVqj8VQgsMh4vRYMD+0SNLYRDVwxdemOzJYgwQhgiWZ0G+cVhnTBpMmXyIws2OpOKU8R3HjTC3jz+BxvjwEvMDoQfpGgsHB9NCXnkQzs2F8EA8LpA823Ef1SMgPdDCaQzvN5oQPZkWAPMVHvq31xpN9q+KXg/bg0uDaIZXUxW2rGnem7pFS78rRUGL6MfSMn1zs=#012np0005625202.localdomain,192.168.122.106,np0005625202* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHIvGY3AHSeC6TXoQUOT+qZPpfcpbcCaqWpewY2PaUdr#012np0005625202.localdomain,192.168.122.106,np0005625202* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNhJMOoHTPuI+cufoglj5k5xopCSTjiletXnoJ15KnCBclkNCXy9DqMn/ZeknN3AqFVQZhJfknnRkCXvgtRg7lc=#012np0005625200.localdomain,192.168.122.104,np0005625200* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDW88346W6zU6nxCpqapHtIr5nRG8Jn9LFit3r5klBfauCkmAGONb4X8IwKjo8MD9etebUVbo6aX9gBMBMSs7bSoHzsEQuMLpBDrweSbahQj+gqZ5TmQ/xvwbhws04z3/IJxapAk2xWu7khVGjvOPUE1CROkP+1LiGktQ6Xj1ar1TbLNud2Dq/R5ZalbpK0OT3+no3x0oAJT3W649tW4nmCWcNaxykPsLREsUlH2qVoceAzLEDCSde9/1TONc/URyB4acVqmEwJDHeX51bh31tpQwp/WSe0vKQ6eUw63Tmpn+dRI9xbnFhc6mgGAPcEw7cAUkM7oM6bYMSvVxYDmzMhuXUU/9i3mdMnDBkMyZ5Oed6ZSmFQIJe5k7cz3783d35ZXfl/HsYMqoZ3lmDgbeS59pQrI+BldKyv3sTnoCDahfcmzmiHssxqa7tT5KOuR444q7Nj6wJEIZMEEJEHtMlh1iSBRJZOEOaKjo7h+jV7KMe75aPRasvu9K1v0dqyG6U=#012np0005625200.localdomain,192.168.122.104,np0005625200* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKZd4BJQ7FPHukFUlQ3fRSVsRqMpZA9FFzC98e6Nz+hC#012np0005625200.localdomain,192.168.122.104,np0005625200* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJgelHBDBResuC/7QDQA12qTpLPW1xHX6eUvY/QfQ0s1DYziYEKuSHQhUQMzxPcUq9IVVPnxkoRvZdWPxsh2Cmk=#012np0005625199.localdomain,192.168.122.103,np0005625199* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDrnsozeOPJKYg9sx2Tj6QOLRhujK5RVh5RZQ3sb0pk+DbWHQKqS1YvJUg2hV4WxbxPnNUCBtJ+RZ8lVm6RLM+hc3ffe2sOMOz5upO/hTlIpBSfJpQORkiNW+XIXdDVxgE418veFd2hASFmiCmKoFSKXsvnmFU9oTEpja1plcXSqCobFMVYKlhcRo66O0ySlGOR+o3Ar2yNJQjFErEGvZLoDEa/VlA6zreYmTaIsnlUDie0gbm5teTlsCcEYkvWcTzcfOEX2kXQRQbS5qlPtGg7c+KMv5e40rE+2QOigLmOOPVGwNYuLuhb/EHT0C8hK8otW4tiXxBlSZ5ONKY6YYQOpy7krNkWRxNXzK0LfXo2bt2apDaMzebPOvuBj1YyBiLpa6/aLvS/dtGolQNPDpFivPbP/mSpat1qTs0W3/2HyBovwWSGJDW8MMYxbZJ0Z6tnuOwdrPTdkhIibfW9wxgL7EHrDYrGx5CvA2vUM4KDKRntz/cCMGE/zKacSJ48nNk=#012np0005625199.localdomain,192.168.122.103,np0005625199* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIENpQQgr9IVl8UWbQ9CANzH6ET+G2aHJkzVgu9ObE0o0#012np0005625199.localdomain,192.168.122.103,np0005625199* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJUcn4Y73wlRXKxRegM8lRt5GQ//hAORn8IqrcrC5ZJyjHCZmp+wutQeuPqPsTK4OVK+uH/93l/3Av8AKvpXG3A=#012np0005625203.localdomain,192.168.122.107,np0005625203* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtf1NXQ3EGQGdpLLLxuODKBdTGwqsiHL2QZ6zcfpGAa7EhDIxuEcLboqOGjQO0FM3u+kl2gIgKF0UsY5Vjcv4mDCMp7A7srq7TVo5lE5cCppbbXr0/PH2L/naHU3W+W83aT5RE17XPJ0Acn3W51WFBoICCCc4jjWTGmkNEgurKBJmdr0n8NeIcUWZ7Abrs/N2xzNftEFIjAPwebxgEwgCx0hMbdjTFhKbB/V7CjKaCU/UjirWMW5aDQJQEfrCM9u4NHuGaWKzJgar4/shNHaRvkCDbVrRPTCyfNebE04J/R42X3yWmvww4TMZVpRROd/u6Pgg1P2tbPGfQ0XvS0rfY6W4/VnHcyRDqxILH5BoeCAbTuVFmR0hbQu9fNbNxTP+o+na9mHEbNxbhcREnkal8+M0l11YftCRkr4132JITxe7y93gN/dwxE3nJLHLXRuRskWc3GTDT2MVU2Sj64yizD9KOM3oiMBXdPbNbgZywu3hqQvpO00GVg6QRjEJoiFc=#012np0005625203.localdomain,192.168.122.107,np0005625203* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPIEBJz4VBziYqCcr9UT9NnbvRxFLoAcnVJLavCpXqHm#012np0005625203.localdomain,192.168.122.107,np0005625203* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM9k0T2/IFyFrBAAoi3QqwBKC9bi/bemQO6MNZhrO12MSG3WZcjS1bhOFPw5LuM+f11BFCm5wNyBNY/QmALZTgE=#012np0005625204.localdomain,192.168.122.108,np0005625204* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDAo6exxFtNk/Y5qEGYenJyhnCsS7iZmCGsFaQtJElNSeTTX9a1P0P2EmjtHolRxnljCZ2X8HgWx/irhJvWLoS+dzF5l+KcyQy83+048h51mbnj7zV2uG9i8LkO0egs1uBBp5E+hauHMsuf0nIDFl45W86ZXuf+MfFEKCInhjB5gfE9tTjwmKwKhgO1DE7Vpx3OYy1FHkq0YDBCqQHuuhYPrLZPjfVv3vGOaHH/XCsxX3h8/ixsZbobD56dDBKF/8CFyC/guH8pNUhZHG0dEhz5BT8PcE2Q/M9pPttzmRQksfg9+q7lVy9eCoOVpzqfTgjE1cm5yISwuMZzaNxwjJKB54EWpfl5xxnkC14B+xdvowxpl1PcMNZ0q1fWofJF4TrJAwWCUYZf45aUV2yb5R8WavUT0pX32xmd4zFbXusoafiw2FcgnxoGz3N4ZgIxTPPmgUe13blr1SK44huXWPioaolFBo82xVVFHc+01vfLF3xvs86d6EpqpLH+yaCeUjE=#012np0005625204.localdomain,192.168.122.108,np0005625204* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDTY+/nqIDkr9+7jl3LUu4apuQeFzQYkXiSihEezHlEw#012np0005625204.localdomain,192.168.122.108,np0005625204* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPuq/q6JwPgXzS/TgJ6dhP0gZvq89Vk1r9Ou051lEnMdt+NHYUjJx2Tv1oS9A+wQXivor03/iqWU5nj5QHdvHx4=#012 create=True mode=0644 path=/tmp/ansible.gi1id5l1 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:12:47 localhost python3.9[144895]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.gi1id5l1' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 04:12:48 localhost python3.9[144989]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.gi1id5l1 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:12:49 localhost systemd[1]: session-44.scope: Deactivated successfully.
Feb 20 04:12:49 localhost systemd[1]: session-44.scope: Consumed 4.063s CPU time.
Feb 20 04:12:49 localhost systemd-logind[760]: Session 44 logged out. Waiting for processes to exit.
Feb 20 04:12:49 localhost systemd-logind[760]: Removed session 44.
Feb 20 04:12:50 localhost sshd[145004]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:12:54 localhost sshd[145006]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:12:55 localhost sshd[145008]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:12:55 localhost systemd-logind[760]: New session 45 of user zuul.
Feb 20 04:12:55 localhost systemd[1]: Started Session 45 of User zuul.
Feb 20 04:12:56 localhost python3.9[145101]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 04:12:56 localhost sshd[145152]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:12:57 localhost sshd[145153]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:12:57 localhost python3.9[145199]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Feb 20 04:12:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7834 DF PROTO=TCP SPT=40642 DPT=9100 SEQ=345040290 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF67DA40000000001030307)
Feb 20 04:12:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42645 DF PROTO=TCP SPT=47716 DPT=9105 SEQ=2122136426 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF67EC20000000001030307)
Feb 20 04:12:59 localhost python3.9[145293]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 20 04:13:01 localhost python3.9[145386]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 04:13:02 localhost python3.9[145479]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 04:13:02 localhost python3.9[145573]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 04:13:03 localhost python3.9[145668]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:13:03 localhost systemd[1]: session-45.scope: Deactivated successfully.
Feb 20 04:13:03 localhost systemd[1]: session-45.scope: Consumed 3.877s CPU time.
Feb 20 04:13:03 localhost systemd-logind[760]: Session 45 logged out. Waiting for processes to exit.
Feb 20 04:13:03 localhost systemd-logind[760]: Removed session 45.
Feb 20 04:13:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60952 DF PROTO=TCP SPT=35140 DPT=9101 SEQ=557924814 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF69C390000000001030307)
Feb 20 04:13:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60953 DF PROTO=TCP SPT=35140 DPT=9101 SEQ=557924814 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF6A04D0000000001030307)
Feb 20 04:13:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60954 DF PROTO=TCP SPT=35140 DPT=9101 SEQ=557924814 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF6A84D0000000001030307)
Feb 20 04:13:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1384 DF PROTO=TCP SPT=41274 DPT=9102 SEQ=1310697318 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF6A8DF0000000001030307)
Feb 20 04:13:09 localhost sshd[145684]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:13:09 localhost systemd-logind[760]: New session 46 of user zuul.
Feb 20 04:13:09 localhost systemd[1]: Started Session 46 of User zuul.
Feb 20 04:13:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1385 DF PROTO=TCP SPT=41274 DPT=9102 SEQ=1310697318 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF6ACCD0000000001030307)
Feb 20 04:13:10 localhost python3.9[145777]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 04:13:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1386 DF PROTO=TCP SPT=41274 DPT=9102 SEQ=1310697318 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF6B4CD0000000001030307)
Feb 20 04:13:12 localhost python3.9[145873]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 20 04:13:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60955 DF PROTO=TCP SPT=35140 DPT=9101 SEQ=557924814 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF6B80D0000000001030307)
Feb 20 04:13:13 localhost python3.9[145927]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 20 04:13:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1387 DF PROTO=TCP SPT=41274 DPT=9102 SEQ=1310697318 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF6C48D0000000001030307)
Feb 20 04:13:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12772 DF PROTO=TCP SPT=49012 DPT=9882 SEQ=1029856043 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF6C4F00000000001030307)
Feb 20 04:13:17 localhost python3.9[146019]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 04:13:18 localhost python3.9[146112]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/reboot_required/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:13:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12774 DF PROTO=TCP SPT=49012 DPT=9882 SEQ=1029856043 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF6D10D0000000001030307)
Feb 20 04:13:19 localhost python3.9[146204]: ansible-ansible.builtin.file Invoked with mode=0600 path=/var/lib/openstack/reboot_required/needs_restarting state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:13:19 localhost python3.9[146296]: ansible-ansible.builtin.lineinfile Invoked with dest=/var/lib/openstack/reboot_required/needs_restarting line=Not root, Subscription Management repositories not updated#012Core libraries or services have been updated since boot-up:#012 * systemd#012#012Reboot is required to fully utilize these updates.#012More information: https://access.redhat.com/solutions/27943 path=/var/lib/openstack/reboot_required/needs_restarting state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:13:20 localhost python3.9[146386]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 20 04:13:21 localhost python3.9[146476]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 04:13:22 localhost python3.9[146568]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 04:13:22 localhost systemd[1]: session-46.scope: Deactivated successfully.
Feb 20 04:13:22 localhost systemd[1]: session-46.scope: Consumed 8.850s CPU time. Feb 20 04:13:22 localhost systemd-logind[760]: Session 46 logged out. Waiting for processes to exit. Feb 20 04:13:22 localhost systemd-logind[760]: Removed session 46. Feb 20 04:13:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12775 DF PROTO=TCP SPT=49012 DPT=9882 SEQ=1029856043 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF6E0CD0000000001030307) Feb 20 04:13:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4500 DF PROTO=TCP SPT=34870 DPT=9100 SEQ=1197222464 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF6F2D30000000001030307) Feb 20 04:13:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56389 DF PROTO=TCP SPT=40056 DPT=9105 SEQ=1507239910 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF6F3F20000000001030307) Feb 20 04:13:28 localhost sshd[146585]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:13:28 localhost systemd-logind[760]: New session 47 of user zuul. Feb 20 04:13:28 localhost systemd[1]: Started Session 47 of User zuul. 
Feb 20 04:13:29 localhost python3.9[146734]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 20 04:13:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4502 DF PROTO=TCP SPT=34870 DPT=9100 SEQ=1197222464 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF6FECD0000000001030307) Feb 20 04:13:31 localhost python3.9[146850]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 20 04:13:32 localhost python3.9[146942]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:13:33 localhost python3.9[147015]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771578812.024687-177-166269650609537/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=cb143134597e8d09980d1dcc1949f9a4232e36a1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:13:33 localhost python3.9[147107]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root 
path=/var/lib/openstack/cacerts/neutron-sriov setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 20 04:13:34 localhost python3.9[147199]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:13:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4503 DF PROTO=TCP SPT=34870 DPT=9100 SEQ=1197222464 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF70E8D0000000001030307) Feb 20 04:13:35 localhost python3.9[147272]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771578814.0378313-251-245288362150440/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=cb143134597e8d09980d1dcc1949f9a4232e36a1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:13:35 localhost python3.9[147364]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-dhcp setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None 
serole=None selevel=None attributes=None Feb 20 04:13:36 localhost python3.9[147456]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:13:36 localhost python3.9[147529]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771578815.8714974-322-7177325925732/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=cb143134597e8d09980d1dcc1949f9a4232e36a1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:13:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60957 DF PROTO=TCP SPT=35140 DPT=9101 SEQ=557924814 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF7180D0000000001030307) Feb 20 04:13:37 localhost python3.9[147621]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 20 04:13:38 localhost python3.9[147713]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:13:38 localhost 
python3.9[147786]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771578817.6588235-392-159749171631902/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=cb143134597e8d09980d1dcc1949f9a4232e36a1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:13:39 localhost python3.9[147878]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 20 04:13:39 localhost python3.9[147970]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:13:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62678 DF PROTO=TCP SPT=53272 DPT=9102 SEQ=4041240428 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF7220D0000000001030307) Feb 20 04:13:40 localhost python3.9[148043]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771578819.4202611-464-162801740103814/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=cb143134597e8d09980d1dcc1949f9a4232e36a1 
backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:13:41 localhost python3.9[148135]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 20 04:13:41 localhost python3.9[148227]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:13:42 localhost python3.9[148300]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771578821.2441413-538-85133067464319/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=cb143134597e8d09980d1dcc1949f9a4232e36a1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:13:42 localhost python3.9[148392]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 20 
04:13:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4504 DF PROTO=TCP SPT=34870 DPT=9100 SEQ=1197222464 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF72E0D0000000001030307) Feb 20 04:13:43 localhost python3.9[148484]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:13:43 localhost python3.9[148557]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771578823.0147755-608-243537052139944/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=cb143134597e8d09980d1dcc1949f9a4232e36a1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:13:43 localhost sshd[148559]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:13:44 localhost python3.9[148651]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 20 04:13:45 localhost python3.9[148743]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True 
get_selinux_context=False Feb 20 04:13:45 localhost python3.9[148816]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771578824.710896-678-161613131320196/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=cb143134597e8d09980d1dcc1949f9a4232e36a1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:13:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62680 DF PROTO=TCP SPT=53272 DPT=9102 SEQ=4041240428 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF739CD0000000001030307) Feb 20 04:13:46 localhost systemd-logind[760]: Session 47 logged out. Waiting for processes to exit. Feb 20 04:13:46 localhost systemd[1]: session-47.scope: Deactivated successfully. Feb 20 04:13:46 localhost systemd[1]: session-47.scope: Consumed 11.379s CPU time. Feb 20 04:13:46 localhost systemd-logind[760]: Removed session 47. Feb 20 04:13:48 localhost chronyd[138840]: Selected source 216.128.178.20 (pool.ntp.org) Feb 20 04:13:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54752 DF PROTO=TCP SPT=42420 DPT=9882 SEQ=3099350123 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF7460E0000000001030307) Feb 20 04:13:51 localhost sshd[148831]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:13:51 localhost systemd-logind[760]: New session 48 of user zuul. Feb 20 04:13:51 localhost systemd[1]: Started Session 48 of User zuul. 
Feb 20 04:13:52 localhost python3.9[148926]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:13:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54753 DF PROTO=TCP SPT=42420 DPT=9882 SEQ=3099350123 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF755CD0000000001030307) Feb 20 04:13:53 localhost python3.9[149018]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:13:54 localhost python3.9[149091]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1771578832.8129773-57-198855010717262/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=8e2004121a34320613d32710ae37702da8d027e6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:13:54 localhost python3.9[149183]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:13:55 localhost python3.9[149256]: 
ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771578834.2310936-57-104873469374970/.source.conf _original_basename=ceph.conf follow=False checksum=936d449f31af670125791fe297b02d275b2ba4b7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:13:55 localhost systemd[1]: session-48.scope: Deactivated successfully. Feb 20 04:13:55 localhost systemd[1]: session-48.scope: Consumed 2.238s CPU time. Feb 20 04:13:55 localhost systemd-logind[760]: Session 48 logged out. Waiting for processes to exit. Feb 20 04:13:55 localhost systemd-logind[760]: Removed session 48. Feb 20 04:13:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35199 DF PROTO=TCP SPT=41396 DPT=9100 SEQ=3499628868 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF768030000000001030307) Feb 20 04:13:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19251 DF PROTO=TCP SPT=45460 DPT=9105 SEQ=862970179 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF769210000000001030307) Feb 20 04:14:00 localhost sshd[149271]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:14:00 localhost systemd-logind[760]: New session 49 of user zuul. 
Feb 20 04:14:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35201 DF PROTO=TCP SPT=41396 DPT=9100 SEQ=3499628868 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF7740E0000000001030307) Feb 20 04:14:00 localhost systemd[1]: Started Session 49 of User zuul. Feb 20 04:14:01 localhost python3.9[149364]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 20 04:14:03 localhost python3.9[149460]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 20 04:14:03 localhost python3.9[149552]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Feb 20 04:14:04 localhost python3.9[149642]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 20 04:14:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35202 DF PROTO=TCP SPT=41396 DPT=9100 SEQ=3499628868 ACK=0 WINDOW=32640 
RES=0x00 SYN URGP=0 OPT (020405500402080A3AF783CD0000000001030307) Feb 20 04:14:05 localhost python3.9[149734]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False Feb 20 04:14:06 localhost python3.9[149826]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Feb 20 04:14:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52799 DF PROTO=TCP SPT=55592 DPT=9101 SEQ=2427456803 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF78E0D0000000001030307) Feb 20 04:14:07 localhost python3.9[149880]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Feb 20 04:14:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29043 DF PROTO=TCP SPT=42960 DPT=9102 SEQ=2626228502 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF7974D0000000001030307) Feb 20 04:14:12 localhost python3.9[149974]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Feb 20 04:14:13 localhost kernel: DROPPING: IN=br-ex OUT= 
MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35203 DF PROTO=TCP SPT=41396 DPT=9100 SEQ=3499628868 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF7A40D0000000001030307) Feb 20 04:14:13 localhost python3[150069]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012 rule:#012 proto: udp#012 dport: 4789#012- rule_name: 119 neutron geneve networks#012 rule:#012 proto: udp#012 dport: 6081#012 state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012 rule:#012 proto: udp#012 dport: 6081#012 table: raw#012 chain: OUTPUT#012 jump: NOTRACK#012 action: append#012 state: []#012- rule_name: 121 neutron geneve networks no conntrack#012 rule:#012 proto: udp#012 dport: 6081#012 table: raw#012 chain: PREROUTING#012 jump: NOTRACK#012 action: append#012 state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present Feb 20 04:14:14 localhost python3.9[150161]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:14:14 localhost python3.9[150253]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:14:15 localhost python3.9[150301]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file 
path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:14:15 localhost python3.9[150393]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:14:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29045 DF PROTO=TCP SPT=42960 DPT=9102 SEQ=2626228502 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF7AF0E0000000001030307) Feb 20 04:14:16 localhost python3.9[150441]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.fyoghte9 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:14:16 localhost python3.9[150533]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:14:17 localhost python3.9[150581]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False 
follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:14:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21191 DF PROTO=TCP SPT=60622 DPT=9882 SEQ=2000339313 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF7BB4D0000000001030307) Feb 20 04:14:19 localhost python3.9[150673]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 04:14:20 localhost python3[150766]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall Feb 20 04:14:20 localhost python3.9[150858]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:14:21 localhost python3.9[150933]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771578860.284435-426-47695195484437/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:14:22 localhost python3.9[151025]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True 
get_attributes=True get_selinux_context=False Feb 20 04:14:22 localhost sshd[151068]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:14:22 localhost python3.9[151102]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771578861.586058-471-222491025294992/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:14:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21192 DF PROTO=TCP SPT=60622 DPT=9882 SEQ=2000339313 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF7CB0D0000000001030307) Feb 20 04:14:23 localhost python3.9[151194]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:14:23 localhost python3.9[151269]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771578862.8256218-516-151697500588615/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:14:24 localhost python3.9[151361]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False 
checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:14:24 localhost python3.9[151436]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771578863.9739811-561-146271435865849/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:14:25 localhost sshd[151510]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:14:25 localhost python3.9[151530]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:14:26 localhost python3.9[151605]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771578865.1957633-606-4242343611661/.source.nft follow=False _original_basename=ruleset.j2 checksum=eb691bdb7d792c5f8ff0d719e807fe1c95b09438 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:14:27 localhost python3.9[151697]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:14:27 localhost 
python3.9[151789]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 04:14:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23851 DF PROTO=TCP SPT=59280 DPT=9100 SEQ=3999274782 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF7DD350000000001030307) Feb 20 04:14:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55097 DF PROTO=TCP SPT=58464 DPT=9105 SEQ=1776728432 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF7DE560000000001030307) Feb 20 04:14:28 localhost python3.9[151884]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:14:29 localhost python3.9[151976]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None 
chdir=None executable=None creates=None removes=None stdin=None Feb 20 04:14:29 localhost python3.9[152069]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 20 04:14:30 localhost python3.9[152193]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 04:14:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23853 DF PROTO=TCP SPT=59280 DPT=9100 SEQ=3999274782 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF7E94D0000000001030307) Feb 20 04:14:31 localhost podman[152315]: 2026-02-20 09:14:31.126189782 +0000 UTC m=+0.093128351 container exec 17bcd3d9a6436ea04160ed4c4e04d839e51c8678402dc4e38301f79a98b3dc30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, CEPH_POINT_RELEASE=, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, release=1770267347, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, 
build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, RELEASE=main, GIT_BRANCH=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , io.buildah.version=1.42.2, vcs-type=git) Feb 20 04:14:31 localhost podman[152315]: 2026-02-20 09:14:31.224859061 +0000 UTC m=+0.191797620 container exec_died 17bcd3d9a6436ea04160ed4c4e04d839e51c8678402dc4e38301f79a98b3dc30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, vcs-type=git, io.buildah.version=1.42.2, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, name=rhceph, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1770267347, vendor=Red Hat, Inc., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , ceph=True, CEPH_POINT_RELEASE=, build-date=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Feb 20 04:14:31 localhost python3.9[152379]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:14:32 localhost python3.9[152574]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 20 04:14:33 localhost python3.9[152685]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=np0005625202.localdomain external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:1e:0a:e8:77:41:0b" external_ids:ovn-encap-ip=172.19.0.106 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=tcp:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 04:14:33 localhost ovs-vsctl[152686]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . 
external_ids:hostname=np0005625202.localdomain external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:1e:0a:e8:77:41:0b external_ids:ovn-encap-ip=172.19.0.106 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=tcp:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch Feb 20 04:14:34 localhost python3.9[152778]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 04:14:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23854 DF PROTO=TCP SPT=59280 DPT=9100 SEQ=3999274782 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF7F90D0000000001030307) Feb 20 04:14:35 localhost python3.9[152871]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 20 04:14:36 localhost python3.9[152965]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 20 04:14:37 localhost kernel: DROPPING: IN=br-ex 
OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24273 DF PROTO=TCP SPT=57900 DPT=9101 SEQ=1044554279 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF8020D0000000001030307) Feb 20 04:14:37 localhost python3.9[153057]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:14:38 localhost python3.9[153105]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 20 04:14:38 localhost python3.9[153197]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:14:39 localhost python3.9[153245]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 20 04:14:39 localhost python3.9[153337]: ansible-ansible.builtin.file Invoked with 
mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:14:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31309 DF PROTO=TCP SPT=38674 DPT=9102 SEQ=2176331328 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF80C8D0000000001030307) Feb 20 04:14:40 localhost python3.9[153429]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:14:40 localhost python3.9[153478]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:14:41 localhost python3.9[153570]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:14:41 localhost python3.9[153618]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset 
_original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:14:42 localhost sshd[153711]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:14:42 localhost python3.9[153710]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 04:14:42 localhost systemd[1]: Reloading. Feb 20 04:14:42 localhost systemd-rc-local-generator[153735]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 04:14:42 localhost systemd-sysv-generator[153741]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 04:14:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 20 04:14:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62683 DF PROTO=TCP SPT=53272 DPT=9102 SEQ=4041240428 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF8180D0000000001030307) Feb 20 04:14:44 localhost python3.9[153841]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:14:44 localhost python3.9[153889]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:14:45 localhost python3.9[153981]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:14:45 localhost systemd[1]: Starting dnf makecache... 
Feb 20 04:14:45 localhost python3.9[154029]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:14:45 localhost dnf[154030]: Updating Subscription Management repositories. Feb 20 04:14:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31311 DF PROTO=TCP SPT=38674 DPT=9102 SEQ=2176331328 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF8244D0000000001030307) Feb 20 04:14:47 localhost python3.9[154122]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 04:14:47 localhost systemd[1]: Reloading. Feb 20 04:14:47 localhost dnf[154030]: Metadata cache refreshed recently. Feb 20 04:14:47 localhost systemd-sysv-generator[154149]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 04:14:47 localhost systemd-rc-local-generator[154145]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 04:14:47 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 20 04:14:47 localhost systemd[1]: dnf-makecache.service: Deactivated successfully. Feb 20 04:14:47 localhost systemd[1]: Finished dnf makecache. Feb 20 04:14:47 localhost systemd[1]: dnf-makecache.service: Consumed 2.156s CPU time. Feb 20 04:14:47 localhost systemd[1]: Starting Create netns directory... Feb 20 04:14:47 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Feb 20 04:14:47 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Feb 20 04:14:47 localhost systemd[1]: Finished Create netns directory. Feb 20 04:14:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39233 DF PROTO=TCP SPT=35128 DPT=9882 SEQ=553485484 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF8308D0000000001030307) Feb 20 04:14:49 localhost python3.9[154257]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 20 04:14:49 localhost python3.9[154349]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:14:49 localhost sshd[154350]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:14:50 localhost python3.9[154424]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t 
src=/home/zuul/.ansible/tmp/ansible-tmp-1771578889.4568355-1341-79451637810517/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Feb 20 04:14:51 localhost python3.9[154516]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:14:52 localhost python3.9[154608]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 20 04:14:52 localhost python3.9[154700]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:14:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39234 DF PROTO=TCP SPT=35128 DPT=9882 SEQ=553485484 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF8404E0000000001030307) Feb 20 04:14:53 localhost python3.9[154775]: ansible-ansible.legacy.copy Invoked with 
dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1771578892.3759778-1440-212540457873445/.source.json _original_basename=.9kf57fuu follow=False checksum=38f75f59f5c2ef6b5da12297bfd31cd1e97012ac backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:14:54 localhost python3.9[154865]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:14:56 localhost python3.9[155118]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False Feb 20 04:14:57 localhost python3.9[155210]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack Feb 20 04:14:57 localhost sshd[155211]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:14:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27232 DF PROTO=TCP SPT=57782 DPT=9100 SEQ=291586363 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF852630000000001030307) Feb 20 04:14:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10054 DF PROTO=TCP SPT=40946 DPT=9105 SEQ=3537555802 ACK=0 
WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF853810000000001030307) Feb 20 04:14:58 localhost python3[155304]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False Feb 20 04:14:59 localhost python3[155304]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "9f8c6308802db66f6c1100257e3fa9593740e85d82f038b4185cf756493dc94e",#012 "Digest": "sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2026-01-30T06:38:56.623500445Z",#012 "Config": {#012 "User": "root",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20260127",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "b85d0548925081ae8c6bdd697658cec4",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 346422728,#012 "VirtualSize": 346422728,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": 
{#012 "LowerDir": "/var/lib/containers/storage/overlay/5f30d5cd30916d88e24f21a5c8313738088a285d6d2d0efec09cc705e86eb786/diff:/var/lib/containers/storage/overlay/1ad843ea4b31b05bcf49ccd6faa74bd0d6976ffabe60466fd78caf7ec41bf4ac/diff:/var/lib/containers/storage/overlay/57c9a356b8a6d9095c1e6bfd1bb5d3b87c9d1b944c2c5d8a1da6e61dd690c595/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/ba6f0be74a40197166410c33403600ee466dbd9d2ddae7d7f49f78c9646720b2/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/ba6f0be74a40197166410c33403600ee466dbd9d2ddae7d7f49f78c9646720b2/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:57c9a356b8a6d9095c1e6bfd1bb5d3b87c9d1b944c2c5d8a1da6e61dd690c595",#012 "sha256:315008a247098d7a6218ae8aaacc68c9c19036e3778f3bb6313e5d0200cfa613",#012 "sha256:033e0289d512b27a678c3feb7195acb9c5f2fbb27c9b2d8c8b5b5f6156f0d11f",#012 "sha256:f848a534c5dfe59c31c3da34c3d2466bdea7e8da7def4225acdd3ffef1544d2f"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20260127",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "b85d0548925081ae8c6bdd697658cec4",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "root",#012 "History": [#012 {#012 "created": "2026-01-28T05:56:51.126388624Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:54935d5b0598cdb1451aeae3c8627aade8d55dcef2e876b35185c8e36be64256 in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-28T05:56:51.126459235Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" 
org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20260127\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-28T05:56:53.726938221Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2026-01-30T06:10:18.890429494Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:18.890534417Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:18.890553228Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:18.890570688Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:18.890616649Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:18.890659121Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:19.232761948Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:52.670543613Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini 
--set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:55.650316471Z",#012 "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util- Feb 20 04:14:59 localhost podman[155355]: 2026-02-20 09:14:59.129359253 +0000 UTC m=+0.088827297 container remove 5f8758f21b5e6ca7b12bea640e402350935a37594135d68b5b1882d81bccd367 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, build-date=2026-01-12T22:36:40Z, org.opencontainers.image.created=2026-01-12T22:36:40Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20260112.1, config_id=tripleo_step4, container_name=ovn_controller, url=https://www.redhat.com, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Feb 20 04:14:59 localhost python3[155304]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force ovn_controller Feb 20 04:14:59 localhost podman[155370]: Feb 20 04:14:59 localhost podman[155370]: 2026-02-20 09:14:59.233899815 +0000 UTC m=+0.086900529 container create 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2) Feb 20 04:14:59 localhost podman[155370]: 2026-02-20 09:14:59.192328527 +0000 UTC m=+0.045329291 image pull quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified Feb 20 04:14:59 localhost python3[155304]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host 
--privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified Feb 20 04:14:59 localhost python3.9[155498]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 20 04:15:00 localhost python3.9[155592]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:15:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27234 DF PROTO=TCP SPT=57782 DPT=9100 SEQ=291586363 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF85E4D0000000001030307) Feb 20 04:15:01 localhost python3.9[155638]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 20 04:15:01 localhost python3.9[155729]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771578901.2036345-1674-211996840186472/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 
owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:15:02 localhost python3.9[155775]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Feb 20 04:15:02 localhost systemd[1]: Reloading. Feb 20 04:15:02 localhost sshd[155777]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:15:02 localhost systemd-sysv-generator[155806]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 04:15:02 localhost systemd-rc-local-generator[155801]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 04:15:02 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 04:15:03 localhost python3.9[155859]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 04:15:03 localhost systemd[1]: Reloading. Feb 20 04:15:03 localhost systemd-sysv-generator[155892]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 04:15:03 localhost systemd-rc-local-generator[155884]: /etc/rc.d/rc.local is not marked executable, skipping. 
Feb 20 04:15:03 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 04:15:03 localhost systemd[1]: Starting ovn_controller container... Feb 20 04:15:03 localhost systemd[1]: Started libcrun container. Feb 20 04:15:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1165a4e665fd26b9dba488d329482a300e4e6b23562832b41f7251610b375a49/merged/run/ovn supports timestamps until 2038 (0x7fffffff) Feb 20 04:15:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 04:15:03 localhost podman[155901]: 2026-02-20 09:15:03.786060718 +0000 UTC m=+0.152078561 container init 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:15:03 localhost ovn_controller[155916]: + sudo -E kolla_set_configs Feb 20 04:15:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 04:15:03 localhost podman[155901]: 2026-02-20 09:15:03.835892737 +0000 UTC m=+0.201910580 container start 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:15:03 localhost edpm-start-podman-container[155901]: ovn_controller Feb 20 04:15:03 localhost systemd[1]: Created slice User Slice of UID 0. Feb 20 04:15:03 localhost systemd[1]: Starting User Runtime Directory /run/user/0... Feb 20 04:15:03 localhost systemd[1]: Finished User Runtime Directory /run/user/0. Feb 20 04:15:03 localhost systemd[1]: Starting User Manager for UID 0... Feb 20 04:15:03 localhost podman[155924]: 2026-02-20 09:15:03.9399066 +0000 UTC m=+0.096966094 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, 
container_name=ovn_controller) Feb 20 04:15:03 localhost podman[155924]: 2026-02-20 09:15:03.956777455 +0000 UTC m=+0.113836929 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:15:03 localhost podman[155924]: unhealthy Feb 20 04:15:03 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Main process exited, code=exited, status=1/FAILURE Feb 20 04:15:03 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Failed with result 'exit-code'. 
Feb 20 04:15:03 localhost systemd-journald[48906]: Field hash table of /run/log/journal/01f46965e72fd8a157841feaa66c8d52/system.journal has a fill level at 75.4 (251 of 333 items), suggesting rotation. Feb 20 04:15:03 localhost systemd-journald[48906]: /run/log/journal/01f46965e72fd8a157841feaa66c8d52/system.journal: Journal header limits reached or header out-of-date, rotating. Feb 20 04:15:03 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Feb 20 04:15:03 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Feb 20 04:15:04 localhost edpm-start-podman-container[155900]: Creating additional drop-in dependency for "ovn_controller" (76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383) Feb 20 04:15:04 localhost systemd[1]: Reloading. Feb 20 04:15:04 localhost systemd[155946]: Queued start job for default target Main User Target. Feb 20 04:15:04 localhost systemd[155946]: Created slice User Application Slice. Feb 20 04:15:04 localhost systemd[155946]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system). Feb 20 04:15:04 localhost systemd[155946]: Started Daily Cleanup of User's Temporary Directories. Feb 20 04:15:04 localhost systemd[155946]: Reached target Paths. Feb 20 04:15:04 localhost systemd[155946]: Reached target Timers. Feb 20 04:15:04 localhost systemd[155946]: Starting D-Bus User Message Bus Socket... Feb 20 04:15:04 localhost systemd[155946]: Starting Create User's Volatile Files and Directories... Feb 20 04:15:04 localhost systemd[155946]: Listening on D-Bus User Message Bus Socket. Feb 20 04:15:04 localhost systemd[155946]: Reached target Sockets. Feb 20 04:15:04 localhost systemd[155946]: Finished Create User's Volatile Files and Directories. 
Feb 20 04:15:04 localhost systemd[155946]: Reached target Basic System. Feb 20 04:15:04 localhost systemd[155946]: Reached target Main User Target. Feb 20 04:15:04 localhost systemd[155946]: Startup finished in 130ms. Feb 20 04:15:04 localhost systemd-rc-local-generator[156000]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 04:15:04 localhost systemd-sysv-generator[156004]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 04:15:04 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 04:15:04 localhost systemd[1]: tmp-crun.xTlNlM.mount: Deactivated successfully. Feb 20 04:15:04 localhost systemd[1]: Started User Manager for UID 0. Feb 20 04:15:04 localhost systemd[1]: Started ovn_controller container. Feb 20 04:15:04 localhost systemd[1]: Started Session c11 of User root. Feb 20 04:15:04 localhost ovn_controller[155916]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Feb 20 04:15:04 localhost ovn_controller[155916]: INFO:__main__:Validating config file Feb 20 04:15:04 localhost ovn_controller[155916]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Feb 20 04:15:04 localhost ovn_controller[155916]: INFO:__main__:Writing out command to execute Feb 20 04:15:04 localhost systemd[1]: session-c11.scope: Deactivated successfully. 
Feb 20 04:15:04 localhost ovn_controller[155916]: ++ cat /run_command Feb 20 04:15:04 localhost ovn_controller[155916]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock ' Feb 20 04:15:04 localhost ovn_controller[155916]: + ARGS= Feb 20 04:15:04 localhost ovn_controller[155916]: + sudo kolla_copy_cacerts Feb 20 04:15:04 localhost systemd[1]: Started Session c12 of User root. Feb 20 04:15:04 localhost systemd[1]: session-c12.scope: Deactivated successfully. Feb 20 04:15:04 localhost ovn_controller[155916]: + [[ ! -n '' ]] Feb 20 04:15:04 localhost ovn_controller[155916]: + . kolla_extend_start Feb 20 04:15:04 localhost ovn_controller[155916]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock '\''' Feb 20 04:15:04 localhost ovn_controller[155916]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock ' Feb 20 04:15:04 localhost ovn_controller[155916]: + umask 0022 Feb 20 04:15:04 localhost ovn_controller[155916]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock Feb 20 04:15:04 localhost ovn_controller[155916]: 2026-02-20T09:15:04Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting... Feb 20 04:15:04 localhost ovn_controller[155916]: 2026-02-20T09:15:04Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected Feb 20 04:15:04 localhost ovn_controller[155916]: 2026-02-20T09:15:04Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8] Feb 20 04:15:04 localhost ovn_controller[155916]: 2026-02-20T09:15:04Z|00004|main|INFO|OVS IDL reconnected, force recompute. Feb 20 04:15:04 localhost ovn_controller[155916]: 2026-02-20T09:15:04Z|00005|reconnect|INFO|tcp:ovsdbserver-sb.openstack.svc:6642: connecting... Feb 20 04:15:04 localhost ovn_controller[155916]: 2026-02-20T09:15:04Z|00006|main|INFO|OVNSB IDL reconnected, force recompute. 
Feb 20 04:15:04 localhost ovn_controller[155916]: 2026-02-20T09:15:04Z|00007|reconnect|INFO|tcp:ovsdbserver-sb.openstack.svc:6642: connected Feb 20 04:15:04 localhost ovn_controller[155916]: 2026-02-20T09:15:04Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch Feb 20 04:15:04 localhost ovn_controller[155916]: 2026-02-20T09:15:04Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting... Feb 20 04:15:04 localhost ovn_controller[155916]: 2026-02-20T09:15:04Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported Feb 20 04:15:04 localhost ovn_controller[155916]: 2026-02-20T09:15:04Z|00011|features|INFO|OVS Feature: ct_flush, state: supported Feb 20 04:15:04 localhost ovn_controller[155916]: 2026-02-20T09:15:04Z|00012|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting... Feb 20 04:15:04 localhost ovn_controller[155916]: 2026-02-20T09:15:04Z|00013|main|INFO|OVS feature set changed, force recompute. Feb 20 04:15:04 localhost ovn_controller[155916]: 2026-02-20T09:15:04Z|00014|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch Feb 20 04:15:04 localhost ovn_controller[155916]: 2026-02-20T09:15:04Z|00015|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting... Feb 20 04:15:04 localhost ovn_controller[155916]: 2026-02-20T09:15:04Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected Feb 20 04:15:04 localhost ovn_controller[155916]: 2026-02-20T09:15:04Z|00017|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms) Feb 20 04:15:04 localhost ovn_controller[155916]: 2026-02-20T09:15:04Z|00018|main|INFO|OVS OpenFlow connection reconnected,force recompute. 
Feb 20 04:15:04 localhost ovn_controller[155916]: 2026-02-20T09:15:04Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected Feb 20 04:15:04 localhost ovn_controller[155916]: 2026-02-20T09:15:04Z|00020|reconnect|INFO|unix:/run/openvswitch/db.sock: connected Feb 20 04:15:04 localhost ovn_controller[155916]: 2026-02-20T09:15:04Z|00021|main|INFO|OVS feature set changed, force recompute. Feb 20 04:15:04 localhost ovn_controller[155916]: 2026-02-20T09:15:04Z|00022|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4 Feb 20 04:15:04 localhost ovn_controller[155916]: 2026-02-20T09:15:04Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch Feb 20 04:15:04 localhost ovn_controller[155916]: 2026-02-20T09:15:04Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting... Feb 20 04:15:04 localhost ovn_controller[155916]: 2026-02-20T09:15:04Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch Feb 20 04:15:04 localhost ovn_controller[155916]: 2026-02-20T09:15:04Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting... 
Feb 20 04:15:04 localhost ovn_controller[155916]: 2026-02-20T09:15:04Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected Feb 20 04:15:04 localhost ovn_controller[155916]: 2026-02-20T09:15:04Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected Feb 20 04:15:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27235 DF PROTO=TCP SPT=57782 DPT=9100 SEQ=291586363 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF86E0D0000000001030307) Feb 20 04:15:05 localhost python3.9[156112]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml Feb 20 04:15:06 localhost python3.9[156204]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:15:06 localhost python3.9[156277]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771578905.7722352-1809-103126307595247/.source.yaml _original_basename=.dh2ga75h follow=False checksum=035aea7be6ab20b22f84818c544954f904d1fea8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:15:07 localhost python3.9[156369]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . 
other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 04:15:07 localhost ovs-vsctl[156370]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload Feb 20 04:15:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32350 DF PROTO=TCP SPT=34922 DPT=9101 SEQ=2591065823 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF8780E0000000001030307) Feb 20 04:15:08 localhost python3.9[156462]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 04:15:08 localhost ovs-vsctl[156464]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids Feb 20 04:15:09 localhost python3.9[156557]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 04:15:09 localhost ovs-vsctl[156558]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options Feb 20 04:15:09 localhost systemd[1]: session-49.scope: Deactivated successfully. Feb 20 04:15:09 localhost systemd[1]: session-49.scope: Consumed 40.653s CPU time. Feb 20 04:15:09 localhost systemd-logind[760]: Session 49 logged out. Waiting for processes to exit. Feb 20 04:15:09 localhost systemd-logind[760]: Removed session 49. 
Feb 20 04:15:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6929 DF PROTO=TCP SPT=46924 DPT=9102 SEQ=1580161647 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF8818D0000000001030307) Feb 20 04:15:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29048 DF PROTO=TCP SPT=42960 DPT=9102 SEQ=2626228502 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF88E0D0000000001030307) Feb 20 04:15:14 localhost systemd[1]: Stopping User Manager for UID 0... Feb 20 04:15:14 localhost systemd[155946]: Activating special unit Exit the Session... Feb 20 04:15:14 localhost systemd[155946]: Stopped target Main User Target. Feb 20 04:15:14 localhost systemd[155946]: Stopped target Basic System. Feb 20 04:15:14 localhost systemd[155946]: Stopped target Paths. Feb 20 04:15:14 localhost systemd[155946]: Stopped target Sockets. Feb 20 04:15:14 localhost systemd[155946]: Stopped target Timers. Feb 20 04:15:14 localhost systemd[155946]: Stopped Daily Cleanup of User's Temporary Directories. Feb 20 04:15:14 localhost systemd[155946]: Closed D-Bus User Message Bus Socket. Feb 20 04:15:14 localhost systemd[155946]: Stopped Create User's Volatile Files and Directories. Feb 20 04:15:14 localhost systemd[155946]: Removed slice User Application Slice. Feb 20 04:15:14 localhost systemd[155946]: Reached target Shutdown. Feb 20 04:15:14 localhost systemd[155946]: Finished Exit the Session. Feb 20 04:15:14 localhost systemd[155946]: Reached target Exit the Session. Feb 20 04:15:14 localhost systemd[1]: user@0.service: Deactivated successfully. Feb 20 04:15:14 localhost systemd[1]: Stopped User Manager for UID 0. 
Feb 20 04:15:14 localhost sshd[156574]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:15:14 localhost systemd[1]: Stopping User Runtime Directory /run/user/0... Feb 20 04:15:14 localhost systemd[1]: run-user-0.mount: Deactivated successfully. Feb 20 04:15:14 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully. Feb 20 04:15:14 localhost systemd[1]: Stopped User Runtime Directory /run/user/0. Feb 20 04:15:14 localhost systemd[1]: Removed slice User Slice of UID 0. Feb 20 04:15:14 localhost systemd-logind[760]: New session 51 of user zuul. Feb 20 04:15:14 localhost systemd[1]: Started Session 51 of User zuul. Feb 20 04:15:15 localhost python3.9[156670]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 20 04:15:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6931 DF PROTO=TCP SPT=46924 DPT=9102 SEQ=1580161647 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF8994E0000000001030307) Feb 20 04:15:17 localhost python3.9[156766]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Feb 20 04:15:17 localhost python3.9[156858]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None 
src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 20 04:15:18 localhost python3.9[156950]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 20 04:15:18 localhost python3.9[157042]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 20 04:15:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12864 DF PROTO=TCP SPT=60474 DPT=9882 SEQ=3983659692 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF8A5CD0000000001030307) Feb 20 04:15:19 localhost python3.9[157134]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 20 04:15:20 localhost python3.9[157224]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 
filter=[] fact_path=/etc/ansible/facts.d Feb 20 04:15:20 localhost sshd[157284]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:15:20 localhost python3.9[157318]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False Feb 20 04:15:21 localhost python3.9[157408]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:15:22 localhost python3.9[157482]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771578921.304162-213-84203847233168/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 20 04:15:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12865 DF PROTO=TCP SPT=60474 DPT=9882 SEQ=3983659692 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF8B58D0000000001030307) Feb 20 04:15:23 localhost python3.9[157572]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:15:23 localhost python3.9[157645]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771578922.9660816-258-189415973771239/.source follow=False 
_original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 20 04:15:24 localhost python3.9[157737]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Feb 20 04:15:25 localhost python3.9[157791]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Feb 20 04:15:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35358 DF PROTO=TCP SPT=50018 DPT=9100 SEQ=3768463212 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF8C7930000000001030307) Feb 20 04:15:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11504 DF PROTO=TCP SPT=41988 DPT=9105 SEQ=558530825 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF8C8B10000000001030307) Feb 20 04:15:30 localhost python3.9[157885]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None 
Feb 20 04:15:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35360 DF PROTO=TCP SPT=50018 DPT=9100 SEQ=3768463212 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF8D38E0000000001030307) Feb 20 04:15:32 localhost python3.9[157978]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:15:32 localhost python3.9[158049]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771578931.6426413-369-226500530187244/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 20 04:15:33 localhost python3.9[158139]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:15:33 localhost python3.9[158254]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771578932.6755912-369-141981272236499/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False 
content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 20 04:15:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 04:15:34 localhost podman[158300]: 2026-02-20 09:15:34.474269896 +0000 UTC m=+0.096064712 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0) Feb 20 04:15:34 localhost ovn_controller[155916]: 2026-02-20T09:15:34Z|00023|memory|INFO|13016 kB peak resident set size after 30.0 
seconds Feb 20 04:15:34 localhost ovn_controller[155916]: 2026-02-20T09:15:34Z|00024|memory|INFO|idl-cells-OVN_Southbound:4072 idl-cells-Open_vSwitch:813 ofctrl_desired_flow_usage-KB:9 ofctrl_installed_flow_usage-KB:7 ofctrl_sb_flow_ref_usage-KB:3 Feb 20 04:15:34 localhost podman[158300]: 2026-02-20 09:15:34.548783678 +0000 UTC m=+0.170578534 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0) Feb 20 04:15:34 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. 
Feb 20 04:15:34 localhost python3.9[158403]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:15:34 localhost sshd[158404]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:15:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35361 DF PROTO=TCP SPT=50018 DPT=9100 SEQ=3768463212 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF8E34D0000000001030307) Feb 20 04:15:35 localhost python3.9[158476]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771578934.4138901-501-80686711286161/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=aa9e89725fbcebf7a5c773d7b97083445b7b7759 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 20 04:15:35 localhost python3.9[158566]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:15:36 localhost python3.9[158637]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771578935.5003104-501-152183692010307/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=979187b925479d81d0609f4188e5b95fe1f92c18 
backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 20 04:15:36 localhost python3.9[158727]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 20 04:15:37 localhost python3.9[158821]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 20 04:15:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39950 DF PROTO=TCP SPT=36202 DPT=9101 SEQ=1197632007 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF8EE0E0000000001030307) Feb 20 04:15:38 localhost python3.9[158913]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:15:38 localhost python3.9[158961]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None 
modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 20 04:15:39 localhost python3.9[159053]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:15:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30707 DF PROTO=TCP SPT=45164 DPT=9102 SEQ=2295274256 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF8F6CD0000000001030307) Feb 20 04:15:39 localhost python3.9[159101]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 20 04:15:40 localhost python3.9[159193]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:15:41 localhost python3.9[159285]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:15:41 localhost 
python3.9[159333]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:15:42 localhost python3.9[159425]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:15:42 localhost python3.9[159473]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:15:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35362 DF PROTO=TCP SPT=50018 DPT=9100 SEQ=3768463212 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF9040D0000000001030307) Feb 20 04:15:43 localhost python3.9[159565]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 04:15:43 localhost systemd[1]: 
Reloading. Feb 20 04:15:43 localhost systemd-rc-local-generator[159589]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 04:15:43 localhost systemd-sysv-generator[159595]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 04:15:44 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 04:15:45 localhost python3.9[159695]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:15:45 localhost python3.9[159743]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:15:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30709 DF PROTO=TCP SPT=45164 DPT=9102 SEQ=2295274256 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF90E8E0000000001030307) Feb 20 04:15:46 localhost python3.9[159835]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True 
get_attributes=True get_selinux_context=False Feb 20 04:15:46 localhost python3.9[159883]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:15:47 localhost python3.9[159975]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 04:15:47 localhost systemd[1]: Reloading. Feb 20 04:15:47 localhost systemd-rc-local-generator[160000]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 04:15:47 localhost systemd-sysv-generator[160003]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 04:15:47 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 04:15:48 localhost systemd[1]: Starting Create netns directory... Feb 20 04:15:48 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Feb 20 04:15:48 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Feb 20 04:15:48 localhost systemd[1]: Finished Create netns directory. 
Feb 20 04:15:48 localhost python3.9[160110]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 20 04:15:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27414 DF PROTO=TCP SPT=40794 DPT=9882 SEQ=2115636269 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF91ACD0000000001030307) Feb 20 04:15:49 localhost python3.9[160202]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:15:50 localhost python3.9[160275]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771578949.1145105-954-98539256374449/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Feb 20 04:15:51 localhost python3.9[160367]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None 
src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:15:51 localhost python3.9[160459]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 20 04:15:52 localhost python3.9[160551]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:15:52 localhost python3.9[160626]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1771578951.8788655-1053-101890605612369/.source.json _original_basename=.57_lr_xl follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:15:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27415 DF PROTO=TCP SPT=40794 DPT=9882 SEQ=2115636269 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF92A8E0000000001030307) Feb 20 04:15:53 localhost python3.9[160716]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True 
modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:15:55 localhost python3.9[160969]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False Feb 20 04:15:57 localhost python3.9[161061]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack Feb 20 04:15:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13362 DF PROTO=TCP SPT=55830 DPT=9100 SEQ=1228360042 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF93CC30000000001030307) Feb 20 04:15:58 localhost python3[161153]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False Feb 20 04:15:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3517 DF PROTO=TCP SPT=59186 DPT=9105 SEQ=1915057417 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF93DE20000000001030307) Feb 20 04:15:58 localhost sshd[161180]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:15:58 localhost python3[161153]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8",#012 "Digest": "sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc",#012 
"RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2026-01-30T06:29:34.446261637Z",#012 "Config": {#012 "User": "neutron",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20260127",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "b85d0548925081ae8c6bdd697658cec4",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 785500417,#012 "VirtualSize": 785500417,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/4e4217686394af7a9122b2b81585c3ad5207fe018f230f20d139fff3e54ac3cc/diff:/var/lib/containers/storage/overlay/33f73751efe606c7233470249b676223e1b26b870cc49c3dbfbe2c7691e9f3fe/diff:/var/lib/containers/storage/overlay/1d7b7d3208029afb8b179e48c365354efe7c39d41194e42a7d13168820ab51ad/diff:/var/lib/containers/storage/overlay/1ad843ea4b31b05bcf49ccd6faa74bd0d6976ffabe60466fd78caf7ec41bf4ac/diff:/var/lib/containers/storage/overlay/57c9a356b8a6d9095c1e6bfd1bb5d3b87c9d1b944c2c5d8a1da6e61dd690c595/diff",#012 "UpperDir": 
"/var/lib/containers/storage/overlay/3105551fde90ad87a79816e708b2cc4b7af2f50432ce26b439bbd7707bc89976/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/3105551fde90ad87a79816e708b2cc4b7af2f50432ce26b439bbd7707bc89976/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:57c9a356b8a6d9095c1e6bfd1bb5d3b87c9d1b944c2c5d8a1da6e61dd690c595",#012 "sha256:315008a247098d7a6218ae8aaacc68c9c19036e3778f3bb6313e5d0200cfa613",#012 "sha256:d3142d7a25f00adc375557623676c786baeb2b8fec29945db7fe79212198a495",#012 "sha256:d3cc9cdab7e3e7c1a0a6c80e61bbd8cc5eeeba7069bab1cc064ed2e6cc28ed58",#012 "sha256:d5cbf3016eca6267717119e8ebab3c6c083cae6c589c6961ae23bfa93ef3afa4",#012 "sha256:0096ee5d07436ac5b94d9d58b8b2407cc5e6854d70de5e7f89b9a7a1ad4912ad"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20260127",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "b85d0548925081ae8c6bdd697658cec4",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "neutron",#012 "History": [#012 {#012 "created": "2026-01-28T05:56:51.126388624Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:54935d5b0598cdb1451aeae3c8627aade8d55dcef2e876b35185c8e36be64256 in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-28T05:56:51.126459235Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20260127\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-28T05:56:53.726938221Z",#012 "created_by": "/bin/sh -c #(nop) CMD 
[\"/bin/bash\"]"#012 },#012 {#012 "created": "2026-01-30T06:10:18.890429494Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:18.890534417Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:18.890553228Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:18.890570688Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:18.890616649Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:18.890659121Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:19.232761948Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:52.670543613Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.con Feb 20 04:15:58 localhost podman[161204]: 2026-02-20 09:15:58.505240645 +0000 
UTC m=+0.097465627 container remove 34ad4500649147d0851b6b6d36c571712de3f566bc13ca154ee475b1602083c7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.expose-services=, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '85da22c155c014a1a90b143a817b4401'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, tcib_managed=true, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 04:15:58 localhost python3[161153]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force ovn_metadata_agent Feb 20 04:15:58 localhost podman[161219]: Feb 20 04:15:58 localhost podman[161219]: 2026-02-20 09:15:58.611969479 +0000 UTC m=+0.088008994 container create eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0) Feb 20 04:15:58 localhost podman[161219]: 2026-02-20 09:15:58.569052281 +0000 UTC m=+0.045091816 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Feb 20 04:15:58 localhost python3[161153]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311 --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Feb 20 04:15:59 localhost python3.9[161346]: 
ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 20 04:16:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13364 DF PROTO=TCP SPT=55830 DPT=9100 SEQ=1228360042 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF948CD0000000001030307) Feb 20 04:16:01 localhost python3.9[161440]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:16:01 localhost python3.9[161486]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 20 04:16:02 localhost python3.9[161577]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771578961.6803987-1287-183472696028497/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:16:02 localhost python3.9[161623]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Feb 20 04:16:02 localhost systemd[1]: Reloading. 
Feb 20 04:16:02 localhost systemd-sysv-generator[161650]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 04:16:02 localhost systemd-rc-local-generator[161647]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 04:16:02 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 04:16:03 localhost python3.9[161705]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 04:16:03 localhost systemd[1]: Reloading. Feb 20 04:16:03 localhost systemd-rc-local-generator[161732]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 04:16:03 localhost systemd-sysv-generator[161735]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 04:16:03 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 04:16:03 localhost systemd[1]: Starting ovn_metadata_agent container... Feb 20 04:16:04 localhost systemd[1]: Started libcrun container. 
Feb 20 04:16:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6785cbbdb2bb45c979e31333253bfd1a6cb494a8652c64f1fef64e464eddb20/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff) Feb 20 04:16:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6785cbbdb2bb45c979e31333253bfd1a6cb494a8652c64f1fef64e464eddb20/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 20 04:16:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. Feb 20 04:16:04 localhost podman[161747]: 2026-02-20 09:16:04.159598594 +0000 UTC m=+0.141567801 container init eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3) Feb 20 04:16:04 localhost ovn_metadata_agent[161761]: + sudo -E kolla_set_configs Feb 20 04:16:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. Feb 20 04:16:04 localhost podman[161747]: 2026-02-20 09:16:04.192872799 +0000 UTC m=+0.174841996 container start eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Feb 20 04:16:04 localhost edpm-start-podman-container[161747]: ovn_metadata_agent Feb 20 04:16:04 localhost ovn_metadata_agent[161761]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Feb 20 04:16:04 localhost ovn_metadata_agent[161761]: INFO:__main__:Validating config file Feb 20 04:16:04 localhost ovn_metadata_agent[161761]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Feb 20 04:16:04 localhost ovn_metadata_agent[161761]: INFO:__main__:Copying service configuration files Feb 20 04:16:04 localhost ovn_metadata_agent[161761]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf Feb 20 04:16:04 localhost ovn_metadata_agent[161761]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf Feb 20 04:16:04 localhost ovn_metadata_agent[161761]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf Feb 20 04:16:04 localhost ovn_metadata_agent[161761]: INFO:__main__:Writing out command to execute Feb 20 04:16:04 localhost ovn_metadata_agent[161761]: INFO:__main__:Setting permission for /var/lib/neutron Feb 20 04:16:04 localhost ovn_metadata_agent[161761]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts Feb 20 04:16:04 localhost ovn_metadata_agent[161761]: 
INFO:__main__:Setting permission for /var/lib/neutron/.cache Feb 20 04:16:04 localhost ovn_metadata_agent[161761]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy Feb 20 04:16:04 localhost ovn_metadata_agent[161761]: INFO:__main__:Setting permission for /var/lib/neutron/external Feb 20 04:16:04 localhost ovn_metadata_agent[161761]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper Feb 20 04:16:04 localhost ovn_metadata_agent[161761]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy Feb 20 04:16:04 localhost ovn_metadata_agent[161761]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill Feb 20 04:16:04 localhost ovn_metadata_agent[161761]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints Feb 20 04:16:04 localhost ovn_metadata_agent[161761]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/b9146dd2a0dc3e0bc3fee7bb1b53fa22a55af280b3a177d7a47b63f92e7ebd29 Feb 20 04:16:04 localhost ovn_metadata_agent[161761]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids Feb 20 04:16:04 localhost ovn_metadata_agent[161761]: ++ cat /run_command Feb 20 04:16:04 localhost ovn_metadata_agent[161761]: + CMD=neutron-ovn-metadata-agent Feb 20 04:16:04 localhost ovn_metadata_agent[161761]: + ARGS= Feb 20 04:16:04 localhost ovn_metadata_agent[161761]: + sudo kolla_copy_cacerts Feb 20 04:16:04 localhost ovn_metadata_agent[161761]: + [[ ! -n '' ]] Feb 20 04:16:04 localhost ovn_metadata_agent[161761]: + . 
kolla_extend_start Feb 20 04:16:04 localhost ovn_metadata_agent[161761]: Running command: 'neutron-ovn-metadata-agent' Feb 20 04:16:04 localhost ovn_metadata_agent[161761]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\''' Feb 20 04:16:04 localhost ovn_metadata_agent[161761]: + umask 0022 Feb 20 04:16:04 localhost ovn_metadata_agent[161761]: + exec neutron-ovn-metadata-agent Feb 20 04:16:04 localhost podman[161769]: 2026-02-20 09:16:04.28661221 +0000 UTC m=+0.089710077 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=starting, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:16:04 localhost podman[161769]: 2026-02-20 09:16:04.367051828 +0000 UTC m=+0.170149685 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Feb 20 04:16:04 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. Feb 20 04:16:04 localhost edpm-start-podman-container[161746]: Creating additional drop-in dependency for "ovn_metadata_agent" (eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef) Feb 20 04:16:04 localhost systemd[1]: Reloading. Feb 20 04:16:04 localhost systemd-rc-local-generator[161833]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 04:16:04 localhost systemd-sysv-generator[161838]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 04:16:04 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 04:16:04 localhost systemd[1]: Started ovn_metadata_agent container. Feb 20 04:16:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. 
Feb 20 04:16:04 localhost podman[161849]: 2026-02-20 09:16:04.797533283 +0000 UTC m=+0.079062966 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Feb 20 04:16:04 localhost podman[161849]: 2026-02-20 09:16:04.837739102 +0000 UTC m=+0.119268775 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, 
org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Feb 20 04:16:04 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. 
Feb 20 04:16:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13365 DF PROTO=TCP SPT=55830 DPT=9100 SEQ=1228360042 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF9588D0000000001030307) Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.830 161766 INFO neutron.common.config [-] Logging enabled!#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.830 161766 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev44#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.830 161766 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.831 161766 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.831 161766 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.831 161766 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.831 161766 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 
09:16:05.831 161766 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.831 161766 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.832 161766 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.832 161766 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.832 161766 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.832 161766 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.832 161766 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.832 161766 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.832 161766 DEBUG neutron.agent.ovn.metadata_agent [-] backlog = 4096 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.833 161766 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.833 161766 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.833 161766 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.833 161766 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.833 161766 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.833 161766 DEBUG neutron.agent.ovn.metadata_agent [-] config_file = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.833 161766 DEBUG neutron.agent.ovn.metadata_agent [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.834 161766 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost 
ovn_metadata_agent[161761]: 2026-02-20 09:16:05.834 161766 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.834 161766 DEBUG neutron.agent.ovn.metadata_agent [-] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.834 161766 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.834 161766 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.834 161766 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.835 161766 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost 
ovn_metadata_agent[161761]: 2026-02-20 09:16:05.835 161766 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.835 161766 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.835 161766 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.835 161766 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.835 161766 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.835 161766 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.836 161766 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.836 161766 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.836 161766 DEBUG neutron.agent.ovn.metadata_agent 
[-] host = np0005625202.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.836 161766 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.836 161766 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.836 161766 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.836 161766 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.837 161766 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.837 161766 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.837 161766 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.837 161766 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.837 161766 DEBUG neutron.agent.ovn.metadata_agent [-] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.837 161766 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.837 161766 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.838 161766 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.838 161766 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.838 161766 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.838 161766 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 
localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.838 161766 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.838 161766 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.838 161766 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.839 161766 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.839 161766 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.839 161766 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.839 161766 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.839 161766 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog = 4096 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.839 161766 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.839 161766 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.840 161766 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.840 161766 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.840 161766 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.840 161766 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.840 161766 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.840 161766 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 
04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.841 161766 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.841 161766 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.841 161766 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.841 161766 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.841 161766 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.841 161766 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.842 161766 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol = http log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.842 161766 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.842 161766 
DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.842 161766 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.842 161766 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.842 161766 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.842 161766 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.843 161766 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.843 161766 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.843 161766 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.843 161766 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout = 600 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.843 161766 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.843 161766 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.843 161766 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.843 161766 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.844 161766 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.844 161766 DEBUG neutron.agent.ovn.metadata_agent [-] state_path = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.844 161766 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.844 161766 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.844 
161766 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.844 161766 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.844 161766 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.845 161766 DEBUG neutron.agent.ovn.metadata_agent [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.845 161766 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.845 161766 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.845 161766 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.845 161766 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.845 161766 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 
localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.845 161766 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.846 161766 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.846 161766 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.846 161766 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.846 161766 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.846 161766 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.846 161766 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.846 161766 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m 
Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.847 161766 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.847 161766 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.847 161766 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.847 161766 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.847 161766 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.847 161766 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.847 161766 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.848 161766 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 
09:16:05.848 161766 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.848 161766 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.848 161766 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.848 161766 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.848 161766 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.849 161766 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.849 161766 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.849 161766 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost 
ovn_metadata_agent[161761]: 2026-02-20 09:16:05.849 161766 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.849 161766 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.849 161766 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.849 161766 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.850 161766 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.850 161766 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.850 161766 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.850 161766 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.850 161766 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.850 161766 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.850 161766 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.851 161766 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.851 161766 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.851 161766 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.851 161766 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.851 161766 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.851 161766 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.851 161766 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.852 161766 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.852 161766 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.852 161766 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.852 161766 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.852 161766 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.852 161766 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.852 161766 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.853 161766 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.853 161766 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.853 161766 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.853 161766 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.853 161766 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.853 161766 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.853 161766 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.854 161766 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.854 161766 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.854 161766 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.854 161766 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.854 161766 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.854 161766 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.854 161766 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.855 161766 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities = [12, 21] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.855 161766 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.855 161766 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.855 161766 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.855 161766 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.855 161766 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.855 161766 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.856 161766 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.856 161766 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.856 161766 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.856 161766 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.856 161766 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.856 161766 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.856 161766 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.857 161766 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.857 161766 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.857 161766 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota = -1 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.857 161766 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.857 161766 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.857 161766 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.857 161766 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.858 161766 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.858 161766 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.858 161766 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.858 161766 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.858 161766 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.858 161766 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.859 161766 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.859 161766 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.859 161766 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.859 161766 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.859 161766 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.859 161766 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.860 161766 
DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.860 161766 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.860 161766 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.860 161766 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.860 161766 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.860 161766 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.860 161766 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.861 161766 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.861 161766 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.861 161766 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.861 161766 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.861 161766 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.861 161766 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.861 161766 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.862 161766 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.862 161766 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.862 161766 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 
09:16:05.862 161766 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.862 161766 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.862 161766 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.862 161766 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.863 161766 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.863 161766 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.863 161766 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.863 161766 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.863 161766 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.863 161766 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.863 161766 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.864 161766 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.864 161766 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.864 161766 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.864 161766 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.864 161766 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.864 161766 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost 
ovn_metadata_agent[161761]: 2026-02-20 09:16:05.864 161766 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.864 161766 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.865 161766 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.865 161766 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.865 161766 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.865 161766 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.865 161766 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.865 161766 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.866 161766 
DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.866 161766 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.866 161766 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.866 161766 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.866 161766 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.866 161766 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.866 161766 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.867 161766 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.867 161766 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate = log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.867 161766 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.867 161766 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.867 161766 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.867 161766 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.867 161766 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection = tcp:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.868 161766 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.868 161766 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.868 161766 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 
04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.868 161766 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.868 161766 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.868 161766 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.868 161766 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.869 161766 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.869 161766 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.869 161766 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.869 161766 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.869 
161766 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.869 161766 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.869 161766 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.870 161766 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.870 161766 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.870 161766 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.870 161766 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.870 161766 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost 
ovn_metadata_agent[161761]: 2026-02-20 09:16:05.870 161766 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.871 161766 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.871 161766 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.871 161766 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.871 161766 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.871 161766 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.871 161766 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.871 161766 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.872 161766 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.872 161766 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.872 161766 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.872 161766 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.872 161766 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.872 161766 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.872 161766 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.873 161766 DEBUG 
neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.873 161766 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.873 161766 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.873 161766 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.873 161766 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.873 161766 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.873 161766 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.874 161766 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.874 
161766 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.874 161766 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.874 161766 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.874 161766 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.874 161766 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.874 161766 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.913 161766 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.914 161766 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Feb 20 04:16:05 
localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.914 161766 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.914 161766 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.914 161766 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.933 161766 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 0a83b6be-9fe2-42ef-8768-88847d97b165 (UUID: 0a83b6be-9fe2-42ef-8768-88847d97b165) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.955 161766 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.956 161766 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.956 161766 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.956 161766 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.958 161766 INFO 
ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.960 161766 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connected#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.967 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '0a83b6be-9fe2-42ef-8768-88847d97b165'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[], external_ids={'neutron:ovn-metadata-id': '78d91fa9-3583-50ec-9c41-2140e4151d91', 'neutron:ovn-metadata-sb-cfg': '1'}, name=0a83b6be-9fe2-42ef-8768-88847d97b165, nb_cfg_timestamp=1771578913269, nb_cfg=4) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.968 161766 DEBUG neutron_lib.callbacks.manager [-] Subscribe: > process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.968 161766 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.969 161766 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.969 161766 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.969 161766 INFO oslo_service.service [-] 
Starting 1 workers#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.972 161766 DEBUG oslo_service.service [-] Started child 161888 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.974 161766 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp1rh4nmjj/privsep.sock']#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.977 161888 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-457370'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.998 161888 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.998 161888 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m Feb 20 04:16:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:05.998 161888 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Feb 20 04:16:06 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:06.002 161888 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m Feb 20 04:16:06 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:06.003 
161888 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connected#033[00m Feb 20 04:16:06 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:06.009 161888 INFO eventlet.wsgi.server [-] (161888) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m Feb 20 04:16:06 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:06.589 161766 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m Feb 20 04:16:06 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:06.590 161766 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp1rh4nmjj/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m Feb 20 04:16:06 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:06.474 161893 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Feb 20 04:16:06 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:06.479 161893 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Feb 20 04:16:06 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:06.483 161893 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m Feb 20 04:16:06 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:06.483 161893 INFO oslo.privsep.daemon [-] privsep daemon running as pid 161893#033[00m Feb 20 04:16:06 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:06.594 161893 DEBUG oslo.privsep.daemon [-] privsep: reply[338457ca-2c22-499e-9689-c8a3e7d836f6]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.028 161893 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 
09:16:07.028 161893 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.028 161893 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:16:07 localhost python3.9[161973]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml Feb 20 04:16:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5041 DF PROTO=TCP SPT=56568 DPT=9101 SEQ=3847788382 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF9620E0000000001030307) Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.462 161893 DEBUG oslo.privsep.daemon [-] privsep: reply[33b818ad-ab28-45e2-b9cf-dd43253be91e]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.465 161766 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=0a83b6be-9fe2-42ef-8768-88847d97b165, column=external_ids, values=({'neutron:ovn-metadata-id': '78d91fa9-3583-50ec-9c41-2140e4151d91'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.466 161766 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Feb 20 04:16:07 localhost 
ovn_metadata_agent[161761]: 2026-02-20 09:16:07.467 161766 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0a83b6be-9fe2-42ef-8768-88847d97b165, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.726 161766 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.726 161766 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.726 161766 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.727 161766 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.728 161766 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.728 161766 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.728 161766 DEBUG oslo_service.service [-] agent_down_time = 75 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.728 161766 DEBUG oslo_service.service [-] allow_bulk = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.729 161766 DEBUG oslo_service.service [-] api_extensions_path = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.729 161766 DEBUG oslo_service.service [-] api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.729 161766 DEBUG oslo_service.service [-] api_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.729 161766 DEBUG oslo_service.service [-] auth_ca_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.730 161766 DEBUG oslo_service.service [-] auth_strategy = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.730 161766 DEBUG oslo_service.service [-] backlog = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.730 161766 DEBUG oslo_service.service [-] base_mac = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.730 161766 DEBUG oslo_service.service [-] bind_host = 0.0.0.0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.731 161766 DEBUG oslo_service.service [-] bind_port = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.731 161766 DEBUG oslo_service.service [-] client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.731 161766 DEBUG oslo_service.service [-] config_dir = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.731 161766 DEBUG oslo_service.service [-] config_file = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.731 161766 DEBUG oslo_service.service [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.732 161766 DEBUG oslo_service.service [-] control_exchange = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.732 161766 DEBUG oslo_service.service [-] core_plugin = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.732 161766 DEBUG oslo_service.service [-] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.732 161766 DEBUG oslo_service.service [-] default_availability_zones = [] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.733 161766 DEBUG oslo_service.service [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.733 161766 DEBUG oslo_service.service [-] dhcp_agent_notification = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.733 161766 DEBUG oslo_service.service [-] dhcp_lease_duration = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.733 161766 DEBUG oslo_service.service [-] dhcp_load_type = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.733 161766 DEBUG oslo_service.service [-] dns_domain = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.734 161766 DEBUG oslo_service.service [-] enable_new_agents = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.734 161766 DEBUG oslo_service.service [-] enable_traditional_dhcp = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.734 161766 DEBUG oslo_service.service [-] external_dns_driver = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.734 161766 DEBUG oslo_service.service [-] external_pids = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.735 161766 DEBUG oslo_service.service [-] filter_validation = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.735 161766 DEBUG oslo_service.service [-] global_physnet_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.735 161766 DEBUG oslo_service.service [-] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.736 161766 DEBUG oslo_service.service [-] host = np0005625202.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.736 161766 DEBUG oslo_service.service [-] http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.736 161766 DEBUG oslo_service.service [-] 
instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.736 161766 DEBUG oslo_service.service [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.737 161766 DEBUG oslo_service.service [-] ipam_driver = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.737 161766 DEBUG oslo_service.service [-] ipv6_pd_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.737 161766 DEBUG oslo_service.service [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.737 161766 DEBUG oslo_service.service [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.738 161766 DEBUG oslo_service.service [-] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.738 161766 DEBUG oslo_service.service [-] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.738 161766 DEBUG oslo_service.service [-] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.738 161766 DEBUG 
oslo_service.service [-] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.738 161766 DEBUG oslo_service.service [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.739 161766 DEBUG oslo_service.service [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.739 161766 DEBUG oslo_service.service [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.739 161766 DEBUG oslo_service.service [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.739 161766 DEBUG oslo_service.service [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.739 161766 DEBUG oslo_service.service [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.740 161766 DEBUG oslo_service.service [-] logging_user_identity_format = %(user)s %(project)s %(domain)s 
%(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.740 161766 DEBUG oslo_service.service [-] max_dns_nameservers = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.740 161766 DEBUG oslo_service.service [-] max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.740 161766 DEBUG oslo_service.service [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.740 161766 DEBUG oslo_service.service [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.741 161766 DEBUG oslo_service.service [-] max_subnet_host_routes = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.741 161766 DEBUG oslo_service.service [-] metadata_backlog = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.741 161766 DEBUG oslo_service.service [-] metadata_proxy_group = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.741 161766 DEBUG oslo_service.service [-] metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.742 161766 DEBUG 
oslo_service.service [-] metadata_proxy_socket = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.742 161766 DEBUG oslo_service.service [-] metadata_proxy_socket_mode = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.742 161766 DEBUG oslo_service.service [-] metadata_proxy_user = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.742 161766 DEBUG oslo_service.service [-] metadata_workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.742 161766 DEBUG oslo_service.service [-] network_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.743 161766 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.743 161766 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.743 161766 DEBUG oslo_service.service [-] nova_client_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.743 161766 DEBUG oslo_service.service [-] nova_client_priv_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost 
ovn_metadata_agent[161761]: 2026-02-20 09:16:07.743 161766 DEBUG oslo_service.service [-] nova_metadata_host = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.744 161766 DEBUG oslo_service.service [-] nova_metadata_insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.744 161766 DEBUG oslo_service.service [-] nova_metadata_port = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.744 161766 DEBUG oslo_service.service [-] nova_metadata_protocol = http log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.745 161766 DEBUG oslo_service.service [-] pagination_max_limit = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.745 161766 DEBUG oslo_service.service [-] periodic_fuzzy_delay = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.745 161766 DEBUG oslo_service.service [-] periodic_interval = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.745 161766 DEBUG oslo_service.service [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.745 161766 DEBUG oslo_service.service [-] rate_limit_burst = 0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.746 161766 DEBUG oslo_service.service [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.746 161766 DEBUG oslo_service.service [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.746 161766 DEBUG oslo_service.service [-] retry_until_window = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.746 161766 DEBUG oslo_service.service [-] rpc_resources_processing_step = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.747 161766 DEBUG oslo_service.service [-] rpc_response_max_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.747 161766 DEBUG oslo_service.service [-] rpc_state_report_workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.747 161766 DEBUG oslo_service.service [-] rpc_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.747 161766 DEBUG oslo_service.service [-] send_events_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.748 161766 DEBUG oslo_service.service [-] service_plugins = [] 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.748 161766 DEBUG oslo_service.service [-] setproctitle = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.748 161766 DEBUG oslo_service.service [-] state_path = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.748 161766 DEBUG oslo_service.service [-] syslog_log_facility = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.748 161766 DEBUG oslo_service.service [-] tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.749 161766 DEBUG oslo_service.service [-] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.749 161766 DEBUG oslo_service.service [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.749 161766 DEBUG oslo_service.service [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.749 161766 DEBUG oslo_service.service [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.750 161766 DEBUG oslo_service.service [-] use_ssl = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.750 161766 DEBUG oslo_service.service [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.750 161766 DEBUG oslo_service.service [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.750 161766 DEBUG oslo_service.service [-] vlan_transparent = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.750 161766 DEBUG oslo_service.service [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.751 161766 DEBUG oslo_service.service [-] wsgi_default_pool_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.751 161766 DEBUG oslo_service.service [-] wsgi_keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.751 161766 DEBUG oslo_service.service [-] wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.751 161766 DEBUG oslo_service.service [-] wsgi_server_debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.752 
161766 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.752 161766 DEBUG oslo_service.service [-] oslo_concurrency.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.752 161766 DEBUG oslo_service.service [-] profiler.connection_string = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.752 161766 DEBUG oslo_service.service [-] profiler.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.753 161766 DEBUG oslo_service.service [-] profiler.es_doc_type = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.753 161766 DEBUG oslo_service.service [-] profiler.es_scroll_size = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.753 161766 DEBUG oslo_service.service [-] profiler.es_scroll_time = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.753 161766 DEBUG oslo_service.service [-] profiler.filter_error_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.754 161766 DEBUG oslo_service.service [-] profiler.hmac_keys = SECRET_KEY log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.754 161766 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.754 161766 DEBUG oslo_service.service [-] profiler.socket_timeout = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.754 161766 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.754 161766 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.755 161766 DEBUG oslo_service.service [-] oslo_policy.enforce_scope = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.755 161766 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.755 161766 DEBUG oslo_service.service [-] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.755 161766 DEBUG oslo_service.service [-] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 
09:16:07.756 161766 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.756 161766 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.756 161766 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.756 161766 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.756 161766 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.757 161766 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.757 161766 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.757 161766 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.757 161766 DEBUG 
oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.758 161766 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.758 161766 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.758 161766 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.758 161766 DEBUG oslo_service.service [-] privsep.capabilities = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.758 161766 DEBUG oslo_service.service [-] privsep.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.759 161766 DEBUG oslo_service.service [-] privsep.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.759 161766 DEBUG oslo_service.service [-] privsep.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.759 161766 DEBUG oslo_service.service [-] privsep.thread_pool_size = 8 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.759 161766 DEBUG oslo_service.service [-] privsep.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.760 161766 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.760 161766 DEBUG oslo_service.service [-] privsep_dhcp_release.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.760 161766 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.760 161766 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.761 161766 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.761 161766 DEBUG oslo_service.service [-] privsep_dhcp_release.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.761 161766 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost 
ovn_metadata_agent[161761]: 2026-02-20 09:16:07.761 161766 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.762 161766 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.762 161766 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.762 161766 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.762 161766 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.762 161766 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.763 161766 DEBUG oslo_service.service [-] privsep_namespace.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.763 161766 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.763 161766 DEBUG oslo_service.service [-] privsep_namespace.logger_name = 
oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.763 161766 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.763 161766 DEBUG oslo_service.service [-] privsep_namespace.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.764 161766 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.764 161766 DEBUG oslo_service.service [-] privsep_conntrack.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.764 161766 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.764 161766 DEBUG oslo_service.service [-] privsep_conntrack.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.764 161766 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.765 161766 DEBUG oslo_service.service [-] privsep_conntrack.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost 
ovn_metadata_agent[161761]: 2026-02-20 09:16:07.765 161766 DEBUG oslo_service.service [-] privsep_link.capabilities = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.765 161766 DEBUG oslo_service.service [-] privsep_link.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.765 161766 DEBUG oslo_service.service [-] privsep_link.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.765 161766 DEBUG oslo_service.service [-] privsep_link.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.766 161766 DEBUG oslo_service.service [-] privsep_link.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.766 161766 DEBUG oslo_service.service [-] privsep_link.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.766 161766 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.766 161766 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.767 161766 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.767 161766 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.767 161766 DEBUG oslo_service.service [-] AGENT.kill_scripts_path = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.767 161766 DEBUG oslo_service.service [-] AGENT.root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.768 161766 DEBUG oslo_service.service [-] AGENT.root_helper_daemon = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.768 161766 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.768 161766 DEBUG oslo_service.service [-] AGENT.use_random_fully = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.768 161766 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.768 161766 DEBUG oslo_service.service [-] QUOTAS.default_quota = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost 
ovn_metadata_agent[161761]: 2026-02-20 09:16:07.769 161766 DEBUG oslo_service.service [-] QUOTAS.quota_driver = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.769 161766 DEBUG oslo_service.service [-] QUOTAS.quota_network = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.769 161766 DEBUG oslo_service.service [-] QUOTAS.quota_port = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.769 161766 DEBUG oslo_service.service [-] QUOTAS.quota_security_group = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.769 161766 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.770 161766 DEBUG oslo_service.service [-] QUOTAS.quota_subnet = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.770 161766 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.770 161766 DEBUG oslo_service.service [-] nova.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.770 161766 DEBUG oslo_service.service [-] nova.auth_type = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.770 161766 DEBUG oslo_service.service [-] nova.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.771 161766 DEBUG oslo_service.service [-] nova.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.771 161766 DEBUG oslo_service.service [-] nova.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.771 161766 DEBUG oslo_service.service [-] nova.endpoint_type = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.771 161766 DEBUG oslo_service.service [-] nova.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.772 161766 DEBUG oslo_service.service [-] nova.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.772 161766 DEBUG oslo_service.service [-] nova.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.772 161766 DEBUG oslo_service.service [-] nova.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.772 161766 DEBUG oslo_service.service [-] nova.timeout = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.772 161766 DEBUG oslo_service.service [-] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.773 161766 DEBUG oslo_service.service [-] placement.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.773 161766 DEBUG oslo_service.service [-] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.773 161766 DEBUG oslo_service.service [-] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.773 161766 DEBUG oslo_service.service [-] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.774 161766 DEBUG oslo_service.service [-] placement.endpoint_type = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.774 161766 DEBUG oslo_service.service [-] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.774 161766 DEBUG oslo_service.service [-] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.774 161766 DEBUG oslo_service.service [-] placement.region_name = 
None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.774 161766 DEBUG oslo_service.service [-] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.775 161766 DEBUG oslo_service.service [-] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.775 161766 DEBUG oslo_service.service [-] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.775 161766 DEBUG oslo_service.service [-] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.775 161766 DEBUG oslo_service.service [-] ironic.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.776 161766 DEBUG oslo_service.service [-] ironic.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.776 161766 DEBUG oslo_service.service [-] ironic.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.776 161766 DEBUG oslo_service.service [-] ironic.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.776 161766 DEBUG oslo_service.service [-] 
ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.777 161766 DEBUG oslo_service.service [-] ironic.enable_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.777 161766 DEBUG oslo_service.service [-] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.777 161766 DEBUG oslo_service.service [-] ironic.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.777 161766 DEBUG oslo_service.service [-] ironic.interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.777 161766 DEBUG oslo_service.service [-] ironic.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.777 161766 DEBUG oslo_service.service [-] ironic.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.778 161766 DEBUG oslo_service.service [-] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.778 161766 DEBUG oslo_service.service [-] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.778 161766 DEBUG 
oslo_service.service [-] ironic.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.778 161766 DEBUG oslo_service.service [-] ironic.service_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.778 161766 DEBUG oslo_service.service [-] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.778 161766 DEBUG oslo_service.service [-] ironic.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.778 161766 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.778 161766 DEBUG oslo_service.service [-] ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.779 161766 DEBUG oslo_service.service [-] ironic.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.779 161766 DEBUG oslo_service.service [-] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.779 161766 DEBUG oslo_service.service [-] cli_script.dry_run = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 
09:16:07.779 161766 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.779 161766 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.779 161766 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.779 161766 DEBUG oslo_service.service [-] ovn.dns_servers = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.780 161766 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.780 161766 DEBUG oslo_service.service [-] ovn.neutron_sync_mode = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.780 161766 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.780 161766 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.780 161766 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.780 161766 DEBUG oslo_service.service [-] ovn.ovn_l3_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.780 161766 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.780 161766 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.781 161766 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.781 161766 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.781 161766 DEBUG oslo_service.service [-] ovn.ovn_nb_connection = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.781 161766 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.781 161766 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.781 161766 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate = 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.781 161766 DEBUG oslo_service.service [-] ovn.ovn_sb_connection = tcp:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.782 161766 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.782 161766 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.782 161766 DEBUG oslo_service.service [-] ovn.ovsdb_log_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.782 161766 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.782 161766 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.782 161766 DEBUG oslo_service.service [-] ovn.vhost_sock_dir = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.782 161766 DEBUG oslo_service.service [-] ovn.vif_type = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 
09:16:07.782 161766 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.783 161766 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.783 161766 DEBUG oslo_service.service [-] OVS.ovsdb_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.783 161766 DEBUG oslo_service.service [-] ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.783 161766 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.783 161766 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.783 161766 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.783 161766 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.784 161766 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.784 161766 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.784 161766 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.784 161766 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.784 161766 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.784 161766 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.784 161766 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.785 161766 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.785 161766 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.785 161766 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.785 161766 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.785 161766 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.785 161766 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.785 161766 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.786 161766 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.786 161766 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.786 161766 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.786 161766 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.786 161766 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.786 161766 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.786 161766 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.787 161766 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.787 161766 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.787 161766 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.787 161766 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 
04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.787 161766 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.787 161766 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.787 161766 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.788 161766 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.788 161766 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.788 161766 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.788 161766 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:16:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:16:07.788 161766 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m Feb 20 04:16:08 localhost python3.9[162065]: 
ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:16:08 localhost sshd[162109]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:16:08 localhost python3.9[162141]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771578967.9804757-1422-122625418479629/.source.yaml _original_basename=.eh67hmgg follow=False checksum=00f5f1349c1b2f1d82b680e3efe9b7b384555dee backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:16:09 localhost systemd-logind[760]: Session 51 logged out. Waiting for processes to exit. Feb 20 04:16:09 localhost systemd[1]: session-51.scope: Deactivated successfully. Feb 20 04:16:09 localhost systemd[1]: session-51.scope: Consumed 32.029s CPU time. Feb 20 04:16:09 localhost systemd-logind[760]: Removed session 51. 
Feb 20 04:16:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39951 DF PROTO=TCP SPT=36202 DPT=9101 SEQ=1197632007 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF96C0E0000000001030307) Feb 20 04:16:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13366 DF PROTO=TCP SPT=55830 DPT=9100 SEQ=1228360042 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF9780D0000000001030307) Feb 20 04:16:14 localhost sshd[162158]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:16:14 localhost systemd-logind[760]: New session 52 of user zuul. Feb 20 04:16:14 localhost systemd[1]: Started Session 52 of User zuul. Feb 20 04:16:15 localhost python3.9[162251]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 20 04:16:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54121 DF PROTO=TCP SPT=44154 DPT=9102 SEQ=3897088255 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF983CD0000000001030307) Feb 20 04:16:16 localhost sshd[162348]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:16:16 localhost python3.9[162347]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 04:16:17 localhost sshd[162436]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:16:17 localhost python3.9[162456]: 
ansible-ansible.legacy.command Invoked with _raw_params=podman stop nova_virtlogd _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 04:16:17 localhost systemd[1]: libpod-7dd5f4be3e5e2569a3059350d7d863334f2cc9c53a21705eac4bd6b527e94424.scope: Deactivated successfully. Feb 20 04:16:17 localhost podman[162457]: 2026-02-20 09:16:17.521977588 +0000 UTC m=+0.071555269 container died 7dd5f4be3e5e2569a3059350d7d863334f2cc9c53a21705eac4bd6b527e94424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-libvirt-container, build-date=2026-01-12T23:31:49Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:31:49Z, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.13, architecture=x86_64, tcib_managed=true) Feb 20 04:16:17 localhost systemd[1]: tmp-crun.HKEwJZ.mount: Deactivated successfully. 
Feb 20 04:16:17 localhost podman[162457]: 2026-02-20 09:16:17.568035104 +0000 UTC m=+0.117612735 container cleanup 7dd5f4be3e5e2569a3059350d7d863334f2cc9c53a21705eac4bd6b527e94424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.13, name=rhosp-rhel9/openstack-nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, com.redhat.component=openstack-nova-libvirt-container, build-date=2026-01-12T23:31:49Z, org.opencontainers.image.created=2026-01-12T23:31:49Z, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, architecture=x86_64, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public) Feb 20 04:16:17 localhost podman[162472]: 2026-02-20 09:16:17.645635645 +0000 UTC m=+0.119346797 container remove 7dd5f4be3e5e2569a3059350d7d863334f2cc9c53a21705eac4bd6b527e94424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-type=git, 
url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, com.redhat.component=openstack-nova-libvirt-container, architecture=x86_64, vendor=Red Hat, Inc., batch=17.1_20260112.1, release=1766032510, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:31:49Z, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp-rhel9/openstack-nova-libvirt) Feb 20 04:16:17 localhost systemd[1]: libpod-conmon-7dd5f4be3e5e2569a3059350d7d863334f2cc9c53a21705eac4bd6b527e94424.scope: Deactivated successfully. Feb 20 04:16:18 localhost systemd[1]: tmp-crun.GaLsuG.mount: Deactivated successfully. Feb 20 04:16:18 localhost systemd[1]: var-lib-containers-storage-overlay-8a6cde03305078609256ba73229e013215e0b6b3bab4afbfd99df235fae8cd56-merged.mount: Deactivated successfully. Feb 20 04:16:18 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7dd5f4be3e5e2569a3059350d7d863334f2cc9c53a21705eac4bd6b527e94424-userdata-shm.mount: Deactivated successfully. Feb 20 04:16:18 localhost python3.9[162577]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Feb 20 04:16:18 localhost systemd[1]: Reloading. Feb 20 04:16:19 localhost systemd-rc-local-generator[162598]: /etc/rc.d/rc.local is not marked executable, skipping. 
Feb 20 04:16:19 localhost systemd-sysv-generator[162603]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 04:16:19 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 04:16:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7118 DF PROTO=TCP SPT=54088 DPT=9882 SEQ=3880516183 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF9900D0000000001030307) Feb 20 04:16:20 localhost python3.9[162703]: ansible-ansible.builtin.service_facts Invoked Feb 20 04:16:20 localhost network[162720]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Feb 20 04:16:20 localhost network[162721]: 'network-scripts' will be removed from distribution in near future. Feb 20 04:16:20 localhost network[162722]: It is advised to switch to 'NetworkManager' instead for network management. Feb 20 04:16:21 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 20 04:16:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7119 DF PROTO=TCP SPT=54088 DPT=9882 SEQ=3880516183 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF99FCE0000000001030307) Feb 20 04:16:24 localhost python3.9[162924]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 04:16:24 localhost systemd[1]: Reloading. Feb 20 04:16:24 localhost systemd-rc-local-generator[162947]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 04:16:24 localhost systemd-sysv-generator[162952]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 04:16:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 04:16:25 localhost systemd[1]: Stopped target tripleo_nova_libvirt.target. 
Feb 20 04:16:25 localhost python3.9[163055]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 04:16:26 localhost python3.9[163148]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 04:16:27 localhost python3.9[163241]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 04:16:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36412 DF PROTO=TCP SPT=51802 DPT=9100 SEQ=3402227050 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF9B1F30000000001030307)
Feb 20 04:16:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61562 DF PROTO=TCP SPT=39308 DPT=9105 SEQ=1370564576 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF9B3120000000001030307)
Feb 20 04:16:29 localhost python3.9[163334]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 04:16:29 localhost python3.9[163427]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 04:16:30 localhost python3.9[163520]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 04:16:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36414 DF PROTO=TCP SPT=51802 DPT=9100 SEQ=3402227050 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF9BE0E0000000001030307)
Feb 20 04:16:32 localhost python3.9[163613]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:16:33 localhost python3.9[163705]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:16:33 localhost python3.9[163797]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:16:34 localhost python3.9[163889]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:16:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.
Feb 20 04:16:34 localhost podman[163891]: 2026-02-20 09:16:34.521190459 +0000 UTC m=+0.096609321 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 04:16:34 localhost podman[163891]: 2026-02-20 09:16:34.531681427 +0000 UTC m=+0.107100279 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 20 04:16:34 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully.
Feb 20 04:16:34 localhost python3.9[164029]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:16:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36415 DF PROTO=TCP SPT=51802 DPT=9100 SEQ=3402227050 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF9CDCE0000000001030307)
Feb 20 04:16:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.
Feb 20 04:16:35 localhost podman[164156]: 2026-02-20 09:16:35.445925215 +0000 UTC m=+0.082095888 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 20 04:16:35 localhost podman[164156]: 2026-02-20 09:16:35.48767314 +0000 UTC m=+0.123843813 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 04:16:35 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully.
Feb 20 04:16:35 localhost python3.9[164155]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:16:36 localhost python3.9[164270]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:16:36 localhost python3.9[164377]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:16:37 localhost python3.9[164469]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:16:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43434 DF PROTO=TCP SPT=44084 DPT=9101 SEQ=2867900621 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF9D80E0000000001030307)
Feb 20 04:16:37 localhost python3.9[164561]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:16:38 localhost python3.9[164653]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:16:39 localhost python3.9[164745]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:16:39 localhost python3.9[164837]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:16:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7699 DF PROTO=TCP SPT=34364 DPT=9102 SEQ=1241700319 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF9E14E0000000001030307)
Feb 20 04:16:40 localhost python3.9[164929]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:16:41 localhost python3.9[165021]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 04:16:41 localhost python3.9[165113]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 20 04:16:42 localhost python3.9[165205]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 20 04:16:42 localhost systemd[1]: Reloading.
Feb 20 04:16:42 localhost systemd-sysv-generator[165233]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 04:16:42 localhost systemd-rc-local-generator[165228]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 04:16:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 04:16:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36416 DF PROTO=TCP SPT=51802 DPT=9100 SEQ=3402227050 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF9EE0D0000000001030307)
Feb 20 04:16:43 localhost python3.9[165333]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 04:16:44 localhost python3.9[165426]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 04:16:45 localhost python3.9[165519]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 04:16:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7701 DF PROTO=TCP SPT=34364 DPT=9102 SEQ=1241700319 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AF9F90E0000000001030307)
Feb 20 04:16:46 localhost python3.9[165612]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 04:16:47 localhost python3.9[165705]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 04:16:47 localhost python3.9[165798]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 04:16:48 localhost python3.9[165891]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 04:16:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62063 DF PROTO=TCP SPT=49022 DPT=9882 SEQ=3613501703 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFA054D0000000001030307)
Feb 20 04:16:49 localhost python3.9[165984]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Feb 20 04:16:50 localhost python3.9[166077]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb 20 04:16:51 localhost python3.9[166175]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005625202.localdomain update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Feb 20 04:16:52 localhost python3.9[166275]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 20 04:16:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62064 DF PROTO=TCP SPT=49022 DPT=9882 SEQ=3613501703 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFA150D0000000001030307)
Feb 20 04:16:53 localhost python3.9[166329]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 20 04:16:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33730 DF PROTO=TCP SPT=41114 DPT=9100 SEQ=1627329106 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFA27230000000001030307)
Feb 20 04:16:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37604 DF PROTO=TCP SPT=52126 DPT=9105 SEQ=1684455652 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFA28420000000001030307)
Feb 20 04:17:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33732 DF PROTO=TCP SPT=41114 DPT=9100 SEQ=1627329106 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFA330D0000000001030307)
Feb 20 04:17:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33733 DF PROTO=TCP SPT=41114 DPT=9100 SEQ=1627329106 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFA42CD0000000001030307)
Feb 20 04:17:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.
Feb 20 04:17:05 localhost systemd[1]: tmp-crun.yHH3jS.mount: Deactivated successfully.
Feb 20 04:17:05 localhost podman[166400]: 2026-02-20 09:17:05.451136613 +0000 UTC m=+0.088995617 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 04:17:05 localhost podman[166400]: 2026-02-20 09:17:05.485748537 +0000 UTC m=+0.123607581 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Feb 20 04:17:05 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully.
Feb 20 04:17:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.
Feb 20 04:17:05 localhost podman[166419]: 2026-02-20 09:17:05.598817422 +0000 UTC m=+0.074323464 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 04:17:05 localhost podman[166419]: 2026-02-20 09:17:05.666775519 +0000 UTC m=+0.142281521 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 04:17:05 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully.
Feb 20 04:17:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:17:05.876 161766 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 20 04:17:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:17:05.877 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 20 04:17:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:17:05.877 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 20 04:17:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55534 DF PROTO=TCP SPT=53690 DPT=9101 SEQ=1721766209 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFA4C0D0000000001030307)
Feb 20 04:17:07 localhost sshd[166446]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:17:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43435 DF PROTO=TCP SPT=44084 DPT=9101 SEQ=2867900621 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFA560D0000000001030307)
Feb 20 04:17:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54124 DF PROTO=TCP SPT=44154 DPT=9102 SEQ=3897088255 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFA620D0000000001030307)
Feb 20 04:17:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11556 DF PROTO=TCP SPT=39616 DPT=9102 SEQ=2056705486 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFA6E0D0000000001030307)
Feb 20 04:17:18 localhost kernel: SELinux: Converting 2746 SID table entries...
Feb 20 04:17:18 localhost kernel: SELinux: Context system_u:object_r:insights_client_cache_t:s0 became invalid (unmapped).
Feb 20 04:17:18 localhost kernel: SELinux: policy capability network_peer_controls=1
Feb 20 04:17:18 localhost kernel: SELinux: policy capability open_perms=1
Feb 20 04:17:18 localhost kernel: SELinux: policy capability extended_socket_class=1
Feb 20 04:17:18 localhost kernel: SELinux: policy capability always_check_network=0
Feb 20 04:17:18 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Feb 20 04:17:18 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 20 04:17:18 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Feb 20 04:17:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62018 DF PROTO=TCP SPT=32784 DPT=9882 SEQ=3372719062 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFA7A8E0000000001030307)
Feb 20 04:17:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62019 DF PROTO=TCP SPT=32784 DPT=9882 SEQ=3372719062 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFA8A4D0000000001030307)
Feb 20 04:17:24 localhost sshd[167490]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:17:25 localhost
sshd[167492]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:17:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=572 DF PROTO=TCP SPT=59154 DPT=9100 SEQ=1340795657 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFA9C580000000001030307) Feb 20 04:17:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19851 DF PROTO=TCP SPT=39314 DPT=9105 SEQ=1610346091 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFA9D710000000001030307) Feb 20 04:17:29 localhost kernel: SELinux: Converting 2749 SID table entries... Feb 20 04:17:29 localhost kernel: SELinux: policy capability network_peer_controls=1 Feb 20 04:17:29 localhost kernel: SELinux: policy capability open_perms=1 Feb 20 04:17:29 localhost kernel: SELinux: policy capability extended_socket_class=1 Feb 20 04:17:29 localhost kernel: SELinux: policy capability always_check_network=0 Feb 20 04:17:29 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Feb 20 04:17:29 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 20 04:17:29 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Feb 20 04:17:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=574 DF PROTO=TCP SPT=59154 DPT=9100 SEQ=1340795657 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFAA84D0000000001030307) Feb 20 04:17:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=575 DF PROTO=TCP SPT=59154 DPT=9100 SEQ=1340795657 ACK=0 WINDOW=32640 RES=0x00 SYN 
URGP=0 OPT (020405500402080A3AFAB80D0000000001030307) Feb 20 04:17:36 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=20 res=1 Feb 20 04:17:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 04:17:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. Feb 20 04:17:36 localhost podman[167525]: 2026-02-20 09:17:36.375133864 +0000 UTC m=+0.109136361 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Feb 20 04:17:36 localhost podman[167525]: 2026-02-20 09:17:36.406641127 +0000 UTC m=+0.140643624 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:17:36 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. Feb 20 04:17:36 localhost podman[167524]: 2026-02-20 09:17:36.497144517 +0000 UTC m=+0.232618178 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2) Feb 20 04:17:36 localhost podman[167524]: 2026-02-20 09:17:36.561916331 +0000 UTC m=+0.297390022 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0) Feb 20 04:17:36 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. 
Feb 20 04:17:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48431 DF PROTO=TCP SPT=53276 DPT=9101 SEQ=2462339151 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFAC20D0000000001030307) Feb 20 04:17:38 localhost kernel: SELinux: Converting 2752 SID table entries... Feb 20 04:17:38 localhost kernel: SELinux: policy capability network_peer_controls=1 Feb 20 04:17:38 localhost kernel: SELinux: policy capability open_perms=1 Feb 20 04:17:38 localhost kernel: SELinux: policy capability extended_socket_class=1 Feb 20 04:17:38 localhost kernel: SELinux: policy capability always_check_network=0 Feb 20 04:17:38 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Feb 20 04:17:38 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 20 04:17:38 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Feb 20 04:17:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5 DF PROTO=TCP SPT=58756 DPT=9102 SEQ=352330510 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFACB8D0000000001030307) Feb 20 04:17:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7704 DF PROTO=TCP SPT=34364 DPT=9102 SEQ=1241700319 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFAD80D0000000001030307) Feb 20 04:17:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7 DF PROTO=TCP SPT=58756 DPT=9102 SEQ=352330510 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFAE34D0000000001030307) Feb 20 
04:17:46 localhost kernel: SELinux: Converting 2752 SID table entries... Feb 20 04:17:46 localhost kernel: SELinux: policy capability network_peer_controls=1 Feb 20 04:17:46 localhost kernel: SELinux: policy capability open_perms=1 Feb 20 04:17:46 localhost kernel: SELinux: policy capability extended_socket_class=1 Feb 20 04:17:46 localhost kernel: SELinux: policy capability always_check_network=0 Feb 20 04:17:46 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Feb 20 04:17:46 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 20 04:17:46 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Feb 20 04:17:47 localhost systemd[1]: Reloading. Feb 20 04:17:47 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=22 res=1 Feb 20 04:17:47 localhost systemd-rc-local-generator[167673]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 04:17:47 localhost systemd-sysv-generator[167680]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 04:17:47 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 04:17:47 localhost systemd[1]: Reloading. Feb 20 04:17:47 localhost systemd-rc-local-generator[167712]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 04:17:47 localhost systemd-sysv-generator[167717]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Feb 20 04:17:47 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 04:17:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54442 DF PROTO=TCP SPT=46530 DPT=9882 SEQ=4035878906 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFAEF8D0000000001030307) Feb 20 04:17:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54443 DF PROTO=TCP SPT=46530 DPT=9882 SEQ=4035878906 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFAFF4D0000000001030307) Feb 20 04:17:56 localhost kernel: SELinux: Converting 2753 SID table entries... Feb 20 04:17:56 localhost kernel: SELinux: policy capability network_peer_controls=1 Feb 20 04:17:56 localhost kernel: SELinux: policy capability open_perms=1 Feb 20 04:17:56 localhost kernel: SELinux: policy capability extended_socket_class=1 Feb 20 04:17:56 localhost kernel: SELinux: policy capability always_check_network=0 Feb 20 04:17:56 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Feb 20 04:17:56 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 20 04:17:56 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Feb 20 04:17:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18327 DF PROTO=TCP SPT=55406 DPT=9100 SEQ=793044851 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFB11830000000001030307) Feb 20 04:17:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 
SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58411 DF PROTO=TCP SPT=52062 DPT=9105 SEQ=3345221982 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFB12A10000000001030307) Feb 20 04:18:00 localhost dbus-broker-launch[751]: Noticed file-system modification, trigger reload. Feb 20 04:18:00 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=23 res=1 Feb 20 04:18:00 localhost dbus-broker-launch[751]: Noticed file-system modification, trigger reload. Feb 20 04:18:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18329 DF PROTO=TCP SPT=55406 DPT=9100 SEQ=793044851 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFB1D8D0000000001030307) Feb 20 04:18:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18330 DF PROTO=TCP SPT=55406 DPT=9100 SEQ=793044851 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFB2D4D0000000001030307) Feb 20 04:18:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:18:05.878 161766 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:18:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:18:05.879 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:18:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:18:05.879 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" 
"released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:18:05 localhost sshd[167806]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:18:06 localhost sshd[167808]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:18:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 04:18:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. Feb 20 04:18:07 localhost podman[167811]: 2026-02-20 09:18:07.463728929 +0000 UTC m=+0.092292078 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:18:07 localhost podman[167811]: 2026-02-20 09:18:07.468860186 +0000 UTC m=+0.097423325 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:18:07 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. Feb 20 04:18:07 localhost podman[167810]: 2026-02-20 09:18:07.554012954 +0000 UTC m=+0.184366211 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, tcib_managed=true, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127) Feb 20 04:18:07 localhost podman[167810]: 2026-02-20 09:18:07.656774018 +0000 UTC m=+0.287127265 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3) Feb 20 04:18:07 localhost systemd[1]: 
76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. Feb 20 04:18:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40214 DF PROTO=TCP SPT=57652 DPT=9101 SEQ=371607975 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFB380D0000000001030307) Feb 20 04:18:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46183 DF PROTO=TCP SPT=47284 DPT=9102 SEQ=2646254151 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFB40CD0000000001030307) Feb 20 04:18:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58415 DF PROTO=TCP SPT=52062 DPT=9105 SEQ=3345221982 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFB4E0D0000000001030307) Feb 20 04:18:14 localhost sshd[168434]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:18:14 localhost sshd[168963]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:18:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46185 DF PROTO=TCP SPT=47284 DPT=9102 SEQ=2646254151 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFB588D0000000001030307) Feb 20 04:18:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55763 DF PROTO=TCP SPT=54978 DPT=9882 SEQ=546178360 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFB64CD0000000001030307) Feb 20 04:18:23 localhost kernel: DROPPING: IN=br-ex OUT= 
MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55764 DF PROTO=TCP SPT=54978 DPT=9882 SEQ=546178360 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFB748D0000000001030307)
Feb 20 04:18:24 localhost sshd[175511]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:18:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17655 DF PROTO=TCP SPT=50486 DPT=9100 SEQ=2573258733 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFB86B30000000001030307)
Feb 20 04:18:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41307 DF PROTO=TCP SPT=50858 DPT=9105 SEQ=469935649 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFB87D20000000001030307)
Feb 20 04:18:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17657 DF PROTO=TCP SPT=50486 DPT=9100 SEQ=2573258733 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFB92CD0000000001030307)
Feb 20 04:18:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17658 DF PROTO=TCP SPT=50486 DPT=9100 SEQ=2573258733 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFBA28D0000000001030307)
Feb 20 04:18:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4868 DF PROTO=TCP SPT=51586 DPT=9101 SEQ=494058086 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFBAC0D0000000001030307)
Feb 20 04:18:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.
Feb 20 04:18:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.
Feb 20 04:18:37 localhost podman[184826]: 2026-02-20 09:18:37.981017711 +0000 UTC m=+0.085601338 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 04:18:38 localhost podman[184827]: 2026-02-20 09:18:38.060566897 +0000 UTC m=+0.164600368 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Feb 20 04:18:38 localhost podman[184826]: 2026-02-20 09:18:38.090718912 +0000 UTC m=+0.195302539 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 20 04:18:38 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully.
Feb 20 04:18:38 localhost podman[184827]: 2026-02-20 09:18:38.146066442 +0000 UTC m=+0.250099913 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 04:18:38 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully.
Feb 20 04:18:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=542 DF PROTO=TCP SPT=47464 DPT=9102 SEQ=3652531061 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFBB60D0000000001030307)
Feb 20 04:18:41 localhost sshd[185150]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:18:42 localhost systemd[1]: Stopping OpenSSH server daemon...
Feb 20 04:18:42 localhost systemd[1]: sshd.service: Deactivated successfully.
Feb 20 04:18:42 localhost systemd[1]: sshd.service: Unit process 185150 (sshd) remains running after unit stopped.
Feb 20 04:18:42 localhost systemd[1]: sshd.service: Unit process 185162 (sshd) remains running after unit stopped.
Feb 20 04:18:42 localhost systemd[1]: Stopped OpenSSH server daemon.
Feb 20 04:18:42 localhost systemd[1]: sshd.service: Consumed 2.961s CPU time, read 32.0K from disk, written 36.0K to disk.
Feb 20 04:18:42 localhost systemd[1]: Stopped target sshd-keygen.target.
Feb 20 04:18:42 localhost systemd[1]: Stopping sshd-keygen.target...
Feb 20 04:18:42 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 20 04:18:42 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 20 04:18:42 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 20 04:18:42 localhost systemd[1]: Reached target sshd-keygen.target.
Feb 20 04:18:42 localhost systemd[1]: Starting OpenSSH server daemon...
Feb 20 04:18:42 localhost sshd[185712]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:18:42 localhost systemd[1]: Started OpenSSH server daemon.
Feb 20 04:18:42 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:42 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:42 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:42 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:42 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:42 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:42 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:43 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:43 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17659 DF PROTO=TCP SPT=50486 DPT=9100 SEQ=2573258733 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFBC20D0000000001030307)
Feb 20 04:18:43 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:43 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:43 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:43 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:43 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:43 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:43 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:43 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:43 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:43 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:43 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:43 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:44 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 20 04:18:44 localhost systemd[1]: Starting man-db-cache-update.service...
Feb 20 04:18:44 localhost systemd[1]: Reloading.
Feb 20 04:18:44 localhost systemd-sysv-generator[185942]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 04:18:44 localhost systemd-rc-local-generator[185937]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 04:18:44 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:44 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:44 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 04:18:44 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:44 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:44 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:44 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:44 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:44 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:44 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:44 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Feb 20 04:18:44 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 20 04:18:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=544 DF PROTO=TCP SPT=47464 DPT=9102 SEQ=3652531061 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFBCDCD0000000001030307)
Feb 20 04:18:48 localhost python3.9[190621]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 20 04:18:48 localhost systemd[1]: Reloading.
Feb 20 04:18:48 localhost systemd-rc-local-generator[190974]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 04:18:48 localhost systemd-sysv-generator[190978]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 04:18:48 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:48 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 04:18:48 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:48 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:48 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:48 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:48 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:48 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:48 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52532 DF PROTO=TCP SPT=44906 DPT=9882 SEQ=4160581905 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFBDA0D0000000001030307)
Feb 20 04:18:49 localhost python3.9[191572]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 20 04:18:49 localhost systemd[1]: Reloading.
Feb 20 04:18:49 localhost systemd-rc-local-generator[191951]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 04:18:49 localhost systemd-sysv-generator[191958]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 04:18:49 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:49 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 04:18:50 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:50 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:50 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:50 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:50 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:50 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:50 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:51 localhost python3.9[192530]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 20 04:18:51 localhost systemd[1]: Reloading.
Feb 20 04:18:51 localhost systemd-rc-local-generator[192625]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 04:18:51 localhost systemd-sysv-generator[192630]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 04:18:51 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:51 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 04:18:51 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:51 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:51 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:51 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:51 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:51 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:51 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:52 localhost python3.9[193233]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 20 04:18:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52533 DF PROTO=TCP SPT=44906 DPT=9882 SEQ=4160581905 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFBE9CD0000000001030307)
Feb 20 04:18:53 localhost systemd[1]: Reloading.
Feb 20 04:18:54 localhost systemd-rc-local-generator[193940]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 04:18:54 localhost systemd-sysv-generator[193946]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 04:18:54 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:54 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 04:18:54 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:54 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:54 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:54 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:54 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:54 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:54 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:55 localhost python3.9[194443]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 20 04:18:55 localhost systemd[1]: Reloading.
Feb 20 04:18:55 localhost systemd-sysv-generator[194704]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 04:18:55 localhost systemd-rc-local-generator[194698]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 04:18:55 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:55 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 04:18:55 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:55 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:55 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:55 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:55 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:55 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:55 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:56 localhost python3.9[195114]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 20 04:18:56 localhost systemd[1]: Reloading.
Feb 20 04:18:56 localhost systemd-sysv-generator[195340]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 04:18:56 localhost systemd-rc-local-generator[195336]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 04:18:56 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:56 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:56 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 04:18:56 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:56 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:56 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:56 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:56 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:56 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:57 localhost python3.9[195715]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 20 04:18:57 localhost systemd[1]: Reloading.
Feb 20 04:18:57 localhost systemd-rc-local-generator[195880]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 04:18:57 localhost systemd-sysv-generator[195884]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 04:18:57 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:57 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:57 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:57 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 04:18:57 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:57 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:57 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:57 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:57 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:18:57 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 20 04:18:57 localhost systemd[1]: Finished man-db-cache-update.service.
Feb 20 04:18:57 localhost systemd[1]: man-db-cache-update.service: Consumed 15.889s CPU time.
Feb 20 04:18:57 localhost systemd[1]: run-r047caafd42c34d9c945a02d213813007.service: Deactivated successfully.
Feb 20 04:18:57 localhost systemd[1]: run-r2b82e4da0fd54f189749b5e9899773e5.service: Deactivated successfully.
Feb 20 04:18:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2298 DF PROTO=TCP SPT=55250 DPT=9100 SEQ=504072144 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFBFBE40000000001030307)
Feb 20 04:18:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4050 DF PROTO=TCP SPT=46400 DPT=9105 SEQ=341072620 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFBFD020000000001030307)
Feb 20 04:18:58 localhost python3.9[196084]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 20 04:18:59 localhost python3.9[196197]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 20 04:19:00 localhost systemd[1]: Reloading.
Feb 20 04:19:00 localhost systemd-sysv-generator[196231]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 04:19:00 localhost systemd-rc-local-generator[196224]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 04:19:00 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:19:00 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 04:19:00 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:19:00 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:19:00 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 04:19:00 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 04:19:00 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:19:00 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:19:00 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:19:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2300 DF PROTO=TCP SPT=55250 DPT=9100 SEQ=504072144 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFC07CE0000000001030307)
Feb 20 04:19:03 localhost python3.9[196346]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 20 04:19:03 localhost systemd[1]: Reloading.
Feb 20 04:19:04 localhost systemd-sysv-generator[196379]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 04:19:04 localhost systemd-rc-local-generator[196376]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 04:19:04 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:19:04 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 20 04:19:04 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:19:04 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:19:04 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 20 04:19:04 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 20 04:19:04 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:19:04 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:19:04 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:19:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2301 DF PROTO=TCP SPT=55250 DPT=9100 SEQ=504072144 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFC178E0000000001030307) Feb 20 04:19:05 localhost python3.9[196495]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Feb 20 04:19:05 localhost python3.9[196608]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Feb 20 04:19:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:19:05.879 161766 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:19:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:19:05.880 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:19:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:19:05.881 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:19:06 localhost python3.9[196721]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Feb 20 04:19:07 localhost python3.9[196834]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Feb 20 04:19:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56163 DF PROTO=TCP SPT=47918 DPT=9101 SEQ=3747872484 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFC220D0000000001030307) Feb 20 04:19:08 localhost python3.9[196947]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Feb 20 04:19:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 04:19:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. 
Feb 20 04:19:08 localhost podman[196949]: 2026-02-20 09:19:08.286481602 +0000 UTC m=+0.122149315 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Feb 20 04:19:08 localhost podman[196956]: 2026-02-20 09:19:08.34921146 +0000 UTC m=+0.173541721 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Feb 20 04:19:08 localhost podman[196956]: 2026-02-20 09:19:08.357707832 +0000 UTC m=+0.182038063 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Feb 20 04:19:08 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully.
Feb 20 04:19:08 localhost podman[196949]: 2026-02-20 09:19:08.377701079 +0000 UTC m=+0.213368782 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 20 04:19:08 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully.
Feb 20 04:19:09 localhost python3.9[197100]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 20 04:19:09 localhost python3.9[197213]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 20 04:19:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16238 DF PROTO=TCP SPT=52220 DPT=9102 SEQ=4066901356 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFC2B0D0000000001030307)
Feb 20 04:19:11 localhost python3.9[197326]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 20 04:19:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2302 DF PROTO=TCP SPT=55250 DPT=9100 SEQ=504072144 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFC380D0000000001030307)
Feb 20 04:19:14 localhost python3.9[197439]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 20 04:19:14 localhost python3.9[197552]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 20 04:19:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16240 DF PROTO=TCP SPT=52220 DPT=9102 SEQ=4066901356 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFC42CD0000000001030307)
Feb 20 04:19:16 localhost python3.9[197665]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 20 04:19:17 localhost sshd[197779]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:19:17 localhost python3.9[197778]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 20 04:19:18 localhost python3.9[197893]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 20 04:19:18 localhost python3.9[198006]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 20 04:19:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54863 DF PROTO=TCP SPT=46206 DPT=9882 SEQ=2009421158 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFC4F4D0000000001030307)
Feb 20 04:19:22 localhost sshd[198043]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:19:22 localhost python3.9[198120]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 20 04:19:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54864 DF PROTO=TCP SPT=46206 DPT=9882 SEQ=2009421158 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFC5F0D0000000001030307)
Feb 20 04:19:23 localhost python3.9[198231]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 20 04:19:23 localhost python3.9[198341]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 04:19:25 localhost python3.9[198451]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 04:19:25 localhost python3.9[198561]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 04:19:26 localhost python3.9[198671]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 20 04:19:27 localhost python3.9[198779]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 04:19:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58016 DF PROTO=TCP SPT=40418 DPT=9100 SEQ=10548731 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFC71130000000001030307)
Feb 20 04:19:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61574 DF PROTO=TCP SPT=52072 DPT=9105 SEQ=1856477284 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFC72310000000001030307)
Feb 20 04:19:28 localhost python3.9[198889]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 04:19:29 localhost python3.9[198979]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1771579167.7303169-1662-59459761562591/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:19:30 localhost python3.9[199089]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 04:19:30 localhost python3.9[199179]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1771579169.3998826-1662-106454100447136/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:19:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58018 DF PROTO=TCP SPT=40418 DPT=9100 SEQ=10548731 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFC7D0E0000000001030307)
Feb 20 04:19:31 localhost python3.9[199289]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 04:19:32 localhost python3.9[199379]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1771579171.0041318-1662-262996707144263/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:19:32 localhost python3.9[199489]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 04:19:33 localhost python3.9[199579]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1771579172.1341767-1662-166352346215941/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:19:33 localhost python3.9[199689]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 04:19:34 localhost python3.9[199779]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1771579173.261939-1662-66313967178781/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=8d9b2057482987a531d808ceb2ac4bc7d43bf17c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:19:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58019 DF PROTO=TCP SPT=40418 DPT=9100 SEQ=10548731 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFC8CCD0000000001030307)
Feb 20 04:19:35 localhost python3.9[199889]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 04:19:35 localhost python3.9[199979]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1771579174.3643305-1662-74372332881093/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:19:36 localhost python3.9[200089]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 04:19:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39495 DF PROTO=TCP SPT=54768 DPT=9101 SEQ=885914566 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFC960D0000000001030307)
Feb 20 04:19:37 localhost python3.9[200177]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1771579176.1362789-1662-214658695775106/.source.conf follow=False _original_basename=auth.conf checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:19:38 localhost python3.9[200287]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 04:19:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.
Feb 20 04:19:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.
Feb 20 04:19:39 localhost podman[200378]: 2026-02-20 09:19:39.03154943 +0000 UTC m=+0.078665278 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Feb 20 04:19:39 localhost podman[200379]: 2026-02-20 09:19:39.086659638 +0000 UTC m=+0.130962234 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.schema-version=1.0)
Feb 20 04:19:39 localhost podman[200378]: 2026-02-20 09:19:39.096753953 +0000 UTC m=+0.143869831 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Feb 20 04:19:39 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully.
Feb 20 04:19:39 localhost podman[200379]: 2026-02-20 09:19:39.12082636 +0000 UTC m=+0.165129006 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb 20 04:19:39 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully.
Feb 20 04:19:39 localhost python3.9[200377]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1771579178.011389-1662-9418846659229/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:19:39 localhost python3.9[200565]: ansible-ansible.builtin.file Invoked with path=/etc/libvirt/passwd.db state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:19:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56164 DF PROTO=TCP SPT=47918 DPT=9101 SEQ=3747872484 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFCA00D0000000001030307)
Feb 20 04:19:40 localhost sshd[200708]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:19:40 localhost python3.9[200707]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:19:41 localhost python3.9[200837]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:19:41 localhost python3.9[200947]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:19:42 localhost sshd[201058]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:19:42 localhost python3.9[201057]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:19:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=547 DF PROTO=TCP SPT=47464 DPT=9102 SEQ=3652531061 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFCAC0D0000000001030307)
Feb 20 04:19:42 localhost python3.9[201169]: ansible-ansible.builtin.file
Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:19:43 localhost python3.9[201279]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:19:44 localhost python3.9[201389]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:19:44 localhost python3.9[201499]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:19:45 localhost python3.9[201609]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False 
force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:19:45 localhost python3.9[201719]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:19:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50679 DF PROTO=TCP SPT=45292 DPT=9102 SEQ=3069352236 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFCB80D0000000001030307) Feb 20 04:19:47 localhost python3.9[201829]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:19:47 localhost ceph-osd[31981]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 20 04:19:47 localhost ceph-osd[31981]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.1 total, 600.0 interval#012Cumulative writes: 5073 writes, 22K keys, 5073 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5073 writes, 653 syncs, 7.77 writes per sync, written: 0.02 
GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.008 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.008 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.008 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d37d6522d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 
0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d37d6522d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_sl Feb 20 04:19:47 localhost python3.9[201939]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:19:48 localhost python3.9[202049]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:19:49 localhost kernel: DROPPING: 
IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12492 DF PROTO=TCP SPT=40070 DPT=9882 SEQ=2995437319 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFCC44D0000000001030307) Feb 20 04:19:49 localhost python3.9[202159]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:19:50 localhost python3.9[202269]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:19:51 localhost python3.9[202357]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771579190.0067317-2325-272005495952673/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:19:51 localhost python3.9[202467]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:19:51 localhost ceph-osd[32921]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS 
------- Feb 20 04:19:51 localhost ceph-osd[32921]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.1 total, 600.0 interval#012Cumulative writes: 5513 writes, 24K keys, 5513 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5513 writes, 750 syncs, 7.35 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 
interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55bae83ca2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55bae83ca2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_sl Feb 20 04:19:52 localhost python3.9[202555]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771579191.196992-2325-129273612992530/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:19:52 localhost python3.9[202665]: ansible-ansible.legacy.stat Invoked with 
path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:19:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12493 DF PROTO=TCP SPT=40070 DPT=9882 SEQ=2995437319 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFCD40D0000000001030307) Feb 20 04:19:53 localhost python3.9[202753]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771579192.4392948-2325-48174347173493/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:19:54 localhost python3.9[202863]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:19:54 localhost sshd[202950]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:19:54 localhost python3.9[202953]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771579193.5590515-2325-24773828451721/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None 
seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:19:55 localhost python3.9[203063]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:19:55 localhost python3.9[203151]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771579194.7634153-2325-261945622271707/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:19:56 localhost python3.9[203261]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:19:56 localhost python3.9[203349]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771579195.9526381-2325-233292322956461/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:19:57 localhost python3.9[203459]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False 
checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:19:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48928 DF PROTO=TCP SPT=50954 DPT=9100 SEQ=3739193154 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFCE6440000000001030307) Feb 20 04:19:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44540 DF PROTO=TCP SPT=43676 DPT=9105 SEQ=2451045685 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFCE7610000000001030307) Feb 20 04:19:58 localhost python3.9[203547]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771579197.1419415-2325-242945750201366/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:19:59 localhost python3.9[203657]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:20:00 localhost python3.9[203745]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771579199.0607903-2325-238602991110060/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 
checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:20:00 localhost python3.9[203855]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:20:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48930 DF PROTO=TCP SPT=50954 DPT=9100 SEQ=3739193154 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFCF24D0000000001030307) Feb 20 04:20:01 localhost python3.9[203943]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771579200.166131-2325-272535784484035/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:20:02 localhost python3.9[204053]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:20:02 localhost python3.9[204141]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771579201.6741188-2325-43248485998960/.source.conf 
follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:20:03 localhost python3.9[204251]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:20:03 localhost python3.9[204339]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771579202.7949317-2325-171102599805939/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:20:04 localhost python3.9[204449]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:20:04 localhost python3.9[204537]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771579203.959145-2325-17049141651737/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None 
selevel=None setype=None attributes=None Feb 20 04:20:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48931 DF PROTO=TCP SPT=50954 DPT=9100 SEQ=3739193154 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFD020E0000000001030307) Feb 20 04:20:05 localhost python3.9[204647]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:20:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:20:05.881 161766 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404 Feb 20 04:20:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:20:05.882 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409 Feb 20 04:20:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:20:05.883 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423 Feb 20 04:20:06 localhost python3.9[204735]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771579205.107778-2325-10057512784935/.source.conf follow=False _original_basename=libvirt-socket.unit.j2
checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:20:06 localhost python3.9[204845]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:20:07 localhost python3.9[204933]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771579206.208015-2325-167836138810949/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:20:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11724 DF PROTO=TCP SPT=37828 DPT=9101 SEQ=676657351 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFD0C0D0000000001030307) Feb 20 04:20:08 localhost python3.9[205041]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 04:20:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. 
Feb 20 04:20:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. Feb 20 04:20:09 localhost systemd[1]: tmp-crun.t58tJT.mount: Deactivated successfully. Feb 20 04:20:09 localhost podman[205154]: 2026-02-20 09:20:09.47383059 +0000 UTC m=+0.096508822 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:20:09 localhost podman[205155]: 2026-02-20 09:20:09.51473361 +0000 UTC m=+0.136648519 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef 
(image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Feb 20 04:20:09 localhost podman[205155]: 2026-02-20 09:20:09.520660155 +0000 UTC m=+0.142575054 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef 
(image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Feb 20 04:20:09 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. 
Feb 20 04:20:09 localhost podman[205154]: 2026-02-20 09:20:09.538791406 +0000 UTC m=+0.161469638 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:20:09 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. 
Feb 20 04:20:09 localhost python3.9[205156]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False Feb 20 04:20:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4226 DF PROTO=TCP SPT=34360 DPT=9102 SEQ=955613919 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFD158E0000000001030307) Feb 20 04:20:10 localhost python3.9[205304]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 20 04:20:11 localhost systemd[1]: Reloading. Feb 20 04:20:11 localhost systemd-rc-local-generator[205326]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 04:20:11 localhost systemd-sysv-generator[205331]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 04:20:11 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:20:11 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 20 04:20:11 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:20:11 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:20:11 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 20 04:20:11 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 20 04:20:11 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:20:11 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:20:11 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:20:11 localhost sshd[205341]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:20:11 localhost systemd[1]: Starting libvirt logging daemon socket... Feb 20 04:20:11 localhost systemd[1]: Listening on libvirt logging daemon socket. Feb 20 04:20:11 localhost systemd[1]: Starting libvirt logging daemon admin socket... Feb 20 04:20:11 localhost systemd[1]: Listening on libvirt logging daemon admin socket. Feb 20 04:20:11 localhost systemd[1]: Starting libvirt logging daemon... Feb 20 04:20:11 localhost systemd[1]: Started libvirt logging daemon. Feb 20 04:20:12 localhost python3.9[205458]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 20 04:20:12 localhost systemd[1]: Reloading. Feb 20 04:20:12 localhost systemd-rc-local-generator[205479]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 04:20:12 localhost systemd-sysv-generator[205484]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Feb 20 04:20:12 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:20:12 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 20 04:20:12 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:20:12 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:20:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 04:20:12 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 20 04:20:12 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:20:12 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:20:12 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:20:12 localhost systemd[1]: Starting libvirt nodedev daemon socket... Feb 20 04:20:12 localhost systemd[1]: Listening on libvirt nodedev daemon socket. Feb 20 04:20:12 localhost systemd[1]: Starting libvirt nodedev daemon admin socket... Feb 20 04:20:12 localhost systemd[1]: Starting libvirt nodedev daemon read-only socket... Feb 20 04:20:12 localhost systemd[1]: Listening on libvirt nodedev daemon admin socket. Feb 20 04:20:12 localhost systemd[1]: Listening on libvirt nodedev daemon read-only socket. Feb 20 04:20:12 localhost systemd[1]: Started libvirt nodedev daemon. 
Feb 20 04:20:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48932 DF PROTO=TCP SPT=50954 DPT=9100 SEQ=3739193154 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFD220D0000000001030307) Feb 20 04:20:13 localhost systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs... Feb 20 04:20:14 localhost systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs. Feb 20 04:20:14 localhost python3.9[205634]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 20 04:20:14 localhost systemd[1]: Reloading. Feb 20 04:20:14 localhost systemd-rc-local-generator[205667]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 04:20:14 localhost systemd-sysv-generator[205671]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 04:20:14 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:20:14 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 20 04:20:14 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:20:14 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:20:14 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 20 04:20:14 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 20 04:20:14 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:20:14 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:20:14 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:20:14 localhost systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged. Feb 20 04:20:14 localhost systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service. Feb 20 04:20:14 localhost systemd[1]: Starting libvirt proxy daemon socket... Feb 20 04:20:14 localhost systemd[1]: Listening on libvirt proxy daemon socket. Feb 20 04:20:14 localhost systemd[1]: Starting libvirt proxy daemon admin socket... Feb 20 04:20:14 localhost systemd[1]: Starting libvirt proxy daemon read-only socket... Feb 20 04:20:14 localhost systemd[1]: Listening on libvirt proxy daemon admin socket. Feb 20 04:20:14 localhost systemd[1]: Listening on libvirt proxy daemon read-only socket. Feb 20 04:20:14 localhost systemd[1]: Started libvirt proxy daemon. Feb 20 04:20:15 localhost python3.9[205814]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 20 04:20:15 localhost systemd[1]: Reloading. Feb 20 04:20:15 localhost setroubleshoot[205603]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. 
For complete SELinux messages run: sealert -l 5d69b9c8-a672-4a36-bd63-73ea598bc62c Feb 20 04:20:15 localhost setroubleshoot[205603]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.

***** Plugin dac_override (91.4 confidence) suggests **********************

If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
Then turn on full auditing to get path information about the offending file and generate the error again.
Do

Turn on full auditing
# auditctl -w /etc/shadow -p w
Try to recreate AVC. Then execute
# ausearch -m avc -ts recent
If you see PATH record check ownership/permissions on file, and fix it,
otherwise report as a bugzilla.

***** Plugin catchall (9.59 confidence) suggests **************************

If you believe that virtlogd should have the dac_read_search capability by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
# semodule -X 300 -i my-virtlogd.pp

Feb 20 04:20:15 localhost setroubleshoot[205603]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 5d69b9c8-a672-4a36-bd63-73ea598bc62c Feb 20 04:20:15 localhost setroubleshoot[205603]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.

***** Plugin dac_override (91.4 confidence) suggests **********************

If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
Then turn on full auditing to get path information about the offending file and generate the error again.
Do

Turn on full auditing
# auditctl -w /etc/shadow -p w
Try to recreate AVC. Then execute
# ausearch -m avc -ts recent
If you see PATH record check ownership/permissions on file, and fix it,
otherwise report as a bugzilla.

***** Plugin catchall (9.59 confidence) suggests **************************

If you believe that virtlogd should have the dac_read_search capability by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
# semodule -X 300 -i my-virtlogd.pp

Feb 20 04:20:15 localhost systemd-rc-local-generator[205842]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 04:20:15 localhost systemd-sysv-generator[205846]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 04:20:15 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:20:15 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 20 04:20:15 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:20:15 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:20:15 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 04:20:15 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 20 04:20:15 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:20:15 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:20:15 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:20:15 localhost systemd[1]: Listening on libvirt locking daemon socket. Feb 20 04:20:15 localhost systemd[1]: Starting libvirt QEMU daemon socket... Feb 20 04:20:15 localhost systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 20 04:20:15 localhost systemd[1]: Starting Virtual Machine and Container Registration Service... Feb 20 04:20:15 localhost systemd[1]: Listening on libvirt QEMU daemon socket. Feb 20 04:20:15 localhost systemd[1]: Starting libvirt QEMU daemon admin socket... Feb 20 04:20:15 localhost systemd[1]: Starting libvirt QEMU daemon read-only socket... Feb 20 04:20:15 localhost systemd[1]: Listening on libvirt QEMU daemon admin socket. Feb 20 04:20:15 localhost systemd[1]: Listening on libvirt QEMU daemon read-only socket. Feb 20 04:20:15 localhost systemd[1]: Started Virtual Machine and Container Registration Service. Feb 20 04:20:15 localhost systemd[1]: Started libvirt QEMU daemon. 
Feb 20 04:20:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4228 DF PROTO=TCP SPT=34360 DPT=9102 SEQ=955613919 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFD2D4D0000000001030307) Feb 20 04:20:16 localhost python3.9[205989]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 20 04:20:16 localhost systemd[1]: Reloading. Feb 20 04:20:16 localhost systemd-sysv-generator[206018]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 04:20:16 localhost systemd-rc-local-generator[206012]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 04:20:16 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:20:16 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 20 04:20:16 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:20:16 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:20:16 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 20 04:20:16 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 20 04:20:16 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:20:16 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:20:16 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:20:16 localhost systemd[1]: Starting libvirt secret daemon socket... Feb 20 04:20:16 localhost systemd[1]: Listening on libvirt secret daemon socket. Feb 20 04:20:16 localhost systemd[1]: Starting libvirt secret daemon admin socket... Feb 20 04:20:16 localhost systemd[1]: Starting libvirt secret daemon read-only socket... Feb 20 04:20:16 localhost systemd[1]: Listening on libvirt secret daemon admin socket. Feb 20 04:20:16 localhost systemd[1]: Listening on libvirt secret daemon read-only socket. Feb 20 04:20:16 localhost systemd[1]: Started libvirt secret daemon. 
Feb 20 04:20:17 localhost python3.9[206160]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:20:18 localhost python3.9[206270]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Feb 20 04:20:19 localhost python3.9[206380]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 04:20:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42835 DF PROTO=TCP SPT=34534 DPT=9882 SEQ=355877627 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFD398D0000000001030307) Feb 20 04:20:20 localhost python3.9[206492]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None 
Feb 20 04:20:21 localhost python3.9[206600]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:20:22 localhost python3.9[206686]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1771579220.838945-3189-80278364585202/.source.xml follow=False _original_basename=secret.xml.j2 checksum=e299a5f369c62c832b857708260504de70ea24e6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:20:22 localhost python3.9[206796]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine a8557ee9-b55d-5519-942c-cf8f6172f1d8#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 04:20:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42836 DF PROTO=TCP SPT=34534 DPT=9882 SEQ=355877627 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFD494D0000000001030307) Feb 20 04:20:24 localhost python3.9[206916]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:20:25 localhost systemd[1]: 
dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully. Feb 20 04:20:25 localhost systemd[1]: setroubleshootd.service: Deactivated successfully. Feb 20 04:20:26 localhost python3.9[207253]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:20:27 localhost python3.9[207363]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:20:27 localhost python3.9[207451]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1771579226.7523441-3354-27383712605033/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=dc5ee7162311c27a6084cbee4052b901d56cb1ba backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:20:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5203 DF PROTO=TCP SPT=40220 DPT=9100 SEQ=3014354757 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFD5B740000000001030307) Feb 20 04:20:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7920 DF 
PROTO=TCP SPT=37664 DPT=9105 SEQ=1484161969 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFD5C920000000001030307) Feb 20 04:20:28 localhost sshd[207485]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:20:28 localhost python3.9[207563]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:20:29 localhost python3.9[207673]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:20:30 localhost python3.9[207730]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:20:30 localhost python3.9[207840]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:20:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5205 DF PROTO=TCP SPT=40220 
DPT=9100 SEQ=3014354757 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFD678E0000000001030307) Feb 20 04:20:31 localhost python3.9[207897]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.g_v9ozmg recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:20:31 localhost python3.9[208007]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:20:32 localhost python3.9[208064]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:20:33 localhost python3.9[208174]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 04:20:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5206 DF PROTO=TCP SPT=40220 DPT=9100 SEQ=3014354757 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A3AFD774E0000000001030307) Feb 20 04:20:35 localhost python3[208285]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall Feb 20 04:20:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3520 DF PROTO=TCP SPT=47982 DPT=9101 SEQ=2830181701 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFD800D0000000001030307) Feb 20 04:20:37 localhost python3.9[208395]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:20:38 localhost python3.9[208452]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:20:38 localhost sshd[208470]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:20:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. Feb 20 04:20:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 04:20:39 localhost systemd[1]: tmp-crun.9rZwrj.mount: Deactivated successfully. 
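Earlier in this sequence (04:20:22) the playbook ran `virsh secret-undefine a8557ee9-b55d-5519-942c-cf8f6172f1d8` followed by `virsh secret-define --file /tmp/secret.xml`, then deleted the temporary file. The log never shows the file's contents (it was copied with content=NOT_LOGGING_PARAMETER); a typical libvirt ceph-usage secret definition would look roughly like the sketch below, where only the UUID comes from the log and the usage name is a guess:

```
<secret ephemeral='no' private='no'>
  <uuid>a8557ee9-b55d-5519-942c-cf8f6172f1d8</uuid>
  <usage type='ceph'>
    <!-- usage name is illustrative; the real value is not in this log -->
    <name>client.openstack secret</name>
  </usage>
</secret>
```

The undefine-then-define pattern makes the task idempotent: redefining replaces any stale secret with the same UUID before the ceph key is attached to it.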
Feb 20 04:20:39 localhost podman[208565]: 2026-02-20 09:20:39.680610145 +0000 UTC m=+0.100938345 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20260127, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Feb 20 04:20:39 localhost 
podman[208566]: 2026-02-20 09:20:39.721482558 +0000 UTC m=+0.139294895 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127) Feb 20 04:20:39 localhost podman[208565]: 2026-02-20 09:20:39.741004934 +0000 UTC m=+0.161333114 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes 
Operator team, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:20:39 localhost python3.9[208564]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:20:39 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. 
Feb 20 04:20:39 localhost podman[208566]: 2026-02-20 09:20:39.793734032 +0000 UTC m=+0.211546399 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true) Feb 20 04:20:39 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. 
Feb 20 04:20:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14558 DF PROTO=TCP SPT=60010 DPT=9102 SEQ=825345751 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFD8ACD0000000001030307) Feb 20 04:20:40 localhost python3.9[208697]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771579238.8787565-3621-8327901821664/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:20:40 localhost python3.9[208807]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:20:41 localhost python3.9[208907]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:20:42 localhost python3.9[209041]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:20:42 localhost python3.9[209116]: ansible-ansible.legacy.file Invoked with group=root mode=0600 
owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:20:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50682 DF PROTO=TCP SPT=45292 DPT=9102 SEQ=3069352236 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFD960E0000000001030307) Feb 20 04:20:44 localhost python3.9[209226]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:20:44 localhost python3.9[209316]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771579243.021962-3738-193892991976880/.source.nft follow=False _original_basename=ruleset.j2 checksum=e2e2635f27347d386f310e86d2b40c40289835bb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:20:45 localhost python3.9[209426]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:20:46 localhost 
kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14560 DF PROTO=TCP SPT=60010 DPT=9102 SEQ=825345751 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFDA28E0000000001030307) Feb 20 04:20:47 localhost python3.9[209536]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 04:20:48 localhost python3.9[209649]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:20:48 localhost python3.9[209759]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 04:20:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=947 DF PROTO=TCP SPT=44972 DPT=9882 SEQ=2546908758 ACK=0 WINDOW=32640 
RES=0x00 SYN URGP=0 OPT (020405500402080A3AFDAECD0000000001030307) Feb 20 04:20:49 localhost python3.9[209870]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 20 04:20:50 localhost python3.9[209982]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 04:20:50 localhost python3.9[210095]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:20:51 localhost python3.9[210205]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:20:52 localhost sshd[210294]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:20:52 localhost python3.9[210293]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771579251.241754-3954-33956450706160/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None 
owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:20:52 localhost python3.9[210405]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:20:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=948 DF PROTO=TCP SPT=44972 DPT=9882 SEQ=2546908758 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFDBE8D0000000001030307) Feb 20 04:20:53 localhost python3.9[210493]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771579252.4642627-3999-123646375209464/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:20:54 localhost python3.9[210603]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:20:54 localhost python3.9[210691]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771579253.6754875-4044-207933312268273/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None 
directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:20:55 localhost python3.9[210801]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 04:20:55 localhost systemd[1]: Reloading. Feb 20 04:20:55 localhost systemd-sysv-generator[210830]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 04:20:55 localhost systemd-rc-local-generator[210827]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 04:20:55 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:20:55 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 20 04:20:55 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:20:55 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:20:55 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
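The blockinfile invocation at 04:20:48 above writes include directives into /etc/sysconfig/nftables.conf, delimited by `# BEGIN/END ANSIBLE MANAGED BLOCK` markers and validated with `nft -c -f %s` before being committed. Reconstructed from the logged module arguments (block=..., marker=..., the `#012` sequences are logged newlines), the managed block should read:

```
# BEGIN ANSIBLE MANAGED BLOCK
include "/etc/nftables/iptables.nft"
include "/etc/nftables/edpm-chains.nft"
include "/etc/nftables/edpm-rules.nft"
include "/etc/nftables/edpm-jumps.nft"
# END ANSIBLE MANAGED BLOCK
```

Note the include order mirrors the live-apply pipeline logged at 04:20:47 (`cat edpm-chains ... edpm-jumps | nft -c -f -`): chains must be defined before the rules that populate them and the jumps that reference them.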
Feb 20 04:20:55 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 20 04:20:55 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:20:55 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:20:55 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:20:56 localhost systemd[1]: Reached target edpm_libvirt.target. Feb 20 04:20:57 localhost python3.9[210951]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None Feb 20 04:20:57 localhost systemd[1]: Reloading. Feb 20 04:20:57 localhost systemd-rc-local-generator[210975]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 04:20:57 localhost systemd-sysv-generator[210981]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Feb 20 04:20:57 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:20:57 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 20 04:20:57 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:20:57 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:20:57 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 04:20:57 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 20 04:20:57 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:20:57 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:20:57 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:20:57 localhost systemd[1]: Reloading. Feb 20 04:20:57 localhost systemd-sysv-generator[211018]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 04:20:57 localhost systemd-rc-local-generator[211013]: /etc/rc.d/rc.local is not marked executable, skipping. 
Feb 20 04:20:57 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:20:57 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 04:20:57 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:20:57 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:20:57 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 04:20:57 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 04:20:57 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:20:57 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:20:57 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:20:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51369 DF PROTO=TCP SPT=60418 DPT=9100 SEQ=251866053 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFDD0A50000000001030307)
Feb 20 04:20:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22229 DF PROTO=TCP SPT=44372 DPT=9105 SEQ=4292870976 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFDD1C10000000001030307)
Feb 20 04:20:58 localhost systemd[1]: session-52.scope: Deactivated successfully.
Feb 20 04:20:58 localhost systemd[1]: session-52.scope: Consumed 3min 24.560s CPU time.
Feb 20 04:20:58 localhost systemd-logind[760]: Session 52 logged out. Waiting for processes to exit.
Feb 20 04:20:58 localhost systemd-logind[760]: Removed session 52.
Feb 20 04:21:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51371 DF PROTO=TCP SPT=60418 DPT=9100 SEQ=251866053 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFDDCCE0000000001030307)
Feb 20 04:21:03 localhost sshd[211042]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:21:04 localhost systemd-logind[760]: New session 53 of user zuul.
Feb 20 04:21:04 localhost systemd[1]: Started Session 53 of User zuul.
Feb 20 04:21:04 localhost python3.9[211153]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 04:21:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51372 DF PROTO=TCP SPT=60418 DPT=9100 SEQ=251866053 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFDEC8D0000000001030307)
Feb 20 04:21:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:21:05.883 161766 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 20 04:21:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:21:05.884 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 20 04:21:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:21:05.885 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 20 04:21:06 localhost python3.9[211265]: ansible-ansible.builtin.service_facts Invoked
Feb 20 04:21:06 localhost network[211282]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 20 04:21:06 localhost network[211283]: 'network-scripts' will be removed from distribution in near future.
Feb 20 04:21:06 localhost network[211284]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 20 04:21:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61053 DF PROTO=TCP SPT=41620 DPT=9101 SEQ=3437375519 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFDF60E0000000001030307)
Feb 20 04:21:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19462 DF PROTO=TCP SPT=42500 DPT=9102 SEQ=1033720459 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFDFFCE0000000001030307)
Feb 20 04:21:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.
Feb 20 04:21:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.
Feb 20 04:21:10 localhost podman[211307]: 2026-02-20 09:21:10.441471941 +0000 UTC m=+0.076451015 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent)
Feb 20 04:21:10 localhost podman[211307]: 2026-02-20 09:21:10.447507746 +0000 UTC m=+0.082486760 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 04:21:10 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully.
Feb 20 04:21:10 localhost podman[211306]: 2026-02-20 09:21:10.500506411 +0000 UTC m=+0.134526278 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller)
Feb 20 04:21:10 localhost podman[211306]: 2026-02-20 09:21:10.595476382 +0000 UTC m=+0.229496189 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, io.buildah.version=1.41.3)
Feb 20 04:21:10 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully.
Feb 20 04:21:11 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 04:21:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4231 DF PROTO=TCP SPT=34360 DPT=9102 SEQ=955613919 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFE0C0E0000000001030307)
Feb 20 04:21:13 localhost python3.9[211559]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 20 04:21:14 localhost python3.9[211622]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 20 04:21:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19464 DF PROTO=TCP SPT=42500 DPT=9102 SEQ=1033720459 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFE178E0000000001030307)
Feb 20 04:21:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55540 DF PROTO=TCP SPT=39360 DPT=9882 SEQ=3603975097 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFE240D0000000001030307)
Feb 20 04:21:22 localhost python3.9[211734]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 04:21:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55541 DF PROTO=TCP SPT=39360 DPT=9882 SEQ=3603975097 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFE33CD0000000001030307)
Feb 20 04:21:23 localhost python3.9[211846]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi mode=preserve remote_src=True src=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi/ backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:21:24 localhost python3.9[211956]: ansible-ansible.legacy.command Invoked with _raw_params=mv "/var/lib/config-data/puppet-generated/iscsid/etc/iscsi" "/var/lib/config-data/puppet-generated/iscsid/etc/iscsi.adopted"#012 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 04:21:24 localhost python3.9[212067]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 04:21:25 localhost python3.9[212178]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -rF /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 04:21:26 localhost sshd[212290]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:21:26 localhost python3.9[212289]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 04:21:27 localhost python3.9[212403]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:21:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48191 DF PROTO=TCP SPT=59096 DPT=9100 SEQ=3093438793 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFE45D30000000001030307)
Feb 20 04:21:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15659 DF PROTO=TCP SPT=44194 DPT=9105 SEQ=3283919320 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFE46F20000000001030307)
Feb 20 04:21:28 localhost python3.9[212513]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 04:21:28 localhost systemd[1]: Listening on Open-iSCSI iscsid Socket.
Feb 20 04:21:30 localhost python3.9[212627]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 04:21:30 localhost systemd[1]: Reloading.
Feb 20 04:21:30 localhost systemd-rc-local-generator[212656]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 04:21:30 localhost systemd-sysv-generator[212660]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 04:21:30 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:21:30 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 04:21:30 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:21:30 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:21:30 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 04:21:30 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 04:21:30 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:21:30 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:21:30 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:21:30 localhost systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Feb 20 04:21:30 localhost systemd[1]: Starting Open-iSCSI...
Feb 20 04:21:30 localhost iscsid[212668]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 20 04:21:30 localhost iscsid[212668]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Feb 20 04:21:30 localhost iscsid[212668]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 20 04:21:30 localhost iscsid[212668]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 20 04:21:30 localhost iscsid[212668]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 20 04:21:30 localhost iscsid[212668]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 20 04:21:30 localhost iscsid[212668]: iscsid: can't open iscsid.ipc_auth_uid configuration file /etc/iscsi/iscsid.conf
Feb 20 04:21:30 localhost systemd[1]: Started Open-iSCSI.
Feb 20 04:21:30 localhost systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Feb 20 04:21:30 localhost systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Feb 20 04:21:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48193 DF PROTO=TCP SPT=59096 DPT=9100 SEQ=3093438793 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFE51CD0000000001030307)
Feb 20 04:21:32 localhost systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Feb 20 04:21:32 localhost systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Feb 20 04:21:32 localhost systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@1.service.
Feb 20 04:21:33 localhost python3.9[212791]: ansible-ansible.builtin.service_facts Invoked
Feb 20 04:21:33 localhost network[212810]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 20 04:21:33 localhost network[212811]: 'network-scripts' will be removed from distribution in near future.
Feb 20 04:21:33 localhost network[212812]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 20 04:21:33 localhost sshd[212818]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:21:33 localhost setroubleshoot[212687]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l 23458b69-b0ef-4b8c-86d1-ffe946220458
Feb 20 04:21:33 localhost setroubleshoot[212687]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.#012#012***** Plugin catchall (100. confidence) suggests **************************#012#012If you believe that iscsid should be allowed search access on the iscsi directory by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid#012# semodule -X 300 -i my-iscsid.pp#012
Feb 20 04:21:33 localhost setroubleshoot[212687]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l 23458b69-b0ef-4b8c-86d1-ffe946220458
Feb 20 04:21:33 localhost setroubleshoot[212687]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.#012#012***** Plugin catchall (100. confidence) suggests **************************#012#012If you believe that iscsid should be allowed search access on the iscsi directory by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid#012# semodule -X 300 -i my-iscsid.pp#012
Feb 20 04:21:33 localhost setroubleshoot[212687]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l 23458b69-b0ef-4b8c-86d1-ffe946220458
Feb 20 04:21:33 localhost setroubleshoot[212687]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.#012#012***** Plugin catchall (100. confidence) suggests **************************#012#012If you believe that iscsid should be allowed search access on the iscsi directory by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid#012# semodule -X 300 -i my-iscsid.pp#012
Feb 20 04:21:33 localhost setroubleshoot[212687]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l 23458b69-b0ef-4b8c-86d1-ffe946220458
Feb 20 04:21:33 localhost setroubleshoot[212687]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.#012#012***** Plugin catchall (100. confidence) suggests **************************#012#012If you believe that iscsid should be allowed search access on the iscsi directory by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid#012# semodule -X 300 -i my-iscsid.pp#012
Feb 20 04:21:33 localhost setroubleshoot[212687]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l 23458b69-b0ef-4b8c-86d1-ffe946220458
Feb 20 04:21:33 localhost setroubleshoot[212687]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.#012#012***** Plugin catchall (100. confidence) suggests **************************#012#012If you believe that iscsid should be allowed search access on the iscsi directory by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid#012# semodule -X 300 -i my-iscsid.pp#012
Feb 20 04:21:33 localhost setroubleshoot[212687]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l 23458b69-b0ef-4b8c-86d1-ffe946220458
Feb 20 04:21:33 localhost setroubleshoot[212687]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.#012#012***** Plugin catchall (100. confidence) suggests **************************#012#012If you believe that iscsid should be allowed search access on the iscsi directory by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid#012# semodule -X 300 -i my-iscsid.pp#012
Feb 20 04:21:34 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 04:21:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48194 DF PROTO=TCP SPT=59096 DPT=9100 SEQ=3093438793 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFE618D0000000001030307)
Feb 20 04:21:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51513 DF PROTO=TCP SPT=47368 DPT=9101 SEQ=3086964156 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFE6C0D0000000001030307)
Feb 20 04:21:37 localhost sshd[213019]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:21:38 localhost python3.9[213048]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 20 04:21:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3037 DF PROTO=TCP SPT=49586 DPT=9102 SEQ=799191722 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFE750E0000000001030307)
Feb 20 04:21:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.
Feb 20 04:21:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.
Feb 20 04:21:41 localhost podman[213055]: 2026-02-20 09:21:41.441546202 +0000 UTC m=+0.080109282 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 20 04:21:41 localhost podman[213055]: 2026-02-20 09:21:41.49081323 +0000 UTC m=+0.129376310 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 20 04:21:41 localhost podman[213056]: 2026-02-20 09:21:41.503602791 +0000 UTC m=+0.141366341 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Feb 20 04:21:41 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully.
Feb 20 04:21:41 localhost podman[213056]: 2026-02-20 09:21:41.515855519 +0000 UTC m=+0.153619029 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Feb 20 04:21:41 localhost systemd[1]:
eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. Feb 20 04:21:42 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Feb 20 04:21:42 localhost systemd[1]: Starting man-db-cache-update.service... Feb 20 04:21:42 localhost systemd[1]: Reloading. Feb 20 04:21:42 localhost systemd-sysv-generator[213143]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 04:21:42 localhost systemd-rc-local-generator[213138]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 04:21:42 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:21:42 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 20 04:21:42 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:21:42 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:21:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 20 04:21:42 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 20 04:21:42 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:21:42 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:21:42 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:21:42 localhost systemd[1]: Queuing reload/restart jobs for marked units… Feb 20 04:21:42 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Feb 20 04:21:42 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Feb 20 04:21:42 localhost systemd[1]: Finished man-db-cache-update.service. Feb 20 04:21:42 localhost systemd[1]: run-r884a252794f64c31b465679818e0a7a7.service: Deactivated successfully. Feb 20 04:21:42 localhost systemd[1]: run-r1f44a85bbc8a4e6e8422be6444d69ba4.service: Deactivated successfully. Feb 20 04:21:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48195 DF PROTO=TCP SPT=59096 DPT=9100 SEQ=3093438793 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFE820D0000000001030307) Feb 20 04:21:43 localhost systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@1.service: Deactivated successfully. Feb 20 04:21:43 localhost systemd[1]: setroubleshootd.service: Deactivated successfully. 
Feb 20 04:21:44 localhost python3.9[213450]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Feb 20 04:21:45 localhost python3.9[213560]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled Feb 20 04:21:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3039 DF PROTO=TCP SPT=49586 DPT=9102 SEQ=799191722 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFE8CCD0000000001030307) Feb 20 04:21:46 localhost python3.9[213674]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:21:46 localhost python3.9[213780]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771579305.7291596-483-77791782714900/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:21:47 localhost python3.9[213890]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False 
unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:21:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61300 DF PROTO=TCP SPT=33638 DPT=9882 SEQ=3339931288 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFE990D0000000001030307) Feb 20 04:21:49 localhost python3.9[214000]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 20 04:21:50 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 20 04:21:50 localhost systemd[1]: Stopped Load Kernel Modules. Feb 20 04:21:50 localhost systemd[1]: Stopping Load Kernel Modules... Feb 20 04:21:50 localhost systemd[1]: Starting Load Kernel Modules... Feb 20 04:21:50 localhost systemd-modules-load[214004]: Module 'msr' is built in Feb 20 04:21:50 localhost systemd[1]: Finished Load Kernel Modules. 
Feb 20 04:21:50 localhost python3.9[214114]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 04:21:52 localhost python3.9[214225]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 20 04:21:52 localhost python3.9[214335]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:21:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61301 DF PROTO=TCP SPT=33638 DPT=9882 SEQ=3339931288 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFEA8CD0000000001030307) Feb 20 04:21:53 localhost python3.9[214423]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771579312.444978-636-27627473576122/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:21:54 localhost python3.9[214533]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 04:21:54 localhost 
python3.9[214644]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:21:55 localhost python3.9[214754]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:21:56 localhost python3.9[214864]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:21:57 localhost python3.9[214974]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:21:57 localhost python3.9[215084]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:21:57 
localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22408 DF PROTO=TCP SPT=39218 DPT=9100 SEQ=4192216107 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFEBB030000000001030307) Feb 20 04:21:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59512 DF PROTO=TCP SPT=57116 DPT=9105 SEQ=487870439 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFEBC220000000001030307) Feb 20 04:21:58 localhost python3.9[215194]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:21:59 localhost python3.9[215304]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:22:00 localhost python3.9[215414]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 20 04:22:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22410 DF PROTO=TCP 
SPT=39218 DPT=9100 SEQ=4192216107 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFEC70E0000000001030307) Feb 20 04:22:01 localhost python3.9[215526]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 04:22:02 localhost python3.9[215637]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 04:22:03 localhost systemd[1]: Listening on multipathd control socket. Feb 20 04:22:04 localhost python3.9[215751]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 04:22:04 localhost systemd[1]: Starting Wait for udev To Complete Device Initialization... Feb 20 04:22:04 localhost udevadm[215756]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in. Feb 20 04:22:04 localhost systemd[1]: Finished Wait for udev To Complete Device Initialization. Feb 20 04:22:04 localhost systemd[1]: Starting Device-Mapper Multipath Device Controller... Feb 20 04:22:04 localhost multipathd[215759]: --------start up-------- Feb 20 04:22:04 localhost multipathd[215759]: read /etc/multipath.conf Feb 20 04:22:04 localhost multipathd[215759]: path checkers start up Feb 20 04:22:04 localhost systemd[1]: Started Device-Mapper Multipath Device Controller. 
Feb 20 04:22:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22411 DF PROTO=TCP SPT=39218 DPT=9100 SEQ=4192216107 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFED6CE0000000001030307) Feb 20 04:22:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:22:05.884 161766 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:22:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:22:05.885 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:22:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:22:05.886 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:22:06 localhost python3.9[215877]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Feb 20 04:22:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2749 DF PROTO=TCP 
SPT=44692 DPT=9101 SEQ=1469707317 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFEE00D0000000001030307) Feb 20 04:22:07 localhost python3.9[215987]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled Feb 20 04:22:08 localhost python3.9[216106]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:22:08 localhost python3.9[216194]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771579327.8495708-1026-106451770101541/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:22:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51514 DF PROTO=TCP SPT=47368 DPT=9101 SEQ=3086964156 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFEEA0D0000000001030307) Feb 20 04:22:10 localhost python3.9[216304]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:22:10 localhost python3.9[216414]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service 
state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 20 04:22:10 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 20 04:22:10 localhost systemd[1]: Stopped Load Kernel Modules. Feb 20 04:22:10 localhost systemd[1]: Stopping Load Kernel Modules... Feb 20 04:22:10 localhost systemd[1]: Starting Load Kernel Modules... Feb 20 04:22:10 localhost systemd-modules-load[216418]: Module 'msr' is built in Feb 20 04:22:10 localhost systemd[1]: Finished Load Kernel Modules. Feb 20 04:22:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 04:22:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. Feb 20 04:22:12 localhost systemd[1]: tmp-crun.24eKNI.mount: Deactivated successfully. Feb 20 04:22:12 localhost podman[216530]: 2026-02-20 09:22:12.200784968 +0000 UTC m=+0.091669602 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 
'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127) Feb 20 04:22:12 localhost podman[216530]: 2026-02-20 09:22:12.233524231 +0000 UTC m=+0.124408825 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent) Feb 20 04:22:12 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. 
Feb 20 04:22:12 localhost podman[216529]: 2026-02-20 09:22:12.24225196 +0000 UTC m=+0.133198767 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller) Feb 20 04:22:12 localhost podman[216529]: 2026-02-20 09:22:12.308827032 +0000 UTC m=+0.199773869 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0) Feb 20 04:22:12 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. Feb 20 04:22:12 localhost python3.9[216528]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Feb 20 04:22:12 localhost systemd[1]: virtnodedevd.service: Deactivated successfully. 
Feb 20 04:22:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19467 DF PROTO=TCP SPT=42500 DPT=9102 SEQ=1033720459 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFEF60D0000000001030307) Feb 20 04:22:14 localhost systemd[1]: virtproxyd.service: Deactivated successfully. Feb 20 04:22:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27688 DF PROTO=TCP SPT=45624 DPT=9102 SEQ=2156266743 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFF020D0000000001030307) Feb 20 04:22:16 localhost systemd[1]: Reloading. Feb 20 04:22:16 localhost systemd-rc-local-generator[216603]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 04:22:16 localhost systemd-sysv-generator[216607]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 04:22:16 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:22:16 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 20 04:22:16 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:22:16 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:22:16 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 20 04:22:16 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 20 04:22:16 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:22:16 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:22:16 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:22:16 localhost systemd[1]: Reloading. Feb 20 04:22:16 localhost systemd-rc-local-generator[216643]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 04:22:16 localhost systemd-sysv-generator[216646]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 04:22:16 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:22:16 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 20 04:22:16 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:22:16 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:22:16 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 20 04:22:16 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 20 04:22:16 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:22:16 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:22:16 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:22:16 localhost systemd-logind[760]: Watching system buttons on /dev/input/event0 (Power Button) Feb 20 04:22:16 localhost systemd-logind[760]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Feb 20 04:22:16 localhost lvm[216698]: PV /dev/loop3 online, VG ceph_vg0 is complete. Feb 20 04:22:16 localhost lvm[216697]: PV /dev/loop4 online, VG ceph_vg1 is complete. Feb 20 04:22:16 localhost lvm[216697]: VG ceph_vg1 finished Feb 20 04:22:16 localhost lvm[216698]: VG ceph_vg0 finished Feb 20 04:22:17 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Feb 20 04:22:17 localhost systemd[1]: Starting man-db-cache-update.service... Feb 20 04:22:17 localhost systemd[1]: Reloading. Feb 20 04:22:17 localhost systemd-rc-local-generator[216746]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 04:22:17 localhost systemd-sysv-generator[216749]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Feb 20 04:22:17 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:22:17 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 20 04:22:17 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:22:17 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:22:17 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 04:22:17 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 20 04:22:17 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:22:17 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:22:17 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:22:17 localhost systemd[1]: Queuing reload/restart jobs for marked units… Feb 20 04:22:18 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Feb 20 04:22:18 localhost systemd[1]: Finished man-db-cache-update.service. Feb 20 04:22:18 localhost systemd[1]: man-db-cache-update.service: Consumed 1.372s CPU time. Feb 20 04:22:18 localhost systemd[1]: run-rfe98960363774f01a7edfc3e38d0b44a.service: Deactivated successfully. 
Feb 20 04:22:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32676 DF PROTO=TCP SPT=39134 DPT=9882 SEQ=835294228 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFF0E4D0000000001030307) Feb 20 04:22:19 localhost python3.9[218003]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 20 04:22:19 localhost systemd[1]: Stopping Device-Mapper Multipath Device Controller... Feb 20 04:22:19 localhost multipathd[215759]: exit (signal) Feb 20 04:22:19 localhost multipathd[215759]: --------shut down------- Feb 20 04:22:19 localhost systemd[1]: multipathd.service: Deactivated successfully. Feb 20 04:22:19 localhost systemd[1]: Stopped Device-Mapper Multipath Device Controller. Feb 20 04:22:19 localhost systemd[1]: Starting Device-Mapper Multipath Device Controller... Feb 20 04:22:19 localhost multipathd[218009]: --------start up-------- Feb 20 04:22:19 localhost multipathd[218009]: read /etc/multipath.conf Feb 20 04:22:19 localhost multipathd[218009]: path checkers start up Feb 20 04:22:19 localhost systemd[1]: Started Device-Mapper Multipath Device Controller. 
Feb 20 04:22:19 localhost sshd[218018]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:22:20 localhost python3.9[218127]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 20 04:22:21 localhost python3.9[218241]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:22:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32677 DF PROTO=TCP SPT=39134 DPT=9882 SEQ=835294228 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFF1E0E0000000001030307) Feb 20 04:22:23 localhost python3.9[218351]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Feb 20 04:22:23 localhost systemd[1]: Reloading. Feb 20 04:22:23 localhost systemd-rc-local-generator[218374]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 04:22:23 localhost systemd-sysv-generator[218379]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Feb 20 04:22:23 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:22:23 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 20 04:22:23 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:22:23 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:22:23 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 04:22:23 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 20 04:22:23 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:22:23 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:22:23 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:22:24 localhost python3.9[218495]: ansible-ansible.builtin.service_facts Invoked Feb 20 04:22:24 localhost network[218512]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Feb 20 04:22:24 localhost network[218513]: 'network-scripts' will be removed from distribution in near future. Feb 20 04:22:24 localhost network[218514]: It is advised to switch to 'NetworkManager' instead for network management. Feb 20 04:22:25 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 20 04:22:25 localhost systemd[1]: virtsecretd.service: Deactivated successfully. Feb 20 04:22:25 localhost systemd[1]: virtqemud.service: Deactivated successfully. Feb 20 04:22:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37399 DF PROTO=TCP SPT=44048 DPT=9100 SEQ=4294676183 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFF30370000000001030307) Feb 20 04:22:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29665 DF PROTO=TCP SPT=40172 DPT=9105 SEQ=4071761141 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFF31510000000001030307) Feb 20 04:22:30 localhost python3.9[218750]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 04:22:30 localhost python3.9[218861]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 04:22:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37401 DF PROTO=TCP SPT=44048 DPT=9100 SEQ=4294676183 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFF3C4E0000000001030307) Feb 20 04:22:31 localhost python3.9[218972]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 04:22:32 localhost python3.9[219083]: 
ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 04:22:33 localhost python3.9[219194]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 04:22:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37402 DF PROTO=TCP SPT=44048 DPT=9100 SEQ=4294676183 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFF4C0D0000000001030307) Feb 20 04:22:35 localhost python3.9[219305]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 04:22:36 localhost python3.9[219416]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 04:22:36 localhost python3.9[219527]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 04:22:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64143 DF PROTO=TCP SPT=43776 DPT=9101 SEQ=3789937870 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFF560D0000000001030307) Feb 20 04:22:38 localhost sshd[219546]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:22:39 localhost 
kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1816 DF PROTO=TCP SPT=50274 DPT=9102 SEQ=2570687140 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFF5F8D0000000001030307) Feb 20 04:22:40 localhost python3.9[219640]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:22:41 localhost python3.9[219750]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:22:41 localhost python3.9[219860]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:22:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. 
Feb 20 04:22:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. Feb 20 04:22:42 localhost python3.9[219970]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:22:42 localhost podman[219971]: 2026-02-20 09:22:42.45634006 +0000 UTC m=+0.085384476 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2) Feb 20 04:22:42 localhost systemd[1]: tmp-crun.p30qVd.mount: Deactivated successfully. Feb 20 04:22:42 localhost podman[219972]: 2026-02-20 09:22:42.512840631 +0000 UTC m=+0.142029111 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Feb 20 04:22:42 localhost podman[219972]: 2026-02-20 09:22:42.543094771 +0000 UTC m=+0.172283241 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, container_name=ovn_metadata_agent, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent) Feb 20 04:22:42 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. Feb 20 04:22:42 localhost podman[219971]: 2026-02-20 09:22:42.566672872 +0000 UTC m=+0.195717348 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Feb 20 04:22:42 localhost systemd[1]: 
76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. Feb 20 04:22:43 localhost python3.9[220123]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:22:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3042 DF PROTO=TCP SPT=49586 DPT=9102 SEQ=799191722 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFF6C0D0000000001030307) Feb 20 04:22:43 localhost python3.9[220233]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:22:44 localhost python3.9[220343]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:22:45 localhost python3.9[220453]: ansible-ansible.builtin.file Invoked with 
path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:22:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1818 DF PROTO=TCP SPT=50274 DPT=9102 SEQ=2570687140 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFF774D0000000001030307) Feb 20 04:22:47 localhost python3.9[220599]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:22:47 localhost python3.9[220767]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:22:48 localhost python3.9[220909]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None 
src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:22:48 localhost python3.9[221037]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:22:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4616 DF PROTO=TCP SPT=46658 DPT=9882 SEQ=4095475891 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFF838D0000000001030307) Feb 20 04:22:49 localhost python3.9[221147]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:22:49 localhost python3.9[221257]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:22:50 localhost python3.9[221367]: ansible-ansible.builtin.file Invoked with 
path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:22:51 localhost python3.9[221477]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:22:51 localhost sshd[221495]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:22:51 localhost sshd[221589]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:22:52 localhost python3.9[221588]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 04:22:52 localhost python3.9[221701]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Feb 20 04:22:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 
MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4617 DF PROTO=TCP SPT=46658 DPT=9882 SEQ=4095475891 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFF934D0000000001030307) Feb 20 04:22:53 localhost python3.9[221811]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Feb 20 04:22:53 localhost systemd[1]: Reloading. Feb 20 04:22:53 localhost systemd-rc-local-generator[221834]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 04:22:53 localhost systemd-sysv-generator[221837]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 04:22:53 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:22:53 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 20 04:22:53 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:22:53 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:22:53 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 20 04:22:54 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 20 04:22:54 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:22:54 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:22:54 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:22:54 localhost python3.9[221957]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 04:22:55 localhost sshd[222069]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:22:55 localhost python3.9[222068]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 04:22:56 localhost python3.9[222181]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 04:22:57 localhost python3.9[222292]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 
04:22:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64902 DF PROTO=TCP SPT=59772 DPT=9100 SEQ=2195230917 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFFA5630000000001030307) Feb 20 04:22:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49884 DF PROTO=TCP SPT=50092 DPT=9105 SEQ=3300938330 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFFA6820000000001030307) Feb 20 04:22:59 localhost python3.9[222403]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 04:22:59 localhost python3.9[222514]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 04:23:00 localhost python3.9[222625]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 04:23:00 localhost python3.9[222736]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None 
stdin=None Feb 20 04:23:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64904 DF PROTO=TCP SPT=59772 DPT=9100 SEQ=2195230917 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFFB14E0000000001030307) Feb 20 04:23:02 localhost python3.9[222847]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 20 04:23:03 localhost python3.9[222957]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 20 04:23:04 localhost python3.9[223067]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 20 04:23:04 localhost python3.9[223177]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 20 04:23:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64905 DF PROTO=TCP SPT=59772 DPT=9100 SEQ=2195230917 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFFC10D0000000001030307) Feb 20 04:23:05 localhost python3.9[223287]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 20 04:23:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:23:05.885 161766 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:23:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:23:05.886 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:23:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:23:05.886 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 
04:23:06 localhost python3.9[223397]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 20 04:23:06 localhost python3.9[223507]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Feb 20 04:23:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46501 DF PROTO=TCP SPT=40264 DPT=9101 SEQ=564991183 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFFCA0D0000000001030307) Feb 20 04:23:07 localhost python3.9[223617]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Feb 20 04:23:08 localhost python3.9[223727]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None 
_diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Feb 20 04:23:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17181 DF PROTO=TCP SPT=32820 DPT=9102 SEQ=2753331657 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFFD48D0000000001030307) Feb 20 04:23:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27691 DF PROTO=TCP SPT=45624 DPT=9102 SEQ=2156266743 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFFE00E0000000001030307) Feb 20 04:23:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 04:23:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. 
Feb 20 04:23:13 localhost podman[223745]: 2026-02-20 09:23:13.463212267 +0000 UTC m=+0.087495150 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127) Feb 20 04:23:13 localhost podman[223746]: 2026-02-20 09:23:13.515367966 +0000 UTC m=+0.139683070 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, 
managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS) Feb 20 04:23:13 localhost podman[223746]: 2026-02-20 09:23:13.554798871 +0000 UTC m=+0.179113985 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, 
managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:23:13 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. 
Feb 20 04:23:13 localhost podman[223745]: 2026-02-20 09:23:13.576467253 +0000 UTC m=+0.200750126 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:23:13 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. 
Feb 20 04:23:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17183 DF PROTO=TCP SPT=32820 DPT=9102 SEQ=2753331657 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFFEC4D0000000001030307) Feb 20 04:23:16 localhost python3.9[223877]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None Feb 20 04:23:17 localhost python3.9[223988]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None Feb 20 04:23:18 localhost python3.9[224104]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005625202.localdomain update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None Feb 20 04:23:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32969 DF PROTO=TCP SPT=53852 DPT=9882 SEQ=376888308 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3AFFF8CE0000000001030307) Feb 20 04:23:20 localhost sshd[224130]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:23:20 localhost systemd-logind[760]: New session 54 of user zuul. 
Feb 20 04:23:20 localhost systemd[1]: Started Session 54 of User zuul. Feb 20 04:23:20 localhost systemd[1]: session-54.scope: Deactivated successfully. Feb 20 04:23:20 localhost systemd-logind[760]: Session 54 logged out. Waiting for processes to exit. Feb 20 04:23:20 localhost systemd-logind[760]: Removed session 54. Feb 20 04:23:21 localhost python3.9[224241]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:23:22 localhost python3.9[224296]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 20 04:23:22 localhost python3.9[224404]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:23:23 localhost python3.9[224490]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771579402.1800506-2628-55405479719679/.source _original_basename=ssh-config follow=False checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 20 04:23:23 localhost kernel: DROPPING: IN=br-ex OUT= 
MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32970 DF PROTO=TCP SPT=53852 DPT=9882 SEQ=376888308 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B00088D0000000001030307) Feb 20 04:23:23 localhost python3.9[224598]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:23:24 localhost python3.9[224684]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771579403.28146-2628-245109030722171/.source.py _original_basename=nova_statedir_ownership.py follow=False checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 20 04:23:24 localhost python3.9[224792]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:23:25 localhost python3.9[224878]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771579404.3804283-2628-170176828367920/.source _original_basename=run-on-host follow=False checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 20 04:23:26 localhost 
python3.9[224986]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:23:26 localhost sshd[225073]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:23:27 localhost python3.9[225072]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771579405.7020314-2790-12294486845078/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=9126091aab7eef145bc487e7e4a566b4a9e47220 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 20 04:23:27 localhost python3.9[225184]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:23:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48236 DF PROTO=TCP SPT=60582 DPT=9100 SEQ=1703201647 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B001A930000000001030307) Feb 20 04:23:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48886 DF PROTO=TCP SPT=45428 DPT=9105 SEQ=4073352296 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A3B001BB20000000001030307) Feb 20 04:23:28 localhost python3.9[225294]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:23:29 localhost python3.9[225404]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 20 04:23:29 localhost sshd[225516]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:23:29 localhost python3.9[225517]: ansible-ansible.builtin.file Invoked with group=nova mode=0400 owner=nova path=/var/lib/nova/compute_id state=file recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:23:30 localhost python3.9[225626]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 20 04:23:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48238 DF PROTO=TCP SPT=60582 DPT=9100 SEQ=1703201647 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B00268D0000000001030307) Feb 20 04:23:32 localhost python3.9[225738]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config 
recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:23:32 localhost python3.9[225848]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 20 04:23:33 localhost python3.9[225956]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/nova_compute_init state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:23:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48239 DF PROTO=TCP SPT=60582 DPT=9100 SEQ=1703201647 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B00364E0000000001030307) Feb 20 04:23:36 localhost python3.9[226260]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/nova_compute_init config_pattern=*.json debug=False Feb 20 04:23:37 localhost python3.9[226370]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack Feb 20 04:23:37 localhost kernel: DROPPING: IN=br-ex 
OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50176 DF PROTO=TCP SPT=41322 DPT=9101 SEQ=2838321824 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B00400D0000000001030307) Feb 20 04:23:38 localhost python3[226480]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/nova_compute_init config_id=nova_compute_init config_overrides={} config_patterns=*.json containers=['nova_compute_init'] log_base_path=/var/log/containers/stdouts debug=False Feb 20 04:23:38 localhost podman[226518]: Feb 20 04:23:38 localhost podman[226518]: 2026-02-20 09:23:38.754154947 +0000 UTC m=+0.076116846 container create 66af039b890df51100ccb41f4acf5517eb836b613e1f9e398f4f08e1ae1ca156 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': 'ea4d4b46c7c04cab2c0f21f3438d6f73404b52ef41d7f6d09559733205477b71'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_id=nova_compute_init, org.label-schema.vendor=CentOS, 
tcib_managed=true, org.label-schema.schema-version=1.0, container_name=nova_compute_init, managed_by=edpm_ansible) Feb 20 04:23:38 localhost podman[226518]: 2026-02-20 09:23:38.716157895 +0000 UTC m=+0.038119794 image pull quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified Feb 20 04:23:38 localhost python3[226480]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --env EDPM_CONFIG_HASH=ea4d4b46c7c04cab2c0f21f3438d6f73404b52ef41d7f6d09559733205477b71 --label config_id=nova_compute_init --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': 'ea4d4b46c7c04cab2c0f21f3438d6f73404b52ef41d7f6d09559733205477b71'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t 
nova_compute_init Feb 20 04:23:39 localhost python3.9[226663]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 20 04:23:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5416 DF PROTO=TCP SPT=44394 DPT=9102 SEQ=2797897554 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B0049CD0000000001030307) Feb 20 04:23:40 localhost python3.9[226773]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml Feb 20 04:23:41 localhost sshd[226807]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:23:42 localhost python3.9[226885]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:23:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48240 DF PROTO=TCP SPT=60582 DPT=9100 SEQ=1703201647 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B00560D0000000001030307) Feb 20 04:23:43 localhost python3.9[226975]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771579421.3531377-3261-86794614096900/.source.yaml _original_basename=.m0cf9km6 follow=False checksum=201984e070e9869531933fce67c78d3ce61bb83b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:23:44 localhost 
systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 04:23:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. Feb 20 04:23:44 localhost systemd[1]: tmp-crun.yQen7Q.mount: Deactivated successfully. Feb 20 04:23:44 localhost podman[226993]: 2026-02-20 09:23:44.461929214 +0000 UTC m=+0.096386824 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:23:44 localhost podman[226993]: 2026-02-20 09:23:44.549615575 +0000 UTC 
m=+0.184073195 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Feb 20 04:23:44 localhost podman[226994]: 2026-02-20 09:23:44.556779348 +0000 UTC m=+0.190653763 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Feb 20 04:23:44 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. 
Feb 20 04:23:44 localhost podman[226994]: 2026-02-20 09:23:44.587259837 +0000 UTC m=+0.221134212 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent) Feb 20 04:23:44 localhost systemd[1]: 
eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. Feb 20 04:23:45 localhost python3.9[227128]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:23:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5418 DF PROTO=TCP SPT=44394 DPT=9102 SEQ=2797897554 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B00618D0000000001030307) Feb 20 04:23:46 localhost python3.9[227238]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 20 04:23:47 localhost python3.9[227348]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:23:47 localhost python3.9[227438]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/nova_compute.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1771579426.5504684-3360-276774857794678/.source.json _original_basename=.2e4l7ooy follow=False checksum=0018389a48392615f4a8869cad43008a907328ff backup=False force=True remote_src=False 
unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:23:48 localhost python3.9[227546]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/nova_compute state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:23:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22948 DF PROTO=TCP SPT=37992 DPT=9882 SEQ=194786953 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B006DCD0000000001030307) Feb 20 04:23:51 localhost python3.9[227937]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/nova_compute config_pattern=*.json debug=False Feb 20 04:23:52 localhost python3.9[228047]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack Feb 20 04:23:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22949 DF PROTO=TCP SPT=37992 DPT=9882 SEQ=194786953 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B007D8D0000000001030307) Feb 20 04:23:53 localhost python3[228157]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/nova_compute config_id=nova_compute config_overrides={} config_patterns=*.json containers=['nova_compute'] 
log_base_path=/var/log/containers/stdouts debug=False Feb 20 04:23:53 localhost python3[228157]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd",#012 "Digest": "sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2026-01-30T06:31:38.534497001Z",#012 "Config": {#012 "User": "nova",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20260127",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "b85d0548925081ae8c6bdd697658cec4",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 1214548351,#012 "VirtualSize": 1214548351,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": 
"/var/lib/containers/storage/overlay/f4838a4ef132546976a08c48bf55f89a91b54cc7f0728a84d5c77d24ba7a8992/diff:/var/lib/containers/storage/overlay/1d7b7d3208029afb8b179e48c365354efe7c39d41194e42a7d13168820ab51ad/diff:/var/lib/containers/storage/overlay/1ad843ea4b31b05bcf49ccd6faa74bd0d6976ffabe60466fd78caf7ec41bf4ac/diff:/var/lib/containers/storage/overlay/57c9a356b8a6d9095c1e6bfd1bb5d3b87c9d1b944c2c5d8a1da6e61dd690c595/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/426448257cd6d6837b598e532a79ac3a86475cfca86b72c882b04ab6e3f65424/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/426448257cd6d6837b598e532a79ac3a86475cfca86b72c882b04ab6e3f65424/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:57c9a356b8a6d9095c1e6bfd1bb5d3b87c9d1b944c2c5d8a1da6e61dd690c595",#012 "sha256:315008a247098d7a6218ae8aaacc68c9c19036e3778f3bb6313e5d0200cfa613",#012 "sha256:d3142d7a25f00adc375557623676c786baeb2b8fec29945db7fe79212198a495",#012 "sha256:6cac2e473d63cf2a9b8ef2ea3f4fbc7fb780c57021c3588efd56da3aa8cf8843",#012 "sha256:927dd86a09392106af537557be80232b7e8ca154daa00857c24fe20f9e550a50"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20260127",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "b85d0548925081ae8c6bdd697658cec4",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "nova",#012 "History": [#012 {#012 "created": "2026-01-28T05:56:51.126388624Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:54935d5b0598cdb1451aeae3c8627aade8d55dcef2e876b35185c8e36be64256 in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-28T05:56:51.126459235Z",#012 "created_by": 
"/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20260127\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-28T05:56:53.726938221Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2026-01-30T06:10:18.890429494Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:18.890534417Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:18.890553228Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:18.890570688Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:18.890616649Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:18.890659121Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:19.232761948Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:52.670543613Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 
'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 Feb 20 04:23:53 localhost podman[228207]: 2026-02-20 09:23:53.890833841 +0000 UTC m=+0.083272679 container remove d175ea37d783b899b05770302929cbca8c9c7cc91a2fbb36b5618df02d988628 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, build-date=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, architecture=x86_64, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, config_id=tripleo_step5, name=rhosp-rhel9/openstack-nova-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 
'df79bec7915db2c2cb15f0a47bf8984d-ca9e756af36a4b8ed088db0b68d5c381'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510, version=17.1.13, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe) Feb 20 04:23:53 localhost python3[228157]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force nova_compute Feb 20 04:23:53 localhost podman[228221]: Feb 20 04:23:53 localhost 
podman[228221]: 2026-02-20 09:23:53.997362193 +0000 UTC m=+0.088269386 container create 1f1a1047881a5842c37723180a39027a342e5d0e008eb86ed7846a54f5807c64 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=nova_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=nova_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-30a66d126d7b377a2bbf9bc0d57e51bdf55bb85eee6d379d83e3d3aa2e3d7293-ea4d4b46c7c04cab2c0f21f3438d6f73404b52ef41d7f6d09559733205477b71'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:23:53 localhost podman[228221]: 2026-02-20 09:23:53.955793301 +0000 UTC 
m=+0.046700534 image pull quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified Feb 20 04:23:54 localhost python3[228157]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-30a66d126d7b377a2bbf9bc0d57e51bdf55bb85eee6d379d83e3d3aa2e3d7293-ea4d4b46c7c04cab2c0f21f3438d6f73404b52ef41d7f6d09559733205477b71 --label config_id=nova_compute --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-30a66d126d7b377a2bbf9bc0d57e51bdf55bb85eee6d379d83e3d3aa2e3d7293-ea4d4b46c7c04cab2c0f21f3438d6f73404b52ef41d7f6d09559733205477b71'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume 
/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start Feb 20 04:23:55 localhost python3.9[228370]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 20 04:23:56 localhost python3.9[228482]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:23:56 localhost python3.9[228537]: ansible-stat Invoked with path=/etc/systemd/system/edpm_nova_compute_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 20 04:23:57 localhost python3.9[228646]: ansible-copy Invoked with 
src=/home/zuul/.ansible/tmp/ansible-tmp-1771579436.725288-3594-206824945355859/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:23:57 localhost python3.9[228701]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Feb 20 04:23:57 localhost systemd[1]: Reloading. Feb 20 04:23:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15939 DF PROTO=TCP SPT=38816 DPT=9100 SEQ=1980053403 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B008FC50000000001030307) Feb 20 04:23:57 localhost systemd-rc-local-generator[228725]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 04:23:57 localhost systemd-sysv-generator[228729]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Feb 20 04:23:58 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:23:58 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 20 04:23:58 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:23:58 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:23:58 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 04:23:58 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 20 04:23:58 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:23:58 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:23:58 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:23:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65179 DF PROTO=TCP SPT=49244 DPT=9105 SEQ=1421323429 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B0090E10000000001030307) Feb 20 04:23:58 localhost python3.9[228792]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 04:23:58 localhost systemd[1]: Reloading. 
Feb 20 04:23:58 localhost systemd-rc-local-generator[228817]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 04:23:58 localhost systemd-sysv-generator[228822]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 04:23:59 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:23:59 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 20 04:23:59 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:23:59 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:23:59 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 04:23:59 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 20 04:23:59 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:23:59 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:23:59 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:23:59 localhost systemd[1]: Starting nova_compute container... Feb 20 04:23:59 localhost systemd[1]: Started libcrun container. 
Feb 20 04:23:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a05ec5658aa743546307022d9fca1129054ee3b9d31d4b797362fdf17ebc9ce/merged/etc/multipath supports timestamps until 2038 (0x7fffffff) Feb 20 04:23:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a05ec5658aa743546307022d9fca1129054ee3b9d31d4b797362fdf17ebc9ce/merged/etc/nvme supports timestamps until 2038 (0x7fffffff) Feb 20 04:23:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a05ec5658aa743546307022d9fca1129054ee3b9d31d4b797362fdf17ebc9ce/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Feb 20 04:23:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a05ec5658aa743546307022d9fca1129054ee3b9d31d4b797362fdf17ebc9ce/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Feb 20 04:23:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a05ec5658aa743546307022d9fca1129054ee3b9d31d4b797362fdf17ebc9ce/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Feb 20 04:23:59 localhost podman[228833]: 2026-02-20 09:23:59.349927663 +0000 UTC m=+0.126443222 container init 1f1a1047881a5842c37723180a39027a342e5d0e008eb86ed7846a54f5807c64 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 
'6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-30a66d126d7b377a2bbf9bc0d57e51bdf55bb85eee6d379d83e3d3aa2e3d7293-ea4d4b46c7c04cab2c0f21f3438d6f73404b52ef41d7f6d09559733205477b71'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}) Feb 20 04:23:59 localhost podman[228833]: 2026-02-20 09:23:59.35958226 +0000 UTC m=+0.136097819 container start 1f1a1047881a5842c37723180a39027a342e5d0e008eb86ed7846a54f5807c64 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=nova_compute, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 
'6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-30a66d126d7b377a2bbf9bc0d57e51bdf55bb85eee6d379d83e3d3aa2e3d7293-ea4d4b46c7c04cab2c0f21f3438d6f73404b52ef41d7f6d09559733205477b71'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Feb 20 04:23:59 localhost podman[228833]: nova_compute Feb 20 04:23:59 localhost nova_compute[228847]: + sudo -E kolla_set_configs Feb 20 04:23:59 localhost systemd[1]: Started nova_compute container. 
Feb 20 04:23:59 localhost nova_compute[228847]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Feb 20 04:23:59 localhost nova_compute[228847]: INFO:__main__:Validating config file Feb 20 04:23:59 localhost nova_compute[228847]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Feb 20 04:23:59 localhost nova_compute[228847]: INFO:__main__:Copying service configuration files Feb 20 04:23:59 localhost nova_compute[228847]: INFO:__main__:Deleting /etc/nova/nova.conf Feb 20 04:23:59 localhost nova_compute[228847]: INFO:__main__:Copying /var/lib/kolla/config_files/src/nova-blank.conf to /etc/nova/nova.conf Feb 20 04:23:59 localhost nova_compute[228847]: INFO:__main__:Setting permission for /etc/nova/nova.conf Feb 20 04:23:59 localhost nova_compute[228847]: INFO:__main__:Copying /var/lib/kolla/config_files/src/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf Feb 20 04:23:59 localhost nova_compute[228847]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf Feb 20 04:23:59 localhost nova_compute[228847]: INFO:__main__:Copying /var/lib/kolla/config_files/src/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf Feb 20 04:23:59 localhost nova_compute[228847]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf Feb 20 04:23:59 localhost nova_compute[228847]: INFO:__main__:Copying /var/lib/kolla/config_files/src/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Feb 20 04:23:59 localhost nova_compute[228847]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Feb 20 04:23:59 localhost nova_compute[228847]: INFO:__main__:Copying /var/lib/kolla/config_files/src/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf Feb 20 04:23:59 localhost nova_compute[228847]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf Feb 20 04:23:59 localhost nova_compute[228847]: 
INFO:__main__:Copying /var/lib/kolla/config_files/src/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf Feb 20 04:23:59 localhost nova_compute[228847]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf Feb 20 04:23:59 localhost nova_compute[228847]: INFO:__main__:Deleting /etc/ceph Feb 20 04:23:59 localhost nova_compute[228847]: INFO:__main__:Creating directory /etc/ceph Feb 20 04:23:59 localhost nova_compute[228847]: INFO:__main__:Setting permission for /etc/ceph Feb 20 04:23:59 localhost nova_compute[228847]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring Feb 20 04:23:59 localhost nova_compute[228847]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Feb 20 04:23:59 localhost nova_compute[228847]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceph/ceph.conf to /etc/ceph/ceph.conf Feb 20 04:23:59 localhost nova_compute[228847]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Feb 20 04:23:59 localhost nova_compute[228847]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey Feb 20 04:23:59 localhost nova_compute[228847]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Feb 20 04:23:59 localhost nova_compute[228847]: INFO:__main__:Deleting /var/lib/nova/.ssh/config Feb 20 04:23:59 localhost nova_compute[228847]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ssh-config to /var/lib/nova/.ssh/config Feb 20 04:23:59 localhost nova_compute[228847]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Feb 20 04:23:59 localhost nova_compute[228847]: INFO:__main__:Deleting /usr/sbin/iscsiadm Feb 20 04:23:59 localhost nova_compute[228847]: INFO:__main__:Copying /var/lib/kolla/config_files/src/run-on-host to /usr/sbin/iscsiadm Feb 20 04:23:59 localhost nova_compute[228847]: 
INFO:__main__:Setting permission for /usr/sbin/iscsiadm Feb 20 04:23:59 localhost nova_compute[228847]: INFO:__main__:Writing out command to execute Feb 20 04:23:59 localhost nova_compute[228847]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Feb 20 04:23:59 localhost nova_compute[228847]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Feb 20 04:23:59 localhost nova_compute[228847]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ Feb 20 04:23:59 localhost nova_compute[228847]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Feb 20 04:23:59 localhost nova_compute[228847]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Feb 20 04:23:59 localhost nova_compute[228847]: ++ cat /run_command Feb 20 04:23:59 localhost nova_compute[228847]: + CMD=nova-compute Feb 20 04:23:59 localhost nova_compute[228847]: + ARGS= Feb 20 04:23:59 localhost nova_compute[228847]: + sudo kolla_copy_cacerts Feb 20 04:23:59 localhost nova_compute[228847]: + [[ ! -n '' ]] Feb 20 04:23:59 localhost nova_compute[228847]: + . 
kolla_extend_start Feb 20 04:23:59 localhost nova_compute[228847]: Running command: 'nova-compute' Feb 20 04:23:59 localhost nova_compute[228847]: + echo 'Running command: '\''nova-compute'\''' Feb 20 04:23:59 localhost nova_compute[228847]: + umask 0022 Feb 20 04:23:59 localhost nova_compute[228847]: + exec nova-compute Feb 20 04:24:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15941 DF PROTO=TCP SPT=38816 DPT=9100 SEQ=1980053403 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B009BCD0000000001030307) Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.084 228851 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.085 228851 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.085 228851 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.085 228851 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.194 228851 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.216 228851 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.021s execute 
/usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.216 228851 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m Feb 20 04:24:01 localhost python3.9[228969]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.631 228851 INFO nova.virt.driver [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.746 228851 INFO nova.compute.provider_config [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.762 228851 WARNING nova.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.: nova.exception.TooOldComputeService: Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.762 228851 DEBUG oslo_concurrency.lockutils [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.762 228851 DEBUG oslo_concurrency.lockutils [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] Acquired 
lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.762 228851 DEBUG oslo_concurrency.lockutils [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.763 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.763 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.763 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.763 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.763 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.764 228851 DEBUG oslo_service.service [None 
req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.764 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] allow_resize_to_same_host = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.764 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] arq_binding_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.764 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] backdoor_port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.764 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] backdoor_socket = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.764 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] block_device_allocate_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.764 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.765 228851 DEBUG oslo_service.service [None 
req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cert = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.765 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] compute_driver = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.765 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] compute_monitors = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.765 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] config_dir = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.765 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] config_drive_format = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.765 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] config_file = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.765 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.766 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - 
- - - - -] console_host = np0005625202.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.766 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] control_exchange = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.766 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cpu_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.766 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.766 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.767 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.767 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] default_availability_zone = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.767 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] default_ephemeral_format = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.767 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.767 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] default_schedule_zone = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.767 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] disk_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.768 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] enable_new_services = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.768 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] enabled_apis = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 
04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.768 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] enabled_ssl_apis = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.768 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] flat_injected = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.768 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] force_config_drive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.768 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] force_raw_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.768 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.769 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.769 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] host = np0005625202.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.769 228851 DEBUG 
oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] initial_cpu_allocation_ratio = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.769 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] initial_disk_allocation_ratio = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.769 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] initial_ram_allocation_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.769 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] injected_network_template = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.770 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] instance_build_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.770 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] instance_delete_interval = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.770 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 
09:24:01.770 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] instance_name_template = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.770 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] instance_usage_audit = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.770 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] instance_usage_audit_period = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.770 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.771 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] instances_path = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.771 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.771 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.771 228851 DEBUG 
oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] live_migration_retry_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.771 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.771 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.772 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.772 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.772 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.772 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.772 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] log_rotate_interval_type = 
days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.772 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] log_rotation_type = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.772 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.773 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.773 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.773 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.773 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] logging_user_identity_format = %(user)s 
%(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.773 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] long_rpc_timeout = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.773 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] max_concurrent_builds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.773 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.774 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] max_concurrent_snapshots = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.774 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] max_local_block_devices = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.774 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] max_logfile_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.774 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] max_logfile_size_mb = 20 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.774 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.774 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] metadata_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.774 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] metadata_listen_port = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.774 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] metadata_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.775 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] migrate_max_retries = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.775 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] mkisofs_cmd = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.775 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] my_block_storage_ip = 192.168.122.106 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.775 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] my_ip = 192.168.122.106 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.775 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] network_allocate_retries = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.775 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.775 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] osapi_compute_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.776 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] osapi_compute_listen_port = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.776 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] osapi_compute_unique_server_name_scope = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.776 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] osapi_compute_workers = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.776 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] password_length = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.776 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] periodic_enable = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.776 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] periodic_fuzzy_delay = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.776 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] pointer_model = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.776 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] preallocate_images = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.777 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.777 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] pybasedir = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 
localhost nova_compute[228847]: 2026-02-20 09:24:01.777 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] ram_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.777 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.777 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.777 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.778 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] reboot_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.778 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] reclaim_instance_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.778 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] record = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.778 228851 DEBUG oslo_service.service [None 
req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] reimage_timeout_per_gb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.778 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] report_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.778 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] rescue_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.778 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] reserved_host_cpus = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.778 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] reserved_host_disk_mb = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.779 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] reserved_host_memory_mb = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.779 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] reserved_huge_pages = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.779 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] resize_confirm_window = 0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.779 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] resize_fs_using_block_device = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.779 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.779 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] rootwrap_config = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.779 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] rpc_response_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.780 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] run_external_periodic_tasks = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.780 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.780 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.780 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.780 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.780 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] service_down_time = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.780 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] servicegroup_driver = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.780 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] shelved_offload_time = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.781 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] shelved_poll_interval = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.781 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 
04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.781 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] source_is_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.781 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] ssl_only = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.781 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] state_path = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.781 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] sync_power_state_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.781 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] sync_power_state_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.782 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.782 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] tempdir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.782 228851 DEBUG oslo_service.service 
[None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] timeout_nbd = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.782 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.782 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] update_resources_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.782 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] use_cow_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.782 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.782 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.783 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.783 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] use_rootwrap_daemon = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.783 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.783 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.783 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vcpu_pin_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.783 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vif_plugging_is_fatal = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.783 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vif_plugging_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.783 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] virt_mkfs = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.784 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] volume_usage_poll_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 
2026-02-20 09:24:01.784 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.784 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] web = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.784 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.784 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_concurrency.lock_path = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.784 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.784 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.785 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost 
nova_compute[228847]: 2026-02-20 09:24:01.785 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.785 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.785 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] api.auth_strategy = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.785 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] api.compute_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.785 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.785 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] api.dhcp_domain = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.786 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] api.enable_instance_password = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.786 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] api.glance_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.786 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.786 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.786 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.786 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.786 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] api.local_metadata_per_cell = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.786 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] api.max_limit = 1000 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.787 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] api.metadata_cache_expiration = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.787 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] api.neutron_default_tenant_id = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.787 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] api.use_forwarded_for = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.787 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] api.use_neutron_default_nets = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.787 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.787 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.787 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.788 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] api.vendordata_dynamic_ssl_certfile = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.788 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.788 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] api.vendordata_jsonfile_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.788 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] api.vendordata_providers = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.788 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cache.backend = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.788 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cache.backend_argument = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.788 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cache.config_prefix = cache.oslo log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.789 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cache.dead_timeout = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.789 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cache.debug_cache_backend = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.789 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cache.enable_retry_client = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.789 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cache.enable_socket_keepalive = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.789 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cache.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.789 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cache.expiration_time = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.789 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.789 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cache.hashclient_retry_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.790 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cache.memcache_dead_retry = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.790 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cache.memcache_password = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.790 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.790 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.790 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cache.memcache_pool_maxsize = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.790 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.790 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cache.memcache_sasl_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.791 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cache.memcache_servers = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.791 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cache.memcache_socket_timeout = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.791 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cache.memcache_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.791 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cache.proxies = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.791 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cache.retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.791 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cache.retry_delay = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m 
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.791 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cache.socket_keepalive_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.791 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cache.socket_keepalive_idle = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.792 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.792 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cache.tls_allowed_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.792 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cache.tls_cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.792 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cache.tls_certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.792 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cache.tls_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 
09:24:01.792 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cache.tls_keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.792 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cinder.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.793 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cinder.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.793 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cinder.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.793 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cinder.catalog_info = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.793 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cinder.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.793 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cinder.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.793 228851 DEBUG oslo_service.service [None 
req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cinder.cross_az_attach = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.793 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cinder.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.794 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cinder.endpoint_template = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.794 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cinder.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.794 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cinder.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.794 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cinder.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.794 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cinder.os_region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.794 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cinder.split_loggers = False 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.794 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cinder.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.795 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.795 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] compute.cpu_dedicated_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.795 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] compute.cpu_shared_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.795 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.795 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.795 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.795 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.795 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.796 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.796 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.796 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.796 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] compute.vmdk_allowed_types = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.796 228851 DEBUG oslo_service.service [None 
req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] conductor.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.796 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] console.allowed_origins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.796 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] console.ssl_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.797 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] console.ssl_minimum_version = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.797 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] consoleauth.token_ttl = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.797 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cyborg.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.797 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cyborg.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.797 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cyborg.collect_timing = False 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.797 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cyborg.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.797 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cyborg.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.797 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cyborg.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.798 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cyborg.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.798 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cyborg.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.798 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cyborg.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.798 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cyborg.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 
20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.798 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cyborg.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.798 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cyborg.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.798 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cyborg.service_type = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.798 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cyborg.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.799 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cyborg.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.799 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.799 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cyborg.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.799 
228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cyborg.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.799 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] cyborg.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.799 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.799 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.800 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.800 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.800 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.800 228851 DEBUG oslo_service.service [None 
req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.800 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.800 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.800 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.800 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.801 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.801 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.801 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] 
database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.801 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.801 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.801 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.801 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.802 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.802 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.802 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] database.sqlite_synchronous = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.802 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] api_database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.802 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] api_database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.802 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] api_database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.802 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] api_database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.803 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.803 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] api_database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.803 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.803 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] api_database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.803 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.803 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.803 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] api_database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.803 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] api_database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.804 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] api_database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.804 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] api_database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.804 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] api_database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.804 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.804 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] api_database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.804 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] api_database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.804 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] api_database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.805 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.805 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] devices.enabled_mdev_types = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.805 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.805 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.805 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.805 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] glance.api_servers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.805 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] glance.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.806 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] glance.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.806 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] glance.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.806 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] glance.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.806 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] glance.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.806 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] glance.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.806 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.806 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.806 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] glance.enable_rbd_download = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.807 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] glance.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.807 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] glance.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.807 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] glance.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.807 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] glance.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.807 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] glance.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.807 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] glance.num_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.807 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] glance.rbd_ceph_conf = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.807 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] glance.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.808 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] glance.rbd_pool = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.808 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] glance.rbd_user = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.808 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] glance.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.808 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] glance.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.808 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] glance.service_type = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.808 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] glance.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.808 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] glance.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.809 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.809 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] glance.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.809 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] glance.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.809 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.809 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] glance.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.809 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] guestfs.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.809 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] hyperv.config_drive_cdrom = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.810 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.810 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] hyperv.dynamic_memory_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.810 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.810 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] hyperv.enable_remotefx = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.810 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] hyperv.instances_path_share = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.810 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] hyperv.iscsi_initiator_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.810 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] hyperv.limit_cpu_features = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.810 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.811 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.811 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.811 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.811 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] hyperv.qemu_img_cmd = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.811 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] hyperv.use_multipath_io = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.811 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.811 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.811 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] hyperv.vswitch_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.812 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.812 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] mks.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.812 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] mks.mksproxy_base_url = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.812 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] image_cache.manager_interval = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.812 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.812 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.813 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.813 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.813 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] image_cache.subdirectory_name = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.813 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] ironic.api_max_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.813 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] ironic.api_retry_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.813 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.813 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.814 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] ironic.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.814 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] ironic.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.814 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] ironic.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.814 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] ironic.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.814 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.814 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.814 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] ironic.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.814 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] ironic.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.815 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] ironic.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.815 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.815 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] ironic.partition_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.815 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] ironic.peer_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.815 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.815 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.815 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] ironic.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.816 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] ironic.service_type = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.816 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.816 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] ironic.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.816 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.816 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.816 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] ironic.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.816 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.816 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] key_manager.backend = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.817 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] key_manager.fixed_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.817 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] barbican.auth_endpoint = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.817 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] barbican.barbican_api_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.817 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] barbican.barbican_endpoint = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.817 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.817 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] barbican.barbican_region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.817 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] barbican.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.818 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] barbican.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.818 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] barbican.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.818 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] barbican.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.818 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] barbican.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.818 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] barbican.number_of_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.818 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] barbican.retry_delay = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.818 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.818 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] barbican.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.819 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] barbican.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.819 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] barbican.verify_ssl = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.819 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] barbican.verify_ssl_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.819 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.819 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.819 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] barbican_service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.819 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.820 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.820 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.820 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] barbican_service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.820 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.820 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] barbican_service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.820 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vault.approle_role_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.820 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vault.approle_secret_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.821 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vault.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.821 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vault.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.821 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vault.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.821 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vault.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.821 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vault.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.821 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vault.kv_mountpoint = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.821 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vault.kv_version = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.821 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vault.namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.822 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vault.root_token_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.822 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vault.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.822 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vault.ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.822 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vault.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:01 localhost
nova_compute[228847]: 2026-02-20 09:24:01.822 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vault.use_ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.822 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vault.vault_url = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.822 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] keystone.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.823 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] keystone.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.823 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] keystone.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.823 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] keystone.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.823 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] keystone.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.823 228851 DEBUG 
oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] keystone.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.823 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] keystone.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.823 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] keystone.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.823 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] keystone.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.824 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] keystone.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.824 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] keystone.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.824 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] keystone.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.824 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] 
keystone.service_type = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.824 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] keystone.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.824 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] keystone.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.824 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.824 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] keystone.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.825 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] keystone.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.825 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] keystone.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.825 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.connection_uri = 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.825 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.cpu_mode = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.825 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.cpu_model_extra_flags = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.825 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.cpu_models = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.825 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.826 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.826 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.cpu_power_management = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.826 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.826 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.826 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.device_detach_timeout = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.826 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.disk_cachemodes = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.826 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.disk_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.827 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.enabled_perf_events = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.827 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.file_backed_memory = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.827 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.gid_maps = [] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.827 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.hw_disk_discard = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.827 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.hw_machine_type = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.827 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.images_rbd_ceph_conf = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.827 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.827 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.828 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.828 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] 
libvirt.images_rbd_pool = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.828 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.images_type = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.828 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.images_volume_group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.828 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.inject_key = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.828 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.inject_partition = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.828 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.828 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.iscsi_iface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.829 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.iser_use_multipath = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.829 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.829 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.829 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.829 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.829 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.829 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.830 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] 
libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.830 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.830 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.live_migration_scheme = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.830 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.830 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.830 228851 WARNING oslo_config.cfg [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal ( Feb 20 04:24:01 localhost nova_compute[228847]: live_migration_uri is deprecated for removal in favor of two other options that Feb 20 04:24:01 localhost nova_compute[228847]: allow to change live migration scheme and target URI: ``live_migration_scheme`` Feb 20 04:24:01 localhost nova_compute[228847]: and ``live_migration_inbound_addr`` respectively. Feb 20 04:24:01 localhost nova_compute[228847]: ). 
Its value may be silently ignored in the future.#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.830 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.live_migration_uri = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.831 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.live_migration_with_native_tls = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.831 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.max_queues = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.831 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.831 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.nfs_mount_options = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.831 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.nfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.831 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] 
libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.832 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.num_iser_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.832 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.832 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.832 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.num_pcie_ports = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.832 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.num_volume_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.832 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.pmem_namespaces = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.832 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.quobyte_client_cfg = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.833 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.833 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.833 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.833 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.833 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.rbd_secret_uuid = a8557ee9-b55d-5519-942c-cf8f6172f1d8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.833 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.rbd_user = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.833 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] 
libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.833 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.834 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.rescue_image_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.834 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.rescue_kernel_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.834 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.rescue_ramdisk_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.834 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.rng_dev_path = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.834 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.rx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.834 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.smbfs_mount_options = 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.834 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.835 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.snapshot_compression = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.835 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.snapshot_image_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.835 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.snapshots_directory = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.835 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.835 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.swtpm_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.835 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.swtpm_group 
= tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.835 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.swtpm_user = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.836 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.sysinfo_serial = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.836 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.tx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.836 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.uid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.836 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.836 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.virt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.836 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.volume_clear = zero log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.836 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.volume_clear_size = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.836 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.volume_use_multipath = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.837 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.vzstorage_cache_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.837 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.vzstorage_log_path = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.837 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.vzstorage_mount_group = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.837 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.vzstorage_mount_opts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.837 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.vzstorage_mount_perms = 0770 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.837 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.837 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.vzstorage_mount_user = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.838 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.838 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] neutron.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.838 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] neutron.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.838 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] neutron.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.838 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] neutron.certfile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.838 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] neutron.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.838 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] neutron.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.839 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] neutron.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.839 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] neutron.default_floating_pool = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.839 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] neutron.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.839 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.839 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] neutron.http_retries = 3 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.839 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] neutron.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.839 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] neutron.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.839 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] neutron.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.840 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.840 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] neutron.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.840 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] neutron.ovs_bridge = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.840 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] neutron.physnets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 
localhost nova_compute[228847]: 2026-02-20 09:24:01.840 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] neutron.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.840 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.840 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] neutron.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.841 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] neutron.service_type = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.841 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] neutron.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.841 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] neutron.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.841 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 
09:24:01.841 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] neutron.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.841 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] neutron.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.841 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] neutron.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.841 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.842 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] notifications.default_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.842 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.842 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.842 228851 
DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.842 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] pci.alias = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.842 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] pci.device_spec = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.842 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] pci.report_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.843 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.843 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] placement.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.843 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] placement.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.843 228851 DEBUG 
oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.843 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.843 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.843 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] placement.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.843 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] placement.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.844 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] placement.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.844 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] placement.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.844 228851 DEBUG oslo_service.service [None 
req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] placement.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.844 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] placement.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.844 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] placement.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.844 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.844 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.844 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] placement.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.845 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] placement.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.845 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] placement.password = 
**** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.845 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] placement.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.845 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] placement.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.845 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] placement.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.845 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] placement.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.845 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] placement.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.846 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] placement.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.846 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] placement.service_type = placement log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.846 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.846 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] placement.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.846 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.846 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] placement.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.846 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.846 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] placement.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.847 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] placement.user_domain_id = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.847 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] placement.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.847 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] placement.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.847 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] placement.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.847 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] placement.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.847 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] placement.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.847 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] quota.cores = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.848 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m 
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.848 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] quota.driver = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.848 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.848 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.848 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] quota.injected_files = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.848 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] quota.instances = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.848 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] quota.key_pairs = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.848 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] quota.metadata_items = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 
09:24:01.849 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] quota.ram = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.849 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] quota.recheck_quota = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.849 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] quota.server_group_members = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.849 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] quota.server_groups = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.849 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] rdp.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.849 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] rdp.html5_proxy_base_url = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.850 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.850 228851 DEBUG oslo_service.service [None 
req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.850 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.850 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.850 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] scheduler.max_attempts = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.850 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.850 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.851 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 
2026-02-20 09:24:01.851 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.851 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.851 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] scheduler.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.851 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.851 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.851 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.851 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.852 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.852 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.852 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.852 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.852 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.852 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.852 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.853 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.853 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.853 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.853 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] 
filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.853 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.853 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.853 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.853 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.854 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.854 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.854 228851 DEBUG 
oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.854 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.854 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.854 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] metrics.required = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.854 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] metrics.weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.855 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] metrics.weight_of_unavailable = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.855 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] metrics.weight_setting = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 
2026-02-20 09:24:01.855 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] serial_console.base_url = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.855 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] serial_console.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.855 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] serial_console.port_range = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.855 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.855 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.856 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.856 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost 
nova_compute[228847]: 2026-02-20 09:24:01.856 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] service_user.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.856 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.856 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.856 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.856 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.856 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.857 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.857 
228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.857 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.857 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] spice.agent_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.857 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] spice.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.857 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] spice.html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.857 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] spice.html5proxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.858 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] spice.html5proxy_port = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.858 228851 DEBUG oslo_service.service 
[None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] spice.image_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.858 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] spice.jpeg_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.858 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] spice.playback_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.858 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] spice.server_listen = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.858 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.858 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] spice.streaming_mode = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.859 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] spice.zlib_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.859 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 
- - - - - -] upgrade_levels.baseapi = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.859 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] upgrade_levels.cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.859 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] upgrade_levels.compute = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.859 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] upgrade_levels.conductor = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.859 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] upgrade_levels.scheduler = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.859 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.859 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.860 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] 
vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.860 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.860 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.860 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.860 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.860 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.860 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.860 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 
- - - - - -] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.861 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.861 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vmware.cache_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.861 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vmware.cluster_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.861 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vmware.connection_pool_size = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.861 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vmware.console_delay_seconds = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.861 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vmware.datastore_regex = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.861 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vmware.host_ip = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.861 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.862 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.862 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vmware.host_username = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.862 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.862 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vmware.integration_bridge = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.862 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vmware.maximum_objects = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.862 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vmware.pbm_default_policy = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 
04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.862 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vmware.pbm_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.862 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vmware.pbm_wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.863 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vmware.serial_log_dir = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.863 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vmware.serial_port_proxy_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.863 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.863 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.863 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vmware.use_linked_clone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 
2026-02-20 09:24:01.863 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vmware.vnc_keymap = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.863 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vmware.vnc_port = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.864 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vmware.vnc_port_total = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.864 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vnc.auth_schemes = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.864 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vnc.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.864 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vnc.novncproxy_base_url = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.864 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vnc.novncproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 
09:24:01.864 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vnc.novncproxy_port = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.864 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vnc.server_listen = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.865 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vnc.server_proxyclient_address = 192.168.122.106 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.865 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vnc.vencrypt_ca_certs = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.865 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vnc.vencrypt_client_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.865 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vnc.vencrypt_client_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.865 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] workarounds.disable_compute_service_check_for_ffu = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.865 228851 DEBUG 
oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.865 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.866 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.866 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.866 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] workarounds.disable_rootwrap = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.866 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.866 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 
localhost nova_compute[228847]: 2026-02-20 09:24:01.866 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.866 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.867 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.867 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.867 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.867 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.867 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.867 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.867 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.867 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.868 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.868 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.868 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.868 228851 DEBUG oslo_service.service [None 
req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] wsgi.api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.868 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] wsgi.client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.868 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] wsgi.default_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.868 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] wsgi.keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.869 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] wsgi.max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.869 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] wsgi.secure_proxy_ssl_header = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.869 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] wsgi.ssl_ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.869 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] 
wsgi.ssl_cert_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.869 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] wsgi.ssl_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.869 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] wsgi.tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.869 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] wsgi.wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.870 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] zvm.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.870 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] zvm.cloud_connector_url = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.870 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] zvm.image_tmp_path = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.870 228851 DEBUG oslo_service.service [None 
req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] zvm.reachable_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.870 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.870 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_policy.enforce_scope = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.870 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.871 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.871 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.871 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.871 228851 DEBUG 
oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.871 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.871 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.871 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.871 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.872 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.872 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] remote_debug.host = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost 
nova_compute[228847]: 2026-02-20 09:24:01.872 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] remote_debug.port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.872 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.872 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.872 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.872 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.873 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.873 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.873 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.873 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.873 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.873 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.873 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.873 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.874 228851 DEBUG oslo_service.service [None 
req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.874 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.874 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.874 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.874 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.874 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.874 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost 
nova_compute[228847]: 2026-02-20 09:24:01.875 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.875 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.875 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.875 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.875 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.875 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.875 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_messaging_rabbit.ssl = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.875 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.876 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.876 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.876 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.876 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.876 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.876 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_messaging_notifications.retry = -1 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.876 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.876 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.877 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_limit.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.877 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_limit.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.877 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_limit.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.877 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_limit.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.877 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] 
oslo_limit.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.877 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_limit.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.877 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_limit.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.878 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.878 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_limit.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.878 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.878 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_limit.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.878 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_limit.domain_name = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.878 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_limit.endpoint_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.878 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_limit.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.878 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_limit.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.879 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_limit.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.879 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_limit.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.879 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_limit.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.879 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_limit.password = **** log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.879 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_limit.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.879 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.879 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_limit.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.879 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_limit.project_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.880 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_limit.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.880 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_limit.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.880 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_limit.service_type = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.880 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_limit.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.880 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.880 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.880 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_limit.system_scope = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.881 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_limit.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.881 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_limit.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.881 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_limit.user_domain_id = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.881 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_limit.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.881 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_limit.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.881 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_limit.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.881 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_limit.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.881 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_limit.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.882 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.882 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.882 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] oslo_reports.log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.882 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.882 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.882 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.882 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.883 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.883 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - 
- - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.883 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.883 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vif_plug_ovs_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.883 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.883 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.883 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.884 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] vif_plug_ovs_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.884 228851 DEBUG oslo_service.service [None 
req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.884 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.884 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.884 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.884 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] os_vif_linux_bridge.iptables_top_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.884 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.885 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] os_vif_linux_bridge.use_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.885 
228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.885 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] os_vif_ovs.isolate_vif = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.885 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] os_vif_ovs.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.885 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] os_vif_ovs.ovs_vsctl_timeout = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.885 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] os_vif_ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.885 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] os_vif_ovs.ovsdb_interface = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.886 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] os_vif_ovs.per_port_bridge = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.886 228851 DEBUG 
oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] os_brick.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.886 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.886 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.886 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] privsep_osbrick.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.886 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] privsep_osbrick.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.886 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.886 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] privsep_osbrick.logger_name = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.887 228851 DEBUG 
oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.887 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] privsep_osbrick.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.887 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] nova_sys_admin.capabilities = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.887 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] nova_sys_admin.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.887 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] nova_sys_admin.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.887 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] nova_sys_admin.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.887 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.887 228851 DEBUG 
oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] nova_sys_admin.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.888 228851 DEBUG oslo_service.service [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.889 228851 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260127144738.eaa65f0.el9)#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.900 228851 INFO nova.virt.node [None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] Determined node identity 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 from /var/lib/nova/compute_id#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.900 228851 DEBUG nova.virt.libvirt.host [None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.901 228851 DEBUG nova.virt.libvirt.host [None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.901 228851 DEBUG nova.virt.libvirt.host [None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.901 228851 DEBUG nova.virt.libvirt.host [None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] Connecting to libvirt: 
qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Feb 20 04:24:01 localhost systemd[1]: Started libvirt QEMU daemon.
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.971 228851 DEBUG nova.virt.libvirt.host [None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] Registering for lifecycle events _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.974 228851 DEBUG nova.virt.libvirt.host [None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] Registering for connection events: _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.975 228851 INFO nova.virt.libvirt.driver [None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] Connection event '1' reason 'None'#033[00m
Feb 20 04:24:01 localhost nova_compute[228847]: 2026-02-20 09:24:01.985 228851 DEBUG nova.virt.libvirt.volume.mount [None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Feb 20 04:24:02 localhost python3.9[229134]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 04:24:02 localhost nova_compute[228847]: 2026-02-20 09:24:02.879 228851 INFO nova.virt.libvirt.host [None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] Libvirt host capabilities
[multi-line libvirt capabilities XML logged here; the XML markup was stripped during capture, leaving only element values. Recoverable fields: host UUID 61530aa3-6295-40fa-9f19-edfd227b2bca; arch x86_64; CPU model EPYC-Rome-v4, vendor AMD; migration transports tcp and rdma; memory 16116612 KiB (4029153 pages of 4 KiB, 0 hugepages); security models selinux (doi 0, baselabels system_u:system_r:svirt_t:s0 and system_u:system_r:svirt_tcg_t:s0) and dac (doi 0, baselabel +107:+107); hvm guest support for i686 (wordsize 32) and x86_64 (wordsize 64) via emulator /usr/libexec/qemu-kvm with machine types pc-i440fx-rhel7.6.0 (canonical "pc"), pc-q35-rhel9.8.0 (canonical "q35"), and pc-q35-rhel7.6.0, 8.0.0 through 8.6.0, 9.0.0, 9.2.0, 9.4.0, 9.6.0.]#033[00m
Feb 20 04:24:02 localhost nova_compute[228847]: 2026-02-20 09:24:02.890 228851 DEBUG nova.virt.libvirt.host [None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Feb 20 04:24:02 localhost nova_compute[228847]: 2026-02-20 09:24:02.911 228851 DEBUG nova.virt.libvirt.host [None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
[multi-line libvirt domain capabilities XML logged here; the XML markup was likewise stripped during capture. Recoverable fields: emulator /usr/libexec/qemu-kvm, machine pc-i440fx-rhel7.6.0, arch i686; firmware loader /usr/share/OVMF/OVMF_CODE.secboot.fd with loader types rom and pflash, readonly yes/no, secure no; host-model CPU EPYC-Rome, vendor AMD; custom CPU models listed include 486, 486-v1, Broadwell (with -IBRS, -noTSX, -noTSX-IBRS, and -v1 through -v4 variants), Cascadelake-Server (with -noTSX and -v1 through -v5 variants), ClearwaterForest, ClearwaterForest-v1, Conroe, Conroe-v1, and Cooperlake; the dump continues past the end of this excerpt.]
nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Cooperlake-v1 Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Cooperlake-v2 Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Denverton Feb 20 04:24:02 
localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Denverton-v1 Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Denverton-v2 Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Denverton-v3 Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Dhyana Feb 20 04:24:02 localhost nova_compute[228847]: Dhyana-v1 Feb 20 04:24:02 localhost nova_compute[228847]: Dhyana-v2 Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: EPYC Feb 20 04:24:02 localhost nova_compute[228847]: EPYC-Genoa Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 
04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: EPYC-Genoa-v1 Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 
localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: EPYC-Genoa-v2 Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: EPYC-IBPB Feb 20 04:24:02 localhost nova_compute[228847]: EPYC-Milan Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 
04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: EPYC-Milan-v1 Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: EPYC-Milan-v2 Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: EPYC-Milan-v3 Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost 
nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: EPYC-Rome Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: EPYC-Rome-v1 Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: EPYC-Rome-v2 Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: EPYC-Rome-v3 Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: EPYC-Rome-v4 Feb 20 04:24:02 localhost nova_compute[228847]: EPYC-Rome-v5 Feb 20 04:24:02 localhost nova_compute[228847]: EPYC-Turin Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 
localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: EPYC-Turin-v1 Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost 
nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: EPYC-v1 Feb 20 04:24:02 localhost nova_compute[228847]: EPYC-v2 Feb 20 04:24:02 localhost nova_compute[228847]: EPYC-v3 Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: EPYC-v4 Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: EPYC-v5 Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: GraniteRapids Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 
20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost 
nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: GraniteRapids-v1 Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost 
nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: GraniteRapids-v2 Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost 
nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: GraniteRapids-v3 Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost 
nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Haswell Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: Feb 20 04:24:02 localhost nova_compute[228847]: 
Feb 20 04:24:02 localhost nova_compute[228847]: Haswell-IBRS
Feb 20 04:24:02 localhost nova_compute[228847]: Haswell-noTSX
Feb 20 04:24:02 localhost nova_compute[228847]: Haswell-noTSX-IBRS
Feb 20 04:24:02 localhost nova_compute[228847]: Haswell-v1
Feb 20 04:24:02 localhost nova_compute[228847]: Haswell-v2
Feb 20 04:24:02 localhost nova_compute[228847]: Haswell-v3
Feb 20 04:24:02 localhost nova_compute[228847]: Haswell-v4
Feb 20 04:24:02 localhost nova_compute[228847]: Icelake-Server
Feb 20 04:24:02 localhost nova_compute[228847]: Icelake-Server-noTSX
Feb 20 04:24:02 localhost nova_compute[228847]: Icelake-Server-v1
Feb 20 04:24:02 localhost nova_compute[228847]: Icelake-Server-v2
Feb 20 04:24:02 localhost nova_compute[228847]: Icelake-Server-v3
Feb 20 04:24:02 localhost nova_compute[228847]: Icelake-Server-v4
Feb 20 04:24:02 localhost nova_compute[228847]: Icelake-Server-v5
Feb 20 04:24:02 localhost nova_compute[228847]: Icelake-Server-v6
Feb 20 04:24:02 localhost nova_compute[228847]: Icelake-Server-v7
Feb 20 04:24:02 localhost nova_compute[228847]: IvyBridge
Feb 20 04:24:02 localhost nova_compute[228847]: IvyBridge-IBRS
Feb 20 04:24:02 localhost nova_compute[228847]: IvyBridge-v1
Feb 20 04:24:02 localhost nova_compute[228847]: IvyBridge-v2
Feb 20 04:24:02 localhost nova_compute[228847]: KnightsMill
Feb 20 04:24:02 localhost nova_compute[228847]: KnightsMill-v1
Feb 20 04:24:02 localhost nova_compute[228847]: Nehalem
Feb 20 04:24:02 localhost nova_compute[228847]: Nehalem-IBRS
Feb 20 04:24:02 localhost nova_compute[228847]: Nehalem-v1
Feb 20 04:24:02 localhost nova_compute[228847]: Nehalem-v2
Feb 20 04:24:02 localhost nova_compute[228847]: Opteron_G1
Feb 20 04:24:02 localhost nova_compute[228847]: Opteron_G1-v1
Feb 20 04:24:02 localhost nova_compute[228847]: Opteron_G2
Feb 20 04:24:02 localhost nova_compute[228847]: Opteron_G2-v1
Feb 20 04:24:02 localhost nova_compute[228847]: Opteron_G3
Feb 20 04:24:02 localhost nova_compute[228847]: Opteron_G3-v1
Feb 20 04:24:02 localhost nova_compute[228847]: Opteron_G4
Feb 20 04:24:02 localhost nova_compute[228847]: Opteron_G4-v1
Feb 20 04:24:02 localhost nova_compute[228847]: Opteron_G5
Feb 20 04:24:02 localhost nova_compute[228847]: Opteron_G5-v1
Feb 20 04:24:02 localhost nova_compute[228847]: Penryn
Feb 20 04:24:02 localhost nova_compute[228847]: Penryn-v1
Feb 20 04:24:02 localhost nova_compute[228847]: SandyBridge
Feb 20 04:24:02 localhost nova_compute[228847]: SandyBridge-IBRS
Feb 20 04:24:02 localhost nova_compute[228847]: SandyBridge-v1
Feb 20 04:24:02 localhost nova_compute[228847]: SandyBridge-v2
Feb 20 04:24:02 localhost nova_compute[228847]: SapphireRapids
Feb 20 04:24:02 localhost nova_compute[228847]: SapphireRapids-v1
Feb 20 04:24:02 localhost nova_compute[228847]: SapphireRapids-v2
Feb 20 04:24:02 localhost python3.9[229232]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771579441.918901-3729-91687421747049/.source.yaml _original_basename=.6dby_k3h follow=False checksum=a8e9a640ed2d11815875c8a03dd8e15172eb268a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:24:02 localhost nova_compute[228847]: SapphireRapids-v3
Feb 20 04:24:02 localhost nova_compute[228847]: SapphireRapids-v4
Feb 20 04:24:02 localhost nova_compute[228847]: SierraForest
Feb 20 04:24:02 localhost nova_compute[228847]: SierraForest-v1
Feb 20 04:24:02 localhost nova_compute[228847]: SierraForest-v2
Feb 20 04:24:02 localhost nova_compute[228847]: SierraForest-v3
Feb 20 04:24:02 localhost nova_compute[228847]: Skylake-Client
Feb 20 04:24:02 localhost nova_compute[228847]: Skylake-Client-IBRS
Feb 20 04:24:02 localhost nova_compute[228847]: Skylake-Client-noTSX-IBRS
Feb 20 04:24:02 localhost nova_compute[228847]: Skylake-Client-v1
Feb 20 04:24:02 localhost nova_compute[228847]: Skylake-Client-v2
Feb 20 04:24:02 localhost nova_compute[228847]: Skylake-Client-v3
Feb 20 04:24:02 localhost nova_compute[228847]: Skylake-Client-v4
Feb 20 04:24:02 localhost nova_compute[228847]: Skylake-Server
Feb 20 04:24:02 localhost nova_compute[228847]: Skylake-Server-IBRS
Feb 20 04:24:02 localhost nova_compute[228847]: Skylake-Server-noTSX-IBRS
Feb 20 04:24:02 localhost nova_compute[228847]: Skylake-Server-v1
Feb 20 04:24:02 localhost nova_compute[228847]: Skylake-Server-v2
Feb 20 04:24:02 localhost nova_compute[228847]: Skylake-Server-v3
Feb 20 04:24:02 localhost nova_compute[228847]: Skylake-Server-v4
Feb 20 04:24:03 localhost nova_compute[228847]: Skylake-Server-v5
nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Snowridge Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Snowridge-v1 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Snowridge-v2 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 
localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Snowridge-v3 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Snowridge-v4 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Westmere Feb 20 04:24:03 localhost nova_compute[228847]: Westmere-IBRS Feb 20 04:24:03 localhost nova_compute[228847]: Westmere-v1 Feb 20 04:24:03 localhost nova_compute[228847]: Westmere-v2 Feb 20 04:24:03 localhost nova_compute[228847]: athlon Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: athlon-v1 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: core2duo Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost 
nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: core2duo-v1 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: coreduo Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: coreduo-v1 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: kvm32 Feb 20 04:24:03 localhost nova_compute[228847]: kvm32-v1 Feb 20 04:24:03 localhost nova_compute[228847]: kvm64 Feb 20 04:24:03 localhost nova_compute[228847]: kvm64-v1 Feb 20 04:24:03 localhost nova_compute[228847]: n270 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: n270-v1 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: pentium Feb 20 04:24:03 localhost nova_compute[228847]: pentium-v1 Feb 20 04:24:03 localhost nova_compute[228847]: pentium2 Feb 20 04:24:03 localhost nova_compute[228847]: pentium2-v1 Feb 20 04:24:03 localhost nova_compute[228847]: pentium3 Feb 20 04:24:03 localhost nova_compute[228847]: pentium3-v1 Feb 20 04:24:03 localhost nova_compute[228847]: phenom Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: phenom-v1 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: 
Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: qemu32 Feb 20 04:24:03 localhost nova_compute[228847]: qemu32-v1 Feb 20 04:24:03 localhost nova_compute[228847]: qemu64 Feb 20 04:24:03 localhost nova_compute[228847]: qemu64-v1 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: file Feb 20 04:24:03 localhost nova_compute[228847]: anonymous Feb 20 04:24:03 localhost nova_compute[228847]: memfd Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: disk Feb 20 04:24:03 localhost nova_compute[228847]: cdrom Feb 20 04:24:03 localhost nova_compute[228847]: floppy Feb 20 04:24:03 localhost nova_compute[228847]: lun Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: ide Feb 20 04:24:03 localhost nova_compute[228847]: fdc Feb 20 04:24:03 localhost nova_compute[228847]: scsi Feb 20 04:24:03 localhost nova_compute[228847]: virtio Feb 20 04:24:03 localhost nova_compute[228847]: usb Feb 20 04:24:03 localhost nova_compute[228847]: sata Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: virtio Feb 20 04:24:03 localhost nova_compute[228847]: virtio-transitional Feb 20 04:24:03 localhost nova_compute[228847]: virtio-non-transitional Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 
localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: vnc Feb 20 04:24:03 localhost nova_compute[228847]: egl-headless Feb 20 04:24:03 localhost nova_compute[228847]: dbus Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: subsystem Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: default Feb 20 04:24:03 localhost nova_compute[228847]: mandatory Feb 20 04:24:03 localhost nova_compute[228847]: requisite Feb 20 04:24:03 localhost nova_compute[228847]: optional Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: usb Feb 20 04:24:03 localhost nova_compute[228847]: pci Feb 20 04:24:03 localhost nova_compute[228847]: scsi Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: virtio Feb 20 04:24:03 localhost nova_compute[228847]: virtio-transitional Feb 20 04:24:03 localhost nova_compute[228847]: virtio-non-transitional Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: random Feb 20 04:24:03 localhost nova_compute[228847]: egd Feb 20 04:24:03 localhost nova_compute[228847]: builtin Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost 
nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: path Feb 20 04:24:03 localhost nova_compute[228847]: handle Feb 20 04:24:03 localhost nova_compute[228847]: virtiofs Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: tpm-tis Feb 20 04:24:03 localhost nova_compute[228847]: tpm-crb Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: emulator Feb 20 04:24:03 localhost nova_compute[228847]: external Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: 2.0 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: usb Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: pty Feb 20 04:24:03 localhost nova_compute[228847]: unix Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: qemu Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: builtin Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 
localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: default Feb 20 04:24:03 localhost nova_compute[228847]: passt Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: isa Feb 20 04:24:03 localhost nova_compute[228847]: hyperv Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: null Feb 20 04:24:03 localhost nova_compute[228847]: vc Feb 20 04:24:03 localhost nova_compute[228847]: pty Feb 20 04:24:03 localhost nova_compute[228847]: dev Feb 20 04:24:03 localhost nova_compute[228847]: file Feb 20 04:24:03 localhost nova_compute[228847]: pipe Feb 20 04:24:03 localhost nova_compute[228847]: stdio Feb 20 04:24:03 localhost nova_compute[228847]: udp Feb 20 04:24:03 localhost nova_compute[228847]: tcp Feb 20 04:24:03 localhost nova_compute[228847]: unix Feb 20 04:24:03 localhost nova_compute[228847]: qemu-vdagent Feb 20 04:24:03 localhost nova_compute[228847]: dbus Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 
04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: relaxed Feb 20 04:24:03 localhost nova_compute[228847]: vapic Feb 20 04:24:03 localhost nova_compute[228847]: spinlocks Feb 20 04:24:03 localhost nova_compute[228847]: vpindex Feb 20 04:24:03 localhost nova_compute[228847]: runtime Feb 20 04:24:03 localhost nova_compute[228847]: synic Feb 20 04:24:03 localhost nova_compute[228847]: stimer Feb 20 04:24:03 localhost nova_compute[228847]: reset Feb 20 04:24:03 localhost nova_compute[228847]: vendor_id Feb 20 04:24:03 localhost nova_compute[228847]: frequencies Feb 20 04:24:03 localhost nova_compute[228847]: reenlightenment Feb 20 04:24:03 localhost nova_compute[228847]: tlbflush Feb 20 04:24:03 localhost nova_compute[228847]: ipi Feb 20 04:24:03 localhost nova_compute[228847]: avic Feb 20 04:24:03 localhost nova_compute[228847]: emsr_bitmap Feb 20 04:24:03 localhost nova_compute[228847]: xmm_input Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: 4095 Feb 20 04:24:03 localhost nova_compute[228847]: on Feb 20 04:24:03 localhost nova_compute[228847]: off Feb 20 04:24:03 localhost nova_compute[228847]: off Feb 20 04:24:03 localhost nova_compute[228847]: Linux KVM Hv Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Feb 20 04:24:03 localhost nova_compute[228847]: 2026-02-20 09:24:02.922 228851 DEBUG nova.virt.libvirt.host [None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35: Feb 20 
04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: /usr/libexec/qemu-kvm Feb 20 04:24:03 localhost nova_compute[228847]: kvm Feb 20 04:24:03 localhost nova_compute[228847]: pc-q35-rhel9.8.0 Feb 20 04:24:03 localhost nova_compute[228847]: i686 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: /usr/share/OVMF/OVMF_CODE.secboot.fd Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: rom Feb 20 04:24:03 localhost nova_compute[228847]: pflash Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: yes Feb 20 04:24:03 localhost nova_compute[228847]: no Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: no Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: on Feb 20 04:24:03 localhost nova_compute[228847]: off Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: on Feb 20 04:24:03 localhost nova_compute[228847]: off Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-Rome Feb 
20 04:24:03 localhost nova_compute[228847]: AMD Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: 486 Feb 20 04:24:03 localhost nova_compute[228847]: 486-v1 Feb 20 04:24:03 localhost nova_compute[228847]: Broadwell Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Broadwell-IBRS Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost 
nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Broadwell-noTSX Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Broadwell-noTSX-IBRS Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Broadwell-v1 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Broadwell-v2 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Broadwell-v3 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Broadwell-v4 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 
localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Cascadelake-Server Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Cascadelake-Server-noTSX Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Cascadelake-Server-v1 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost 
nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Cascadelake-Server-v2 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Cascadelake-Server-v3 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Cascadelake-Server-v4 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost 
Feb 20 04:24:03 localhost nova_compute[228847]: Cascadelake-Server-v5
Feb 20 04:24:03 localhost nova_compute[228847]: ClearwaterForest
Feb 20 04:24:03 localhost nova_compute[228847]: ClearwaterForest-v1
Feb 20 04:24:03 localhost nova_compute[228847]: Conroe
Feb 20 04:24:03 localhost nova_compute[228847]: Conroe-v1
Feb 20 04:24:03 localhost nova_compute[228847]: Cooperlake
Feb 20 04:24:03 localhost nova_compute[228847]: Cooperlake-v1
Feb 20 04:24:03 localhost nova_compute[228847]: Cooperlake-v2
Feb 20 04:24:03 localhost nova_compute[228847]: Denverton
Feb 20 04:24:03 localhost nova_compute[228847]: Denverton-v1
Feb 20 04:24:03 localhost nova_compute[228847]: Denverton-v2
Feb 20 04:24:03 localhost nova_compute[228847]: Denverton-v3
Feb 20 04:24:03 localhost nova_compute[228847]: Dhyana
Feb 20 04:24:03 localhost nova_compute[228847]: Dhyana-v1
Feb 20 04:24:03 localhost nova_compute[228847]: Dhyana-v2
Feb 20 04:24:03 localhost nova_compute[228847]: EPYC
Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-Genoa
Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-Genoa-v1
Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-Genoa-v2
Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-IBPB
Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-Milan
Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-Milan-v1
Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-Milan-v2
Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-Milan-v3
Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-Rome
Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-Rome-v1
Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-Rome-v2
Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-Rome-v3
Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-Rome-v4
Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-Rome-v5
Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-Turin
Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-Turin-v1
Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-v1
Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-v2
Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-v3
Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-v4
Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-v5
Feb 20 04:24:03 localhost nova_compute[228847]: GraniteRapids
Feb 20 04:24:03 localhost nova_compute[228847]: GraniteRapids-v1
Feb 20 04:24:03 localhost nova_compute[228847]: GraniteRapids-v2
Feb 20 04:24:03 localhost nova_compute[228847]: GraniteRapids-v3
Feb 20 04:24:03 localhost nova_compute[228847]: Haswell
Feb 20 04:24:03 localhost nova_compute[228847]: Haswell-IBRS
Feb 20 04:24:03 localhost nova_compute[228847]: Haswell-noTSX
Feb 20 04:24:03 localhost nova_compute[228847]: Haswell-noTSX-IBRS
Feb 20 04:24:03 localhost nova_compute[228847]: Haswell-v1
Feb 20 04:24:03 localhost nova_compute[228847]: Haswell-v2
Feb 20 04:24:03 localhost nova_compute[228847]: Haswell-v3
Feb 20 04:24:03 localhost nova_compute[228847]: Haswell-v4
Feb 20 04:24:03 localhost nova_compute[228847]: Icelake-Server
Feb 20 04:24:03 localhost nova_compute[228847]: Icelake-Server-noTSX
Feb 20 04:24:03 localhost nova_compute[228847]: Icelake-Server-v1
Feb 20 04:24:03 localhost nova_compute[228847]: Icelake-Server-v2
Feb 20 04:24:03 localhost nova_compute[228847]: Icelake-Server-v3
Feb 20 04:24:03 localhost nova_compute[228847]: Icelake-Server-v4
Feb 20 04:24:03 localhost nova_compute[228847]: Icelake-Server-v5
04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Icelake-Server-v6 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 
localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Icelake-Server-v7 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: IvyBridge Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: IvyBridge-IBRS Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: 
IvyBridge-v1 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: IvyBridge-v2 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: KnightsMill Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: KnightsMill-v1 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Nehalem Feb 20 04:24:03 localhost nova_compute[228847]: Nehalem-IBRS Feb 20 04:24:03 localhost nova_compute[228847]: Nehalem-v1 Feb 20 04:24:03 localhost nova_compute[228847]: Nehalem-v2 Feb 20 04:24:03 localhost nova_compute[228847]: Opteron_G1 Feb 20 04:24:03 localhost nova_compute[228847]: Opteron_G1-v1 Feb 20 04:24:03 localhost nova_compute[228847]: Opteron_G2 Feb 20 04:24:03 localhost 
nova_compute[228847]: Opteron_G2-v1 Feb 20 04:24:03 localhost nova_compute[228847]: Opteron_G3 Feb 20 04:24:03 localhost nova_compute[228847]: Opteron_G3-v1 Feb 20 04:24:03 localhost nova_compute[228847]: Opteron_G4 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Opteron_G4-v1 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Opteron_G5 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Opteron_G5-v1 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Penryn Feb 20 04:24:03 localhost nova_compute[228847]: Penryn-v1 Feb 20 04:24:03 localhost nova_compute[228847]: SandyBridge Feb 20 04:24:03 localhost nova_compute[228847]: SandyBridge-IBRS Feb 20 04:24:03 localhost nova_compute[228847]: SandyBridge-v1 Feb 20 04:24:03 localhost nova_compute[228847]: SandyBridge-v2 Feb 20 04:24:03 localhost nova_compute[228847]: SapphireRapids Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 
localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: SapphireRapids-v1 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost 
nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: SapphireRapids-v2 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost 
nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: SapphireRapids-v3 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost 
nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 
04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: SapphireRapids-v4 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 
localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: SierraForest Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: SierraForest-v1 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 
localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: SierraForest-v2 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost 
nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: SierraForest-v3 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost 
nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Skylake-Client Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Skylake-Client-IBRS Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Skylake-Client-noTSX-IBRS Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Skylake-Client-v1 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost 
Feb 20 04:24:03 localhost nova_compute[228847]: [libvirt domain capabilities output; XML markup was stripped by the logger, leaving only element values — recoverable values below, grouped by apparent section]
Feb 20 04:24:03 localhost nova_compute[228847]:   CPU models (continued): Skylake-Client-v2 Skylake-Client-v3 Skylake-Client-v4 Skylake-Server Skylake-Server-IBRS Skylake-Server-noTSX-IBRS Skylake-Server-v1 Skylake-Server-v2 Skylake-Server-v3 Skylake-Server-v4 Skylake-Server-v5 Snowridge Snowridge-v1 Snowridge-v2 Snowridge-v3 Snowridge-v4 Westmere Westmere-IBRS Westmere-v1 Westmere-v2 athlon athlon-v1 core2duo core2duo-v1 coreduo coreduo-v1 kvm32 kvm32-v1 kvm64 kvm64-v1 n270 n270-v1 pentium pentium-v1 pentium2 pentium2-v1 pentium3 pentium3-v1 phenom phenom-v1 qemu32 qemu32-v1 qemu64 qemu64-v1
Feb 20 04:24:03 localhost nova_compute[228847]:   memory backing source types: file anonymous memfd
Feb 20 04:24:03 localhost nova_compute[228847]:   disk devices: disk cdrom floppy lun; buses: fdc scsi virtio usb sata; models: virtio virtio-transitional virtio-non-transitional
Feb 20 04:24:03 localhost nova_compute[228847]:   graphics: vnc egl-headless dbus
Feb 20 04:24:03 localhost nova_compute[228847]:   hostdev mode: subsystem; startup policy: default mandatory requisite optional; subsystem types: usb pci scsi
Feb 20 04:24:03 localhost nova_compute[228847]:   rng models: virtio virtio-transitional virtio-non-transitional; backends: random egd builtin
Feb 20 04:24:03 localhost nova_compute[228847]:   filesystem driver types: path handle virtiofs
Feb 20 04:24:03 localhost nova_compute[228847]:   tpm models: tpm-tis tpm-crb; backends: emulator external; backend version: 2.0
Feb 20 04:24:03 localhost nova_compute[228847]:   redirdev bus: usb; channel types: pty unix; crypto: qemu builtin; interface backends: default passt
Feb 20 04:24:03 localhost nova_compute[228847]:   panic models: isa hyperv
Feb 20 04:24:03 localhost nova_compute[228847]:   character device types: null vc pty dev file pipe stdio udp tcp unix qemu-vdagent dbus
Feb 20 04:24:03 localhost nova_compute[228847]:   hyperv features: relaxed vapic spinlocks vpindex runtime synic stimer reset vendor_id frequencies reenlightenment tlbflush ipi avic emsr_bitmap xmm_input; additional values: 4095 on off off "Linux KVM Hv"
Feb 20 04:24:03 localhost nova_compute[228847]:     _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 20 04:24:03 localhost nova_compute[228847]: 2026-02-20 09:24:02.995 228851 DEBUG nova.virt.libvirt.host [None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Feb 20 04:24:03 localhost nova_compute[228847]: 2026-02-20 09:24:02.999 228851 DEBUG nova.virt.libvirt.host [None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Feb 20 04:24:03 localhost nova_compute[228847]: [domain capabilities for machine_type=pc; XML markup stripped — recoverable values below]
Feb 20 04:24:03 localhost nova_compute[228847]:   emulator: /usr/libexec/qemu-kvm; domain: kvm; machine: pc-i440fx-rhel7.6.0; arch: x86_64
Feb 20 04:24:03 localhost nova_compute[228847]:   loader: /usr/share/OVMF/OVMF_CODE.secboot.fd; types: rom pflash; readonly: yes no; secure: no; additional enum values: on off, on off
Feb 20 04:24:03 localhost nova_compute[228847]:   host CPU model: EPYC-Rome; vendor: AMD
Feb 20 04:24:03 localhost nova_compute[228847]:   CPU models: 486 486-v1 Broadwell Broadwell-IBRS Broadwell-noTSX Broadwell-noTSX-IBRS Broadwell-v1 Broadwell-v2 Broadwell-v3 Broadwell-v4 Cascadelake-Server Cascadelake-Server-noTSX Cascadelake-Server-v1 Cascadelake-Server-v2 Cascadelake-Server-v3 Cascadelake-Server-v4 Cascadelake-Server-v5 ClearwaterForest ClearwaterForest-v1 Conroe Conroe-v1 Cooperlake Cooperlake-v1 Cooperlake-v2 Denverton Denverton-v1 Denverton-v2 Denverton-v3 Dhyana Dhyana-v1 Dhyana-v2 EPYC EPYC-Genoa
04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-Genoa-v1 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 
localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-Genoa-v2 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost 
nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-IBPB Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-Milan Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-Milan-v1 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-Milan-v2 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: 
EPYC-Milan-v3 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-Rome Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-Rome-v1 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-Rome-v2 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-Rome-v3 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-Rome-v4 Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-Rome-v5 Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-Turin Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost 
nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-Turin-v1 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost 
nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-v1 Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-v2 Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-v3 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-v4 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost 
nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-v5 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: GraniteRapids Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost 
nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: GraniteRapids-v1 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost 
nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: GraniteRapids-v2 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost 
nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: GraniteRapids-v3 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost 
nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 
04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Haswell Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Haswell-IBRS Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Haswell-noTSX Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Haswell-noTSX-IBRS Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Haswell-v1 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 
localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Haswell-v2 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Haswell-v3 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Haswell-v4 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Icelake-Server Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: 
Feb 20 04:24:03 localhost nova_compute[228847]: Icelake-Server-noTSX
Feb 20 04:24:03 localhost nova_compute[228847]: Icelake-Server-v1
Feb 20 04:24:03 localhost nova_compute[228847]: Icelake-Server-v2
Feb 20 04:24:03 localhost nova_compute[228847]: Icelake-Server-v3
Feb 20 04:24:03 localhost nova_compute[228847]: Icelake-Server-v4
Feb 20 04:24:03 localhost nova_compute[228847]: Icelake-Server-v5
Feb 20 04:24:03 localhost nova_compute[228847]: Icelake-Server-v6
Feb 20 04:24:03 localhost nova_compute[228847]: Icelake-Server-v7
Feb 20 04:24:03 localhost nova_compute[228847]: IvyBridge
Feb 20 04:24:03 localhost nova_compute[228847]: IvyBridge-IBRS
Feb 20 04:24:03 localhost nova_compute[228847]: IvyBridge-v1
Feb 20 04:24:03 localhost nova_compute[228847]: IvyBridge-v2
Feb 20 04:24:03 localhost nova_compute[228847]: KnightsMill
Feb 20 04:24:03 localhost nova_compute[228847]: KnightsMill-v1
Feb 20 04:24:03 localhost nova_compute[228847]: Nehalem
Feb 20 04:24:03 localhost nova_compute[228847]: Nehalem-IBRS
Feb 20 04:24:03 localhost nova_compute[228847]: Nehalem-v1
Feb 20 04:24:03 localhost nova_compute[228847]: Nehalem-v2
Feb 20 04:24:03 localhost nova_compute[228847]: Opteron_G1
Feb 20 04:24:03 localhost nova_compute[228847]: Opteron_G1-v1
Feb 20 04:24:03 localhost nova_compute[228847]: Opteron_G2
Feb 20 04:24:03 localhost nova_compute[228847]: Opteron_G2-v1
Feb 20 04:24:03 localhost nova_compute[228847]: Opteron_G3
Feb 20 04:24:03 localhost nova_compute[228847]: Opteron_G3-v1
Feb 20 04:24:03 localhost nova_compute[228847]: Opteron_G4
Feb 20 04:24:03 localhost nova_compute[228847]: Opteron_G4-v1
Feb 20 04:24:03 localhost nova_compute[228847]: Opteron_G5
Feb 20 04:24:03 localhost nova_compute[228847]: Opteron_G5-v1
Feb 20 04:24:03 localhost nova_compute[228847]: Penryn
Feb 20 04:24:03 localhost nova_compute[228847]: Penryn-v1
Feb 20 04:24:03 localhost nova_compute[228847]: SandyBridge
Feb 20 04:24:03 localhost nova_compute[228847]: SandyBridge-IBRS
Feb 20 04:24:03 localhost nova_compute[228847]: SandyBridge-v1
Feb 20 04:24:03 localhost nova_compute[228847]: SandyBridge-v2
Feb 20 04:24:03 localhost nova_compute[228847]: SapphireRapids
Feb 20 04:24:03 localhost nova_compute[228847]: SapphireRapids-v1
Feb 20 04:24:03 localhost nova_compute[228847]: SapphireRapids-v2
Feb 20 04:24:03 localhost nova_compute[228847]: SapphireRapids-v3
Feb 20 04:24:03 localhost nova_compute[228847]: SapphireRapids-v4
Feb 20 04:24:03 localhost nova_compute[228847]: SierraForest
Feb 20 04:24:03 localhost nova_compute[228847]: SierraForest-v1
Feb 20 04:24:03 localhost nova_compute[228847]: SierraForest-v2
Feb 20 04:24:03 localhost nova_compute[228847]: SierraForest-v3
Feb 20 04:24:03 localhost nova_compute[228847]: Skylake-Client
Feb 20 04:24:03 localhost nova_compute[228847]: Skylake-Client-IBRS
Feb 20 04:24:03 localhost nova_compute[228847]: Skylake-Client-noTSX-IBRS
Feb 20 04:24:03 localhost nova_compute[228847]: Skylake-Client-v1
Feb 20 04:24:03 localhost nova_compute[228847]: Skylake-Client-v2
Feb 20 04:24:03 localhost nova_compute[228847]: Skylake-Client-v3
Feb 20 04:24:03 localhost nova_compute[228847]: Skylake-Client-v4
Feb 20 04:24:03 localhost nova_compute[228847]: Skylake-Server
Feb 20 04:24:03 localhost nova_compute[228847]: Skylake-Server-IBRS
Feb 20 04:24:03 localhost nova_compute[228847]: Skylake-Server-noTSX-IBRS
Feb 20 04:24:03 localhost nova_compute[228847]: Skylake-Server-v1
Feb 20 04:24:03 localhost nova_compute[228847]: Skylake-Server-v2
Feb 20 04:24:03 localhost nova_compute[228847]: Skylake-Server-v3
Feb 20 04:24:03 localhost nova_compute[228847]: Skylake-Server-v4
Feb 20 04:24:03 localhost nova_compute[228847]: Skylake-Server-v5
Feb 20 04:24:03 localhost nova_compute[228847]: Snowridge
Feb 20 04:24:03 localhost nova_compute[228847]: Snowridge-v1
Feb 20 04:24:03 localhost nova_compute[228847]: Snowridge-v2
Feb 20 04:24:03 localhost nova_compute[228847]: Snowridge-v3
Feb 20 04:24:03 localhost nova_compute[228847]: Snowridge-v4
Feb 20 04:24:03 localhost nova_compute[228847]: Westmere
Feb 20 04:24:03 localhost nova_compute[228847]: Westmere-IBRS
Feb 20 04:24:03 localhost nova_compute[228847]: Westmere-v1
Feb 20 04:24:03 localhost nova_compute[228847]: Westmere-v2
Feb 20 04:24:03 localhost nova_compute[228847]: athlon
Feb 20 04:24:03 localhost nova_compute[228847]: athlon-v1
Feb 20 04:24:03 localhost nova_compute[228847]: core2duo
nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: core2duo-v1 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: coreduo Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: coreduo-v1 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: kvm32 Feb 20 04:24:03 localhost nova_compute[228847]: kvm32-v1 Feb 20 04:24:03 localhost nova_compute[228847]: kvm64 Feb 20 04:24:03 localhost nova_compute[228847]: kvm64-v1 Feb 20 04:24:03 localhost nova_compute[228847]: n270 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: n270-v1 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: pentium Feb 20 04:24:03 localhost nova_compute[228847]: pentium-v1 Feb 20 04:24:03 localhost nova_compute[228847]: pentium2 Feb 20 04:24:03 localhost nova_compute[228847]: pentium2-v1 Feb 20 04:24:03 localhost nova_compute[228847]: pentium3 Feb 20 04:24:03 localhost nova_compute[228847]: pentium3-v1 Feb 20 04:24:03 localhost nova_compute[228847]: phenom Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: phenom-v1 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: 
Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: qemu32 Feb 20 04:24:03 localhost nova_compute[228847]: qemu32-v1 Feb 20 04:24:03 localhost nova_compute[228847]: qemu64 Feb 20 04:24:03 localhost nova_compute[228847]: qemu64-v1 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: file Feb 20 04:24:03 localhost nova_compute[228847]: anonymous Feb 20 04:24:03 localhost nova_compute[228847]: memfd Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: disk Feb 20 04:24:03 localhost nova_compute[228847]: cdrom Feb 20 04:24:03 localhost nova_compute[228847]: floppy Feb 20 04:24:03 localhost nova_compute[228847]: lun Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: ide Feb 20 04:24:03 localhost nova_compute[228847]: fdc Feb 20 04:24:03 localhost nova_compute[228847]: scsi Feb 20 04:24:03 localhost nova_compute[228847]: virtio Feb 20 04:24:03 localhost nova_compute[228847]: usb Feb 20 04:24:03 localhost nova_compute[228847]: sata Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: virtio Feb 20 04:24:03 localhost nova_compute[228847]: virtio-transitional Feb 20 04:24:03 localhost nova_compute[228847]: virtio-non-transitional Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 
localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: vnc Feb 20 04:24:03 localhost nova_compute[228847]: egl-headless Feb 20 04:24:03 localhost nova_compute[228847]: dbus Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: subsystem Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: default Feb 20 04:24:03 localhost nova_compute[228847]: mandatory Feb 20 04:24:03 localhost nova_compute[228847]: requisite Feb 20 04:24:03 localhost nova_compute[228847]: optional Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: usb Feb 20 04:24:03 localhost nova_compute[228847]: pci Feb 20 04:24:03 localhost nova_compute[228847]: scsi Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: virtio Feb 20 04:24:03 localhost nova_compute[228847]: virtio-transitional Feb 20 04:24:03 localhost nova_compute[228847]: virtio-non-transitional Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: random Feb 20 04:24:03 localhost nova_compute[228847]: egd Feb 20 04:24:03 localhost nova_compute[228847]: builtin Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost 
nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: path Feb 20 04:24:03 localhost nova_compute[228847]: handle Feb 20 04:24:03 localhost nova_compute[228847]: virtiofs Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: tpm-tis Feb 20 04:24:03 localhost nova_compute[228847]: tpm-crb Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: emulator Feb 20 04:24:03 localhost nova_compute[228847]: external Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: 2.0 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: usb Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: pty Feb 20 04:24:03 localhost nova_compute[228847]: unix Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: qemu Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: builtin Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 
localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: default Feb 20 04:24:03 localhost nova_compute[228847]: passt Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: isa Feb 20 04:24:03 localhost nova_compute[228847]: hyperv Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: null Feb 20 04:24:03 localhost nova_compute[228847]: vc Feb 20 04:24:03 localhost nova_compute[228847]: pty Feb 20 04:24:03 localhost nova_compute[228847]: dev Feb 20 04:24:03 localhost nova_compute[228847]: file Feb 20 04:24:03 localhost nova_compute[228847]: pipe Feb 20 04:24:03 localhost nova_compute[228847]: stdio Feb 20 04:24:03 localhost nova_compute[228847]: udp Feb 20 04:24:03 localhost nova_compute[228847]: tcp Feb 20 04:24:03 localhost nova_compute[228847]: unix Feb 20 04:24:03 localhost nova_compute[228847]: qemu-vdagent Feb 20 04:24:03 localhost nova_compute[228847]: dbus Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 
Feb 20 04:24:03 localhost nova_compute[228847]: [hyperv features: relaxed, vapic, spinlocks, vpindex, runtime, synic, stimer, reset, vendor_id, frequencies, reenlightenment, tlbflush, ipi, avic, emsr_bitmap, xmm_input; further values: 4095, on, off, off, Linux KVM Hv]
Feb 20 04:24:03 localhost nova_compute[228847]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 20 04:24:03 localhost nova_compute[228847]: 2026-02-20 09:24:03.068 228851 DEBUG nova.virt.libvirt.host [None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
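The `_get_domain_capabilities` call logged here fetches this XML from libvirt (`virConnectGetDomainCapabilities`, exposed in the Python binding as `conn.getDomainCapabilities(...)`) and parses it. A minimal stdlib sketch of that kind of parsing, against a trimmed, illustrative document (the model list and `usable` flags below are examples, not the host's full output):

```python
import xml.etree.ElementTree as ET

# Trimmed-down domainCapabilities XML of the shape libvirt returns;
# the path/domain/machine/arch values match the dump above, the CPU
# model entries are an illustrative subset.
DOMCAPS = """\
<domainCapabilities>
  <path>/usr/libexec/qemu-kvm</path>
  <domain>kvm</domain>
  <machine>pc-q35-rhel9.8.0</machine>
  <arch>x86_64</arch>
  <cpu>
    <mode name='host-model' supported='yes'>
      <model fallback='forbid'>EPYC-Rome</model>
      <vendor>AMD</vendor>
    </mode>
    <mode name='custom' supported='yes'>
      <model usable='yes'>qemu64</model>
      <model usable='yes'>EPYC-Rome</model>
      <model usable='no'>Cascadelake-Server</model>
    </mode>
  </cpu>
</domainCapabilities>
"""

def usable_cpu_models(domcaps_xml):
    """Return the named CPU models the host reports as usable."""
    root = ET.fromstring(domcaps_xml)
    return [
        m.text
        for m in root.findall("./cpu/mode[@name='custom']/model")
        if m.get("usable") == "yes"
    ]

print(usable_cpu_models(DOMCAPS))  # ['qemu64', 'EPYC-Rome']
```

Nova applies the same idea at larger scale: it walks these enums to decide which CPU models, disk buses, and firmware options it can offer when building guest XML.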
Feb 20 04:24:03 localhost nova_compute[228847]: [domainCapabilities for q35, markup stripped in capture; recoverable values condensed, groupings inferred:]
Feb 20 04:24:03 localhost nova_compute[228847]: [path: /usr/libexec/qemu-kvm; domain: kvm; machine: pc-q35-rhel9.8.0; arch: x86_64]
Feb 20 04:24:03 localhost nova_compute[228847]: [os firmware: efi; loader values: /usr/share/edk2/ovmf/OVMF_CODE.secboot.fd, /usr/share/edk2/ovmf/OVMF_CODE.fd, /usr/share/edk2/ovmf/OVMF.amdsev.fd, /usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd; loader type: rom, pflash; readonly: yes, no; secure: yes, no]
Feb 20 04:24:03 localhost nova_compute[228847]: [cpu on/off enums (likely hostPassthroughMigratable and maximumMigratable): on, off; on, off]
Feb 20 04:24:03 localhost nova_compute[228847]: [host-model CPU: EPYC-Rome, vendor AMD]
Feb 20 04:24:03 localhost nova_compute[228847]: [custom cpu models: 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1, Broadwell-v2, Broadwell-v3, Broadwell-v4, Cascadelake-Server, Cascadelake-Server-noTSX, Cascadelake-Server-v1, Cascadelake-Server-v2, Cascadelake-Server-v3, Cascadelake-Server-v4, Cascadelake-Server-v5, ClearwaterForest, ClearwaterForest-v1, Conroe, Conroe-v1, Cooperlake, Cooperlake-v1, Cooperlake-v2, Denverton, Denverton-v1, Denverton-v2, Denverton-v3, Dhyana, Dhyana-v1, Dhyana-v2, EPYC, EPYC-Genoa, EPYC-Genoa-v1, (list continues)]
nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-Genoa-v2 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost 
nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-IBPB Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-Milan Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-Milan-v1 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-Milan-v2 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 
04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-Milan-v3 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-Rome Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-Rome-v1 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-Rome-v2 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-Rome-v3 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-Rome-v4 Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-Rome-v5 Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-Turin Feb 20 04:24:03 localhost 
nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-Turin-v1 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost 
nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-v1 Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-v2 Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-v3 Feb 20 04:24:03 localhost 
nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-v4 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: EPYC-v5 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: GraniteRapids Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost 
nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: GraniteRapids-v1 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost 
nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: GraniteRapids-v2 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost 
nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: GraniteRapids-v3 Feb 20 04:24:03 localhost 
nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 
04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Haswell Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Haswell-IBRS Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Haswell-noTSX Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Haswell-noTSX-IBRS Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost 
nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Haswell-v1 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Haswell-v2 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Haswell-v3 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Haswell-v4 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Icelake-Server Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost 
nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Icelake-Server-noTSX Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Icelake-Server-v1 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 
Feb 20 04:24:03 localhost nova_compute[228847]: Icelake-Server-v2
Feb 20 04:24:03 localhost nova_compute[228847]: Icelake-Server-v3
Feb 20 04:24:03 localhost nova_compute[228847]: Icelake-Server-v4
Feb 20 04:24:03 localhost nova_compute[228847]: Icelake-Server-v5
Feb 20 04:24:03 localhost nova_compute[228847]: Icelake-Server-v6
Feb 20 04:24:03 localhost nova_compute[228847]: Icelake-Server-v7
Feb 20 04:24:03 localhost nova_compute[228847]: IvyBridge
Feb 20 04:24:03 localhost nova_compute[228847]: IvyBridge-IBRS
Feb 20 04:24:03 localhost nova_compute[228847]: IvyBridge-v1
Feb 20 04:24:03 localhost nova_compute[228847]: IvyBridge-v2
Feb 20 04:24:03 localhost nova_compute[228847]: KnightsMill
Feb 20 04:24:03 localhost nova_compute[228847]: KnightsMill-v1
Feb 20 04:24:03 localhost nova_compute[228847]: Nehalem
Feb 20 04:24:03 localhost nova_compute[228847]: Nehalem-IBRS
Feb 20 04:24:03 localhost nova_compute[228847]: Nehalem-v1
Feb 20 04:24:03 localhost nova_compute[228847]: Nehalem-v2
Feb 20 04:24:03 localhost nova_compute[228847]: Opteron_G1
Feb 20 04:24:03 localhost nova_compute[228847]: Opteron_G1-v1
Feb 20 04:24:03 localhost nova_compute[228847]: Opteron_G2
Feb 20 04:24:03 localhost nova_compute[228847]: Opteron_G2-v1
Feb 20 04:24:03 localhost nova_compute[228847]: Opteron_G3
Feb 20 04:24:03 localhost nova_compute[228847]: Opteron_G3-v1
Feb 20 04:24:03 localhost nova_compute[228847]: Opteron_G4
Feb 20 04:24:03 localhost nova_compute[228847]: Opteron_G4-v1
Feb 20 04:24:03 localhost nova_compute[228847]: Opteron_G5
Feb 20 04:24:03 localhost nova_compute[228847]: Opteron_G5-v1
Feb 20 04:24:03 localhost nova_compute[228847]: Penryn
Feb 20 04:24:03 localhost nova_compute[228847]: Penryn-v1
Feb 20 04:24:03 localhost nova_compute[228847]: SandyBridge
Feb 20 04:24:03 localhost nova_compute[228847]: SandyBridge-IBRS
Feb 20 04:24:03 localhost nova_compute[228847]: SandyBridge-v1
Feb 20 04:24:03 localhost nova_compute[228847]: SandyBridge-v2
Feb 20 04:24:03 localhost nova_compute[228847]: SapphireRapids
Feb 20 04:24:03 localhost nova_compute[228847]: SapphireRapids-v1
Feb 20 04:24:03 localhost nova_compute[228847]: SapphireRapids-v2
Feb 20 04:24:03 localhost nova_compute[228847]: SapphireRapids-v3
Feb 20 04:24:03 localhost nova_compute[228847]: SapphireRapids-v4
Feb 20 04:24:03 localhost nova_compute[228847]: SierraForest
Feb 20 04:24:03 localhost nova_compute[228847]: SierraForest-v1
Feb 20 04:24:03 localhost nova_compute[228847]: SierraForest-v2
Feb 20 04:24:03 localhost nova_compute[228847]: SierraForest-v3
Feb 20 04:24:03 localhost nova_compute[228847]: Skylake-Client
Feb 20 04:24:03 localhost nova_compute[228847]: Skylake-Client-IBRS
Feb 20 04:24:03 localhost nova_compute[228847]: Skylake-Client-noTSX-IBRS
Feb 20 04:24:03 localhost nova_compute[228847]: Skylake-Client-v1
Feb 20 04:24:03 localhost nova_compute[228847]: Skylake-Client-v2
Feb 20 04:24:03 localhost nova_compute[228847]: Skylake-Client-v3
Feb 20 04:24:03 localhost nova_compute[228847]: Skylake-Client-v4
Feb 20 04:24:03 localhost nova_compute[228847]: Skylake-Server
Feb 20 04:24:03 localhost nova_compute[228847]: Skylake-Server-IBRS
Feb 20 04:24:03 localhost nova_compute[228847]: Skylake-Server-noTSX-IBRS
Feb 20 04:24:03 localhost nova_compute[228847]: Skylake-Server-v1
Feb 20 04:24:03 localhost nova_compute[228847]: Skylake-Server-v2
Feb 20 04:24:03 localhost nova_compute[228847]: Skylake-Server-v3
Feb 20 04:24:03 localhost nova_compute[228847]: Skylake-Server-v4
Feb 20 04:24:03 localhost nova_compute[228847]: Skylake-Server-v5
Feb 20 04:24:03 localhost nova_compute[228847]: Snowridge
Feb 20 04:24:03 localhost nova_compute[228847]: Snowridge-v1
Feb 20 04:24:03 localhost nova_compute[228847]: Snowridge-v2
Feb 20 04:24:03 localhost nova_compute[228847]: Snowridge-v3
Feb 20 04:24:03 localhost nova_compute[228847]: Snowridge-v4
Feb 20 04:24:03 localhost nova_compute[228847]: Westmere
Feb 20 04:24:03 localhost nova_compute[228847]: Westmere-IBRS
Feb 20 04:24:03 localhost nova_compute[228847]: Westmere-v1
Feb 20 04:24:03 localhost nova_compute[228847]: Westmere-v2
Feb 20 04:24:03 localhost nova_compute[228847]: athlon
Feb 20 04:24:03 localhost nova_compute[228847]: athlon-v1
localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: core2duo Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: core2duo-v1 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: coreduo Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: coreduo-v1 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: kvm32 Feb 20 04:24:03 localhost nova_compute[228847]: kvm32-v1 Feb 20 04:24:03 localhost nova_compute[228847]: kvm64 Feb 20 04:24:03 localhost nova_compute[228847]: kvm64-v1 Feb 20 04:24:03 localhost nova_compute[228847]: n270 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: n270-v1 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: pentium Feb 20 04:24:03 localhost nova_compute[228847]: pentium-v1 Feb 20 04:24:03 localhost nova_compute[228847]: pentium2 Feb 20 04:24:03 localhost nova_compute[228847]: pentium2-v1 Feb 20 04:24:03 localhost nova_compute[228847]: pentium3 Feb 20 04:24:03 localhost nova_compute[228847]: pentium3-v1 Feb 20 04:24:03 localhost nova_compute[228847]: phenom Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost 
nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: phenom-v1 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: qemu32 Feb 20 04:24:03 localhost nova_compute[228847]: qemu32-v1 Feb 20 04:24:03 localhost nova_compute[228847]: qemu64 Feb 20 04:24:03 localhost nova_compute[228847]: qemu64-v1 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: file Feb 20 04:24:03 localhost nova_compute[228847]: anonymous Feb 20 04:24:03 localhost nova_compute[228847]: memfd Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: disk Feb 20 04:24:03 localhost nova_compute[228847]: cdrom Feb 20 04:24:03 localhost nova_compute[228847]: floppy Feb 20 04:24:03 localhost nova_compute[228847]: lun Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: fdc Feb 20 04:24:03 localhost nova_compute[228847]: scsi Feb 20 04:24:03 localhost nova_compute[228847]: virtio Feb 20 04:24:03 localhost nova_compute[228847]: usb Feb 20 04:24:03 localhost nova_compute[228847]: sata Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: virtio Feb 20 04:24:03 localhost nova_compute[228847]: virtio-transitional Feb 20 
04:24:03 localhost nova_compute[228847]: virtio-non-transitional Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: vnc Feb 20 04:24:03 localhost nova_compute[228847]: egl-headless Feb 20 04:24:03 localhost nova_compute[228847]: dbus Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: subsystem Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: default Feb 20 04:24:03 localhost nova_compute[228847]: mandatory Feb 20 04:24:03 localhost nova_compute[228847]: requisite Feb 20 04:24:03 localhost nova_compute[228847]: optional Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: usb Feb 20 04:24:03 localhost nova_compute[228847]: pci Feb 20 04:24:03 localhost nova_compute[228847]: scsi Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: virtio Feb 20 04:24:03 localhost nova_compute[228847]: virtio-transitional Feb 20 04:24:03 localhost nova_compute[228847]: virtio-non-transitional Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: random Feb 20 04:24:03 localhost nova_compute[228847]: egd Feb 20 
04:24:03 localhost nova_compute[228847]: builtin Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: path Feb 20 04:24:03 localhost nova_compute[228847]: handle Feb 20 04:24:03 localhost nova_compute[228847]: virtiofs Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: tpm-tis Feb 20 04:24:03 localhost nova_compute[228847]: tpm-crb Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: emulator Feb 20 04:24:03 localhost nova_compute[228847]: external Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: 2.0 Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: usb Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: pty Feb 20 04:24:03 localhost nova_compute[228847]: unix Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: qemu Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: 
Feb 20 04:24:03 localhost nova_compute[228847]: builtin Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: default Feb 20 04:24:03 localhost nova_compute[228847]: passt Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: isa Feb 20 04:24:03 localhost nova_compute[228847]: hyperv Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: null Feb 20 04:24:03 localhost nova_compute[228847]: vc Feb 20 04:24:03 localhost nova_compute[228847]: pty Feb 20 04:24:03 localhost nova_compute[228847]: dev Feb 20 04:24:03 localhost nova_compute[228847]: file Feb 20 04:24:03 localhost nova_compute[228847]: pipe Feb 20 04:24:03 localhost nova_compute[228847]: stdio Feb 20 04:24:03 localhost nova_compute[228847]: udp Feb 20 04:24:03 localhost nova_compute[228847]: tcp Feb 20 04:24:03 localhost nova_compute[228847]: unix Feb 20 04:24:03 localhost nova_compute[228847]: qemu-vdagent Feb 20 04:24:03 localhost nova_compute[228847]: dbus Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost 
nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: relaxed Feb 20 04:24:03 localhost nova_compute[228847]: vapic Feb 20 04:24:03 localhost nova_compute[228847]: spinlocks Feb 20 04:24:03 localhost nova_compute[228847]: vpindex Feb 20 04:24:03 localhost nova_compute[228847]: runtime Feb 20 04:24:03 localhost nova_compute[228847]: synic Feb 20 04:24:03 localhost nova_compute[228847]: stimer Feb 20 04:24:03 localhost nova_compute[228847]: reset Feb 20 04:24:03 localhost nova_compute[228847]: vendor_id Feb 20 04:24:03 localhost nova_compute[228847]: frequencies Feb 20 04:24:03 localhost nova_compute[228847]: reenlightenment Feb 20 04:24:03 localhost nova_compute[228847]: tlbflush Feb 20 04:24:03 localhost nova_compute[228847]: ipi Feb 20 04:24:03 localhost nova_compute[228847]: avic Feb 20 04:24:03 localhost nova_compute[228847]: emsr_bitmap Feb 20 04:24:03 localhost nova_compute[228847]: xmm_input Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: 4095 Feb 20 04:24:03 localhost nova_compute[228847]: on Feb 20 04:24:03 localhost nova_compute[228847]: off Feb 20 04:24:03 localhost nova_compute[228847]: off Feb 20 04:24:03 localhost nova_compute[228847]: Linux KVM Hv Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: Feb 20 04:24:03 localhost nova_compute[228847]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Feb 20 04:24:03 localhost 
nova_compute[228847]: 2026-02-20 09:24:03.132 228851 DEBUG nova.virt.libvirt.host [None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m Feb 20 04:24:03 localhost nova_compute[228847]: 2026-02-20 09:24:03.133 228851 DEBUG nova.virt.libvirt.host [None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m Feb 20 04:24:03 localhost nova_compute[228847]: 2026-02-20 09:24:03.138 228851 DEBUG nova.virt.libvirt.host [None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m Feb 20 04:24:03 localhost nova_compute[228847]: 2026-02-20 09:24:03.138 228851 INFO nova.virt.libvirt.host [None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] Secure Boot support detected#033[00m Feb 20 04:24:03 localhost nova_compute[228847]: 2026-02-20 09:24:03.140 228851 INFO nova.virt.libvirt.driver [None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m Feb 20 04:24:03 localhost nova_compute[228847]: 2026-02-20 09:24:03.140 228851 INFO nova.virt.libvirt.driver [None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m Feb 20 04:24:03 localhost nova_compute[228847]: 2026-02-20 09:24:03.149 228851 DEBUG nova.virt.libvirt.driver [None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] Enabling emulated TPM support _check_vtpm_support 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m Feb 20 04:24:03 localhost nova_compute[228847]: 2026-02-20 09:24:03.171 228851 INFO nova.virt.node [None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] Determined node identity 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 from /var/lib/nova/compute_id#033[00m Feb 20 04:24:03 localhost nova_compute[228847]: 2026-02-20 09:24:03.185 228851 DEBUG nova.compute.manager [None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] Verified node 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 matches my host np0005625202.localdomain _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568#033[00m Feb 20 04:24:03 localhost nova_compute[228847]: 2026-02-20 09:24:03.207 228851 INFO nova.compute.manager [None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m Feb 20 04:24:03 localhost nova_compute[228847]: 2026-02-20 09:24:03.667 228851 INFO nova.service [None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] Updating service version for nova-compute on np0005625202.localdomain from 57 to 66#033[00m Feb 20 04:24:03 localhost nova_compute[228847]: 2026-02-20 09:24:03.757 228851 DEBUG oslo_concurrency.lockutils [None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:24:03 localhost nova_compute[228847]: 2026-02-20 09:24:03.758 228851 DEBUG oslo_concurrency.lockutils [None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:24:03 localhost nova_compute[228847]: 2026-02-20 
09:24:03.758 228851 DEBUG oslo_concurrency.lockutils [None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:24:03 localhost nova_compute[228847]: 2026-02-20 09:24:03.758 228851 DEBUG nova.compute.resource_tracker [None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] Auditing locally available compute resources for np0005625202.localdomain (node: np0005625202.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 20 04:24:03 localhost nova_compute[228847]: 2026-02-20 09:24:03.759 228851 DEBUG oslo_concurrency.processutils [None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:24:04 localhost python3.9[229345]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 20 04:24:04 localhost nova_compute[228847]: 2026-02-20 09:24:04.206 228851 DEBUG oslo_concurrency.processutils [None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:24:04 localhost systemd[1]: Started libvirt nodedev daemon. Feb 20 04:24:04 localhost nova_compute[228847]: 2026-02-20 09:24:04.555 228851 WARNING nova.virt.libvirt.driver [None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 20 04:24:04 localhost nova_compute[228847]: 2026-02-20 09:24:04.557 228851 DEBUG nova.compute.resource_tracker [None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] Hypervisor/Node resource view: name=np0005625202.localdomain free_ram=13603MB free_disk=41.83720779418945GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": 
"1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 20 04:24:04 localhost nova_compute[228847]: 2026-02-20 09:24:04.557 228851 DEBUG oslo_concurrency.lockutils [None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:24:04 localhost nova_compute[228847]: 2026-02-20 09:24:04.558 228851 DEBUG oslo_concurrency.lockutils [None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:24:04 localhost nova_compute[228847]: 2026-02-20 09:24:04.748 228851 DEBUG nova.compute.resource_tracker [None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 20 04:24:04 localhost nova_compute[228847]: 2026-02-20 09:24:04.748 228851 DEBUG nova.compute.resource_tracker [None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] Final resource view: name=np0005625202.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 20 04:24:04 localhost nova_compute[228847]: 2026-02-20 09:24:04.830 228851 DEBUG 
nova.scheduler.client.report [None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] Refreshing inventories for resource provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Feb 20 04:24:04 localhost nova_compute[228847]: 2026-02-20 09:24:04.904 228851 DEBUG nova.scheduler.client.report [None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] Updating ProviderTree inventory for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Feb 20 04:24:04 localhost nova_compute[228847]: 2026-02-20 09:24:04.905 228851 DEBUG nova.compute.provider_tree [None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] Updating inventory in ProviderTree for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Feb 20 04:24:04 localhost nova_compute[228847]: 2026-02-20 09:24:04.925 228851 DEBUG nova.scheduler.client.report [None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] Refreshing aggregate associations for resource provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6, 
aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Feb 20 04:24:04 localhost nova_compute[228847]: 2026-02-20 09:24:04.948 228851 DEBUG nova.scheduler.client.report [None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] Refreshing trait associations for resource provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6, traits: COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_RESCUE_BFV,HW_CPU_X86_SHA,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_F16C,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_ACCELERATORS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE42,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_FMA3,HW_CPU_X86_AVX,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AESNI,HW_CPU_X86_SSE,HW_CPU_X86_SSE2,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_LAN9118,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SVM,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_USB,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AMD_SVM,HW_CPU_X86_BMI,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Feb 20 04:24:04 localhost nova_compute[228847]: 2026-02-20 09:24:04.961 228851 DEBUG oslo_concurrency.processutils [None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] Running cmd (subprocess): ceph df 
--format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:24:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15942 DF PROTO=TCP SPT=38816 DPT=9100 SEQ=1980053403 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B00AB8E0000000001030307) Feb 20 04:24:05 localhost python3.9[229517]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 20 04:24:05 localhost nova_compute[228847]: 2026-02-20 09:24:05.373 228851 DEBUG oslo_concurrency.processutils [None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:24:05 localhost nova_compute[228847]: 2026-02-20 09:24:05.408 228851 DEBUG nova.virt.libvirt.host [None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N Feb 20 04:24:05 localhost nova_compute[228847]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m Feb 20 04:24:05 localhost nova_compute[228847]: 2026-02-20 09:24:05.408 228851 INFO nova.virt.libvirt.host [None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] kernel doesn't support AMD SEV#033[00m Feb 20 04:24:05 localhost nova_compute[228847]: 2026-02-20 09:24:05.410 228851 DEBUG nova.compute.provider_tree [None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 update_inventory 
/usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 20 04:24:05 localhost nova_compute[228847]: 2026-02-20 09:24:05.411 228851 DEBUG nova.virt.libvirt.driver [None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m Feb 20 04:24:05 localhost nova_compute[228847]: 2026-02-20 09:24:05.434 228851 DEBUG nova.scheduler.client.report [None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] Inventory has not changed for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 20 04:24:05 localhost nova_compute[228847]: 2026-02-20 09:24:05.525 228851 DEBUG nova.compute.provider_tree [None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] Updating resource provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 generation from 2 to 3 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m Feb 20 04:24:05 localhost nova_compute[228847]: 2026-02-20 09:24:05.562 228851 DEBUG nova.compute.resource_tracker [None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] Compute_service record updated for np0005625202.localdomain:np0005625202.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 20 04:24:05 localhost nova_compute[228847]: 2026-02-20 09:24:05.563 228851 DEBUG oslo_concurrency.lockutils 
[None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:24:05 localhost nova_compute[228847]: 2026-02-20 09:24:05.563 228851 DEBUG nova.service [None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m Feb 20 04:24:05 localhost nova_compute[228847]: 2026-02-20 09:24:05.633 228851 DEBUG nova.service [None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m Feb 20 04:24:05 localhost nova_compute[228847]: 2026-02-20 09:24:05.634 228851 DEBUG nova.servicegroup.drivers.db [None req-be188655-1a9b-458c-b0cd-289978f5c105 - - - - - -] DB_Driver: join new ServiceGroup member np0005625202.localdomain to the compute group, service = join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m Feb 20 04:24:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:24:05.887 161766 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:24:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:24:05.888 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:24:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:24:05.888 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by 
"neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:24:06 localhost python3.9[229627]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 20 04:24:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37677 DF PROTO=TCP SPT=55010 DPT=9101 SEQ=342351238 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B00B60D0000000001030307) Feb 20 04:24:07 localhost python3.9[229737]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None 
health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None Feb 20 04:24:08 localhost systemd-journald[48906]: Field hash table of /run/log/journal/01f46965e72fd8a157841feaa66c8d52/system.journal has a fill level at 121.9 (406 of 333 items), suggesting rotation. Feb 20 04:24:08 localhost systemd-journald[48906]: /run/log/journal/01f46965e72fd8a157841feaa66c8d52/system.journal: Journal header limits reached or header out-of-date, rotating. 
Feb 20 04:24:08 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Feb 20 04:24:08 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Feb 20 04:24:08 localhost python3.9[229871]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 20 04:24:08 localhost systemd[1]: Stopping nova_compute container... Feb 20 04:24:08 localhost systemd[1]: tmp-crun.Pm9VXL.mount: Deactivated successfully. Feb 20 04:24:09 localhost nova_compute[228847]: 2026-02-20 09:24:09.759 228851 WARNING amqp [-] Received method (60, 30) during closing channel 1. This method will be ignored#033[00m Feb 20 04:24:09 localhost nova_compute[228847]: 2026-02-20 09:24:09.761 228851 DEBUG oslo_concurrency.lockutils [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Feb 20 04:24:09 localhost nova_compute[228847]: 2026-02-20 09:24:09.762 228851 DEBUG oslo_concurrency.lockutils [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Feb 20 04:24:09 localhost nova_compute[228847]: 2026-02-20 09:24:09.762 228851 DEBUG oslo_concurrency.lockutils [None req-5c4a62de-6a89-429b-ac57-faf332a87377 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Feb 20 04:24:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62666 DF PROTO=TCP SPT=44986 DPT=9102 SEQ=2493801217 ACK=0 WINDOW=32640 RES=0x00 SYN 
URGP=0 OPT (020405500402080A3B00BF0D0000000001030307) Feb 20 04:24:10 localhost journal[229026]: libvirt version: 11.10.0, package: 4.el9 (builder@centos.org, 2026-01-29-15:25:17, ) Feb 20 04:24:10 localhost journal[229026]: hostname: np0005625202.localdomain Feb 20 04:24:10 localhost journal[229026]: End of file while reading data: Input/output error Feb 20 04:24:10 localhost systemd[1]: libpod-1f1a1047881a5842c37723180a39027a342e5d0e008eb86ed7846a54f5807c64.scope: Deactivated successfully. Feb 20 04:24:10 localhost systemd[1]: libpod-1f1a1047881a5842c37723180a39027a342e5d0e008eb86ed7846a54f5807c64.scope: Consumed 3.631s CPU time. Feb 20 04:24:10 localhost podman[229875]: 2026-02-20 09:24:10.103534403 +0000 UTC m=+1.198766085 container died 1f1a1047881a5842c37723180a39027a342e5d0e008eb86ed7846a54f5807c64 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=nova_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-30a66d126d7b377a2bbf9bc0d57e51bdf55bb85eee6d379d83e3d3aa2e3d7293-ea4d4b46c7c04cab2c0f21f3438d6f73404b52ef41d7f6d09559733205477b71'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3) Feb 20 04:24:10 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1f1a1047881a5842c37723180a39027a342e5d0e008eb86ed7846a54f5807c64-userdata-shm.mount: Deactivated successfully. Feb 20 04:24:10 localhost podman[229875]: 2026-02-20 09:24:10.165578788 +0000 UTC m=+1.260810470 container cleanup 1f1a1047881a5842c37723180a39027a342e5d0e008eb86ed7846a54f5807c64 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-30a66d126d7b377a2bbf9bc0d57e51bdf55bb85eee6d379d83e3d3aa2e3d7293-ea4d4b46c7c04cab2c0f21f3438d6f73404b52ef41d7f6d09559733205477b71'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', 
'/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:24:10 localhost podman[229875]: nova_compute Feb 20 04:24:10 localhost podman[229913]: error opening file `/run/crun/1f1a1047881a5842c37723180a39027a342e5d0e008eb86ed7846a54f5807c64/status`: No such file or directory Feb 20 04:24:10 localhost podman[229902]: 2026-02-20 09:24:10.261092059 +0000 UTC m=+0.066839220 container cleanup 1f1a1047881a5842c37723180a39027a342e5d0e008eb86ed7846a54f5807c64 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=nova_compute, container_name=nova_compute, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-30a66d126d7b377a2bbf9bc0d57e51bdf55bb85eee6d379d83e3d3aa2e3d7293-ea4d4b46c7c04cab2c0f21f3438d6f73404b52ef41d7f6d09559733205477b71'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 
'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true) Feb 20 04:24:10 localhost podman[229902]: nova_compute Feb 20 04:24:10 localhost systemd[1]: edpm_nova_compute.service: Deactivated successfully. Feb 20 04:24:10 localhost systemd[1]: Stopped nova_compute container. Feb 20 04:24:10 localhost systemd[1]: Starting nova_compute container... Feb 20 04:24:10 localhost systemd[1]: Started libcrun container. 
Feb 20 04:24:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a05ec5658aa743546307022d9fca1129054ee3b9d31d4b797362fdf17ebc9ce/merged/etc/multipath supports timestamps until 2038 (0x7fffffff) Feb 20 04:24:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a05ec5658aa743546307022d9fca1129054ee3b9d31d4b797362fdf17ebc9ce/merged/etc/nvme supports timestamps until 2038 (0x7fffffff) Feb 20 04:24:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a05ec5658aa743546307022d9fca1129054ee3b9d31d4b797362fdf17ebc9ce/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Feb 20 04:24:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a05ec5658aa743546307022d9fca1129054ee3b9d31d4b797362fdf17ebc9ce/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Feb 20 04:24:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a05ec5658aa743546307022d9fca1129054ee3b9d31d4b797362fdf17ebc9ce/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Feb 20 04:24:10 localhost podman[229915]: 2026-02-20 09:24:10.391023158 +0000 UTC m=+0.100958170 container init 1f1a1047881a5842c37723180a39027a342e5d0e008eb86ed7846a54f5807c64 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, container_name=nova_compute, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 
'6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-30a66d126d7b377a2bbf9bc0d57e51bdf55bb85eee6d379d83e3d3aa2e3d7293-ea4d4b46c7c04cab2c0f21f3438d6f73404b52ef41d7f6d09559733205477b71'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0) Feb 20 04:24:10 localhost podman[229915]: 2026-02-20 09:24:10.400071939 +0000 UTC m=+0.110006941 container start 1f1a1047881a5842c37723180a39027a342e5d0e008eb86ed7846a54f5807c64 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-30a66d126d7b377a2bbf9bc0d57e51bdf55bb85eee6d379d83e3d3aa2e3d7293-ea4d4b46c7c04cab2c0f21f3438d6f73404b52ef41d7f6d09559733205477b71'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 
'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=nova_compute, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, container_name=nova_compute, io.buildah.version=1.41.3) Feb 20 04:24:10 localhost podman[229915]: nova_compute Feb 20 04:24:10 localhost nova_compute[229929]: + sudo -E kolla_set_configs Feb 20 04:24:10 localhost systemd[1]: Started nova_compute container. 
Feb 20 04:24:10 localhost nova_compute[229929]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Feb 20 04:24:10 localhost nova_compute[229929]: INFO:__main__:Validating config file Feb 20 04:24:10 localhost nova_compute[229929]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Feb 20 04:24:10 localhost nova_compute[229929]: INFO:__main__:Copying service configuration files Feb 20 04:24:10 localhost nova_compute[229929]: INFO:__main__:Deleting /etc/nova/nova.conf Feb 20 04:24:10 localhost nova_compute[229929]: INFO:__main__:Copying /var/lib/kolla/config_files/src/nova-blank.conf to /etc/nova/nova.conf Feb 20 04:24:10 localhost nova_compute[229929]: INFO:__main__:Setting permission for /etc/nova/nova.conf Feb 20 04:24:10 localhost nova_compute[229929]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf Feb 20 04:24:10 localhost nova_compute[229929]: INFO:__main__:Copying /var/lib/kolla/config_files/src/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf Feb 20 04:24:10 localhost nova_compute[229929]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf Feb 20 04:24:10 localhost nova_compute[229929]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf Feb 20 04:24:10 localhost nova_compute[229929]: INFO:__main__:Copying /var/lib/kolla/config_files/src/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf Feb 20 04:24:10 localhost nova_compute[229929]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf Feb 20 04:24:10 localhost nova_compute[229929]: INFO:__main__:Deleting /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Feb 20 04:24:10 localhost nova_compute[229929]: INFO:__main__:Copying /var/lib/kolla/config_files/src/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Feb 20 04:24:10 localhost nova_compute[229929]: INFO:__main__:Setting permission for 
/etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Feb 20 04:24:10 localhost nova_compute[229929]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf Feb 20 04:24:10 localhost nova_compute[229929]: INFO:__main__:Copying /var/lib/kolla/config_files/src/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf Feb 20 04:24:10 localhost nova_compute[229929]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf Feb 20 04:24:10 localhost nova_compute[229929]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf Feb 20 04:24:10 localhost nova_compute[229929]: INFO:__main__:Copying /var/lib/kolla/config_files/src/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf Feb 20 04:24:10 localhost nova_compute[229929]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf Feb 20 04:24:10 localhost nova_compute[229929]: INFO:__main__:Deleting /etc/ceph Feb 20 04:24:10 localhost nova_compute[229929]: INFO:__main__:Creating directory /etc/ceph Feb 20 04:24:10 localhost nova_compute[229929]: INFO:__main__:Setting permission for /etc/ceph Feb 20 04:24:10 localhost nova_compute[229929]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring Feb 20 04:24:10 localhost nova_compute[229929]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Feb 20 04:24:10 localhost nova_compute[229929]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceph/ceph.conf to /etc/ceph/ceph.conf Feb 20 04:24:10 localhost nova_compute[229929]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Feb 20 04:24:10 localhost nova_compute[229929]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey Feb 20 04:24:10 localhost nova_compute[229929]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey Feb 20 04:24:10 localhost 
nova_compute[229929]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Feb 20 04:24:10 localhost nova_compute[229929]: INFO:__main__:Deleting /var/lib/nova/.ssh/config Feb 20 04:24:10 localhost nova_compute[229929]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ssh-config to /var/lib/nova/.ssh/config Feb 20 04:24:10 localhost nova_compute[229929]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Feb 20 04:24:10 localhost nova_compute[229929]: INFO:__main__:Deleting /usr/sbin/iscsiadm Feb 20 04:24:10 localhost nova_compute[229929]: INFO:__main__:Copying /var/lib/kolla/config_files/src/run-on-host to /usr/sbin/iscsiadm Feb 20 04:24:10 localhost nova_compute[229929]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm Feb 20 04:24:10 localhost nova_compute[229929]: INFO:__main__:Writing out command to execute Feb 20 04:24:10 localhost nova_compute[229929]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Feb 20 04:24:10 localhost nova_compute[229929]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Feb 20 04:24:10 localhost nova_compute[229929]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ Feb 20 04:24:10 localhost nova_compute[229929]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Feb 20 04:24:10 localhost nova_compute[229929]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Feb 20 04:24:10 localhost nova_compute[229929]: ++ cat /run_command Feb 20 04:24:10 localhost nova_compute[229929]: + CMD=nova-compute Feb 20 04:24:10 localhost nova_compute[229929]: + ARGS= Feb 20 04:24:10 localhost nova_compute[229929]: + sudo kolla_copy_cacerts Feb 20 04:24:10 localhost nova_compute[229929]: + [[ ! -n '' ]] Feb 20 04:24:10 localhost nova_compute[229929]: + . 
kolla_extend_start Feb 20 04:24:10 localhost nova_compute[229929]: + echo 'Running command: '\''nova-compute'\''' Feb 20 04:24:10 localhost nova_compute[229929]: Running command: 'nova-compute' Feb 20 04:24:10 localhost nova_compute[229929]: + umask 0022 Feb 20 04:24:10 localhost nova_compute[229929]: + exec nova-compute Feb 20 04:24:12 localhost python3.9[230051]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None 
memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.086 229933 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.087 229933 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.087 229933 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.087 229933 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.201 229933 DEBUG oslo_concurrency.processutils [-] 
Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.222 229933 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.223 229933 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m Feb 20 04:24:12 localhost systemd[1]: Started libpod-conmon-66af039b890df51100ccb41f4acf5517eb836b613e1f9e398f4f08e1ae1ca156.scope. Feb 20 04:24:12 localhost systemd[1]: Started libcrun container. Feb 20 04:24:12 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59139e5cfb176de4e895119dee08056e1c4ec3ee5e373c6cec12a916b3be6f7b/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff) Feb 20 04:24:12 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59139e5cfb176de4e895119dee08056e1c4ec3ee5e373c6cec12a916b3be6f7b/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Feb 20 04:24:12 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59139e5cfb176de4e895119dee08056e1c4ec3ee5e373c6cec12a916b3be6f7b/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff) Feb 20 04:24:12 localhost podman[230078]: 2026-02-20 09:24:12.346726945 +0000 UTC m=+0.115952984 container init 66af039b890df51100ccb41f4acf5517eb836b613e1f9e398f4f08e1ae1ca156 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, 
maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': 'ea4d4b46c7c04cab2c0f21f3438d6f73404b52ef41d7f6d09559733205477b71'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=nova_compute_init, container_name=nova_compute_init, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true) Feb 20 04:24:12 localhost podman[230078]: 2026-02-20 09:24:12.363051192 +0000 UTC m=+0.132277221 container start 66af039b890df51100ccb41f4acf5517eb836b613e1f9e398f4f08e1ae1ca156 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': 'ea4d4b46c7c04cab2c0f21f3438d6f73404b52ef41d7f6d09559733205477b71'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=nova_compute_init) Feb 20 04:24:12 localhost python3.9[230051]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init Feb 20 04:24:12 localhost nova_compute_init[230099]: INFO:nova_statedir:Applying nova statedir ownership Feb 20 04:24:12 localhost nova_compute_init[230099]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436 Feb 20 04:24:12 localhost nova_compute_init[230099]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/ Feb 20 04:24:12 localhost nova_compute_init[230099]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436 Feb 20 04:24:12 localhost nova_compute_init[230099]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0 Feb 20 04:24:12 localhost nova_compute_init[230099]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/ Feb 20 04:24:12 localhost nova_compute_init[230099]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436 Feb 20 04:24:12 localhost nova_compute_init[230099]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0 Feb 20 04:24:12 localhost nova_compute_init[230099]: INFO:nova_statedir:Checking uid: 0 gid: 0 path: /var/lib/nova/delay-nova-compute Feb 20 04:24:12 localhost 
nova_compute_init[230099]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ Feb 20 04:24:12 localhost nova_compute_init[230099]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436 Feb 20 04:24:12 localhost nova_compute_init[230099]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0 Feb 20 04:24:12 localhost nova_compute_init[230099]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey Feb 20 04:24:12 localhost nova_compute_init[230099]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config Feb 20 04:24:12 localhost nova_compute_init[230099]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/ Feb 20 04:24:12 localhost nova_compute_init[230099]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache already 42436:42436 Feb 20 04:24:12 localhost nova_compute_init[230099]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache to system_u:object_r:container_file_t:s0 Feb 20 04:24:12 localhost nova_compute_init[230099]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/ Feb 20 04:24:12 localhost nova_compute_init[230099]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache/python-entrypoints already 42436:42436 Feb 20 04:24:12 localhost nova_compute_init[230099]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache/python-entrypoints to system_u:object_r:container_file_t:s0 Feb 20 04:24:12 localhost nova_compute_init[230099]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/fc52238ffcbdcb325c6bf3fe6412477fc4bdb6cd9151f39289b74f25e08e0db9 Feb 20 04:24:12 localhost nova_compute_init[230099]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/d301d14069645d8c23fee2987984776b3e88a570e1aa96d6cf3e31fa880385fd Feb 20 
04:24:12 localhost nova_compute_init[230099]: INFO:nova_statedir:Nova statedir ownership complete Feb 20 04:24:12 localhost systemd[1]: libpod-66af039b890df51100ccb41f4acf5517eb836b613e1f9e398f4f08e1ae1ca156.scope: Deactivated successfully. Feb 20 04:24:12 localhost podman[230100]: 2026-02-20 09:24:12.433207895 +0000 UTC m=+0.053622642 container died 66af039b890df51100ccb41f4acf5517eb836b613e1f9e398f4f08e1ae1ca156 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=nova_compute_init, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': 'ea4d4b46c7c04cab2c0f21f3438d6f73404b52ef41d7f6d09559733205477b71'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init) Feb 20 04:24:12 localhost podman[230108]: 2026-02-20 09:24:12.582571642 +0000 UTC m=+0.174624804 container cleanup 66af039b890df51100ccb41f4acf5517eb836b613e1f9e398f4f08e1ae1ca156 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, 
org.label-schema.build-date=20260127, config_id=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': 'ea4d4b46c7c04cab2c0f21f3438d6f73404b52ef41d7f6d09559733205477b71'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible) Feb 20 04:24:12 localhost systemd[1]: libpod-conmon-66af039b890df51100ccb41f4acf5517eb836b613e1f9e398f4f08e1ae1ca156.scope: Deactivated successfully. Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.618 229933 INFO nova.virt.driver [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.731 229933 INFO nova.compute.provider_config [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] No provider configs found in /etc/nova/provider_config/. 
If files are present, ensure the Nova process has access.#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.739 229933 WARNING nova.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.: nova.exception.TooOldComputeService: Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.740 229933 DEBUG oslo_concurrency.lockutils [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.740 229933 DEBUG oslo_concurrency.lockutils [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.740 229933 DEBUG oslo_concurrency.lockutils [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.740 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.740 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] ******************************************************************************** log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.741 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.741 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.741 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.741 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.741 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] allow_resize_to_same_host = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.741 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] arq_binding_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.741 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] backdoor_port = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.742 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] backdoor_socket = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.742 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] block_device_allocate_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.742 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.742 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cert = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.742 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] compute_driver = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.742 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] compute_monitors = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.742 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] config_dir = ['/etc/nova/nova.conf.d'] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.743 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] config_drive_format = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.743 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] config_file = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.743 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.743 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] console_host = np0005625202.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.743 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] control_exchange = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.743 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cpu_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.743 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] daemon = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.744 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.744 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.744 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] default_availability_zone = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.744 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] default_ephemeral_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.744 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 
localhost nova_compute[229929]: 2026-02-20 09:24:12.744 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] default_schedule_zone = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.744 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] disk_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.745 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] enable_new_services = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.745 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] enabled_apis = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.745 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] enabled_ssl_apis = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.745 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] flat_injected = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.745 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] force_config_drive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.745 229933 DEBUG 
oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] force_raw_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.745 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.745 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.746 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] host = np0005625202.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.746 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] initial_cpu_allocation_ratio = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.746 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] initial_disk_allocation_ratio = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.746 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] initial_ram_allocation_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.746 229933 DEBUG oslo_service.service [None 
req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] injected_network_template = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.746 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] instance_build_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.747 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] instance_delete_interval = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.747 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.747 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] instance_name_template = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.747 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] instance_usage_audit = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.747 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] instance_usage_audit_period = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.747 229933 DEBUG 
oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.747 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] instances_path = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.748 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.748 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.748 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] live_migration_retry_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.748 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.748 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.748 229933 DEBUG oslo_service.service [None 
req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.748 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.749 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.749 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.749 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.749 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] log_rotation_type = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.749 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.749 229933 DEBUG 
oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.749 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.749 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.750 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.750 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] long_rpc_timeout = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.750 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] max_concurrent_builds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.750 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] 
max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.750 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] max_concurrent_snapshots = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.750 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] max_local_block_devices = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.750 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] max_logfile_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.751 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] max_logfile_size_mb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.751 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.751 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] metadata_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.751 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] metadata_listen_port = 8775 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.751 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] metadata_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.751 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] migrate_max_retries = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.751 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] mkisofs_cmd = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.752 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] my_block_storage_ip = 192.168.122.106 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.752 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] my_ip = 192.168.122.106 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.752 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] network_allocate_retries = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.752 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.752 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] osapi_compute_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.752 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] osapi_compute_listen_port = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.752 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] osapi_compute_unique_server_name_scope = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.753 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] osapi_compute_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.753 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] password_length = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.753 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] periodic_enable = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.753 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] periodic_fuzzy_delay = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 
04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.753 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] pointer_model = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.754 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] preallocate_images = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.754 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.754 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] pybasedir = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.754 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] ram_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.754 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.754 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.754 229933 
DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.754 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] reboot_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.755 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] reclaim_instance_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.755 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] record = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.755 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] reimage_timeout_per_gb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.755 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] report_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.755 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] rescue_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.755 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] reserved_host_cpus = 0 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.756 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] reserved_host_disk_mb = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.756 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] reserved_host_memory_mb = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.756 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] reserved_huge_pages = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.756 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] resize_confirm_window = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.756 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] resize_fs_using_block_device = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.756 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.756 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] rootwrap_config = /etc/nova/rootwrap.conf log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.757 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] rpc_response_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.757 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] run_external_periodic_tasks = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.757 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.757 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.757 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.757 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.757 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] service_down_time = 60 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.757 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] servicegroup_driver = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.758 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] shelved_offload_time = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.758 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] shelved_poll_interval = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.758 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.758 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] source_is_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.758 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] ssl_only = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.758 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] state_path = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost 
nova_compute[229929]: 2026-02-20 09:24:12.758 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] sync_power_state_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.759 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] sync_power_state_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.759 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.759 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] tempdir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.759 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] timeout_nbd = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.759 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.759 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] update_resources_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.759 229933 DEBUG oslo_service.service [None 
req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] use_cow_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.760 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.760 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.760 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.760 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] use_rootwrap_daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.760 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.760 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.760 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vcpu_pin_set = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.761 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vif_plugging_is_fatal = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.761 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vif_plugging_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.761 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] virt_mkfs = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.761 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] volume_usage_poll_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.761 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.761 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] web = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.761 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 
04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.762 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_concurrency.lock_path = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.762 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.762 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.762 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.762 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.762 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.762 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] api.auth_strategy = keystone 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.763 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] api.compute_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.763 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.763 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] api.dhcp_domain = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.763 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] api.enable_instance_password = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.763 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] api.glance_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.763 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.763 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] 
api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.764 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.764 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.764 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] api.local_metadata_per_cell = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.764 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] api.max_limit = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.764 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] api.metadata_cache_expiration = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.764 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] api.neutron_default_tenant_id = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.764 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] 
api.use_forwarded_for = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.765 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] api.use_neutron_default_nets = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.765 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.765 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.765 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.765 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] api.vendordata_dynamic_ssl_certfile = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.765 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.765 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] 
api.vendordata_jsonfile_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.766 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] api.vendordata_providers = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.766 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cache.backend = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.766 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cache.backend_argument = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.766 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cache.config_prefix = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.766 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cache.dead_timeout = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.766 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cache.debug_cache_backend = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.766 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cache.enable_retry_client = False 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.766 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cache.enable_socket_keepalive = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.767 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cache.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.767 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cache.expiration_time = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.767 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.767 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cache.hashclient_retry_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.767 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cache.memcache_dead_retry = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.767 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cache.memcache_password = log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.767 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.768 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.768 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cache.memcache_pool_maxsize = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.768 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.768 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cache.memcache_sasl_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.768 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cache.memcache_servers = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.768 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cache.memcache_socket_timeout = 1.0 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.768 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cache.memcache_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.769 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cache.proxies = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.769 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cache.retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.769 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cache.retry_delay = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.769 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cache.socket_keepalive_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.769 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cache.socket_keepalive_idle = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.769 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m 
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.769 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cache.tls_allowed_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.770 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cache.tls_cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.770 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cache.tls_certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.770 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cache.tls_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.770 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cache.tls_keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.770 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cinder.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.770 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cinder.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.770 229933 DEBUG 
oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cinder.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.771 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cinder.catalog_info = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.771 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cinder.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.771 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cinder.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.771 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cinder.cross_az_attach = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.771 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cinder.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.771 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cinder.endpoint_template = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.771 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e 
- - - - - -] cinder.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.771 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cinder.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.772 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cinder.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.772 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cinder.os_region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.772 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cinder.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.772 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cinder.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.772 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.772 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] compute.cpu_dedicated_set = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.772 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] compute.cpu_shared_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.773 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.773 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.773 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.773 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.773 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.773 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] compute.provider_config_location = 
/etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.773 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.773 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.774 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] compute.vmdk_allowed_types = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.774 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] conductor.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.774 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] console.allowed_origins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.774 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] console.ssl_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.774 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] 
console.ssl_minimum_version = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.774 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] consoleauth.token_ttl = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.775 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cyborg.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.775 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cyborg.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.775 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cyborg.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.775 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cyborg.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.775 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cyborg.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.775 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cyborg.endpoint_override = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.775 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cyborg.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.775 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cyborg.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.776 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cyborg.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.776 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cyborg.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.776 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cyborg.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.776 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cyborg.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.776 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cyborg.service_type = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost 
nova_compute[229929]: 2026-02-20 09:24:12.776 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cyborg.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.776 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cyborg.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.776 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.777 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cyborg.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.777 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cyborg.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.777 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] cyborg.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.777 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.777 229933 
DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.777 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.777 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.778 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.778 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.778 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.778 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.778 229933 DEBUG oslo_service.service [None 
req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.778 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.778 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.778 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.779 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.779 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.779 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.779 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] 
database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.779 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.779 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.779 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.780 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.780 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] api_database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.780 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] api_database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.780 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] api_database.connection_debug = 0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.780 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] api_database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.780 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.780 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] api_database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.780 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.781 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] api_database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.781 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.781 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] api_database.db_retry_interval = 1 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.781 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] api_database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.781 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] api_database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.781 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] api_database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.782 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] api_database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.782 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] api_database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.782 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.782 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] api_database.pool_timeout = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.782 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] api_database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.782 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] api_database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.782 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.783 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] devices.enabled_mdev_types = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.783 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.783 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.783 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] ephemeral_storage_encryption.key_size = 512 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.783 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] glance.api_servers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.783 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] glance.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.783 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] glance.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.783 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] glance.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.784 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] glance.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.784 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] glance.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.784 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] glance.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 
04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.784 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.784 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.784 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] glance.enable_rbd_download = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.784 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] glance.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.785 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] glance.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.785 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] glance.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.785 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] glance.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 
2026-02-20 09:24:12.785 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] glance.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.785 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] glance.num_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.785 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] glance.rbd_ceph_conf = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.785 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] glance.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.786 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] glance.rbd_pool = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.786 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] glance.rbd_user = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.786 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] glance.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.786 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] glance.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.786 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] glance.service_type = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.786 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] glance.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.787 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] glance.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.787 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.787 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] glance.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.787 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] glance.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.787 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.787 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] glance.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.787 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] guestfs.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.788 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] hyperv.config_drive_cdrom = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.788 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.788 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] hyperv.dynamic_memory_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.788 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.788 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] hyperv.enable_remotefx = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.788 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] hyperv.instances_path_share = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.788 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] hyperv.iscsi_initiator_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.789 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] hyperv.limit_cpu_features = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.789 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.789 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.789 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.789 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.789 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] hyperv.qemu_img_cmd = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.789 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] hyperv.use_multipath_io = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.790 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.790 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.790 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] hyperv.vswitch_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.790 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.790 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] mks.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.790 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] mks.mksproxy_base_url = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.791 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] image_cache.manager_interval = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.791 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.791 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.791 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.791 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.791 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] image_cache.subdirectory_name = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.791 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] ironic.api_max_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.791 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] ironic.api_retry_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.792 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.792 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.792 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] ironic.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.792 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] ironic.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.792 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] ironic.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.792 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] ironic.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.792 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.793 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.793 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] ironic.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.793 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] ironic.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.793 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] ironic.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.793 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.793 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] ironic.partition_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.793 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] ironic.peer_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.793 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.794 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.794 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] ironic.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.794 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] ironic.service_type = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.794 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.794 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] ironic.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.794 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.794 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.795 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] ironic.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.795 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.795 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] key_manager.backend = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.795 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] key_manager.fixed_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.795 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] barbican.auth_endpoint = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.795 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] barbican.barbican_api_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.795 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] barbican.barbican_endpoint = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.795 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.796 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] barbican.barbican_region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.796 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] barbican.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.796 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] barbican.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.796 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] barbican.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.796 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] barbican.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.796 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] barbican.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.797 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] barbican.number_of_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.797 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] barbican.retry_delay = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.797 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.797 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] barbican.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.797 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] barbican.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.797 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] barbican.verify_ssl = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.797 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] barbican.verify_ssl_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.797 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.798 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.798 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] barbican_service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.798 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.798 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.798 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.798 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] barbican_service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.798 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.798 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] barbican_service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.799 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vault.approle_role_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.799 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vault.approle_secret_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.799 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vault.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.799 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vault.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.799 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vault.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.799 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vault.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.799 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vault.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.800 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vault.kv_mountpoint = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.800 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vault.kv_version = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.800 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vault.namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.800 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vault.root_token_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.800 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vault.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.800 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vault.ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.800 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vault.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.801 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vault.use_ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.801 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vault.vault_url = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.801 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] keystone.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.801 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] keystone.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.801 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] keystone.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.801 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] keystone.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.801 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] keystone.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.801 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] keystone.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.802 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] keystone.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.802 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] keystone.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.802 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] keystone.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.802 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] keystone.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.802 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] keystone.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.802 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] keystone.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.802 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] keystone.service_type = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.803 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] keystone.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.803 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] keystone.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.803 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.803 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] keystone.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.803 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] keystone.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.803 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] keystone.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.803 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.connection_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.804 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.cpu_mode = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.804 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.cpu_model_extra_flags = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.804 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.cpu_models = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.804 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.804 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.804 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.cpu_power_management = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.804 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.805 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.805 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.device_detach_timeout = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.805 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.disk_cachemodes = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m 
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.805 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.disk_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.805 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.enabled_perf_events = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.805 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.file_backed_memory = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.805 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.gid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.805 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.hw_disk_discard = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.806 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.hw_machine_type = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.806 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.images_rbd_ceph_conf = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost 
nova_compute[229929]: 2026-02-20 09:24:12.806 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.806 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.806 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.806 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.images_rbd_pool = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.806 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.images_type = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.807 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.images_volume_group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.807 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.inject_key = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost 
nova_compute[229929]: 2026-02-20 09:24:12.807 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.inject_partition = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.807 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.807 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.iscsi_iface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.807 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.iser_use_multipath = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.807 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.808 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.808 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 
2026-02-20 09:24:12.808 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.808 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.808 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.808 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.808 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.808 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.live_migration_scheme = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.809 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 
04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.809 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.809 229933 WARNING oslo_config.cfg [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal ( Feb 20 04:24:12 localhost nova_compute[229929]: live_migration_uri is deprecated for removal in favor of two other options that Feb 20 04:24:12 localhost nova_compute[229929]: allow to change live migration scheme and target URI: ``live_migration_scheme`` Feb 20 04:24:12 localhost nova_compute[229929]: and ``live_migration_inbound_addr`` respectively. Feb 20 04:24:12 localhost nova_compute[229929]: ). Its value may be silently ignored in the future.#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.809 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.live_migration_uri = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.809 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.live_migration_with_native_tls = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.809 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.max_queues = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.810 229933 DEBUG 
oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.810 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.nfs_mount_options = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.810 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.nfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.810 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.810 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.num_iser_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.810 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.810 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.811 229933 DEBUG 
oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.num_pcie_ports = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.811 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.num_volume_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.811 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.pmem_namespaces = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.811 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.quobyte_client_cfg = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.811 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.811 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.811 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.812 229933 DEBUG oslo_service.service [None 
req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.812 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.rbd_secret_uuid = a8557ee9-b55d-5519-942c-cf8f6172f1d8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.812 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.rbd_user = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.812 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.812 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.812 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.rescue_image_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.812 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.rescue_kernel_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.813 229933 DEBUG oslo_service.service 
[None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.rescue_ramdisk_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.813 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.rng_dev_path = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.813 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.rx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.813 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.smbfs_mount_options = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.813 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.813 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.snapshot_compression = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.813 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.snapshot_image_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.814 229933 DEBUG oslo_service.service [None 
req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.snapshots_directory = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.814 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.814 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.swtpm_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.814 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.swtpm_group = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.814 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.swtpm_user = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.814 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.sysinfo_serial = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.814 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.tx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.815 229933 DEBUG oslo_service.service [None 
req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.uid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.815 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.815 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.virt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.815 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.volume_clear = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.815 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.volume_clear_size = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.815 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.volume_use_multipath = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.815 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.vzstorage_cache_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.816 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] 
libvirt.vzstorage_log_path = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.816 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.vzstorage_mount_group = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.816 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.vzstorage_mount_opts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.816 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.vzstorage_mount_perms = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.816 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.816 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.vzstorage_mount_user = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.816 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.817 229933 DEBUG oslo_service.service [None 
req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] neutron.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.817 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] neutron.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.817 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] neutron.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.817 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] neutron.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.817 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] neutron.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.817 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] neutron.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.817 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] neutron.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.818 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] 
neutron.default_floating_pool = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.818 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] neutron.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.818 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.818 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] neutron.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.818 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] neutron.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.818 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] neutron.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.818 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] neutron.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.818 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.819 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] neutron.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.819 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] neutron.ovs_bridge = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.819 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] neutron.physnets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.819 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] neutron.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.819 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.819 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] neutron.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.820 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] neutron.service_type = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 
04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.820 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] neutron.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.820 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] neutron.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.820 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.820 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] neutron.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.820 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] neutron.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.820 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] neutron.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.821 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 
2026-02-20 09:24:12.821 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] notifications.default_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.821 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.821 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.821 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.821 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] pci.alias = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.821 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] pci.device_spec = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.822 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] pci.report_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 
2026-02-20 09:24:12.822 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.822 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] placement.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.822 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] placement.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.822 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.822 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.822 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.823 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] placement.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.823 229933 
DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] placement.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.823 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] placement.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.823 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] placement.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.823 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] placement.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.823 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] placement.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.823 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] placement.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.824 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.824 229933 DEBUG oslo_service.service [None 
req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.824 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] placement.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.824 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] placement.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.824 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] placement.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.824 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] placement.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.824 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] placement.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.825 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] placement.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.825 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] 
placement.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.825 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] placement.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.825 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] placement.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.825 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] placement.service_type = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.825 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.825 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] placement.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.826 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.826 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] placement.system_scope = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.826 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.826 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] placement.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.826 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] placement.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.826 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] placement.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.826 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] placement.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.827 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] placement.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.827 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] placement.valid_interfaces = ['internal'] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.827 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] placement.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.827 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] quota.cores = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.827 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.827 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] quota.driver = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.827 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.828 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.828 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] quota.injected_files = 5 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.828 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] quota.instances = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.828 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] quota.key_pairs = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.828 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] quota.metadata_items = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.828 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] quota.ram = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.828 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] quota.recheck_quota = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.829 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] quota.server_group_members = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.829 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] quota.server_groups = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost 
nova_compute[229929]: 2026-02-20 09:24:12.829 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] rdp.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.829 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] rdp.html5_proxy_base_url = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.829 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.829 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.830 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.830 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.830 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] scheduler.max_attempts = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 
04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.830 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.830 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.830 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.830 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.830 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.831 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] scheduler.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.831 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = 
None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.831 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.831 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.831 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.831 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.831 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.832 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.832 229933 
DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.832 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.832 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.832 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.832 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.832 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.833 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] filter_scheduler.max_instances_per_host = 
50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.833 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.833 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.833 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.833 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.833 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.834 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.834 229933 DEBUG oslo_service.service [None 
req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.834 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.834 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.834 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.834 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] metrics.required = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.834 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] metrics.weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.835 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] metrics.weight_of_unavailable = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 
2026-02-20 09:24:12.835 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] metrics.weight_setting = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.835 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] serial_console.base_url = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.835 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] serial_console.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.835 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] serial_console.port_range = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.835 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.835 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.836 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 
2026-02-20 09:24:12.836 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.836 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] service_user.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.836 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.836 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.836 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.836 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.837 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.837 229933 DEBUG oslo_service.service 
[None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.837 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.837 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.837 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] spice.agent_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.837 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] spice.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.838 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] spice.html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.838 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] spice.html5proxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.838 229933 DEBUG oslo_service.service [None 
req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] spice.html5proxy_port = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.838 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] spice.image_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.838 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] spice.jpeg_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.838 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] spice.playback_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.838 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] spice.server_listen = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.839 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.839 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] spice.streaming_mode = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.839 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - 
- -] spice.zlib_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.839 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] upgrade_levels.baseapi = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.839 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] upgrade_levels.cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.839 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] upgrade_levels.compute = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.839 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] upgrade_levels.conductor = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.839 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] upgrade_levels.scheduler = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.840 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.840 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vendordata_dynamic_auth.auth_type = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.840 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.840 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.840 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.840 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.840 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.841 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.841 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] 
vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.841 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.841 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.841 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vmware.cache_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.841 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vmware.cluster_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.841 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vmware.connection_pool_size = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.842 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vmware.console_delay_seconds = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.842 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vmware.datastore_regex = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.842 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vmware.host_ip = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.842 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.842 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.842 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vmware.host_username = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.842 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.843 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vmware.integration_bridge = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.843 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vmware.maximum_objects = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost 
nova_compute[229929]: 2026-02-20 09:24:12.843 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vmware.pbm_default_policy = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.843 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vmware.pbm_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.843 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vmware.pbm_wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.843 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vmware.serial_log_dir = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.843 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vmware.serial_port_proxy_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.844 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.844 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 
09:24:12.844 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vmware.use_linked_clone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.844 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vmware.vnc_keymap = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.844 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vmware.vnc_port = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.844 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vmware.vnc_port_total = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.844 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vnc.auth_schemes = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.845 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vnc.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.845 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vnc.novncproxy_base_url = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.845 
229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vnc.novncproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.845 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vnc.novncproxy_port = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.845 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vnc.server_listen = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.846 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vnc.server_proxyclient_address = 192.168.122.106 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.846 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vnc.vencrypt_ca_certs = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.846 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vnc.vencrypt_client_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.846 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vnc.vencrypt_client_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.846 229933 DEBUG oslo_service.service [None 
req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] workarounds.disable_compute_service_check_for_ffu = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.846 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.846 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.847 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.847 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.847 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] workarounds.disable_rootwrap = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.847 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 
2026-02-20 09:24:12.847 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.847 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.847 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.848 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.848 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.848 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.848 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.848 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.848 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.848 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.848 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.849 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.849 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.849 229933 DEBUG oslo_service.service [None 
req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.849 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] wsgi.api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.849 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] wsgi.client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.849 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] wsgi.default_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.849 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] wsgi.keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.850 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] wsgi.max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.850 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] wsgi.secure_proxy_ssl_header = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.850 229933 DEBUG oslo_service.service [None 
req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] wsgi.ssl_ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.850 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] wsgi.ssl_cert_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.850 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] wsgi.ssl_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.850 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] wsgi.tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.850 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] wsgi.wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.851 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] zvm.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.851 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] zvm.cloud_connector_url = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.851 229933 DEBUG oslo_service.service [None 
req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] zvm.image_tmp_path = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.851 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] zvm.reachable_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.851 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.851 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_policy.enforce_scope = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.851 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.852 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.852 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.852 229933 DEBUG oslo_service.service [None 
req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.852 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.852 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.852 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.852 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.853 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.853 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 
localhost nova_compute[229929]: 2026-02-20 09:24:12.853 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] remote_debug.host = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.853 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] remote_debug.port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.853 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.853 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.853 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.854 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.854 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 
04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.854 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.854 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.854 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.854 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.854 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.855 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.855 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] 
oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.855 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.855 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.855 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.855 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.855 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.856 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.856 
229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.856 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.856 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.856 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.856 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.856 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.857 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.857 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.857 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.857 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.857 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.857 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.857 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.858 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_messaging_notifications.driver = ['noop'] 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.858 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.858 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.858 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.858 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_limit.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.858 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_limit.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.858 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_limit.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.859 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] 
oslo_limit.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.859 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_limit.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.859 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_limit.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.859 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_limit.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.859 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.859 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_limit.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.859 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.860 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_limit.domain_id = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.860 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_limit.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.860 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_limit.endpoint_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.860 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_limit.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.860 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_limit.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.860 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_limit.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.860 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_limit.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.861 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_limit.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m 
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.861 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_limit.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.861 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_limit.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.861 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.861 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_limit.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.861 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_limit.project_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.861 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_limit.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.861 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_limit.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 
2026-02-20 09:24:12.862 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_limit.service_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.862 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_limit.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.862 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.862 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.862 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_limit.system_scope = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.862 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_limit.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.862 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_limit.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.863 229933 DEBUG 
oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_limit.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.863 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_limit.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.863 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_limit.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.863 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_limit.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.863 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_limit.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.863 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_limit.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.863 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.863 229933 DEBUG oslo_service.service [None 
req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.864 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] oslo_reports.log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.864 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.864 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.864 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.864 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.864 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost 
nova_compute[229929]: 2026-02-20 09:24:12.864 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.865 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.865 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vif_plug_ovs_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.865 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.865 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.865 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.865 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] vif_plug_ovs_privileged.user = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.865 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.866 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.866 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] os_vif_linux_bridge.iptables_bottom_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.866 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.866 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] os_vif_linux_bridge.iptables_top_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.866 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.866 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] 
os_vif_linux_bridge.use_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.866 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.867 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] os_vif_ovs.isolate_vif = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.867 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] os_vif_ovs.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.867 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] os_vif_ovs.ovs_vsctl_timeout = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.867 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] os_vif_ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.867 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] os_vif_ovs.ovsdb_interface = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.867 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] 
os_vif_ovs.per_port_bridge = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.867 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] os_brick.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.868 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.868 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.868 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] privsep_osbrick.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.868 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] privsep_osbrick.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.868 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.868 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] privsep_osbrick.logger_name = 
os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.868 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.868 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] privsep_osbrick.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.869 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] nova_sys_admin.capabilities = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.869 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] nova_sys_admin.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.869 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] nova_sys_admin.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.869 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] nova_sys_admin.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.869 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] 
nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.869 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] nova_sys_admin.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.869 229933 DEBUG oslo_service.service [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.870 229933 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260127144738.eaa65f0.el9)#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.891 229933 INFO nova.virt.node [None req-06fd4389-e8e3-4cb0-9015-66b633ef36ec - - - - - -] Determined node identity 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 from /var/lib/nova/compute_id#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.891 229933 DEBUG nova.virt.libvirt.host [None req-06fd4389-e8e3-4cb0-9015-66b633ef36ec - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.892 229933 DEBUG nova.virt.libvirt.host [None req-06fd4389-e8e3-4cb0-9015-66b633ef36ec - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.892 229933 DEBUG nova.virt.libvirt.host [None req-06fd4389-e8e3-4cb0-9015-66b633ef36ec - - - - - -] Starting connection event dispatch thread initialize 
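The dump above ends with the `********` separator that oslo_config prints after logging every effective option. Each record follows a fixed shape (`<group>.<option> = <value> log_opt_values <path>:<line>`), so the whole block can be turned back into a dictionary with a small regex. This is an illustrative helper, not part of Nova or oslo.config; the `parse_opt_values` name and the regex are assumptions based only on the line format visible in this log.

```python
import re

# Pull "<group>.<option> = <value>" pairs out of oslo_config log_opt_values
# DEBUG records like the ones above. A record looks like:
#   ... DEBUG oslo_service.service [None req-...] oslo_limit.username = nova
#       log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
OPT_RE = re.compile(
    r"\]\s+"                     # end of the request-context bracket
    r"(?P<name>[A-Za-z0-9_.]+)"  # dotted option name, e.g. oslo_limit.username
    r"\s+=\s*"                   # '=' (value may be empty, e.g. ssl_ca_file)
    r"(?P<value>.*?)"            # value as logged ('****' for redacted secrets)
    r"\s+log_opt_values\s"       # trailing marker emitted by oslo_config
)

def parse_opt_values(log_text: str) -> dict:
    """Return {dotted_option_name: raw_string_value} for every record found."""
    return {m.group("name"): m.group("value") for m in OPT_RE.finditer(log_text)}

sample = (
    "2026-02-20 09:24:12.853 229933 DEBUG oslo_service.service "
    "[None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] "
    "oslo_messaging_rabbit.amqp_durable_queues = True "
    "log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609"
)
opts = parse_opt_values(sample)
```

Values come back as raw strings (`"True"`, `"['noop']"`, `"****"`); interpreting their types would require the option schemas, which the log does not carry.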
/usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.892 229933 DEBUG nova.virt.libvirt.host [None req-06fd4389-e8e3-4cb0-9015-66b633ef36ec - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.901 229933 DEBUG nova.virt.libvirt.host [None req-06fd4389-e8e3-4cb0-9015-66b633ef36ec - - - - - -] Registering for lifecycle events _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.904 229933 DEBUG nova.virt.libvirt.host [None req-06fd4389-e8e3-4cb0-9015-66b633ef36ec - - - - - -] Registering for connection events: _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.905 229933 INFO nova.virt.libvirt.driver [None req-06fd4389-e8e3-4cb0-9015-66b633ef36ec - - - - - -] Connection event '1' reason 'None'#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.911 229933 INFO nova.virt.libvirt.host [None req-06fd4389-e8e3-4cb0-9015-66b633ef36ec - - - - - -] Libvirt host capabilities
[libvirt capabilities XML elided: the element markup was stripped in this capture, leaving only text nodes interleaved with syslog prefixes. Recoverable values: host UUID 61530aa3-6295-40fa-9f19-edfd227b2bca; arch x86_64; host CPU model EPYC-Rome-v4, vendor AMD; migration transports tcp and rdma; memory 16116612 (KiB, per the usual capabilities layout), page counts 4029153, 0, 0; security models selinux (DOI 0, labels system_u:system_r:svirt_t:s0 and system_u:system_r:svirt_tcg_t:s0) and dac (DOI 0, +107:+107); hvm guest support at wordsize 32 and wordsize 64 via emulator /usr/libexec/qemu-kvm with machine types pc-i440fx-rhel7.6.0 (canonical for pc) and pc-q35-rhel7.6.0 through pc-q35-rhel9.8.0 (canonical for q35).]
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.917 229933 DEBUG nova.virt.libvirt.volume.mount [None req-06fd4389-e8e3-4cb0-9015-66b633ef36ec - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.919 229933 DEBUG nova.virt.libvirt.host [None req-06fd4389-e8e3-4cb0-9015-66b633ef36ec - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.925 229933 DEBUG nova.virt.libvirt.host [None req-06fd4389-e8e3-4cb0-9015-66b633ef36ec - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
[domainCapabilities XML elided (markup stripped): emulator path /usr/libexec/qemu-kvm, domain type kvm, machine pc-i440fx-rhel7.6.0, arch i686; the dump continues in the following records.]
[domainCapabilities XML continues (markup stripped). Recoverable values: loader /usr/share/OVMF/OVMF_CODE.secboot.fd, loader types rom and pflash, readonly yes/no, secure no; CPU mode toggles on/off; host-model CPU EPYC-Rome, vendor AMD; custom CPU models include 486, 486-v1, Broadwell (plus -IBRS, -noTSX, -noTSX-IBRS, -v1 through -v4), Cascadelake-Server (plus -noTSX, -v1 through -v5), ClearwaterForest (plus -v1), Conroe, Conroe-v1, Cooperlake (plus -v1, -v2), Denverton (plus -v1 through -v3), Dhyana (plus -v1, -v2), EPYC, EPYC-Genoa (plus -v1, -v2), EPYC-IBPB, EPYC-Milan (plus -v1 through -v3); the model list is truncated here and continues in the following records.]
20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: EPYC-Rome Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: EPYC-Rome-v1 Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: EPYC-Rome-v2 Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: EPYC-Rome-v3 Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: EPYC-Rome-v4 Feb 20 04:24:12 localhost nova_compute[229929]: EPYC-Rome-v5 Feb 20 04:24:12 localhost nova_compute[229929]: EPYC-Turin Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost 
nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: EPYC-Turin-v1 Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost 
nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: EPYC-v1 Feb 20 04:24:12 localhost nova_compute[229929]: EPYC-v2 Feb 20 04:24:12 localhost nova_compute[229929]: EPYC-v3 Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: EPYC-v4 Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: EPYC-v5 Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: GraniteRapids Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 
20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: GraniteRapids-v1 Feb 20 04:24:12 
localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: 
Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: GraniteRapids-v2 Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 
04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: GraniteRapids-v3 Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 
localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Haswell Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost 
nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Haswell-IBRS Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Haswell-noTSX Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Haswell-noTSX-IBRS Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Haswell-v1 Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Haswell-v2 Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Haswell-v3 Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost 
nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Haswell-v4 Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Icelake-Server Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Icelake-Server-noTSX Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 
04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Icelake-Server-v1 Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Icelake-Server-v2 Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: 
Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Icelake-Server-v3 Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Icelake-Server-v4 Feb 20 04:24:12 localhost 
nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Icelake-Server-v5 Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost 
Feb 20 04:24:12 localhost nova_compute[229929]: libvirt domain capabilities report (XML markup stripped in this capture; recoverable values follow, field names inferred from the libvirt domainCapabilities schema)
Feb 20 04:24:12 localhost nova_compute[229929]: CPU models (list truncated at start): … Icelake-Server-v6, Icelake-Server-v7, IvyBridge, IvyBridge-IBRS, IvyBridge-v1, IvyBridge-v2, KnightsMill, KnightsMill-v1, Nehalem, Nehalem-IBRS, Nehalem-v1, Nehalem-v2, Opteron_G1, Opteron_G1-v1, Opteron_G2, Opteron_G2-v1, Opteron_G3, Opteron_G3-v1, Opteron_G4, Opteron_G4-v1, Opteron_G5, Opteron_G5-v1, Penryn, Penryn-v1, SandyBridge, SandyBridge-IBRS, SandyBridge-v1, SandyBridge-v2, SapphireRapids, SapphireRapids-v1, SapphireRapids-v2, SapphireRapids-v3, SapphireRapids-v4, SierraForest, SierraForest-v1, SierraForest-v2, SierraForest-v3, Skylake-Client, Skylake-Client-IBRS, Skylake-Client-noTSX-IBRS, Skylake-Client-v1, Skylake-Client-v2, Skylake-Client-v3, Skylake-Client-v4, Skylake-Server, Skylake-Server-IBRS, Skylake-Server-noTSX-IBRS, Skylake-Server-v1, Skylake-Server-v2, Skylake-Server-v3, Skylake-Server-v4, Skylake-Server-v5, Snowridge, Snowridge-v1, Snowridge-v2, Snowridge-v3, Snowridge-v4, Westmere, Westmere-IBRS, Westmere-v1, Westmere-v2, athlon, athlon-v1, core2duo, core2duo-v1, coreduo, coreduo-v1, kvm32, kvm32-v1, kvm64, kvm64-v1, n270, n270-v1, pentium, pentium-v1, pentium2, pentium2-v1, pentium3, pentium3-v1, phenom, phenom-v1, qemu32, qemu32-v1, qemu64, qemu64-v1
Feb 20 04:24:12 localhost nova_compute[229929]: memory backing source types: file, anonymous, memfd
Feb 20 04:24:12 localhost nova_compute[229929]: disk device types: disk, cdrom, floppy, lun
Feb 20 04:24:12 localhost nova_compute[229929]: disk bus types: ide, fdc, scsi, virtio, usb, sata
Feb 20 04:24:12 localhost nova_compute[229929]: disk models: virtio, virtio-transitional, virtio-non-transitional
Feb 20 04:24:12 localhost nova_compute[229929]: graphics types: vnc, egl-headless, dbus
Feb 20 04:24:12 localhost nova_compute[229929]: hostdev modes: subsystem; startup policies: default, mandatory, requisite, optional; subsystem types: usb, pci, scsi
Feb 20 04:24:12 localhost nova_compute[229929]: rng models: virtio, virtio-transitional, virtio-non-transitional; rng backend models: random, egd, builtin
Feb 20 04:24:12 localhost nova_compute[229929]: filesystem driver types: path, handle, virtiofs
Feb 20 04:24:12 localhost nova_compute[229929]: tpm models: tpm-tis, tpm-crb; tpm backend models: emulator … (truncated)
localhost nova_compute[229929]: external Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: 2.0 Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: usb Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: pty Feb 20 04:24:12 localhost nova_compute[229929]: unix Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: qemu Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: builtin Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: default Feb 20 04:24:12 localhost nova_compute[229929]: passt Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: isa Feb 20 04:24:12 localhost nova_compute[229929]: hyperv Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost 
nova_compute[229929]: null Feb 20 04:24:12 localhost nova_compute[229929]: vc Feb 20 04:24:12 localhost nova_compute[229929]: pty Feb 20 04:24:12 localhost nova_compute[229929]: dev Feb 20 04:24:12 localhost nova_compute[229929]: file Feb 20 04:24:12 localhost nova_compute[229929]: pipe Feb 20 04:24:12 localhost nova_compute[229929]: stdio Feb 20 04:24:12 localhost nova_compute[229929]: udp Feb 20 04:24:12 localhost nova_compute[229929]: tcp Feb 20 04:24:12 localhost nova_compute[229929]: unix Feb 20 04:24:12 localhost nova_compute[229929]: qemu-vdagent Feb 20 04:24:12 localhost nova_compute[229929]: dbus Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: relaxed Feb 20 04:24:12 localhost nova_compute[229929]: vapic Feb 20 04:24:12 localhost nova_compute[229929]: spinlocks Feb 20 04:24:12 localhost nova_compute[229929]: vpindex Feb 20 04:24:12 localhost nova_compute[229929]: runtime Feb 20 04:24:12 localhost nova_compute[229929]: synic Feb 20 04:24:12 localhost nova_compute[229929]: stimer Feb 20 04:24:12 localhost nova_compute[229929]: reset Feb 20 04:24:12 localhost nova_compute[229929]: vendor_id Feb 20 04:24:12 localhost nova_compute[229929]: frequencies Feb 20 
04:24:12 localhost nova_compute[229929]: reenlightenment Feb 20 04:24:12 localhost nova_compute[229929]: tlbflush Feb 20 04:24:12 localhost nova_compute[229929]: ipi Feb 20 04:24:12 localhost nova_compute[229929]: avic Feb 20 04:24:12 localhost nova_compute[229929]: emsr_bitmap Feb 20 04:24:12 localhost nova_compute[229929]: xmm_input Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: 4095 Feb 20 04:24:12 localhost nova_compute[229929]: on Feb 20 04:24:12 localhost nova_compute[229929]: off Feb 20 04:24:12 localhost nova_compute[229929]: off Feb 20 04:24:12 localhost nova_compute[229929]: Linux KVM Hv Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Feb 20 04:24:12 localhost nova_compute[229929]: 2026-02-20 09:24:12.932 229933 DEBUG nova.virt.libvirt.host [None req-06fd4389-e8e3-4cb0-9015-66b633ef36ec - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: /usr/libexec/qemu-kvm Feb 20 04:24:12 localhost nova_compute[229929]: kvm Feb 20 04:24:12 localhost nova_compute[229929]: pc-q35-rhel9.8.0 Feb 20 04:24:12 localhost nova_compute[229929]: i686 Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: /usr/share/OVMF/OVMF_CODE.secboot.fd Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 
04:24:12 localhost nova_compute[229929]: rom Feb 20 04:24:12 localhost nova_compute[229929]: pflash Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: yes Feb 20 04:24:12 localhost nova_compute[229929]: no Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: no Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: on Feb 20 04:24:12 localhost nova_compute[229929]: off Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:12 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: on Feb 20 04:24:13 localhost nova_compute[229929]: off Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: EPYC-Rome Feb 20 04:24:13 localhost nova_compute[229929]: AMD Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost 
nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: 486 Feb 20 04:24:13 localhost nova_compute[229929]: 486-v1 Feb 20 04:24:13 localhost nova_compute[229929]: Broadwell Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Broadwell-IBRS Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Broadwell-noTSX Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Broadwell-noTSX-IBRS Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost 
nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Broadwell-v1 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Broadwell-v2 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Broadwell-v3 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Broadwell-v4 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Cascadelake-Server Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost 
nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Cascadelake-Server-noTSX Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Cascadelake-Server-v1 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Cascadelake-Server-v2 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost 
nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Cascadelake-Server-v3 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Cascadelake-Server-v4 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Cascadelake-Server-v5 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost 
nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: ClearwaterForest Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost 
nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: ClearwaterForest-v1 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost 
nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Conroe Feb 20 04:24:13 localhost nova_compute[229929]: Conroe-v1 Feb 20 04:24:13 localhost nova_compute[229929]: Cooperlake Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Cooperlake-v1 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 
localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Cooperlake-v2 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Denverton Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Denverton-v1 Feb 20 
04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Denverton-v2 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Denverton-v3 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Dhyana Feb 20 04:24:13 localhost nova_compute[229929]: Dhyana-v1 Feb 20 04:24:13 localhost nova_compute[229929]: Dhyana-v2 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: EPYC Feb 20 04:24:13 localhost nova_compute[229929]: EPYC-Genoa Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 
localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: EPYC-Genoa-v1 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: EPYC-Genoa-v2 Feb 20 04:24:13 
Feb 20 04:24:13 localhost nova_compute[229929]: [libvirt/QEMU CPU model capability list; thousands of interleaved empty "Feb 20 04:24:13 localhost nova_compute[229929]:" prefix lines collapsed — only the reported model names are recoverable] EPYC-IBPB, EPYC-Milan, EPYC-Milan-v1, EPYC-Milan-v2, EPYC-Milan-v3, EPYC-Rome, EPYC-Rome-v1, EPYC-Rome-v2, EPYC-Rome-v3, EPYC-Rome-v4, EPYC-Rome-v5, EPYC-Turin, EPYC-Turin-v1, EPYC-v1, EPYC-v2, EPYC-v3, EPYC-v4, EPYC-v5, GraniteRapids, GraniteRapids-v1, GraniteRapids-v2, GraniteRapids-v3, Haswell, Haswell-IBRS, Haswell-noTSX, Haswell-noTSX-IBRS, Haswell-v1, Haswell-v2, Haswell-v3, Haswell-v4, Icelake-Server, Icelake-Server-noTSX, Icelake-Server-v1, Icelake-Server-v2, Icelake-Server-v3, Icelake-Server-v4, Icelake-Server-v5, Icelake-Server-v6, Icelake-Server-v7, IvyBridge, IvyBridge-IBRS, IvyBridge-v1, IvyBridge-v2, KnightsMill, KnightsMill-v1, Nehalem, Nehalem-IBRS, Nehalem-v1, Nehalem-v2, Opteron_G1, Opteron_G1-v1, Opteron_G2, Opteron_G2-v1, Opteron_G3, Opteron_G3-v1, Opteron_G4, Opteron_G4-v1, Opteron_G5, Opteron_G5-v1, Penryn, Penryn-v1, SandyBridge, SandyBridge-IBRS, SandyBridge-v1, SandyBridge-v2, SapphireRapids, SapphireRapids-v1
nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: SapphireRapids-v2 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost 
nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: SapphireRapids-v3 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost 
nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: SapphireRapids-v4 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost 
nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: SierraForest Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost 
nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: SierraForest-v1 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost 
nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: SierraForest-v2 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost 
nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: SierraForest-v3 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost 
nova_compute[229929]: Skylake-Client Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Skylake-Client-IBRS Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Skylake-Client-noTSX-IBRS Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Skylake-Client-v1 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Skylake-Client-v2 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Skylake-Client-v3 Feb 20 04:24:13 localhost 
nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Skylake-Client-v4 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Skylake-Server Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Skylake-Server-IBRS Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: 
Skylake-Server-noTSX-IBRS Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Skylake-Server-v1 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Skylake-Server-v2 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Skylake-Server-v3 
Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Skylake-Server-v4 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Skylake-Server-v5 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Snowridge Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost 
nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Snowridge-v1 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Snowridge-v2 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Snowridge-v3 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Snowridge-v4 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 
20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Westmere Feb 20 04:24:13 localhost nova_compute[229929]: Westmere-IBRS Feb 20 04:24:13 localhost nova_compute[229929]: Westmere-v1 Feb 20 04:24:13 localhost nova_compute[229929]: Westmere-v2 Feb 20 04:24:13 localhost nova_compute[229929]: athlon Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: athlon-v1 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: core2duo Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: core2duo-v1 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: coreduo Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: coreduo-v1 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: kvm32 Feb 20 04:24:13 localhost nova_compute[229929]: kvm32-v1 Feb 20 
04:24:13 localhost nova_compute[229929]: [libvirt domain-capabilities XML; element tags lost in log capture, empty repeated syslog prefixes stripped. Recoverable values: CPU models kvm64, n270, pentium, pentium2, pentium3, phenom, qemu32, qemu64 (each also as -v1); memory backing: file, anonymous, memfd; disk devices: disk, cdrom, floppy, lun; disk buses: fdc, scsi, virtio, usb, sata; virtio model variants: virtio, virtio-transitional, virtio-non-transitional; graphics: vnc, egl-headless, dbus; feature policies: default, mandatory, requisite, optional; hostdev subsystems: usb, pci, scsi; rng backends: random, egd, builtin; filesystem drivers: path, handle, virtiofs; TPM models: tpm-tis, tpm-crb (backends: emulator, external; version 2.0); redirdev bus: usb; channel types: pty, unix, qemu-vdagent, dbus; backends: qemu, builtin; network backends: default, passt; panic models: isa, hyperv; serial/console types: null, vc, pty, dev, file, pipe, stdio, udp, tcp, unix; Hyper-V enlightenments: relaxed, vapic, spinlocks, vpindex, runtime, synic, stimer, reset, vendor_id, frequencies, reenlightenment, tlbflush, ipi, avic, emsr_bitmap, xmm_input; literal values 4095 / on / off / off / "Linux KVM Hv" also present] _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 20 04:24:13 localhost nova_compute[229929]: 2026-02-20 09:24:12.985 229933 DEBUG nova.virt.libvirt.host [None req-06fd4389-e8e3-4cb0-9015-66b633ef36ec - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Feb 20 04:24:13 localhost nova_compute[229929]: 2026-02-20 09:24:12.988 229933 DEBUG nova.virt.libvirt.host [None req-06fd4389-e8e3-4cb0-9015-66b633ef36ec - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Feb 20 04:24:13 localhost nova_compute[229929]: [domain-capabilities XML, tags lost: emulator /usr/libexec/qemu-kvm; domain type kvm; machine pc-i440fx-rhel7.6.0; arch x86_64; OS loader /usr/share/OVMF/OVMF_CODE.secboot.fd (types: rom, pflash; yes/no; no; on/off values present); host CPU model EPYC-Rome, vendor AMD; CPU model list includes: 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1 through Broadwell-v4, Cascadelake-Server, Cascadelake-Server-noTSX, Cascadelake-Server-v1 through Cascadelake-Server-v5, ClearwaterForest, ClearwaterForest-v1, Conroe, Conroe-v1, Cooperlake, Cooperlake-v1, Cooperlake-v2, Denverton, Denverton-v1 through Denverton-v3, Dhyana, Dhyana-v1, Dhyana-v2, EPYC, EPYC-Genoa, EPYC-Genoa-v1, EPYC-Genoa-v2, EPYC-IBPB, EPYC-Milan, EPYC-Milan-v1 through EPYC-Milan-v3, EPYC-Rome, EPYC-Rome-v1 through EPYC-Rome-v5, EPYC-Turin, EPYC-Turin-v1, (list continues)] Feb 20 04:24:13 localhost
nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: EPYC-v1 Feb 20 04:24:13 localhost nova_compute[229929]: EPYC-v2 Feb 20 04:24:13 localhost nova_compute[229929]: EPYC-v3 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: EPYC-v4 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: EPYC-v5 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: GraniteRapids Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 
20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost 
nova_compute[229929]: GraniteRapids-v1 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost 
nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: GraniteRapids-v2 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost 
nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: GraniteRapids-v3 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost 
nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Haswell Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: 
Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Haswell-IBRS Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Haswell-noTSX Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Haswell-noTSX-IBRS Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Haswell-v1 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Haswell-v2 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Haswell-v3 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: 
Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Haswell-v4 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Icelake-Server Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Icelake-Server-noTSX Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost 
nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Icelake-Server-v1 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Icelake-Server-v2 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 
localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Icelake-Server-v3 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost systemd[1]: session-53.scope: Deactivated successfully. Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost systemd[1]: session-53.scope: Consumed 1min 36.924s CPU time. 
Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Icelake-Server-v4 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost systemd-logind[760]: Session 53 logged out. Waiting for processes to exit. 
Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Icelake-Server-v5 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Icelake-Server-v6 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost 
systemd-logind[760]: Removed session 53. Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Icelake-Server-v7 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 
Feb 20 04:24:13 localhost nova_compute[229929]: supported CPU models: IvyBridge, IvyBridge-IBRS, IvyBridge-v1, IvyBridge-v2, KnightsMill, KnightsMill-v1, Nehalem, Nehalem-IBRS, Nehalem-v1, Nehalem-v2, Opteron_G1, Opteron_G1-v1, Opteron_G2, Opteron_G2-v1, Opteron_G3, Opteron_G3-v1, Opteron_G4, Opteron_G4-v1, Opteron_G5, Opteron_G5-v1, Penryn, Penryn-v1, SandyBridge, SandyBridge-IBRS, SandyBridge-v1, SandyBridge-v2, SapphireRapids, SapphireRapids-v1, SapphireRapids-v2, SapphireRapids-v3, SapphireRapids-v4, SierraForest, SierraForest-v1, SierraForest-v2, SierraForest-v3, Skylake-Client, Skylake-Client-IBRS, Skylake-Client-noTSX-IBRS, Skylake-Client-v1, Skylake-Client-v2, Skylake-Client-v3, Skylake-Client-v4, Skylake-Server, Skylake-Server-IBRS, Skylake-Server-noTSX-IBRS, Skylake-Server-v1, Skylake-Server-v2, Skylake-Server-v3, Skylake-Server-v4, Skylake-Server-v5, Snowridge, Snowridge-v1, Snowridge-v2, Snowridge-v3, Snowridge-v4, Westmere, Westmere-IBRS, Westmere-v1, Westmere-v2, athlon, athlon-v1, core2duo, core2duo-v1, coreduo, coreduo-v1, kvm32, kvm32-v1, kvm64, kvm64-v1, n270, n270-v1, pentium, pentium-v1, pentium2, pentium2-v1, pentium3, pentium3-v1, phenom, phenom-v1, qemu32, qemu32-v1, qemu64, qemu64-v1
Feb 20 04:24:13 localhost nova_compute[229929]: memory backing source types: file, anonymous, memfd
Feb 20 04:24:13 localhost nova_compute[229929]: disk device types: disk, cdrom, floppy, lun
Feb 20 04:24:13 localhost nova_compute[229929]: disk buses: ide, fdc, scsi, virtio, usb, sata
Feb 20 04:24:13 localhost nova_compute[229929]: disk models: virtio, virtio-transitional, virtio-non-transitional
Feb 20 04:24:13 localhost nova_compute[229929]: graphics types: vnc, egl-headless, dbus
Feb 20 04:24:13 localhost nova_compute[229929]: hostdev mode: subsystem; startup policies: default, mandatory, requisite, optional; subsystem types: usb, pci, scsi
Feb 20 04:24:13 localhost nova_compute[229929]: rng models: virtio, virtio-transitional, virtio-non-transitional; rng backend models: random, egd, builtin
Feb 20 04:24:13 localhost nova_compute[229929]: filesystem driver types: path, handle, virtiofs
Feb 20 04:24:13 localhost nova_compute[229929]: tpm models: tpm-tis, tpm-crb; tpm backends: emulator, external; backend version: 2.0
Feb 20 04:24:13 localhost nova_compute[229929]: redirdev bus: usb
Feb 20 04:24:13 localhost nova_compute[229929]: channel types: pty, unix
Feb 20 04:24:13 localhost nova_compute[229929]: qemu
Feb 20 04:24:13 localhost nova_compute[229929]: builtin
Feb 20 04:24:13 localhost nova_compute[229929]: interface backends: default, passt
Feb 20 04:24:13 localhost nova_compute[229929]: panic models: isa, hyperv
Feb 20 04:24:13 localhost nova_compute[229929]: character device types: null, vc, pty, dev, file, pipe, stdio, udp, tcp
nova_compute[229929]: unix Feb 20 04:24:13 localhost nova_compute[229929]: qemu-vdagent Feb 20 04:24:13 localhost nova_compute[229929]: dbus Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: relaxed Feb 20 04:24:13 localhost nova_compute[229929]: vapic Feb 20 04:24:13 localhost nova_compute[229929]: spinlocks Feb 20 04:24:13 localhost nova_compute[229929]: vpindex Feb 20 04:24:13 localhost nova_compute[229929]: runtime Feb 20 04:24:13 localhost nova_compute[229929]: synic Feb 20 04:24:13 localhost nova_compute[229929]: stimer Feb 20 04:24:13 localhost nova_compute[229929]: reset Feb 20 04:24:13 localhost nova_compute[229929]: vendor_id Feb 20 04:24:13 localhost nova_compute[229929]: frequencies Feb 20 04:24:13 localhost nova_compute[229929]: reenlightenment Feb 20 04:24:13 localhost nova_compute[229929]: tlbflush Feb 20 04:24:13 localhost nova_compute[229929]: ipi Feb 20 04:24:13 localhost nova_compute[229929]: avic Feb 20 04:24:13 localhost nova_compute[229929]: emsr_bitmap Feb 20 04:24:13 localhost nova_compute[229929]: xmm_input Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost 
nova_compute[229929]: 4095 Feb 20 04:24:13 localhost nova_compute[229929]: on Feb 20 04:24:13 localhost nova_compute[229929]: off Feb 20 04:24:13 localhost nova_compute[229929]: off Feb 20 04:24:13 localhost nova_compute[229929]: Linux KVM Hv Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Feb 20 04:24:13 localhost nova_compute[229929]: 2026-02-20 09:24:13.053 229933 DEBUG nova.virt.libvirt.host [None req-06fd4389-e8e3-4cb0-9015-66b633ef36ec - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: /usr/libexec/qemu-kvm Feb 20 04:24:13 localhost nova_compute[229929]: kvm Feb 20 04:24:13 localhost nova_compute[229929]: pc-q35-rhel9.8.0 Feb 20 04:24:13 localhost nova_compute[229929]: x86_64 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: efi Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: /usr/share/edk2/ovmf/OVMF_CODE.secboot.fd Feb 20 04:24:13 localhost nova_compute[229929]: /usr/share/edk2/ovmf/OVMF_CODE.fd Feb 20 04:24:13 localhost nova_compute[229929]: /usr/share/edk2/ovmf/OVMF.amdsev.fd Feb 20 04:24:13 localhost nova_compute[229929]: /usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: rom Feb 20 04:24:13 localhost nova_compute[229929]: 
pflash Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: yes Feb 20 04:24:13 localhost nova_compute[229929]: no Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: yes Feb 20 04:24:13 localhost nova_compute[229929]: no Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: on Feb 20 04:24:13 localhost nova_compute[229929]: off Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: on Feb 20 04:24:13 localhost nova_compute[229929]: off Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: EPYC-Rome Feb 20 04:24:13 localhost nova_compute[229929]: AMD Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost 
nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: 486 Feb 20 04:24:13 localhost nova_compute[229929]: 486-v1 Feb 20 04:24:13 localhost nova_compute[229929]: Broadwell Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Broadwell-IBRS Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Broadwell-noTSX Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Broadwell-noTSX-IBRS Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost 
nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Broadwell-v1 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Broadwell-v2 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Broadwell-v3 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Broadwell-v4 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Cascadelake-Server Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost 
nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Cascadelake-Server-noTSX Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Cascadelake-Server-v1 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Cascadelake-Server-v2 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost 
nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Cascadelake-Server-v3 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Cascadelake-Server-v4 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Cascadelake-Server-v5 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost 
nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: ClearwaterForest Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost 
nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: ClearwaterForest-v1 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost 
nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Conroe Feb 20 04:24:13 localhost nova_compute[229929]: Conroe-v1 Feb 20 04:24:13 localhost nova_compute[229929]: Cooperlake Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Cooperlake-v1 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 
localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Cooperlake-v2 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Denverton Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Denverton-v1 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 
04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Denverton-v2 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Denverton-v3 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Dhyana Feb 20 04:24:13 localhost nova_compute[229929]: Dhyana-v1 Feb 20 04:24:13 localhost nova_compute[229929]: Dhyana-v2 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: EPYC Feb 20 04:24:13 localhost nova_compute[229929]: EPYC-Genoa Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 
localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: EPYC-Genoa-v1 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: EPYC-Genoa-v2 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 
localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: EPYC-IBPB Feb 20 04:24:13 localhost nova_compute[229929]: EPYC-Milan Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: EPYC-Milan-v1 Feb 20 
Feb 20 04:24:13 localhost nova_compute[229929]: [multi-line libvirt CPU-model capability dump; XML markup lost in capture. Recoverable model names, in order: EPYC-Milan-v2, EPYC-Milan-v3, EPYC-Rome, EPYC-Rome-v1, EPYC-Rome-v2, EPYC-Rome-v3, EPYC-Rome-v4, EPYC-Rome-v5, EPYC-Turin, EPYC-Turin-v1, EPYC-v1, EPYC-v2, EPYC-v3, EPYC-v4, EPYC-v5, GraniteRapids, GraniteRapids-v1, GraniteRapids-v2, GraniteRapids-v3, Haswell, Haswell-IBRS, Haswell-noTSX, Haswell-noTSX-IBRS, Haswell-v1, Haswell-v2, Haswell-v3, Haswell-v4, Icelake-Server, Icelake-Server-noTSX, Icelake-Server-v1, Icelake-Server-v2, Icelake-Server-v3, Icelake-Server-v4, Icelake-Server-v5, Icelake-Server-v6, Icelake-Server-v7, IvyBridge, IvyBridge-IBRS, IvyBridge-v1, IvyBridge-v2, KnightsMill, KnightsMill-v1, Nehalem, Nehalem-IBRS, Nehalem-v1, Nehalem-v2, Opteron_G1, Opteron_G1-v1, Opteron_G2, Opteron_G2-v1, Opteron_G3, Opteron_G3-v1, Opteron_G4, Opteron_G4-v1, Opteron_G5, Opteron_G5-v1, Penryn, Penryn-v1, SandyBridge, SandyBridge-IBRS, SandyBridge-v1, SandyBridge-v2, SapphireRapids, SapphireRapids-v1, SapphireRapids-v2]
nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: SapphireRapids-v3 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost 
nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: SapphireRapids-v4 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost 
nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: SierraForest Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost 
nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: SierraForest-v1 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost 
nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: SierraForest-v2 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost 
nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: SierraForest-v3 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Skylake-Client Feb 20 04:24:13 localhost 
nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Skylake-Client-IBRS Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Skylake-Client-noTSX-IBRS Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Skylake-Client-v1 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Skylake-Client-v2 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Skylake-Client-v3 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 
20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Skylake-Client-v4 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Skylake-Server Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Skylake-Server-IBRS Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Skylake-Server-noTSX-IBRS Feb 20 04:24:13 localhost nova_compute[229929]: Feb 
20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Skylake-Server-v1 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Skylake-Server-v2 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Skylake-Server-v3 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost 
nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Skylake-Server-v4 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Skylake-Server-v5 Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:24:13 localhost nova_compute[229929]: Feb 20 04:29:11 localhost systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully. Feb 20 04:29:11 localhost systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully. Feb 20 04:29:12 localhost rsyslogd[759]: imjournal: 2509 messages lost due to rate-limiting (20000 allowed within 600 seconds) Feb 20 04:29:12 localhost systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully. 
Feb 20 04:29:12 localhost nova_compute[229929]: 2026-02-20 09:29:12.232 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 20 04:29:12 localhost nova_compute[229929]: 2026-02-20 09:29:12.232 229933 DEBUG nova.compute.manager [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Feb 20 04:29:12 localhost nova_compute[229929]: 2026-02-20 09:29:12.250 229933 DEBUG nova.compute.manager [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Feb 20 04:29:12 localhost nova_compute[229929]: 2026-02-20 09:29:12.252 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 20 04:29:12 localhost nova_compute[229929]: 2026-02-20 09:29:12.252 229933 DEBUG nova.compute.manager [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Feb 20 04:29:12 localhost nova_compute[229929]: 2026-02-20 09:29:12.266 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 20 04:29:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2635 DF PROTO=TCP SPT=58978 DPT=9100 SEQ=2279766673 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B05600D0000000001030307)
Feb 20 04:29:14 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 04:29:14 localhost sshd[248235]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:29:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.
Feb 20 04:29:14 localhost systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully.
Feb 20 04:29:14 localhost systemd[1]: tmp-crun.IbZyMo.mount: Deactivated successfully.
Feb 20 04:29:14 localhost podman[248237]: 2026-02-20 09:29:14.196408292 +0000 UTC m=+0.132240422 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 20 04:29:14 localhost podman[248237]: 2026-02-20 09:29:14.20570707 +0000 UTC m=+0.141539220 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 20 04:29:14 localhost podman[248237]: unhealthy
Feb 20 04:29:14 localhost nova_compute[229929]: 2026-02-20 09:29:14.281 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 20 04:29:14 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 04:29:14 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Failed with result 'exit-code'.
Feb 20 04:29:15 localhost nova_compute[229929]: 2026-02-20 09:29:15.228 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 20 04:29:15 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 04:29:15 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 04:29:15 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 04:29:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42818 DF PROTO=TCP SPT=59764 DPT=9102 SEQ=4085525964 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B056A8E0000000001030307)
Feb 20 04:29:16 localhost nova_compute[229929]: 2026-02-20 09:29:16.232 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 20 04:29:16 localhost nova_compute[229929]: 2026-02-20 09:29:16.233 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 20 04:29:16 localhost nova_compute[229929]: 2026-02-20 09:29:16.233 229933 DEBUG nova.compute.manager [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 20 04:29:16 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 04:29:16 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 04:29:16 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 04:29:17 localhost nova_compute[229929]: 2026-02-20 09:29:17.232 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 20 04:29:17 localhost nova_compute[229929]: 2026-02-20 09:29:17.232 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 20 04:29:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:29:17.257 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:29:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:29:17.258 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:29:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:29:17.258 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:29:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:29:17.258 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:29:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:29:17.258 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:29:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:29:17.258 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:29:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:29:17.258 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:29:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:29:17.258 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:29:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:29:17.258 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:29:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:29:17.258 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:29:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:29:17.259 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:29:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:29:17.259 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:29:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:29:17.259 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:29:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:29:17.259 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:29:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:29:17.259 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:29:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:29:17.259 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:29:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:29:17.259 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle
poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:29:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:29:17.259 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:29:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:29:17.259 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:29:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:29:17.259 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:29:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:29:17.260 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:29:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:29:17.260 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:29:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:29:17.260 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:29:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:29:17.260 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:29:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:29:17.260 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:29:17 localhost nova_compute[229929]: 2026-02-20 09:29:17.265 229933 DEBUG oslo_concurrency.lockutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:29:17 localhost nova_compute[229929]: 2026-02-20 09:29:17.265 229933 DEBUG oslo_concurrency.lockutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:29:17 localhost nova_compute[229929]: 2026-02-20 09:29:17.266 229933 DEBUG oslo_concurrency.lockutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:29:17 localhost nova_compute[229929]: 2026-02-20 09:29:17.266 229933 DEBUG nova.compute.resource_tracker [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Auditing locally available compute resources for np0005625202.localdomain (node: np0005625202.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 20 04:29:17 localhost nova_compute[229929]: 2026-02-20 09:29:17.266 229933 DEBUG oslo_concurrency.processutils [None 
req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:29:17 localhost nova_compute[229929]: 2026-02-20 09:29:17.703 229933 DEBUG oslo_concurrency.processutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:29:17 localhost nova_compute[229929]: 2026-02-20 09:29:17.938 229933 WARNING nova.virt.libvirt.driver [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 20 04:29:17 localhost nova_compute[229929]: 2026-02-20 09:29:17.940 229933 DEBUG nova.compute.resource_tracker [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Hypervisor/Node resource view: name=np0005625202.localdomain free_ram=13197MB free_disk=41.83720779418945GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 20 04:29:17 localhost nova_compute[229929]: 2026-02-20 09:29:17.941 229933 DEBUG oslo_concurrency.lockutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:29:17 localhost nova_compute[229929]: 2026-02-20 09:29:17.941 229933 DEBUG oslo_concurrency.lockutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:29:18 localhost nova_compute[229929]: 
2026-02-20 09:29:18.041 229933 DEBUG nova.compute.resource_tracker [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 20 04:29:18 localhost nova_compute[229929]: 2026-02-20 09:29:18.041 229933 DEBUG nova.compute.resource_tracker [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Final resource view: name=np0005625202.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 20 04:29:18 localhost nova_compute[229929]: 2026-02-20 09:29:18.085 229933 DEBUG nova.scheduler.client.report [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Refreshing inventories for resource provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Feb 20 04:29:18 localhost nova_compute[229929]: 2026-02-20 09:29:18.137 229933 DEBUG nova.scheduler.client.report [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Updating ProviderTree inventory for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Feb 20 04:29:18 localhost nova_compute[229929]: 2026-02-20 09:29:18.138 229933 DEBUG nova.compute.provider_tree [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] 
Updating inventory in ProviderTree for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Feb 20 04:29:18 localhost nova_compute[229929]: 2026-02-20 09:29:18.158 229933 DEBUG nova.scheduler.client.report [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Refreshing aggregate associations for resource provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Feb 20 04:29:18 localhost nova_compute[229929]: 2026-02-20 09:29:18.182 229933 DEBUG nova.scheduler.client.report [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Refreshing trait associations for resource provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6, traits: 
COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AMD_SVM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_F16C,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE2,HW_CPU_X86_AESNI,HW_CPU_X86_SSE4A,COMPUTE_NODE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Feb 20 04:29:18 localhost nova_compute[229929]: 2026-02-20 09:29:18.201 229933 DEBUG oslo_concurrency.processutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:29:18 localhost nova_compute[229929]: 2026-02-20 09:29:18.657 229933 DEBUG oslo_concurrency.processutils [None 
req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:29:18 localhost nova_compute[229929]: 2026-02-20 09:29:18.663 229933 DEBUG nova.compute.provider_tree [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 20 04:29:18 localhost nova_compute[229929]: 2026-02-20 09:29:18.681 229933 DEBUG nova.scheduler.client.report [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Inventory has not changed for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 20 04:29:18 localhost nova_compute[229929]: 2026-02-20 09:29:18.682 229933 DEBUG nova.compute.resource_tracker [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Compute_service record updated for np0005625202.localdomain:np0005625202.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 20 04:29:18 localhost nova_compute[229929]: 2026-02-20 09:29:18.682 229933 DEBUG oslo_concurrency.lockutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.741s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:29:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6939 DF PROTO=TCP SPT=57140 DPT=9882 SEQ=1433919283 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B05770D0000000001030307) Feb 20 04:29:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e. Feb 20 04:29:19 localhost podman[248303]: 2026-02-20 09:29:19.464370096 +0000 UTC m=+0.100224957 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 20 04:29:19 localhost podman[248303]: 2026-02-20 09:29:19.476934932 +0000 UTC m=+0.112789743 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 20 04:29:19 localhost systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully. 
Feb 20 04:29:19 localhost systemd[1]: var-lib-containers-storage-overlay-cdc0bde398443eb2e66321c13dbbf9e41b2982bc548dca05f2f16736d23fdffa-merged.mount: Deactivated successfully. Feb 20 04:29:19 localhost nova_compute[229929]: 2026-02-20 09:29:19.683 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:29:19 localhost nova_compute[229929]: 2026-02-20 09:29:19.684 229933 DEBUG nova.compute.manager [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 20 04:29:19 localhost nova_compute[229929]: 2026-02-20 09:29:19.684 229933 DEBUG nova.compute.manager [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 20 04:29:19 localhost nova_compute[229929]: 2026-02-20 09:29:19.699 229933 DEBUG nova.compute.manager [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 20 04:29:19 localhost nova_compute[229929]: 2026-02-20 09:29:19.699 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:29:19 localhost nova_compute[229929]: 2026-02-20 09:29:19.700 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:29:19 localhost systemd[1]: var-lib-containers-storage-overlay-cdc0bde398443eb2e66321c13dbbf9e41b2982bc548dca05f2f16736d23fdffa-merged.mount: Deactivated successfully. Feb 20 04:29:19 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully. Feb 20 04:29:20 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 20 04:29:20 localhost systemd[1]: var-lib-containers-storage-overlay-ac04412f5c5a43e8c61c2b8d6c1acf66f67fc19f0d028526d9bdbd1ed0352faf-merged.mount: Deactivated successfully. Feb 20 04:29:20 localhost systemd[1]: var-lib-containers-storage-overlay-ac04412f5c5a43e8c61c2b8d6c1acf66f67fc19f0d028526d9bdbd1ed0352faf-merged.mount: Deactivated successfully. Feb 20 04:29:21 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 20 04:29:21 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. 
Feb 20 04:29:22 localhost systemd[1]: var-lib-containers-storage-overlay-ac04412f5c5a43e8c61c2b8d6c1acf66f67fc19f0d028526d9bdbd1ed0352faf-merged.mount: Deactivated successfully. Feb 20 04:29:22 localhost systemd[1]: var-lib-containers-storage-overlay-a01411a662cc18c483e7d36cf3e51a5ac62cc254f0cb5553632b0450604eba50-merged.mount: Deactivated successfully. Feb 20 04:29:22 localhost systemd[1]: var-lib-containers-storage-overlay-a01411a662cc18c483e7d36cf3e51a5ac62cc254f0cb5553632b0450604eba50-merged.mount: Deactivated successfully. Feb 20 04:29:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6940 DF PROTO=TCP SPT=57140 DPT=9882 SEQ=1433919283 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B0586CD0000000001030307) Feb 20 04:29:23 localhost systemd[1]: var-lib-containers-storage-overlay-c649efc911c887686c8351fe543502de582148a048396cbc7ad85b29ea075fe6-merged.mount: Deactivated successfully. Feb 20 04:29:24 localhost systemd[1]: var-lib-containers-storage-overlay-71578ef0b1e4f969b13e723033957534bc6c3b31bff47c5fdb42a55e43d4cef9-merged.mount: Deactivated successfully. Feb 20 04:29:24 localhost systemd[1]: var-lib-containers-storage-overlay-71578ef0b1e4f969b13e723033957534bc6c3b31bff47c5fdb42a55e43d4cef9-merged.mount: Deactivated successfully. Feb 20 04:29:25 localhost systemd[1]: var-lib-containers-storage-overlay-1d7b7d3208029afb8b179e48c365354efe7c39d41194e42a7d13168820ab51ad-merged.mount: Deactivated successfully. Feb 20 04:29:25 localhost systemd[1]: var-lib-containers-storage-overlay-c649efc911c887686c8351fe543502de582148a048396cbc7ad85b29ea075fe6-merged.mount: Deactivated successfully. Feb 20 04:29:25 localhost systemd[1]: var-lib-containers-storage-overlay-c649efc911c887686c8351fe543502de582148a048396cbc7ad85b29ea075fe6-merged.mount: Deactivated successfully. 
Feb 20 04:29:26 localhost systemd[1]: var-lib-containers-storage-overlay-1ad843ea4b31b05bcf49ccd6faa74bd0d6976ffabe60466fd78caf7ec41bf4ac-merged.mount: Deactivated successfully. Feb 20 04:29:26 localhost systemd[1]: var-lib-containers-storage-overlay-1d7b7d3208029afb8b179e48c365354efe7c39d41194e42a7d13168820ab51ad-merged.mount: Deactivated successfully. Feb 20 04:29:26 localhost systemd[1]: var-lib-containers-storage-overlay-1d7b7d3208029afb8b179e48c365354efe7c39d41194e42a7d13168820ab51ad-merged.mount: Deactivated successfully. Feb 20 04:29:26 localhost systemd[1]: var-lib-containers-storage-overlay-57c9a356b8a6d9095c1e6bfd1bb5d3b87c9d1b944c2c5d8a1da6e61dd690c595-merged.mount: Deactivated successfully. Feb 20 04:29:27 localhost systemd[1]: var-lib-containers-storage-overlay-57c9a356b8a6d9095c1e6bfd1bb5d3b87c9d1b944c2c5d8a1da6e61dd690c595-merged.mount: Deactivated successfully. Feb 20 04:29:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38433 DF PROTO=TCP SPT=48380 DPT=9100 SEQ=1852599786 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B0598D50000000001030307) Feb 20 04:29:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34716 DF PROTO=TCP SPT=40300 DPT=9105 SEQ=3965117061 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B0599F10000000001030307) Feb 20 04:29:28 localhost systemd[1]: var-lib-containers-storage-overlay-71578ef0b1e4f969b13e723033957534bc6c3b31bff47c5fdb42a55e43d4cef9-merged.mount: Deactivated successfully. Feb 20 04:29:29 localhost systemd[1]: var-lib-containers-storage-overlay-93889cfe8a1eaf916da420177b6c00eab0b6f1d6521b96229ee8963de2bbdb6f-merged.mount: Deactivated successfully. 
Feb 20 04:29:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.
Feb 20 04:29:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.
Feb 20 04:29:29 localhost systemd[1]: var-lib-containers-storage-overlay-2830780bc6d16943969f9158fd5036df60ccc26823e26fa259ba0accaa537c16-merged.mount: Deactivated successfully.
Feb 20 04:29:29 localhost systemd[1]: tmp-crun.6BK88O.mount: Deactivated successfully.
Feb 20 04:29:29 localhost podman[248326]: 2026-02-20 09:29:29.870407297 +0000 UTC m=+0.075220309 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_id=ceilometer_agent_compute, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 04:29:29 localhost podman[248326]: 2026-02-20 09:29:29.880173638 +0000 UTC m=+0.084986620 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 04:29:29 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully.
Feb 20 04:29:29 localhost podman[248325]: 2026-02-20 09:29:29.977571108 +0000 UTC m=+0.181962899 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, build-date=2026-02-05T04:57:10Z, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, vcs-type=git, version=9.7, container_name=openstack_network_exporter, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9/ubi-minimal, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., release=1770267347)
Feb 20 04:29:30 localhost podman[248325]: 2026-02-20 09:29:30.020007511 +0000 UTC m=+0.224399252 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2026-02-05T04:57:10Z, vcs-type=git, managed_by=edpm_ansible, io.buildah.version=1.33.7, name=ubi9/ubi-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, config_id=openstack_network_exporter, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.7, org.opencontainers.image.created=2026-02-05T04:57:10Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter)
Feb 20 04:29:30 localhost systemd[1]: var-lib-containers-storage-overlay-2830780bc6d16943969f9158fd5036df60ccc26823e26fa259ba0accaa537c16-merged.mount: Deactivated successfully.
Feb 20 04:29:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38435 DF PROTO=TCP SPT=48380 DPT=9100 SEQ=1852599786 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B05A4CE0000000001030307)
Feb 20 04:29:31 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 04:29:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.
Feb 20 04:29:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.
Feb 20 04:29:31 localhost systemd[1]: var-lib-containers-storage-overlay-93889cfe8a1eaf916da420177b6c00eab0b6f1d6521b96229ee8963de2bbdb6f-merged.mount: Deactivated successfully.
Feb 20 04:29:31 localhost systemd[1]: var-lib-containers-storage-overlay-93889cfe8a1eaf916da420177b6c00eab0b6f1d6521b96229ee8963de2bbdb6f-merged.mount: Deactivated successfully.
Feb 20 04:29:31 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully.
Feb 20 04:29:31 localhost podman[248362]: 2026-02-20 09:29:31.501234214 +0000 UTC m=+0.260459944 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127)
Feb 20 04:29:31 localhost podman[248362]: 2026-02-20 09:29:31.544794106 +0000 UTC m=+0.304019866 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 20 04:29:31 localhost podman[248363]: 2026-02-20 09:29:31.559401497 +0000 UTC m=+0.315346980 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 04:29:31 localhost podman[248363]: 2026-02-20 09:29:31.593834916 +0000 UTC m=+0.349780389 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Feb 20 04:29:32 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 04:29:32 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 04:29:32 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 04:29:32 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully.
Feb 20 04:29:32 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully.
Feb 20 04:29:33 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 04:29:33 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 04:29:33 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 04:29:33 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 04:29:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38436 DF PROTO=TCP SPT=48380 DPT=9100 SEQ=1852599786 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B05B48E0000000001030307)
Feb 20 04:29:35 localhost systemd[1]: var-lib-containers-storage-overlay-2830780bc6d16943969f9158fd5036df60ccc26823e26fa259ba0accaa537c16-merged.mount: Deactivated successfully.
Feb 20 04:29:35 localhost systemd[1]: var-lib-containers-storage-overlay-02f694049235d5fdc28df409377b9c3a853a57405bcbd0a4d5b6ef2444b51ca4-merged.mount: Deactivated successfully.
Feb 20 04:29:35 localhost systemd[1]: var-lib-containers-storage-overlay-02f694049235d5fdc28df409377b9c3a853a57405bcbd0a4d5b6ef2444b51ca4-merged.mount: Deactivated successfully.
Feb 20 04:29:36 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 04:29:36 localhost systemd[1]: var-lib-containers-storage-overlay-530e571f2fdc2c9cc9ab61d58bf266b4766d3c3aa17392b07069c5b092adeb06-merged.mount: Deactivated successfully.
Feb 20 04:29:36 localhost systemd[1]: var-lib-containers-storage-overlay-530e571f2fdc2c9cc9ab61d58bf266b4766d3c3aa17392b07069c5b092adeb06-merged.mount: Deactivated successfully.
Feb 20 04:29:37 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 04:29:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46098 DF PROTO=TCP SPT=57536 DPT=9101 SEQ=2510526083 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B05BE0D0000000001030307)
Feb 20 04:29:37 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 04:29:37 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 04:29:38 localhost systemd[1]: var-lib-containers-storage-overlay-530e571f2fdc2c9cc9ab61d58bf266b4766d3c3aa17392b07069c5b092adeb06-merged.mount: Deactivated successfully.
Feb 20 04:29:38 localhost systemd[1]: var-lib-containers-storage-overlay-b1cf061bfb4d6aba3459a5aa3c06a6d132a0f8d24850ac6b70fc836ee5031ed8-merged.mount: Deactivated successfully.
Feb 20 04:29:38 localhost systemd[1]: var-lib-containers-storage-overlay-b1cf061bfb4d6aba3459a5aa3c06a6d132a0f8d24850ac6b70fc836ee5031ed8-merged.mount: Deactivated successfully.
Feb 20 04:29:38 localhost sshd[248406]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:29:38 localhost systemd-logind[760]: New session 56 of user zuul.
Feb 20 04:29:38 localhost systemd[1]: Started Session 56 of User zuul.
Feb 20 04:29:39 localhost python3.9[248502]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:29:39 localhost python3.9[248612]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/telemetry.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 04:29:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19701 DF PROTO=TCP SPT=43628 DPT=9102 SEQ=2249632114 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B05C80D0000000001030307)
Feb 20 04:29:40 localhost python3.9[248700]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/telemetry.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1771579779.3237839-3714-218153058325824/.source.yaml _original_basename=firewall.yaml follow=False checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:29:40 localhost systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully.
Feb 20 04:29:40 localhost systemd[1]: var-lib-containers-storage-overlay-0e4bbbcc3a308b062cd809f5d981a575292a522b3c6697e4ac7d70789e33f207-merged.mount: Deactivated successfully.
Feb 20 04:29:41 localhost systemd[1]: var-lib-containers-storage-overlay-0e4bbbcc3a308b062cd809f5d981a575292a522b3c6697e4ac7d70789e33f207-merged.mount: Deactivated successfully.
Feb 20 04:29:41 localhost python3.9[248810]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:29:41 localhost python3.9[248920]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 04:29:42 localhost python3.9[248977]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:29:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38437 DF PROTO=TCP SPT=48380 DPT=9100 SEQ=1852599786 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B05D40D0000000001030307)
Feb 20 04:29:43 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 04:29:43 localhost systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully.
Feb 20 04:29:43 localhost python3.9[249087]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 04:29:43 localhost systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully.
Feb 20 04:29:43 localhost python3.9[249144]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.n4vztno8 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:29:44 localhost python3.9[249254]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 04:29:44 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 04:29:44 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 04:29:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.
Feb 20 04:29:44 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 04:29:44 localhost podman[249312]: 2026-02-20 09:29:44.676126033 +0000 UTC m=+0.096097916 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi )
Feb 20 04:29:44 localhost podman[249312]: 2026-02-20 09:29:44.71236289 +0000 UTC m=+0.132334773 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi )
Feb 20 04:29:44 localhost podman[249312]: unhealthy
Feb 20 04:29:44 localhost python3.9[249311]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:29:45 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 04:29:45 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 04:29:45 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 04:29:45 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 04:29:45 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Failed with result 'exit-code'.
Feb 20 04:29:45 localhost python3.9[249445]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 04:29:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19703 DF PROTO=TCP SPT=43628 DPT=9102 SEQ=2249632114 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B05DFCD0000000001030307)
Feb 20 04:29:46 localhost python3[249556]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb 20 04:29:47 localhost ceph-osd[31981]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 20 04:29:47 localhost ceph-osd[31981]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.1 total, 600.0 interval#012Cumulative writes: 5073 writes, 22K keys, 5073 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5073 writes, 653 syncs, 7.77 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 20 04:29:47 localhost python3.9[249666]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 04:29:48 localhost systemd[1]: var-lib-containers-storage-overlay-0e4bbbcc3a308b062cd809f5d981a575292a522b3c6697e4ac7d70789e33f207-merged.mount: Deactivated successfully.
Feb 20 04:29:48 localhost python3.9[249723]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:29:48 localhost systemd[1]: var-lib-containers-storage-overlay-46380c3a391236151e6a3f7a86710e438476b0174e52a2400f0bf907fb1c5d80-merged.mount: Deactivated successfully.
Feb 20 04:29:48 localhost systemd[1]: var-lib-containers-storage-overlay-46380c3a391236151e6a3f7a86710e438476b0174e52a2400f0bf907fb1c5d80-merged.mount: Deactivated successfully.
Feb 20 04:29:48 localhost python3.9[249833]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 04:29:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34986 DF PROTO=TCP SPT=39842 DPT=9882 SEQ=4229784222 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B05EC0D0000000001030307)
Feb 20 04:29:49 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 04:29:49 localhost systemd[1]: var-lib-containers-storage-overlay-530e571f2fdc2c9cc9ab61d58bf266b4766d3c3aa17392b07069c5b092adeb06-merged.mount: Deactivated successfully. Feb 20 04:29:49 localhost systemd[1]: var-lib-containers-storage-overlay-530e571f2fdc2c9cc9ab61d58bf266b4766d3c3aa17392b07069c5b092adeb06-merged.mount: Deactivated successfully. Feb 20 04:29:49 localhost python3.9[249890]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:29:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e. Feb 20 04:29:50 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 20 04:29:50 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. 
Feb 20 04:29:50 localhost podman[249962]: 2026-02-20 09:29:50.206702017 +0000 UTC m=+0.093297061 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 20 04:29:50 localhost podman[249962]: 2026-02-20 09:29:50.223738772 +0000 UTC m=+0.110333736 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', 
'--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 20 04:29:50 localhost python3.9[250023]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:29:50 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully. 
Feb 20 04:29:50 localhost python3.9[250080]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:29:51 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 20 04:29:51 localhost systemd[1]: var-lib-containers-storage-overlay-530e571f2fdc2c9cc9ab61d58bf266b4766d3c3aa17392b07069c5b092adeb06-merged.mount: Deactivated successfully. Feb 20 04:29:51 localhost systemd[1]: var-lib-containers-storage-overlay-6bb7c653b8391bad6fee73ed98f3b53e3f877ae1d16c587da33487041fbee72e-merged.mount: Deactivated successfully. Feb 20 04:29:51 localhost systemd[1]: var-lib-containers-storage-overlay-6bb7c653b8391bad6fee73ed98f3b53e3f877ae1d16c587da33487041fbee72e-merged.mount: Deactivated successfully. 
Feb 20 04:29:51 localhost ceph-osd[32921]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 20 04:29:51 localhost ceph-osd[32921]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.1 total, 600.0 interval#012Cumulative writes: 5513 writes, 24K keys, 5513 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5513 writes, 750 syncs, 7.35 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Feb 20 04:29:51 localhost python3.9[250190]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:29:52 localhost python3.9[250247]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:29:52 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 20 04:29:52 localhost systemd[1]: var-lib-containers-storage-overlay-ba9090487930ea4ca9efbb869a950be47d0c5c3f7a5f6eb919ee0be5f322c2ce-merged.mount: Deactivated successfully. 
Feb 20 04:29:52 localhost systemd[1]: var-lib-containers-storage-overlay-ba9090487930ea4ca9efbb869a950be47d0c5c3f7a5f6eb919ee0be5f322c2ce-merged.mount: Deactivated successfully. Feb 20 04:29:52 localhost python3.9[250357]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:29:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34987 DF PROTO=TCP SPT=39842 DPT=9882 SEQ=4229784222 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B05FBCD0000000001030307) Feb 20 04:29:53 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 20 04:29:53 localhost python3.9[250447]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771579792.3993883-4089-226367051582706/.source.nft follow=False _original_basename=ruleset.j2 checksum=953266ca5f7d82d2777a0a437bd7feceb9259ee8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:29:53 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 20 04:29:53 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. 
Feb 20 04:29:54 localhost python3.9[250557]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:29:54 localhost systemd[1]: var-lib-containers-storage-overlay-ba9090487930ea4ca9efbb869a950be47d0c5c3f7a5f6eb919ee0be5f322c2ce-merged.mount: Deactivated successfully. Feb 20 04:29:54 localhost systemd[1]: var-lib-containers-storage-overlay-c5ba41a88f6fbd1db9cf9dc9638be96d2c3f12ce29d9389a5988c114e992de06-merged.mount: Deactivated successfully. Feb 20 04:29:54 localhost systemd[1]: var-lib-containers-storage-overlay-c5ba41a88f6fbd1db9cf9dc9638be96d2c3f12ce29d9389a5988c114e992de06-merged.mount: Deactivated successfully. Feb 20 04:29:54 localhost python3.9[250667]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 04:29:55 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 20 04:29:55 localhost systemd[1]: var-lib-containers-storage-overlay-ba9090487930ea4ca9efbb869a950be47d0c5c3f7a5f6eb919ee0be5f322c2ce-merged.mount: Deactivated successfully. Feb 20 04:29:55 localhost systemd[1]: var-lib-containers-storage-overlay-ba9090487930ea4ca9efbb869a950be47d0c5c3f7a5f6eb919ee0be5f322c2ce-merged.mount: Deactivated successfully. 
Feb 20 04:29:55 localhost python3.9[250780]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:29:56 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 20 04:29:56 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 20 04:29:56 localhost python3.9[250890]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 04:29:57 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 20 04:29:57 localhost python3.9[251001]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 20 04:29:57 localhost systemd[1]: var-lib-containers-storage-overlay-ba9090487930ea4ca9efbb869a950be47d0c5c3f7a5f6eb919ee0be5f322c2ce-merged.mount: Deactivated successfully. 
Feb 20 04:29:57 localhost systemd[1]: var-lib-containers-storage-overlay-a37e8585e5e1c303b8917b31d26f97b6e36b93752231bf9ec3eb7d712b5a3738-merged.mount: Deactivated successfully. Feb 20 04:29:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3108 DF PROTO=TCP SPT=51106 DPT=9100 SEQ=3447976155 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B060E030000000001030307) Feb 20 04:29:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32116 DF PROTO=TCP SPT=55066 DPT=9105 SEQ=3348638526 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B060F210000000001030307) Feb 20 04:29:58 localhost python3.9[251113]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 04:29:59 localhost openstack_network_exporter[243776]: ERROR 09:29:59 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 20 04:29:59 localhost openstack_network_exporter[243776]: Feb 20 04:29:59 localhost openstack_network_exporter[243776]: ERROR 09:29:59 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 20 04:29:59 localhost openstack_network_exporter[243776]: Feb 20 04:29:59 localhost systemd[1]: var-lib-containers-storage-overlay-93889cfe8a1eaf916da420177b6c00eab0b6f1d6521b96229ee8963de2bbdb6f-merged.mount: Deactivated successfully. 
Feb 20 04:29:59 localhost python3.9[251231]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:29:59 localhost systemd[1]: var-lib-containers-storage-overlay-67ff2fcf098d662f72898d504b273725d460bc3ee224388c566fda6c94421648-merged.mount: Deactivated successfully. Feb 20 04:29:59 localhost systemd[1]: var-lib-containers-storage-overlay-67ff2fcf098d662f72898d504b273725d460bc3ee224388c566fda6c94421648-merged.mount: Deactivated successfully. Feb 20 04:29:59 localhost systemd-logind[760]: Session 56 logged out. Waiting for processes to exit. Feb 20 04:29:59 localhost systemd[1]: session-56.scope: Deactivated successfully. Feb 20 04:29:59 localhost systemd[1]: session-56.scope: Consumed 13.643s CPU time. Feb 20 04:29:59 localhost systemd-logind[760]: Removed session 56. Feb 20 04:30:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217. 
Feb 20 04:30:00 localhost podman[251249]: 2026-02-20 09:30:00.455550976 +0000 UTC m=+0.089972802 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260127, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, 
config_id=ceilometer_agent_compute) Feb 20 04:30:00 localhost podman[251249]: 2026-02-20 09:30:00.470937933 +0000 UTC m=+0.105359769 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20260127, 
org.label-schema.vendor=CentOS) Feb 20 04:30:00 localhost sshd[251268]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:30:01 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 20 04:30:01 localhost systemd[1]: var-lib-containers-storage-overlay-93889cfe8a1eaf916da420177b6c00eab0b6f1d6521b96229ee8963de2bbdb6f-merged.mount: Deactivated successfully. Feb 20 04:30:01 localhost systemd[1]: var-lib-containers-storage-overlay-93889cfe8a1eaf916da420177b6c00eab0b6f1d6521b96229ee8963de2bbdb6f-merged.mount: Deactivated successfully. Feb 20 04:30:01 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully. Feb 20 04:30:02 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 20 04:30:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c. Feb 20 04:30:02 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 20 04:30:02 localhost systemd[1]: tmp-crun.d34KPm.mount: Deactivated successfully. 
Feb 20 04:30:02 localhost podman[251270]: 2026-02-20 09:30:02.467033058 +0000 UTC m=+0.105402651 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, release=1770267347, org.opencontainers.image.created=2026-02-05T04:57:10Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, name=ubi9/ubi-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.7, distribution-scope=public, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, config_id=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) 
Feb 20 04:30:02 localhost podman[251270]: 2026-02-20 09:30:02.479229974 +0000 UTC m=+0.117599547 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, container_name=openstack_network_exporter, io.buildah.version=1.33.7, version=9.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, managed_by=edpm_ansible, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.openshift.tags=minimal rhel9, build-date=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, release=1770267347, maintainer=Red Hat, Inc., architecture=x86_64, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Feb 20 04:30:02 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. 
Feb 20 04:30:02 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. Feb 20 04:30:03 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 20 04:30:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 04:30:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. Feb 20 04:30:03 localhost podman[251290]: 2026-02-20 09:30:03.322095976 +0000 UTC m=+0.073113461 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:30:03 localhost podman[251291]: 2026-02-20 09:30:03.335519669 +0000 UTC m=+0.078891117 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Feb 20 04:30:03 localhost podman[251291]: 2026-02-20 09:30:03.411892911 +0000 UTC m=+0.155264329 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent) Feb 20 04:30:03 localhost podman[251290]: 2026-02-20 09:30:03.446986129 +0000 UTC m=+0.198003644 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller) Feb 20 04:30:03 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. 
Feb 20 04:30:03 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 20 04:30:03 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 20 04:30:03 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. Feb 20 04:30:03 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. Feb 20 04:30:05 localhost systemd[1]: var-lib-containers-storage-overlay-67ff2fcf098d662f72898d504b273725d460bc3ee224388c566fda6c94421648-merged.mount: Deactivated successfully. Feb 20 04:30:05 localhost systemd[1]: var-lib-containers-storage-overlay-635be08eaf6b0b44e0a79953079e91b6976bdd54582ce4f8e556143cd0e1a390-merged.mount: Deactivated successfully. Feb 20 04:30:05 localhost systemd[1]: var-lib-containers-storage-overlay-635be08eaf6b0b44e0a79953079e91b6976bdd54582ce4f8e556143cd0e1a390-merged.mount: Deactivated successfully. 
Feb 20 04:30:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:30:05.895 161766 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:30:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:30:05.896 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:30:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:30:05.896 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:30:06 localhost sshd[251368]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:30:06 localhost systemd-logind[760]: New session 57 of user zuul. Feb 20 04:30:06 localhost systemd[1]: Started Session 57 of User zuul. 
Feb 20 04:30:07 localhost python3.9[251494]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config/container-startup-config/neutron-sriov-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 20 04:30:07 localhost python3.9[251604]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 20 04:30:08 localhost systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully. Feb 20 04:30:08 localhost systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully. Feb 20 04:30:08 localhost systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully. 
Feb 20 04:30:08 localhost python3.9[251714]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/neutron-sriov-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 20 04:30:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56509 DF PROTO=TCP SPT=36564 DPT=9102 SEQ=3426407436 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B06393D0000000001030307) Feb 20 04:30:09 localhost python3.9[251859]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-sriov-agent/01-neutron.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:30:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56510 DF PROTO=TCP SPT=36564 DPT=9102 SEQ=3426407436 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B063D4D0000000001030307) Feb 20 04:30:10 localhost python3.9[251945]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-sriov-agent/01-neutron.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771579809.0884902-99-263462187437054/.source.conf follow=False _original_basename=neutron.conf.j2 checksum=24e013b64eb8be4a13596c6ffccbd94df7442bd2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None 
attributes=None Feb 20 04:30:10 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 20 04:30:10 localhost systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully. Feb 20 04:30:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19705 DF PROTO=TCP SPT=43628 DPT=9102 SEQ=2249632114 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B06400E0000000001030307) Feb 20 04:30:10 localhost systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully. Feb 20 04:30:11 localhost python3.9[252053]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-sriov-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:30:11 localhost python3.9[252139]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-sriov-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771579810.45727-99-212300686756238/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 20 04:30:11 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. 
Feb 20 04:30:11 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 20 04:30:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56511 DF PROTO=TCP SPT=36564 DPT=9102 SEQ=3426407436 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B06454D0000000001030307) Feb 20 04:30:12 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 20 04:30:12 localhost python3.9[252247]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-sriov-agent/01-neutron-sriov-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:30:13 localhost python3.9[252333]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-sriov-agent/01-neutron-sriov-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771579811.9932506-99-47061482294335/.source.conf follow=False _original_basename=neutron-sriov-agent.conf.j2 checksum=a30cc6270aacd7b2ab48866c36b050e5657a7779 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 20 04:30:13 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 20 04:30:13 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. 
Feb 20 04:30:13 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 20 04:30:13 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 20 04:30:13 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 20 04:30:14 localhost nova_compute[229929]: 2026-02-20 09:30:14.233 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:30:14 localhost python3.9[252441]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-sriov-agent/10-neutron-sriov.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:30:14 localhost python3.9[252527]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-sriov-agent/10-neutron-sriov.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771579813.9699454-273-76706438643548/.source.conf _original_basename=10-neutron-sriov.conf follow=False checksum=13d630d090b626c2aab1085bca0daa7abb0cabfd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 20 04:30:15 localhost nova_compute[229929]: 2026-02-20 09:30:15.229 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:30:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1. Feb 20 04:30:15 localhost podman[252622]: 2026-02-20 09:30:15.467804763 +0000 UTC m=+0.102019394 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 20 04:30:15 localhost podman[252622]: 2026-02-20 09:30:15.480764632 +0000 UTC m=+0.114979263 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 
'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 20 04:30:15 localhost podman[252622]: unhealthy Feb 20 04:30:15 localhost python3.9[252641]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 20 04:30:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56512 DF PROTO=TCP SPT=36564 DPT=9102 SEQ=3426407436 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B06550E0000000001030307) Feb 20 04:30:16 localhost systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully. Feb 20 04:30:16 localhost systemd[1]: var-lib-containers-storage-overlay-341ab3ab9b7393cc993bae384b639c3b914f3b8ebf8717f3f9f4cc78c6224645-merged.mount: Deactivated successfully. 
Feb 20 04:30:16 localhost python3.9[252769]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 20 04:30:16 localhost systemd[1]: var-lib-containers-storage-overlay-341ab3ab9b7393cc993bae384b639c3b914f3b8ebf8717f3f9f4cc78c6224645-merged.mount: Deactivated successfully. Feb 20 04:30:16 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Main process exited, code=exited, status=1/FAILURE Feb 20 04:30:16 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Failed with result 'exit-code'. Feb 20 04:30:17 localhost python3.9[252879]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:30:17 localhost nova_compute[229929]: 2026-02-20 09:30:17.232 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:30:17 localhost nova_compute[229929]: 2026-02-20 09:30:17.234 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:30:17 localhost nova_compute[229929]: 2026-02-20 09:30:17.234 229933 DEBUG nova.compute.manager [None 
req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 20 04:30:17 localhost systemd[1]: var-lib-containers-storage-overlay-e0c81d46f937f1f84faf68fb71f862e4bd868921a7d16384d44790308d98719f-merged.mount: Deactivated successfully. Feb 20 04:30:17 localhost systemd[1]: var-lib-containers-storage-overlay-8194fb55613f783d5a49e05926ed565eee5321a07b6e20f485946b5c4f31cab4-merged.mount: Deactivated successfully. Feb 20 04:30:17 localhost python3.9[252936]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 20 04:30:17 localhost systemd[1]: var-lib-containers-storage-overlay-8194fb55613f783d5a49e05926ed565eee5321a07b6e20f485946b5c4f31cab4-merged.mount: Deactivated successfully. Feb 20 04:30:17 localhost systemd[1]: var-lib-containers-storage-overlay-e0c81d46f937f1f84faf68fb71f862e4bd868921a7d16384d44790308d98719f-merged.mount: Deactivated successfully. 
Feb 20 04:30:18 localhost nova_compute[229929]: 2026-02-20 09:30:18.232 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:30:18 localhost nova_compute[229929]: 2026-02-20 09:30:18.233 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:30:18 localhost python3.9[253046]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:30:18 localhost systemd[1]: var-lib-containers-storage-overlay-e0c81d46f937f1f84faf68fb71f862e4bd868921a7d16384d44790308d98719f-merged.mount: Deactivated successfully. Feb 20 04:30:19 localhost systemd[1]: var-lib-containers-storage-overlay-8194fb55613f783d5a49e05926ed565eee5321a07b6e20f485946b5c4f31cab4-merged.mount: Deactivated successfully. 
Feb 20 04:30:19 localhost nova_compute[229929]: 2026-02-20 09:30:19.228 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:30:19 localhost nova_compute[229929]: 2026-02-20 09:30:19.245 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:30:19 localhost nova_compute[229929]: 2026-02-20 09:30:19.245 229933 DEBUG nova.compute.manager [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 20 04:30:19 localhost nova_compute[229929]: 2026-02-20 09:30:19.245 229933 DEBUG nova.compute.manager [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 20 04:30:19 localhost nova_compute[229929]: 2026-02-20 09:30:19.255 229933 DEBUG nova.compute.manager [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 20 04:30:19 localhost nova_compute[229929]: 2026-02-20 09:30:19.255 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:30:19 localhost nova_compute[229929]: 2026-02-20 09:30:19.255 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:30:19 localhost nova_compute[229929]: 2026-02-20 09:30:19.272 229933 DEBUG oslo_concurrency.lockutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:30:19 localhost nova_compute[229929]: 2026-02-20 09:30:19.273 229933 DEBUG oslo_concurrency.lockutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:30:19 localhost nova_compute[229929]: 2026-02-20 09:30:19.273 229933 DEBUG oslo_concurrency.lockutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:30:19 localhost nova_compute[229929]: 2026-02-20 09:30:19.273 229933 DEBUG nova.compute.resource_tracker [None 
req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Auditing locally available compute resources for np0005625202.localdomain (node: np0005625202.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 20 04:30:19 localhost nova_compute[229929]: 2026-02-20 09:30:19.273 229933 DEBUG oslo_concurrency.processutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:30:19 localhost python3.9[253103]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 20 04:30:19 localhost nova_compute[229929]: 2026-02-20 09:30:19.725 229933 DEBUG oslo_concurrency.processutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:30:19 localhost python3.9[253235]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:30:19 localhost 
nova_compute[229929]: 2026-02-20 09:30:19.973 229933 WARNING nova.virt.libvirt.driver [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 20 04:30:19 localhost nova_compute[229929]: 2026-02-20 09:30:19.975 229933 DEBUG nova.compute.resource_tracker [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Hypervisor/Node resource view: name=np0005625202.localdomain free_ram=13142MB free_disk=41.83720779418945GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", 
"address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 20 04:30:19 localhost nova_compute[229929]: 2026-02-20 09:30:19.975 229933 DEBUG oslo_concurrency.lockutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:30:19 localhost nova_compute[229929]: 2026-02-20 09:30:19.976 229933 DEBUG oslo_concurrency.lockutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:30:20 localhost nova_compute[229929]: 2026-02-20 09:30:20.041 229933 DEBUG nova.compute.resource_tracker [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 20 04:30:20 localhost nova_compute[229929]: 2026-02-20 09:30:20.042 229933 DEBUG nova.compute.resource_tracker [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Final resource view: name=np0005625202.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 
pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 20 04:30:20 localhost nova_compute[229929]: 2026-02-20 09:30:20.057 229933 DEBUG oslo_concurrency.processutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:30:20 localhost nova_compute[229929]: 2026-02-20 09:30:20.494 229933 DEBUG oslo_concurrency.processutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:30:20 localhost nova_compute[229929]: 2026-02-20 09:30:20.499 229933 DEBUG nova.compute.provider_tree [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 20 04:30:20 localhost nova_compute[229929]: 2026-02-20 09:30:20.511 229933 DEBUG nova.scheduler.client.report [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Inventory has not changed for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 20 04:30:20 localhost nova_compute[229929]: 2026-02-20 09:30:20.513 229933 DEBUG 
nova.compute.resource_tracker [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Compute_service record updated for np0005625202.localdomain:np0005625202.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 20 04:30:20 localhost nova_compute[229929]: 2026-02-20 09:30:20.513 229933 DEBUG oslo_concurrency.lockutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.537s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:30:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e. Feb 20 04:30:21 localhost systemd[1]: var-lib-containers-storage-overlay-4e4217686394af7a9122b2b81585c3ad5207fe018f230f20d139fff3e54ac3cc-merged.mount: Deactivated successfully. Feb 20 04:30:21 localhost podman[253367]: 2026-02-20 09:30:21.226796981 +0000 UTC m=+0.088102187 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 20 04:30:21 localhost podman[253367]: 2026-02-20 09:30:21.234929393 +0000 UTC m=+0.096234579 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', 
'/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 20 04:30:21 localhost systemd[1]: var-lib-containers-storage-overlay-3105551fde90ad87a79816e708b2cc4b7af2f50432ce26b439bbd7707bc89976-merged.mount: Deactivated successfully. Feb 20 04:30:21 localhost sshd[253389]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:30:21 localhost python3.9[253368]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:30:21 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully. Feb 20 04:30:21 localhost python3.9[253447]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:30:22 localhost python3.9[253557]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:30:22 localhost python3.9[253614]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset 
force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:30:23 localhost systemd[1]: var-lib-containers-storage-overlay-33f73751efe606c7233470249b676223e1b26b870cc49c3dbfbe2c7691e9f3fe-merged.mount: Deactivated successfully. Feb 20 04:30:23 localhost systemd[1]: var-lib-containers-storage-overlay-4e4217686394af7a9122b2b81585c3ad5207fe018f230f20d139fff3e54ac3cc-merged.mount: Deactivated successfully. Feb 20 04:30:23 localhost systemd[1]: var-lib-containers-storage-overlay-4e4217686394af7a9122b2b81585c3ad5207fe018f230f20d139fff3e54ac3cc-merged.mount: Deactivated successfully. Feb 20 04:30:23 localhost python3.9[253724]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 04:30:23 localhost systemd[1]: Reloading. Feb 20 04:30:24 localhost systemd-sysv-generator[253753]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 04:30:24 localhost systemd-rc-local-generator[253747]: /etc/rc.d/rc.local is not marked executable, skipping. 
Feb 20 04:30:24 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:30:24 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 20 04:30:24 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:30:24 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:30:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 04:30:24 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 20 04:30:24 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:30:24 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:30:24 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:30:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56513 DF PROTO=TCP SPT=36564 DPT=9102 SEQ=3426407436 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B06760E0000000001030307) Feb 20 04:30:25 localhost systemd[1]: var-lib-containers-storage-overlay-1d7b7d3208029afb8b179e48c365354efe7c39d41194e42a7d13168820ab51ad-merged.mount: Deactivated successfully. 
Feb 20 04:30:25 localhost python3.9[253871]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:30:25 localhost systemd[1]: var-lib-containers-storage-overlay-33f73751efe606c7233470249b676223e1b26b870cc49c3dbfbe2c7691e9f3fe-merged.mount: Deactivated successfully. Feb 20 04:30:25 localhost systemd[1]: var-lib-containers-storage-overlay-33f73751efe606c7233470249b676223e1b26b870cc49c3dbfbe2c7691e9f3fe-merged.mount: Deactivated successfully. Feb 20 04:30:25 localhost python3.9[253928]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:30:26 localhost systemd[1]: var-lib-containers-storage-overlay-1ad843ea4b31b05bcf49ccd6faa74bd0d6976ffabe60466fd78caf7ec41bf4ac-merged.mount: Deactivated successfully. Feb 20 04:30:26 localhost python3.9[254038]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:30:26 localhost systemd[1]: var-lib-containers-storage-overlay-1d7b7d3208029afb8b179e48c365354efe7c39d41194e42a7d13168820ab51ad-merged.mount: Deactivated successfully. Feb 20 04:30:27 localhost systemd[1]: var-lib-containers-storage-overlay-57c9a356b8a6d9095c1e6bfd1bb5d3b87c9d1b944c2c5d8a1da6e61dd690c595-merged.mount: Deactivated successfully. 
Feb 20 04:30:27 localhost systemd[1]: var-lib-containers-storage-overlay-1ad843ea4b31b05bcf49ccd6faa74bd0d6976ffabe60466fd78caf7ec41bf4ac-merged.mount: Deactivated successfully. Feb 20 04:30:27 localhost python3.9[254095]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:30:27 localhost systemd[1]: var-lib-containers-storage-overlay-57c9a356b8a6d9095c1e6bfd1bb5d3b87c9d1b944c2c5d8a1da6e61dd690c595-merged.mount: Deactivated successfully. Feb 20 04:30:27 localhost systemd[1]: var-lib-containers-storage-overlay-57c9a356b8a6d9095c1e6bfd1bb5d3b87c9d1b944c2c5d8a1da6e61dd690c595-merged.mount: Deactivated successfully. Feb 20 04:30:28 localhost python3.9[254205]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 04:30:28 localhost openstack_network_exporter[243776]: ERROR 09:30:28 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 20 04:30:28 localhost openstack_network_exporter[243776]: Feb 20 04:30:28 localhost openstack_network_exporter[243776]: ERROR 09:30:28 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 20 04:30:28 localhost openstack_network_exporter[243776]: Feb 20 04:30:28 localhost systemd[1]: Reloading. Feb 20 04:30:28 localhost systemd-rc-local-generator[254232]: /etc/rc.d/rc.local is not marked executable, skipping. 
Feb 20 04:30:28 localhost systemd-sysv-generator[254235]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 04:30:28 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:30:28 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 20 04:30:28 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:30:28 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:30:28 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 04:30:28 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 20 04:30:28 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:30:28 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:30:28 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:30:28 localhost systemd[1]: Starting Create netns directory... Feb 20 04:30:28 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Feb 20 04:30:28 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Feb 20 04:30:28 localhost systemd[1]: Finished Create netns directory. 
Feb 20 04:30:29 localhost python3.9[254356]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:30:29 localhost systemd[1]: var-lib-containers-storage-overlay-3105551fde90ad87a79816e708b2cc4b7af2f50432ce26b439bbd7707bc89976-merged.mount: Deactivated successfully. Feb 20 04:30:30 localhost python3.9[254466]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 20 04:30:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217. Feb 20 04:30:31 localhost systemd[1]: tmp-crun.ykRYfW.mount: Deactivated successfully. 
Feb 20 04:30:31 localhost python3.9[254576]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/neutron_sriov_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:30:31 localhost podman[254577]: 2026-02-20 09:30:31.472790877 +0000 UTC m=+0.104519986 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:30:31 localhost podman[254577]: 2026-02-20 09:30:31.512767854 +0000 UTC m=+0.144496993 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:30:32 localhost systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully. Feb 20 04:30:32 localhost systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully. Feb 20 04:30:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c. Feb 20 04:30:32 localhost python3.9[254681]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/neutron_sriov_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1771579831.000184-708-237913741398982/.source.json _original_basename=.32l6neh4 follow=False checksum=a32073fdba4733b9ffe872cfb91708eff83a585a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:30:32 localhost systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully. Feb 20 04:30:32 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully. 
Feb 20 04:30:32 localhost podman[254682]: 2026-02-20 09:30:32.814079279 +0000 UTC m=+0.213595038 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, managed_by=edpm_ansible, release=1770267347, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, org.opencontainers.image.created=2026-02-05T04:57:10Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, 
name=ubi9/ubi-minimal, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, version=9.7, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=openstack_network_exporter, io.buildah.version=1.33.7, build-date=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Feb 20 04:30:32 localhost podman[254682]: 2026-02-20 09:30:32.830091785 +0000 UTC m=+0.229607554 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, org.opencontainers.image.created=2026-02-05T04:57:10Z, version=9.7, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, name=ubi9/ubi-minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, container_name=openstack_network_exporter, distribution-scope=public, com.redhat.component=ubi9-minimal-container, 
build-date=2026-02-05T04:57:10Z, release=1770267347) Feb 20 04:30:33 localhost python3.9[254809]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/neutron_sriov_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:30:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 04:30:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. Feb 20 04:30:34 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 20 04:30:34 localhost systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully. Feb 20 04:30:35 localhost systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully. Feb 20 04:30:35 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. 
Feb 20 04:30:35 localhost podman[254968]: 2026-02-20 09:30:35.172670228 +0000 UTC m=+0.809395151 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent) Feb 20 04:30:35 localhost 
podman[254968]: 2026-02-20 09:30:35.205911813 +0000 UTC m=+0.842636796 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent) Feb 20 04:30:35 localhost podman[254967]: 2026-02-20 09:30:35.219751797 +0000 UTC 
m=+0.859214418 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller) Feb 20 04:30:35 localhost podman[254967]: 2026-02-20 09:30:35.269875794 +0000 UTC m=+0.909338435 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, 
config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Feb 20 04:30:35 localhost python3.9[255156]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/neutron_sriov_agent config_pattern=*.json debug=False Feb 20 04:30:36 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 20 04:30:36 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 20 04:30:36 localhost systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully. Feb 20 04:30:36 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. 
Feb 20 04:30:36 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. Feb 20 04:30:36 localhost python3.9[255267]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack Feb 20 04:30:37 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 20 04:30:37 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 20 04:30:37 localhost systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully. Feb 20 04:30:37 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. Feb 20 04:30:37 localhost systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully. 
Feb 20 04:30:37 localhost python3[255377]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/neutron_sriov_agent config_id=neutron_sriov_agent config_overrides={} config_patterns=*.json containers=['neutron_sriov_agent'] log_base_path=/var/log/containers/stdouts debug=False Feb 20 04:30:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55194 DF PROTO=TCP SPT=47908 DPT=9102 SEQ=481692295 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B06AE6E0000000001030307) Feb 20 04:30:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55195 DF PROTO=TCP SPT=47908 DPT=9102 SEQ=481692295 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B06B28E0000000001030307) Feb 20 04:30:40 localhost systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully. Feb 20 04:30:40 localhost systemd[1]: var-lib-containers-storage-overlay-cf0641ad9d6432e8918b929f405ec6680a89f0297b971ef3a2ed74fd2cbbe9db-merged.mount: Deactivated successfully. 
Feb 20 04:30:40 localhost podman[241347]: @ - - [20/Feb/2026:09:26:01 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 141014 "" "Go-http-client/1.1" Feb 20 04:30:40 localhost podman_exporter[241335]: ts=2026-02-20T09:30:40.755Z caller=exporter.go:96 level=info msg="Listening on" address=:9882 Feb 20 04:30:40 localhost podman_exporter[241335]: ts=2026-02-20T09:30:40.756Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882 Feb 20 04:30:40 localhost podman_exporter[241335]: ts=2026-02-20T09:30:40.756Z caller=tls_config.go:316 level=info msg="TLS is disabled." http2=false address=[::]:9882 Feb 20 04:30:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56514 DF PROTO=TCP SPT=36564 DPT=9102 SEQ=3426407436 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B06B60E0000000001030307) Feb 20 04:30:40 localhost podman[255412]: Feb 20 04:30:40 localhost podman[255412]: 2026-02-20 09:30:40.990479629 +0000 UTC m=+0.094832820 container create 95f2db5d3e0ed0ffb77af294feba81413f23fe2606dda40cf0066e2bfdd75616 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, container_name=neutron_sriov_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_id=neutron_sriov_agent, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-fae09e3faa7c6f34af841badd92fc05c71e712fad2532dfba4f12b647c9062fd'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/openstack/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:30:40 localhost podman[255412]: 2026-02-20 09:30:40.947433404 +0000 UTC m=+0.051786605 image pull quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified Feb 20 04:30:40 localhost python3[255377]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name neutron_sriov_agent --conmon-pidfile /run/neutron_sriov_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-fae09e3faa7c6f34af841badd92fc05c71e712fad2532dfba4f12b647c9062fd --label config_id=neutron_sriov_agent --label container_name=neutron_sriov_agent --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-fae09e3faa7c6f34af841badd92fc05c71e712fad2532dfba4f12b647c9062fd'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/openstack/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user neutron --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/openstack/neutron-sriov-agent:/etc/neutron.conf.d:z --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified Feb 20 04:30:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55196 DF PROTO=TCP SPT=47908 DPT=9102 SEQ=481692295 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B06BA8D0000000001030307) Feb 20 04:30:42 localhost python3.9[255558]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 20 04:30:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19706 DF PROTO=TCP SPT=43628 DPT=9102 SEQ=2249632114 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B06BE0D0000000001030307) Feb 20 04:30:43 localhost python3.9[255670]: ansible-file Invoked with path=/etc/systemd/system/edpm_neutron_sriov_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None 
seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:30:44 localhost python3.9[255725]: ansible-stat Invoked with path=/etc/systemd/system/edpm_neutron_sriov_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 20 04:30:44 localhost python3.9[255834]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771579844.157133-942-229158720051905/source dest=/etc/systemd/system/edpm_neutron_sriov_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:30:45 localhost python3.9[255889]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Feb 20 04:30:45 localhost systemd[1]: Reloading. Feb 20 04:30:45 localhost systemd-sysv-generator[255919]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 04:30:45 localhost systemd-rc-local-generator[255916]: /etc/rc.d/rc.local is not marked executable, skipping. 
Feb 20 04:30:45 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:30:45 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 20 04:30:45 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:30:45 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:30:45 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 04:30:45 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 20 04:30:45 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:30:45 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:30:45 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:30:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55197 DF PROTO=TCP SPT=47908 DPT=9102 SEQ=481692295 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B06CA4D0000000001030307) Feb 20 04:30:46 localhost podman[241347]: time="2026-02-20T09:30:46Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 20 04:30:46 localhost podman[241347]: @ - - [20/Feb/2026:09:30:46 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false 
HTTP/1.1" 200 144998 "" "Go-http-client/1.1" Feb 20 04:30:46 localhost podman[241347]: @ - - [20/Feb/2026:09:30:46 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 15749 "" "Go-http-client/1.1" Feb 20 04:30:46 localhost python3.9[255979]: ansible-systemd Invoked with state=restarted name=edpm_neutron_sriov_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 04:30:46 localhost systemd[1]: Reloading. Feb 20 04:30:46 localhost systemd-rc-local-generator[256009]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 04:30:46 localhost systemd-sysv-generator[256013]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 04:30:46 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:30:46 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 20 04:30:46 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:30:46 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:30:46 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 20 04:30:46 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 20 04:30:46 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:30:46 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:30:46 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:30:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1. Feb 20 04:30:46 localhost systemd[1]: Starting neutron_sriov_agent container... Feb 20 04:30:46 localhost systemd[1]: tmp-crun.u5XVze.mount: Deactivated successfully. Feb 20 04:30:46 localhost podman[256022]: 2026-02-20 09:30:46.92465029 +0000 UTC m=+0.066450561 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 20 04:30:46 localhost 
podman[256022]: 2026-02-20 09:30:46.959825742 +0000 UTC m=+0.101626043 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 20 04:30:46 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully. Feb 20 04:30:47 localhost systemd[1]: Started libcrun container. 
Feb 20 04:30:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb0e13aeacc3d708e4ade664fe6cb98ac4ee926541a7f78d1e83ad854752649e/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff) Feb 20 04:30:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb0e13aeacc3d708e4ade664fe6cb98ac4ee926541a7f78d1e83ad854752649e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 20 04:30:47 localhost podman[256035]: 2026-02-20 09:30:47.013315233 +0000 UTC m=+0.124054300 container init 95f2db5d3e0ed0ffb77af294feba81413f23fe2606dda40cf0066e2bfdd75616 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-fae09e3faa7c6f34af841badd92fc05c71e712fad2532dfba4f12b647c9062fd'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/openstack/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, container_name=neutron_sriov_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=neutron_sriov_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Feb 20 04:30:47 localhost 
podman[256035]: 2026-02-20 09:30:47.022173075 +0000 UTC m=+0.132912142 container start 95f2db5d3e0ed0ffb77af294feba81413f23fe2606dda40cf0066e2bfdd75616 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_id=neutron_sriov_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-fae09e3faa7c6f34af841badd92fc05c71e712fad2532dfba4f12b647c9062fd'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/openstack/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, container_name=neutron_sriov_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true) Feb 20 04:30:47 localhost podman[256035]: neutron_sriov_agent Feb 20 04:30:47 localhost systemd[1]: Started neutron_sriov_agent container. 
Feb 20 04:30:47 localhost neutron_sriov_agent[256061]: + sudo -E kolla_set_configs Feb 20 04:30:47 localhost neutron_sriov_agent[256061]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Feb 20 04:30:47 localhost neutron_sriov_agent[256061]: INFO:__main__:Validating config file Feb 20 04:30:47 localhost neutron_sriov_agent[256061]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Feb 20 04:30:47 localhost neutron_sriov_agent[256061]: INFO:__main__:Copying service configuration files Feb 20 04:30:47 localhost neutron_sriov_agent[256061]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf Feb 20 04:30:47 localhost neutron_sriov_agent[256061]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf Feb 20 04:30:47 localhost neutron_sriov_agent[256061]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf Feb 20 04:30:47 localhost neutron_sriov_agent[256061]: INFO:__main__:Writing out command to execute Feb 20 04:30:47 localhost neutron_sriov_agent[256061]: INFO:__main__:Setting permission for /var/lib/neutron Feb 20 04:30:47 localhost neutron_sriov_agent[256061]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts Feb 20 04:30:47 localhost neutron_sriov_agent[256061]: INFO:__main__:Setting permission for /var/lib/neutron/.cache Feb 20 04:30:47 localhost neutron_sriov_agent[256061]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy Feb 20 04:30:47 localhost neutron_sriov_agent[256061]: INFO:__main__:Setting permission for /var/lib/neutron/external Feb 20 04:30:47 localhost neutron_sriov_agent[256061]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper Feb 20 04:30:47 localhost neutron_sriov_agent[256061]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy Feb 20 04:30:47 localhost neutron_sriov_agent[256061]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill Feb 20 04:30:47 
localhost neutron_sriov_agent[256061]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints Feb 20 04:30:47 localhost neutron_sriov_agent[256061]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/b9146dd2a0dc3e0bc3fee7bb1b53fa22a55af280b3a177d7a47b63f92e7ebd29 Feb 20 04:30:47 localhost neutron_sriov_agent[256061]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids Feb 20 04:30:47 localhost neutron_sriov_agent[256061]: ++ cat /run_command Feb 20 04:30:47 localhost neutron_sriov_agent[256061]: + CMD=/usr/bin/neutron-sriov-nic-agent Feb 20 04:30:47 localhost neutron_sriov_agent[256061]: + ARGS= Feb 20 04:30:47 localhost neutron_sriov_agent[256061]: + sudo kolla_copy_cacerts Feb 20 04:30:47 localhost neutron_sriov_agent[256061]: + [[ ! -n '' ]] Feb 20 04:30:47 localhost neutron_sriov_agent[256061]: + . kolla_extend_start Feb 20 04:30:47 localhost neutron_sriov_agent[256061]: Running command: '/usr/bin/neutron-sriov-nic-agent' Feb 20 04:30:47 localhost neutron_sriov_agent[256061]: + echo 'Running command: '\''/usr/bin/neutron-sriov-nic-agent'\''' Feb 20 04:30:47 localhost neutron_sriov_agent[256061]: + umask 0022 Feb 20 04:30:47 localhost neutron_sriov_agent[256061]: + exec /usr/bin/neutron-sriov-nic-agent Feb 20 04:30:47 localhost python3.9[256183]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml Feb 20 04:30:48 localhost neutron_sriov_agent[256061]: 2026-02-20 09:30:48.909 2 INFO neutron.common.config [-] Logging enabled!#033[00m Feb 20 04:30:48 localhost neutron_sriov_agent[256061]: 2026-02-20 09:30:48.910 2 INFO neutron.common.config [-] /usr/bin/neutron-sriov-nic-agent version 22.2.2.dev44#033[00m Feb 20 04:30:48 localhost neutron_sriov_agent[256061]: 2026-02-20 09:30:48.910 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Physical Devices mappings: {'dummy_sriov_net': ['dummy-dev']}#033[00m Feb 20 04:30:48 localhost 
neutron_sriov_agent[256061]: 2026-02-20 09:30:48.910 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Exclude Devices: {}#033[00m Feb 20 04:30:48 localhost neutron_sriov_agent[256061]: 2026-02-20 09:30:48.910 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider bandwidths: {}#033[00m Feb 20 04:30:48 localhost neutron_sriov_agent[256061]: 2026-02-20 09:30:48.910 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider inventory defaults: {'allocation_ratio': 1.0, 'min_unit': 1, 'step_size': 1, 'reserved': 0}#033[00m Feb 20 04:30:48 localhost neutron_sriov_agent[256061]: 2026-02-20 09:30:48.911 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider hypervisors: {'dummy-dev': 'np0005625202.localdomain'}#033[00m Feb 20 04:30:48 localhost neutron_sriov_agent[256061]: 2026-02-20 09:30:48.911 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-f71cd0d7-a6cf-4b0d-810a-48c7ac23f535 - - - - - -] RPC agent_id: nic-switch-agent.np0005625202.localdomain#033[00m Feb 20 04:30:48 localhost neutron_sriov_agent[256061]: 2026-02-20 09:30:48.916 2 INFO neutron.agent.agent_extensions_manager [None req-f71cd0d7-a6cf-4b0d-810a-48c7ac23f535 - - - - - -] Loaded agent extensions: ['qos']#033[00m Feb 20 04:30:48 localhost neutron_sriov_agent[256061]: 2026-02-20 09:30:48.917 2 INFO neutron.agent.agent_extensions_manager [None req-f71cd0d7-a6cf-4b0d-810a-48c7ac23f535 - - - - - -] Initializing agent extension 'qos'#033[00m Feb 20 04:30:49 localhost python3.9[256294]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:30:49 localhost neutron_sriov_agent[256061]: 2026-02-20 09:30:49.217 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None 
req-f71cd0d7-a6cf-4b0d-810a-48c7ac23f535 - - - - - -] Agent initialized successfully, now running... #033[00m Feb 20 04:30:49 localhost neutron_sriov_agent[256061]: 2026-02-20 09:30:49.218 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-f71cd0d7-a6cf-4b0d-810a-48c7ac23f535 - - - - - -] SRIOV NIC Agent RPC Daemon Started!#033[00m Feb 20 04:30:49 localhost neutron_sriov_agent[256061]: 2026-02-20 09:30:49.219 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-f71cd0d7-a6cf-4b0d-810a-48c7ac23f535 - - - - - -] Agent out of sync with plugin!#033[00m Feb 20 04:30:49 localhost python3.9[256384]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771579848.6611013-1077-147373155164905/.source.yaml _original_basename=.jauqbbfo follow=False checksum=9a7aca9285be233ff868b04cb9ff99cde755c904 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:30:50 localhost python3.9[256494]: ansible-ansible.builtin.systemd Invoked with name=edpm_neutron_sriov_agent.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 20 04:30:50 localhost systemd[1]: Stopping neutron_sriov_agent container... Feb 20 04:30:50 localhost systemd[1]: tmp-crun.Z5hmtF.mount: Deactivated successfully. Feb 20 04:30:50 localhost sshd[256510]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:30:50 localhost systemd[1]: libpod-95f2db5d3e0ed0ffb77af294feba81413f23fe2606dda40cf0066e2bfdd75616.scope: Deactivated successfully. Feb 20 04:30:50 localhost systemd[1]: libpod-95f2db5d3e0ed0ffb77af294feba81413f23fe2606dda40cf0066e2bfdd75616.scope: Consumed 1.942s CPU time. 
Feb 20 04:30:50 localhost podman[256498]: 2026-02-20 09:30:50.882828101 +0000 UTC m=+0.096511816 container died 95f2db5d3e0ed0ffb77af294feba81413f23fe2606dda40cf0066e2bfdd75616 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-fae09e3faa7c6f34af841badd92fc05c71e712fad2532dfba4f12b647c9062fd'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/openstack/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, config_id=neutron_sriov_agent, container_name=neutron_sriov_agent) Feb 20 04:30:50 localhost systemd[1]: tmp-crun.aSo6kC.mount: Deactivated successfully. Feb 20 04:30:50 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-95f2db5d3e0ed0ffb77af294feba81413f23fe2606dda40cf0066e2bfdd75616-userdata-shm.mount: Deactivated successfully. 
Feb 20 04:30:50 localhost podman[256498]: 2026-02-20 09:30:50.945349941 +0000 UTC m=+0.159033636 container cleanup 95f2db5d3e0ed0ffb77af294feba81413f23fe2606dda40cf0066e2bfdd75616 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-fae09e3faa7c6f34af841badd92fc05c71e712fad2532dfba4f12b647c9062fd'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/openstack/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=neutron_sriov_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_id=neutron_sriov_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:30:50 localhost podman[256498]: neutron_sriov_agent Feb 20 04:30:51 localhost podman[256525]: 2026-02-20 09:30:51.033224931 +0000 UTC m=+0.058810025 container cleanup 95f2db5d3e0ed0ffb77af294feba81413f23fe2606dda40cf0066e2bfdd75616 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, config_id=neutron_sriov_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
container_name=neutron_sriov_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-fae09e3faa7c6f34af841badd92fc05c71e712fad2532dfba4f12b647c9062fd'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/openstack/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:30:51 localhost podman[256525]: neutron_sriov_agent Feb 20 04:30:51 localhost systemd[1]: edpm_neutron_sriov_agent.service: Deactivated successfully. Feb 20 04:30:51 localhost systemd[1]: Stopped neutron_sriov_agent container. Feb 20 04:30:51 localhost systemd[1]: Starting neutron_sriov_agent container... Feb 20 04:30:51 localhost systemd[1]: Started libcrun container. 
Feb 20 04:30:51 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb0e13aeacc3d708e4ade664fe6cb98ac4ee926541a7f78d1e83ad854752649e/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff) Feb 20 04:30:51 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb0e13aeacc3d708e4ade664fe6cb98ac4ee926541a7f78d1e83ad854752649e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 20 04:30:51 localhost podman[256537]: 2026-02-20 09:30:51.175261632 +0000 UTC m=+0.114557100 container init 95f2db5d3e0ed0ffb77af294feba81413f23fe2606dda40cf0066e2bfdd75616 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=neutron_sriov_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=neutron_sriov_agent, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-fae09e3faa7c6f34af841badd92fc05c71e712fad2532dfba4f12b647c9062fd'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/openstack/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127) Feb 20 04:30:51 localhost 
podman[256537]: 2026-02-20 09:30:51.184301239 +0000 UTC m=+0.123596707 container start 95f2db5d3e0ed0ffb77af294feba81413f23fe2606dda40cf0066e2bfdd75616 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=neutron_sriov_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, container_name=neutron_sriov_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-fae09e3faa7c6f34af841badd92fc05c71e712fad2532dfba4f12b647c9062fd'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/openstack/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}) Feb 20 04:30:51 localhost podman[256537]: neutron_sriov_agent Feb 20 04:30:51 localhost neutron_sriov_agent[256551]: + sudo -E kolla_set_configs Feb 20 04:30:51 localhost systemd[1]: Started neutron_sriov_agent container. 
Feb 20 04:30:51 localhost neutron_sriov_agent[256551]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Feb 20 04:30:51 localhost neutron_sriov_agent[256551]: INFO:__main__:Validating config file Feb 20 04:30:51 localhost neutron_sriov_agent[256551]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Feb 20 04:30:51 localhost neutron_sriov_agent[256551]: INFO:__main__:Copying service configuration files Feb 20 04:30:51 localhost neutron_sriov_agent[256551]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf Feb 20 04:30:51 localhost neutron_sriov_agent[256551]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf Feb 20 04:30:51 localhost neutron_sriov_agent[256551]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf Feb 20 04:30:51 localhost neutron_sriov_agent[256551]: INFO:__main__:Writing out command to execute Feb 20 04:30:51 localhost neutron_sriov_agent[256551]: INFO:__main__:Setting permission for /var/lib/neutron Feb 20 04:30:51 localhost neutron_sriov_agent[256551]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts Feb 20 04:30:51 localhost neutron_sriov_agent[256551]: INFO:__main__:Setting permission for /var/lib/neutron/.cache Feb 20 04:30:51 localhost neutron_sriov_agent[256551]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy Feb 20 04:30:51 localhost neutron_sriov_agent[256551]: INFO:__main__:Setting permission for /var/lib/neutron/external Feb 20 04:30:51 localhost neutron_sriov_agent[256551]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper Feb 20 04:30:51 localhost neutron_sriov_agent[256551]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy Feb 20 04:30:51 localhost neutron_sriov_agent[256551]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill Feb 20 04:30:51 localhost neutron_sriov_agent[256551]: INFO:__main__:Setting permission for 
/var/lib/neutron/.cache/python-entrypoints Feb 20 04:30:51 localhost neutron_sriov_agent[256551]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/b9146dd2a0dc3e0bc3fee7bb1b53fa22a55af280b3a177d7a47b63f92e7ebd29 Feb 20 04:30:51 localhost neutron_sriov_agent[256551]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/d91d8a949a4b5272256c667b5094a15f5e397c6793efbfa4186752b765c6923b Feb 20 04:30:51 localhost neutron_sriov_agent[256551]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids Feb 20 04:30:51 localhost neutron_sriov_agent[256551]: ++ cat /run_command Feb 20 04:30:51 localhost neutron_sriov_agent[256551]: + CMD=/usr/bin/neutron-sriov-nic-agent Feb 20 04:30:51 localhost neutron_sriov_agent[256551]: + ARGS= Feb 20 04:30:51 localhost neutron_sriov_agent[256551]: + sudo kolla_copy_cacerts Feb 20 04:30:51 localhost neutron_sriov_agent[256551]: + [[ ! -n '' ]] Feb 20 04:30:51 localhost neutron_sriov_agent[256551]: + . kolla_extend_start Feb 20 04:30:51 localhost neutron_sriov_agent[256551]: + echo 'Running command: '\''/usr/bin/neutron-sriov-nic-agent'\''' Feb 20 04:30:51 localhost neutron_sriov_agent[256551]: Running command: '/usr/bin/neutron-sriov-nic-agent' Feb 20 04:30:51 localhost neutron_sriov_agent[256551]: + umask 0022 Feb 20 04:30:51 localhost neutron_sriov_agent[256551]: + exec /usr/bin/neutron-sriov-nic-agent Feb 20 04:30:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e. Feb 20 04:30:51 localhost systemd[1]: tmp-crun.G19nGV.mount: Deactivated successfully. 
Feb 20 04:30:51 localhost podman[256583]: 2026-02-20 09:30:51.948896064 +0000 UTC m=+0.090877077 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 20 04:30:51 localhost podman[256583]: 2026-02-20 09:30:51.981897423 +0000 UTC m=+0.123878436 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , 
managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 20 04:30:51 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully. Feb 20 04:30:52 localhost systemd[1]: session-57.scope: Deactivated successfully. Feb 20 04:30:52 localhost systemd[1]: session-57.scope: Consumed 24.320s CPU time. Feb 20 04:30:52 localhost systemd-logind[760]: Session 57 logged out. Waiting for processes to exit. Feb 20 04:30:52 localhost systemd-logind[760]: Removed session 57. 
Feb 20 04:30:52 localhost neutron_sriov_agent[256551]: 2026-02-20 09:30:52.921 2 INFO neutron.common.config [-] Logging enabled!#033[00m Feb 20 04:30:52 localhost neutron_sriov_agent[256551]: 2026-02-20 09:30:52.921 2 INFO neutron.common.config [-] /usr/bin/neutron-sriov-nic-agent version 22.2.2.dev44#033[00m Feb 20 04:30:52 localhost neutron_sriov_agent[256551]: 2026-02-20 09:30:52.922 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Physical Devices mappings: {'dummy_sriov_net': ['dummy-dev']}#033[00m Feb 20 04:30:52 localhost neutron_sriov_agent[256551]: 2026-02-20 09:30:52.922 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Exclude Devices: {}#033[00m Feb 20 04:30:52 localhost neutron_sriov_agent[256551]: 2026-02-20 09:30:52.922 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider bandwidths: {}#033[00m Feb 20 04:30:52 localhost neutron_sriov_agent[256551]: 2026-02-20 09:30:52.922 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider inventory defaults: {'allocation_ratio': 1.0, 'min_unit': 1, 'step_size': 1, 'reserved': 0}#033[00m Feb 20 04:30:52 localhost neutron_sriov_agent[256551]: 2026-02-20 09:30:52.922 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider hypervisors: {'dummy-dev': 'np0005625202.localdomain'}#033[00m Feb 20 04:30:52 localhost neutron_sriov_agent[256551]: 2026-02-20 09:30:52.923 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-44f14d05-b210-4779-a75c-80bdec28de1b - - - - - -] RPC agent_id: nic-switch-agent.np0005625202.localdomain#033[00m Feb 20 04:30:52 localhost neutron_sriov_agent[256551]: 2026-02-20 09:30:52.928 2 INFO neutron.agent.agent_extensions_manager [None req-44f14d05-b210-4779-a75c-80bdec28de1b - - - - - -] Loaded agent extensions: ['qos']#033[00m Feb 20 04:30:52 localhost neutron_sriov_agent[256551]: 2026-02-20 09:30:52.928 2 INFO 
neutron.agent.agent_extensions_manager [None req-44f14d05-b210-4779-a75c-80bdec28de1b - - - - - -] Initializing agent extension 'qos'#033[00m Feb 20 04:30:53 localhost neutron_sriov_agent[256551]: 2026-02-20 09:30:53.056 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-44f14d05-b210-4779-a75c-80bdec28de1b - - - - - -] Agent initialized successfully, now running... #033[00m Feb 20 04:30:53 localhost neutron_sriov_agent[256551]: 2026-02-20 09:30:53.056 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-44f14d05-b210-4779-a75c-80bdec28de1b - - - - - -] SRIOV NIC Agent RPC Daemon Started!#033[00m Feb 20 04:30:53 localhost neutron_sriov_agent[256551]: 2026-02-20 09:30:53.056 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-44f14d05-b210-4779-a75c-80bdec28de1b - - - - - -] Agent out of sync with plugin!#033[00m Feb 20 04:30:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55198 DF PROTO=TCP SPT=47908 DPT=9102 SEQ=481692295 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B06EA0D0000000001030307) Feb 20 04:30:57 localhost sshd[256607]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:30:57 localhost systemd-logind[760]: New session 58 of user zuul. Feb 20 04:30:57 localhost systemd[1]: Started Session 58 of User zuul. 
Feb 20 04:30:58 localhost openstack_network_exporter[243776]: ERROR 09:30:58 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 20 04:30:58 localhost openstack_network_exporter[243776]: Feb 20 04:30:58 localhost openstack_network_exporter[243776]: ERROR 09:30:58 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 20 04:30:58 localhost openstack_network_exporter[243776]: Feb 20 04:30:58 localhost python3.9[256718]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 20 04:30:59 localhost sshd[256740]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:30:59 localhost python3.9[256833]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Feb 20 04:31:00 localhost python3.9[256896]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Feb 20 04:31:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217. Feb 20 04:31:03 localhost systemd[1]: tmp-crun.K3JjT5.mount: Deactivated successfully. 
Feb 20 04:31:03 localhost podman[256899]: 2026-02-20 09:31:03.466433069 +0000 UTC m=+0.103464815 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true) Feb 20 04:31:03 localhost podman[256899]: 2026-02-20 09:31:03.476692141 +0000 UTC m=+0.113723897 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127) Feb 20 04:31:03 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully. Feb 20 04:31:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c. Feb 20 04:31:05 localhost podman[256974]: 2026-02-20 09:31:05.46447926 +0000 UTC m=+0.099758860 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, com.redhat.component=ubi9-minimal-container, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9/ubi-minimal, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a 
stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7, org.opencontainers.image.created=2026-02-05T04:57:10Z, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, config_id=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2026-02-05T04:57:10Z, io.openshift.expose-services=, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.buildah.version=1.33.7, release=1770267347) Feb 20 04:31:05 localhost podman[256974]: 2026-02-20 09:31:05.480671469 +0000 UTC m=+0.115951069 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.openshift.expose-services=, distribution-scope=public, container_name=openstack_network_exporter, release=1770267347, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, maintainer=Red Hat, Inc., build-date=2026-02-05T04:57:10Z, version=9.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, name=ubi9/ubi-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) Feb 20 04:31:05 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. 
Feb 20 04:31:05 localhost python3.9[257048]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Feb 20 04:31:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:31:05.898 161766 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:31:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:31:05.900 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:31:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:31:05.900 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:31:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 04:31:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. 
Feb 20 04:31:07 localhost podman[257052]: 2026-02-20 09:31:07.006844719 +0000 UTC m=+0.098964399 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Feb 20 04:31:07 localhost systemd[1]: tmp-crun.ni7zUi.mount: Deactivated successfully. 
Feb 20 04:31:07 localhost podman[257053]: 2026-02-20 09:31:07.088623701 +0000 UTC m=+0.175846429 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3) Feb 20 04:31:07 localhost 
podman[257053]: 2026-02-20 09:31:07.093312125 +0000 UTC m=+0.180534883 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Feb 20 04:31:07 localhost podman[257052]: 2026-02-20 09:31:07.093787938 +0000 UTC 
m=+0.185907617 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible) Feb 20 04:31:07 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. Feb 20 04:31:07 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. 
Feb 20 04:31:07 localhost python3.9[257204]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/container-startup-config setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 20 04:31:08 localhost python3.9[257314]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 20 04:31:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31399 DF PROTO=TCP SPT=51984 DPT=9102 SEQ=1003123831 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B07239D0000000001030307) Feb 20 04:31:09 localhost python3.9[257424]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/neutron-dhcp-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 20 04:31:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31400 DF PROTO=TCP SPT=51984 DPT=9102 SEQ=1003123831 
ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B07278E0000000001030307) Feb 20 04:31:10 localhost python3.9[257570]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 20 04:31:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55199 DF PROTO=TCP SPT=47908 DPT=9102 SEQ=481692295 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B072A0D0000000001030307) Feb 20 04:31:10 localhost python3.9[257713]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 20 04:31:11 localhost python3.9[257841]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ns-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 20 04:31:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 
TTL=62 ID=31401 DF PROTO=TCP SPT=51984 DPT=9102 SEQ=1003123831 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B072F8D0000000001030307) Feb 20 04:31:12 localhost python3.9[257951]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 20 04:31:12 localhost python3.9[258061]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/container-startup-config/neutron_dhcp_agent.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:31:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56515 DF PROTO=TCP SPT=36564 DPT=9102 SEQ=3426407436 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B07340D0000000001030307) Feb 20 04:31:13 localhost python3.9[258149]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/container-startup-config/neutron_dhcp_agent.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771579872.1920211-273-42335230256969/.source.yaml follow=False _original_basename=neutron_dhcp_agent.yaml.j2 checksum=472c5e922ae22c8bdcaef73d1ca73ce5597b440e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 20 04:31:14 localhost python3.9[258257]: ansible-ansible.legacy.stat Invoked with 
path=/var/lib/openstack/neutron-dhcp-agent/01-neutron.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:31:15 localhost python3.9[258343]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-dhcp-agent/01-neutron.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771579873.6728282-318-197614239882615/.source.conf follow=False _original_basename=neutron.conf.j2 checksum=24e013b64eb8be4a13596c6ffccbd94df7442bd2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 20 04:31:15 localhost python3.9[258451]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-dhcp-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:31:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31402 DF PROTO=TCP SPT=51984 DPT=9102 SEQ=1003123831 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B073F4D0000000001030307) Feb 20 04:31:16 localhost podman[241347]: time="2026-02-20T09:31:16Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 20 04:31:16 localhost podman[241347]: @ - - [20/Feb/2026:09:31:16 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 144997 "" "Go-http-client/1.1" Feb 20 04:31:16 localhost podman[241347]: @ - - [20/Feb/2026:09:31:16 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 15876 "" "Go-http-client/1.1" Feb 20 04:31:16 localhost 
python3.9[258537]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-dhcp-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771579875.3425126-318-241035409007588/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 20 04:31:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:31:17.257 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:31:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:31:17.258 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:31:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:31:17.258 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:31:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:31:17.258 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:31:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:31:17.259 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:31:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 
09:31:17.259 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:31:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:31:17.259 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:31:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:31:17.259 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:31:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:31:17.259 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:31:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:31:17.259 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:31:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:31:17.260 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:31:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:31:17.260 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:31:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:31:17.260 12 DEBUG ceilometer.polling.manager [-] Skip pollster 
network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:31:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:31:17.260 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:31:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:31:17.260 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:31:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:31:17.260 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:31:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:31:17.260 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:31:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:31:17.261 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:31:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:31:17.261 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:31:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:31:17.261 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this 
cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:31:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:31:17.261 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:31:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:31:17.261 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:31:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:31:17.261 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:31:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:31:17.262 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:31:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:31:17.262 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:31:17 localhost python3.9[258645]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-dhcp-agent/01-neutron-dhcp-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:31:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1. 
Feb 20 04:31:17 localhost systemd[1]: tmp-crun.gvI9tz.mount: Deactivated successfully. Feb 20 04:31:17 localhost podman[258657]: 2026-02-20 09:31:17.45364589 +0000 UTC m=+0.091050268 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 20 04:31:17 localhost podman[258657]: 2026-02-20 09:31:17.491781723 +0000 UTC m=+0.129186071 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 
'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 20 04:31:17 localhost nova_compute[229929]: 2026-02-20 09:31:17.490 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:31:17 localhost nova_compute[229929]: 2026-02-20 09:31:17.491 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:31:17 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully. 
Feb 20 04:31:17 localhost python3.9[258754]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-dhcp-agent/01-neutron-dhcp-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771579876.8196106-318-82222956538946/.source.conf follow=False _original_basename=neutron-dhcp-agent.conf.j2 checksum=a5af2c0a6af1e922595d50ca626b15c1a0f8d5b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 20 04:31:19 localhost python3.9[258862]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-dhcp-agent/10-neutron-dhcp.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:31:19 localhost nova_compute[229929]: 2026-02-20 09:31:19.232 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:31:19 localhost nova_compute[229929]: 2026-02-20 09:31:19.233 229933 DEBUG nova.compute.manager [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 20 04:31:19 localhost nova_compute[229929]: 2026-02-20 09:31:19.233 229933 DEBUG nova.compute.manager [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 20 04:31:19 localhost nova_compute[229929]: 2026-02-20 09:31:19.256 229933 DEBUG nova.compute.manager [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Didn't find any 
instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 20 04:31:19 localhost nova_compute[229929]: 2026-02-20 09:31:19.257 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:31:19 localhost nova_compute[229929]: 2026-02-20 09:31:19.258 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:31:19 localhost nova_compute[229929]: 2026-02-20 09:31:19.258 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:31:19 localhost nova_compute[229929]: 2026-02-20 09:31:19.259 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:31:19 localhost nova_compute[229929]: 2026-02-20 09:31:19.259 229933 DEBUG nova.compute.manager [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 20 04:31:19 localhost python3.9[258948]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-dhcp-agent/10-neutron-dhcp.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771579878.7209551-492-105458891373329/.source.conf _original_basename=10-neutron-dhcp.conf follow=False checksum=13d630d090b626c2aab1085bca0daa7abb0cabfd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 20 04:31:20 localhost nova_compute[229929]: 2026-02-20 09:31:20.233 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:31:20 localhost python3.9[259056]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/dhcp_agent_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:31:20 localhost python3.9[259142]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/dhcp_agent_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771579879.8350108-537-207261081517633/.source follow=False _original_basename=haproxy.j2 checksum=eddfecb822bb60e7241db0fd719c7552d2d25452 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 20 04:31:21 localhost nova_compute[229929]: 2026-02-20 09:31:21.232 229933 DEBUG oslo_service.periodic_task [None 
req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:31:21 localhost nova_compute[229929]: 2026-02-20 09:31:21.256 229933 DEBUG oslo_concurrency.lockutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:31:21 localhost nova_compute[229929]: 2026-02-20 09:31:21.257 229933 DEBUG oslo_concurrency.lockutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:31:21 localhost nova_compute[229929]: 2026-02-20 09:31:21.257 229933 DEBUG oslo_concurrency.lockutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:31:21 localhost nova_compute[229929]: 2026-02-20 09:31:21.258 229933 DEBUG nova.compute.resource_tracker [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Auditing locally available compute resources for np0005625202.localdomain (node: np0005625202.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 20 04:31:21 localhost nova_compute[229929]: 2026-02-20 09:31:21.258 229933 DEBUG oslo_concurrency.processutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute 
/usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:31:21 localhost python3.9[259250]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/dhcp_agent_dnsmasq_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:31:21 localhost nova_compute[229929]: 2026-02-20 09:31:21.717 229933 DEBUG oslo_concurrency.processutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:31:21 localhost python3.9[259356]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/dhcp_agent_dnsmasq_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771579880.98752-537-100986499045000/.source follow=False _original_basename=dnsmasq.j2 checksum=a6b8b2fb47e7419d250eaee9e3565b13fff8f42e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 20 04:31:21 localhost nova_compute[229929]: 2026-02-20 09:31:21.926 229933 WARNING nova.virt.libvirt.driver [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 20 04:31:21 localhost nova_compute[229929]: 2026-02-20 09:31:21.927 229933 DEBUG nova.compute.resource_tracker [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Hypervisor/Node resource view: name=np0005625202.localdomain free_ram=12921MB free_disk=41.83720779418945GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": 
"1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 20 04:31:21 localhost nova_compute[229929]: 2026-02-20 09:31:21.928 229933 DEBUG oslo_concurrency.lockutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:31:21 localhost nova_compute[229929]: 2026-02-20 09:31:21.928 229933 DEBUG oslo_concurrency.lockutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:31:22 localhost nova_compute[229929]: 2026-02-20 09:31:22.004 229933 DEBUG nova.compute.resource_tracker [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 20 04:31:22 localhost nova_compute[229929]: 2026-02-20 09:31:22.005 229933 DEBUG nova.compute.resource_tracker [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Final resource view: name=np0005625202.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 20 04:31:22 localhost nova_compute[229929]: 2026-02-20 09:31:22.039 229933 DEBUG 
oslo_concurrency.processutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:31:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e. Feb 20 04:31:22 localhost podman[259450]: 2026-02-20 09:31:22.444094387 +0000 UTC m=+0.085534552 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 20 
04:31:22 localhost podman[259450]: 2026-02-20 09:31:22.452371827 +0000 UTC m=+0.093812012 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 20 04:31:22 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully. 
Feb 20 04:31:22 localhost nova_compute[229929]: 2026-02-20 09:31:22.514 229933 DEBUG oslo_concurrency.processutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:31:22 localhost nova_compute[229929]: 2026-02-20 09:31:22.521 229933 DEBUG nova.compute.provider_tree [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 20 04:31:22 localhost nova_compute[229929]: 2026-02-20 09:31:22.535 229933 DEBUG nova.scheduler.client.report [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Inventory has not changed for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 20 04:31:22 localhost nova_compute[229929]: 2026-02-20 09:31:22.538 229933 DEBUG nova.compute.resource_tracker [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Compute_service record updated for np0005625202.localdomain:np0005625202.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 20 04:31:22 localhost nova_compute[229929]: 2026-02-20 09:31:22.538 229933 DEBUG oslo_concurrency.lockutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Lock "compute_resources" 
"released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:31:22 localhost python3.9[259508]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:31:23 localhost python3.9[259565]: ansible-ansible.legacy.file Invoked with mode=0755 setype=container_file_t dest=/var/lib/neutron/kill_scripts/haproxy-kill _original_basename=kill-script.j2 recurse=False state=file path=/var/lib/neutron/kill_scripts/haproxy-kill force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 20 04:31:23 localhost sshd[259605]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:31:23 localhost python3.9[259675]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/dnsmasq-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:31:24 localhost sshd[259692]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:31:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31403 DF PROTO=TCP SPT=51984 DPT=9102 SEQ=1003123831 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B07600D0000000001030307) Feb 20 04:31:24 localhost python3.9[259763]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/dnsmasq-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771579883.2280483-624-162618834615637/.source 
follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 20 04:31:25 localhost python3.9[259871]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 20 04:31:26 localhost python3.9[259983]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 20 04:31:27 localhost python3.9[260093]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:31:28 localhost python3.9[260150]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 20 04:31:28 localhost openstack_network_exporter[243776]: ERROR 09:31:28 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath 
Feb 20 04:31:28 localhost openstack_network_exporter[243776]: Feb 20 04:31:28 localhost openstack_network_exporter[243776]: ERROR 09:31:28 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 20 04:31:28 localhost openstack_network_exporter[243776]: Feb 20 04:31:28 localhost python3.9[260260]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:31:29 localhost python3.9[260317]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 20 04:31:29 localhost python3.9[260427]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:31:30 localhost sshd[260499]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:31:30 localhost python3.9[260539]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:31:30 localhost python3.9[260596]: ansible-ansible.legacy.file Invoked with 
group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:31:31 localhost python3.9[260706]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:31:31 localhost python3.9[260763]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:31:32 localhost python3.9[260873]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 04:31:32 localhost systemd[1]: Reloading. Feb 20 04:31:32 localhost systemd-rc-local-generator[260897]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 04:31:32 localhost systemd-sysv-generator[260902]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 04:31:32 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:31:32 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 20 04:31:32 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:31:32 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:31:32 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 04:31:32 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 20 04:31:32 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:31:32 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:31:32 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:31:33 localhost python3.9[261021]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:31:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217. Feb 20 04:31:34 localhost systemd[1]: tmp-crun.iOmJds.mount: Deactivated successfully. 
Feb 20 04:31:34 localhost podman[261078]: 2026-02-20 09:31:34.023689524 +0000 UTC m=+0.093227036 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260127, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible) Feb 20 04:31:34 localhost podman[261078]: 2026-02-20 09:31:34.036847213 +0000 UTC m=+0.106384775 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Feb 20 04:31:34 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully. Feb 20 04:31:34 localhost python3.9[261079]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:31:34 localhost python3.9[261207]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:31:35 localhost python3.9[261264]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:31:36 localhost sshd[261336]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:31:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c. Feb 20 04:31:36 localhost systemd[1]: tmp-crun.iLCz2B.mount: Deactivated successfully. 
Feb 20 04:31:36 localhost podman[261377]: 2026-02-20 09:31:36.29590205 +0000 UTC m=+0.088711926 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, name=ubi9/ubi-minimal, io.openshift.tags=minimal rhel9, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, maintainer=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.created=2026-02-05T04:57:10Z, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-05T04:57:10Z, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, config_id=openstack_network_exporter, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.7, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Feb 20 04:31:36 localhost podman[261377]: 2026-02-20 09:31:36.313810926 +0000 UTC m=+0.106620802 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, name=ubi9/ubi-minimal, config_id=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.openshift.tags=minimal rhel9, 
container_name=openstack_network_exporter, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.buildah.version=1.33.7, org.opencontainers.image.created=2026-02-05T04:57:10Z, release=1770267347, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.7, architecture=x86_64, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc.) Feb 20 04:31:36 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. Feb 20 04:31:36 localhost sshd[261397]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:31:36 localhost python3.9[261376]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 04:31:36 localhost systemd[1]: Reloading. Feb 20 04:31:36 localhost systemd-rc-local-generator[261421]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 04:31:36 localhost systemd-sysv-generator[261427]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 04:31:36 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:31:36 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 20 04:31:36 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:31:36 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:31:36 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 20 04:31:36 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 20 04:31:36 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:31:36 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:31:36 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:31:36 localhost systemd[1]: Starting Create netns directory... Feb 20 04:31:36 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Feb 20 04:31:36 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Feb 20 04:31:36 localhost systemd[1]: Finished Create netns directory. Feb 20 04:31:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 04:31:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. 
Feb 20 04:31:37 localhost podman[261440]: 2026-02-20 09:31:37.272327644 +0000 UTC m=+0.092987739 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:31:37 localhost podman[261441]: 2026-02-20 09:31:37.326148103 +0000 UTC m=+0.143526851 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack 
Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Feb 20 04:31:37 localhost podman[261440]: 2026-02-20 09:31:37.351876746 +0000 UTC m=+0.172536821 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:31:37 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. 
Feb 20 04:31:37 localhost podman[261441]: 2026-02-20 09:31:37.406834345 +0000 UTC m=+0.224213083 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true) Feb 20 04:31:37 localhost systemd[1]: 
eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. Feb 20 04:31:38 localhost python3.9[261591]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:31:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30383 DF PROTO=TCP SPT=58792 DPT=9102 SEQ=2627847181 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B0798CD0000000001030307) Feb 20 04:31:39 localhost python3.9[261701]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 20 04:31:39 localhost python3.9[261811]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/neutron_dhcp_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:31:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30384 DF PROTO=TCP SPT=58792 DPT=9102 SEQ=2627847181 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B079CCD0000000001030307) Feb 20 04:31:40 localhost python3.9[261899]: 
ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/neutron_dhcp_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1771579899.3202088-1092-37469703577167/.source.json _original_basename=.9182wev0 follow=False checksum=c62829c98c0f9e788d62f52aa71fba276cd98270 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:31:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31404 DF PROTO=TCP SPT=51984 DPT=9102 SEQ=1003123831 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B07A00D0000000001030307) Feb 20 04:31:40 localhost python3.9[262007]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/neutron_dhcp state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:31:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30385 DF PROTO=TCP SPT=58792 DPT=9102 SEQ=2627847181 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B07A4CD0000000001030307) Feb 20 04:31:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55200 DF PROTO=TCP SPT=47908 DPT=9102 SEQ=481692295 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A3B07A80D0000000001030307) Feb 20 04:31:43 localhost python3.9[262311]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/neutron_dhcp config_pattern=*.json debug=False Feb 20 04:31:44 localhost python3.9[262421]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack Feb 20 04:31:45 localhost python3[262531]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/neutron_dhcp config_id=neutron_dhcp config_overrides={} config_patterns=*.json containers=['neutron_dhcp_agent'] log_base_path=/var/log/containers/stdouts debug=False Feb 20 04:31:45 localhost podman[262571]: Feb 20 04:31:45 localhost podman[262571]: 2026-02-20 09:31:45.618876063 +0000 UTC m=+0.091342565 container create d141bb465dddebfd6ac5bf68b38c6da9889f58c5c6daa2b50021d05c6d36690e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, container_name=neutron_dhcp_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-3cda1ec72102a56bb4ad0af8d929784a95d02067deac702e246f88f07c92ba34'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/openstack/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=neutron_dhcp, io.buildah.version=1.41.3) Feb 20 04:31:45 localhost podman[262571]: 2026-02-20 09:31:45.5705323 +0000 UTC m=+0.042998852 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 20 04:31:45 localhost python3[262531]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name neutron_dhcp_agent --cgroupns=host --conmon-pidfile /run/neutron_dhcp_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-3cda1ec72102a56bb4ad0af8d929784a95d02067deac702e246f88f07c92ba34 --label config_id=neutron_dhcp --label container_name=neutron_dhcp_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-3cda1ec72102a56bb4ad0af8d929784a95d02067deac702e246f88f07c92ba34'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/openstack/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/netns:/run/netns:shared --volume /var/lib/openstack/neutron-dhcp-agent:/etc/neutron.conf.d:z --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 20 04:31:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30386 DF PROTO=TCP SPT=58792 DPT=9102 SEQ=2627847181 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B07B48D0000000001030307) Feb 20 04:31:46 localhost podman[241347]: time="2026-02-20T09:31:46Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 20 04:31:46 localhost podman[241347]: @ - - [20/Feb/2026:09:31:46 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 147346 "" "Go-http-client/1.1" Feb 20 04:31:46 localhost podman[241347]: @ - - [20/Feb/2026:09:31:46 +0000] "GET 
/v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16191 "" "Go-http-client/1.1" Feb 20 04:31:46 localhost python3.9[262718]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 20 04:31:47 localhost python3.9[262830]: ansible-file Invoked with path=/etc/systemd/system/edpm_neutron_dhcp_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:31:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1. Feb 20 04:31:48 localhost systemd[1]: tmp-crun.qxcNaa.mount: Deactivated successfully. 
Feb 20 04:31:48 localhost podman[262886]: 2026-02-20 09:31:48.206964707 +0000 UTC m=+0.078433053 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 20 04:31:48 localhost podman[262886]: 2026-02-20 09:31:48.217718473 +0000 UTC m=+0.089186759 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 20 04:31:48 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully. Feb 20 04:31:48 localhost python3.9[262885]: ansible-stat Invoked with path=/etc/systemd/system/edpm_neutron_dhcp_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 20 04:31:48 localhost python3.9[263017]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771579908.358919-1326-1337571128884/source dest=/etc/systemd/system/edpm_neutron_dhcp_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:31:49 localhost python3.9[263072]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Feb 20 04:31:49 localhost systemd[1]: Reloading. Feb 20 04:31:49 localhost systemd-rc-local-generator[263098]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 04:31:49 localhost systemd-sysv-generator[263103]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Feb 20 04:31:49 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:31:49 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 20 04:31:49 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:31:49 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:31:49 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 04:31:49 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 20 04:31:49 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:31:49 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:31:49 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:31:50 localhost python3.9[263163]: ansible-systemd Invoked with state=restarted name=edpm_neutron_dhcp_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 04:31:50 localhost systemd[1]: Reloading. Feb 20 04:31:50 localhost systemd-rc-local-generator[263188]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 04:31:50 localhost systemd-sysv-generator[263193]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 04:31:50 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:31:50 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 20 04:31:50 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:31:50 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:31:50 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 04:31:50 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 20 04:31:50 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:31:50 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:31:50 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:31:50 localhost systemd[1]: Starting neutron_dhcp_agent container... Feb 20 04:31:50 localhost systemd[1]: Started libcrun container. 
Feb 20 04:31:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/649f8291ea12fe57a6c5c2192bae3d76cd72cd06828ac0906b67d1038dab6c49/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff) Feb 20 04:31:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/649f8291ea12fe57a6c5c2192bae3d76cd72cd06828ac0906b67d1038dab6c49/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 20 04:31:50 localhost podman[263203]: 2026-02-20 09:31:50.975823401 +0000 UTC m=+0.109281513 container init d141bb465dddebfd6ac5bf68b38c6da9889f58c5c6daa2b50021d05c6d36690e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, config_id=neutron_dhcp, org.label-schema.build-date=20260127, tcib_managed=true, container_name=neutron_dhcp_agent, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-3cda1ec72102a56bb4ad0af8d929784a95d02067deac702e246f88f07c92ba34'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/openstack/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Feb 20 04:31:50 localhost podman[263203]: 2026-02-20 09:31:50.985447536 +0000 UTC m=+0.118905628 container start d141bb465dddebfd6ac5bf68b38c6da9889f58c5c6daa2b50021d05c6d36690e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, config_id=neutron_dhcp, maintainer=OpenStack Kubernetes Operator team, container_name=neutron_dhcp_agent, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-3cda1ec72102a56bb4ad0af8d929784a95d02067deac702e246f88f07c92ba34'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/openstack/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Feb 20 04:31:50 
localhost podman[263203]: neutron_dhcp_agent Feb 20 04:31:50 localhost systemd[1]: Started neutron_dhcp_agent container. Feb 20 04:31:50 localhost neutron_dhcp_agent[263216]: + sudo -E kolla_set_configs Feb 20 04:31:51 localhost neutron_dhcp_agent[263216]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Feb 20 04:31:51 localhost neutron_dhcp_agent[263216]: INFO:__main__:Validating config file Feb 20 04:31:51 localhost neutron_dhcp_agent[263216]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Feb 20 04:31:51 localhost neutron_dhcp_agent[263216]: INFO:__main__:Copying service configuration files Feb 20 04:31:51 localhost neutron_dhcp_agent[263216]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf Feb 20 04:31:51 localhost neutron_dhcp_agent[263216]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf Feb 20 04:31:51 localhost neutron_dhcp_agent[263216]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf Feb 20 04:31:51 localhost neutron_dhcp_agent[263216]: INFO:__main__:Writing out command to execute Feb 20 04:31:51 localhost neutron_dhcp_agent[263216]: INFO:__main__:Setting permission for /var/lib/neutron Feb 20 04:31:51 localhost neutron_dhcp_agent[263216]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts Feb 20 04:31:51 localhost neutron_dhcp_agent[263216]: INFO:__main__:Setting permission for /var/lib/neutron/.cache Feb 20 04:31:51 localhost neutron_dhcp_agent[263216]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy Feb 20 04:31:51 localhost neutron_dhcp_agent[263216]: INFO:__main__:Setting permission for /var/lib/neutron/external Feb 20 04:31:51 localhost neutron_dhcp_agent[263216]: INFO:__main__:Setting permission for /var/lib/neutron/ns-metadata-proxy Feb 20 04:31:51 localhost neutron_dhcp_agent[263216]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper Feb 20 04:31:51 localhost 
neutron_dhcp_agent[263216]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy Feb 20 04:31:51 localhost neutron_dhcp_agent[263216]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_haproxy_wrapper Feb 20 04:31:51 localhost neutron_dhcp_agent[263216]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_dnsmasq_wrapper Feb 20 04:31:51 localhost neutron_dhcp_agent[263216]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill Feb 20 04:31:51 localhost neutron_dhcp_agent[263216]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/dnsmasq-kill Feb 20 04:31:51 localhost neutron_dhcp_agent[263216]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints Feb 20 04:31:51 localhost neutron_dhcp_agent[263216]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/b9146dd2a0dc3e0bc3fee7bb1b53fa22a55af280b3a177d7a47b63f92e7ebd29 Feb 20 04:31:51 localhost neutron_dhcp_agent[263216]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/d91d8a949a4b5272256c667b5094a15f5e397c6793efbfa4186752b765c6923b Feb 20 04:31:51 localhost neutron_dhcp_agent[263216]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids Feb 20 04:31:51 localhost neutron_dhcp_agent[263216]: ++ cat /run_command Feb 20 04:31:51 localhost neutron_dhcp_agent[263216]: + CMD=/usr/bin/neutron-dhcp-agent Feb 20 04:31:51 localhost neutron_dhcp_agent[263216]: + ARGS= Feb 20 04:31:51 localhost neutron_dhcp_agent[263216]: + sudo kolla_copy_cacerts Feb 20 04:31:51 localhost neutron_dhcp_agent[263216]: + [[ ! -n '' ]] Feb 20 04:31:51 localhost neutron_dhcp_agent[263216]: + . 
kolla_extend_start Feb 20 04:31:51 localhost neutron_dhcp_agent[263216]: + echo 'Running command: '\''/usr/bin/neutron-dhcp-agent'\''' Feb 20 04:31:51 localhost neutron_dhcp_agent[263216]: Running command: '/usr/bin/neutron-dhcp-agent' Feb 20 04:31:51 localhost neutron_dhcp_agent[263216]: + umask 0022 Feb 20 04:31:51 localhost neutron_dhcp_agent[263216]: + exec /usr/bin/neutron-dhcp-agent Feb 20 04:31:52 localhost neutron_dhcp_agent[263216]: 2026-02-20 09:31:52.239 263220 INFO neutron.common.config [-] Logging enabled!#033[00m Feb 20 04:31:52 localhost neutron_dhcp_agent[263216]: 2026-02-20 09:31:52.240 263220 INFO neutron.common.config [-] /usr/bin/neutron-dhcp-agent version 22.2.2.dev44#033[00m Feb 20 04:31:52 localhost neutron_dhcp_agent[263216]: 2026-02-20 09:31:52.597 263220 INFO neutron.agent.dhcp.agent [-] Synchronizing state#033[00m Feb 20 04:31:52 localhost python3.9[263339]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml Feb 20 04:31:53 localhost neutron_dhcp_agent[263216]: 2026-02-20 09:31:53.281 263220 INFO neutron.agent.dhcp.agent [None req-8e21b952-bba1-4619-aa08-2c211c39d2e9 - - - - - -] All active networks have been fetched through RPC.#033[00m Feb 20 04:31:53 localhost neutron_dhcp_agent[263216]: 2026-02-20 09:31:53.282 263220 INFO neutron.agent.dhcp.agent [-] Starting network 84efa4de-646c-469c-b16c-6ab7c3e948cf dhcp configuration#033[00m Feb 20 04:31:53 localhost neutron_dhcp_agent[263216]: 2026-02-20 09:31:53.332 263220 INFO neutron.agent.dhcp.agent [-] Starting network de929a91-c460-4398-96e0-15a80685a485 dhcp configuration#033[00m Feb 20 04:31:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e. Feb 20 04:31:53 localhost systemd[1]: tmp-crun.B64I6E.mount: Deactivated successfully. 
Feb 20 04:31:53 localhost podman[263358]: 2026-02-20 09:31:53.45228073 +0000 UTC m=+0.090543746 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 20 04:31:53 localhost podman[263358]: 2026-02-20 09:31:53.466793235 +0000 UTC m=+0.105056281 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', 
'--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 20 04:31:53 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully. 
Feb 20 04:31:53 localhost python3.9[263471]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:31:53 localhost ovn_metadata_agent[161761]: 2026-02-20 09:31:53.916 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'c2:c2:14', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '46:d0:69:88:97:47'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:31:53 localhost ovn_metadata_agent[161761]: 2026-02-20 09:31:53.918 161766 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Feb 20 04:31:53 localhost ovn_metadata_agent[161761]: 2026-02-20 09:31:53.918 161766 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0a83b6be-9fe2-42ef-8768-88847d97b165, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 20 04:31:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30387 DF PROTO=TCP SPT=58792 DPT=9102 SEQ=2627847181 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B07D40E0000000001030307) 
Feb 20 04:31:54 localhost neutron_dhcp_agent[263216]: 2026-02-20 09:31:54.311 263220 INFO oslo.privsep.daemon [None req-ebadff2b-4c41-4d7c-95af-24ab6684cac5 - - - - - -] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpsk8nnso3/privsep.sock']#033[00m Feb 20 04:31:54 localhost python3.9[263561]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771579913.4122853-1461-72573176429004/.source.yaml _original_basename=.lbtxctdd follow=False checksum=b9ca88bcb32671aca7ddecc5a041bae0cf925d73 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:31:54 localhost neutron_dhcp_agent[263216]: 2026-02-20 09:31:54.947 263220 INFO oslo.privsep.daemon [None req-ebadff2b-4c41-4d7c-95af-24ab6684cac5 - - - - - -] Spawned new privsep daemon via rootwrap#033[00m Feb 20 04:31:54 localhost neutron_dhcp_agent[263216]: 2026-02-20 09:31:54.826 263637 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Feb 20 04:31:54 localhost neutron_dhcp_agent[263216]: 2026-02-20 09:31:54.831 263637 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Feb 20 04:31:54 localhost neutron_dhcp_agent[263216]: 2026-02-20 09:31:54.834 263637 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m Feb 20 04:31:54 localhost neutron_dhcp_agent[263216]: 2026-02-20 09:31:54.835 263637 INFO 
oslo.privsep.daemon [-] privsep daemon running as pid 263637#033[00m Feb 20 04:31:54 localhost neutron_dhcp_agent[263216]: 2026-02-20 09:31:54.950 263220 WARNING oslo_privsep.priv_context [None req-c2799074-bfbf-45c0-8f61-9720826e6681 - - - - - -] privsep daemon already running#033[00m Feb 20 04:31:55 localhost python3.9[263680]: ansible-ansible.builtin.systemd Invoked with name=edpm_neutron_dhcp_agent.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Feb 20 04:31:55 localhost systemd[1]: Stopping neutron_dhcp_agent container... Feb 20 04:31:55 localhost systemd[1]: tmp-crun.swDmIn.mount: Deactivated successfully. Feb 20 04:31:55 localhost systemd[1]: libpod-d141bb465dddebfd6ac5bf68b38c6da9889f58c5c6daa2b50021d05c6d36690e.scope: Deactivated successfully. Feb 20 04:31:55 localhost systemd[1]: libpod-d141bb465dddebfd6ac5bf68b38c6da9889f58c5c6daa2b50021d05c6d36690e.scope: Consumed 3.145s CPU time. Feb 20 04:31:55 localhost podman[263685]: 2026-02-20 09:31:55.755758357 +0000 UTC m=+0.421094501 container died d141bb465dddebfd6ac5bf68b38c6da9889f58c5c6daa2b50021d05c6d36690e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-3cda1ec72102a56bb4ad0af8d929784a95d02067deac702e246f88f07c92ba34'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/openstack/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.schema-version=1.0, config_id=neutron_dhcp, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=neutron_dhcp_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true) Feb 20 04:31:55 localhost podman[263685]: 2026-02-20 09:31:55.828363495 +0000 UTC m=+0.493699629 container cleanup d141bb465dddebfd6ac5bf68b38c6da9889f58c5c6daa2b50021d05c6d36690e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=neutron_dhcp_agent, io.buildah.version=1.41.3, tcib_managed=true, config_id=neutron_dhcp, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-3cda1ec72102a56bb4ad0af8d929784a95d02067deac702e246f88f07c92ba34'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/openstack/neutron-dhcp-agent:/etc/neutron.conf.d:z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0) Feb 20 04:31:55 localhost podman[263685]: neutron_dhcp_agent Feb 20 04:31:55 localhost podman[263724]: error opening file `/run/crun/d141bb465dddebfd6ac5bf68b38c6da9889f58c5c6daa2b50021d05c6d36690e/status`: No such file or directory Feb 20 04:31:55 localhost podman[263713]: 2026-02-20 09:31:55.926977393 +0000 UTC m=+0.067199076 container cleanup d141bb465dddebfd6ac5bf68b38c6da9889f58c5c6daa2b50021d05c6d36690e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, config_id=neutron_dhcp, container_name=neutron_dhcp_agent, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-3cda1ec72102a56bb4ad0af8d929784a95d02067deac702e246f88f07c92ba34'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/openstack/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Feb 20 04:31:55 localhost podman[263713]: neutron_dhcp_agent Feb 20 04:31:55 localhost systemd[1]: edpm_neutron_dhcp_agent.service: Deactivated successfully. Feb 20 04:31:55 localhost systemd[1]: Stopped neutron_dhcp_agent container. Feb 20 04:31:55 localhost systemd[1]: Starting neutron_dhcp_agent container... Feb 20 04:31:56 localhost systemd[1]: Started libcrun container. 
Feb 20 04:31:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/649f8291ea12fe57a6c5c2192bae3d76cd72cd06828ac0906b67d1038dab6c49/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff) Feb 20 04:31:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/649f8291ea12fe57a6c5c2192bae3d76cd72cd06828ac0906b67d1038dab6c49/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 20 04:31:56 localhost podman[263726]: 2026-02-20 09:31:56.081477874 +0000 UTC m=+0.121378833 container init d141bb465dddebfd6ac5bf68b38c6da9889f58c5c6daa2b50021d05c6d36690e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, config_id=neutron_dhcp, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-3cda1ec72102a56bb4ad0af8d929784a95d02067deac702e246f88f07c92ba34'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/openstack/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, container_name=neutron_dhcp_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Feb 20 04:31:56 localhost podman[263726]: 2026-02-20 09:31:56.090284878 +0000 UTC m=+0.130185827 container start d141bb465dddebfd6ac5bf68b38c6da9889f58c5c6daa2b50021d05c6d36690e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=neutron_dhcp, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-3cda1ec72102a56bb4ad0af8d929784a95d02067deac702e246f88f07c92ba34'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/openstack/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, container_name=neutron_dhcp_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127) Feb 20 
04:31:56 localhost podman[263726]: neutron_dhcp_agent
Feb 20 04:31:56 localhost neutron_dhcp_agent[263741]: + sudo -E kolla_set_configs
Feb 20 04:31:56 localhost systemd[1]: Started neutron_dhcp_agent container.
Feb 20 04:31:56 localhost neutron_dhcp_agent[263741]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb 20 04:31:56 localhost neutron_dhcp_agent[263741]: INFO:__main__:Validating config file
Feb 20 04:31:56 localhost neutron_dhcp_agent[263741]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb 20 04:31:56 localhost neutron_dhcp_agent[263741]: INFO:__main__:Copying service configuration files
Feb 20 04:31:56 localhost neutron_dhcp_agent[263741]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Feb 20 04:31:56 localhost neutron_dhcp_agent[263741]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Feb 20 04:31:56 localhost neutron_dhcp_agent[263741]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Feb 20 04:31:56 localhost neutron_dhcp_agent[263741]: INFO:__main__:Writing out command to execute
Feb 20 04:31:56 localhost neutron_dhcp_agent[263741]: INFO:__main__:Setting permission for /var/lib/neutron
Feb 20 04:31:56 localhost neutron_dhcp_agent[263741]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Feb 20 04:31:56 localhost neutron_dhcp_agent[263741]: INFO:__main__:Setting permission for /var/lib/neutron/.cache
Feb 20 04:31:56 localhost neutron_dhcp_agent[263741]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Feb 20 04:31:56 localhost neutron_dhcp_agent[263741]: INFO:__main__:Setting permission for /var/lib/neutron/external
Feb 20 04:31:56 localhost neutron_dhcp_agent[263741]: INFO:__main__:Setting permission for /var/lib/neutron/ns-metadata-proxy
Feb 20 04:31:56 localhost neutron_dhcp_agent[263741]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp
Feb 20 04:31:56 localhost neutron_dhcp_agent[263741]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Feb 20 04:31:56 localhost neutron_dhcp_agent[263741]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy
Feb 20 04:31:56 localhost neutron_dhcp_agent[263741]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_haproxy_wrapper
Feb 20 04:31:56 localhost neutron_dhcp_agent[263741]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_dnsmasq_wrapper
Feb 20 04:31:56 localhost neutron_dhcp_agent[263741]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Feb 20 04:31:56 localhost neutron_dhcp_agent[263741]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/dnsmasq-kill
Feb 20 04:31:56 localhost neutron_dhcp_agent[263741]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints
Feb 20 04:31:56 localhost neutron_dhcp_agent[263741]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/b9146dd2a0dc3e0bc3fee7bb1b53fa22a55af280b3a177d7a47b63f92e7ebd29
Feb 20 04:31:56 localhost neutron_dhcp_agent[263741]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/d91d8a949a4b5272256c667b5094a15f5e397c6793efbfa4186752b765c6923b
Feb 20 04:31:56 localhost neutron_dhcp_agent[263741]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Feb 20 04:31:56 localhost neutron_dhcp_agent[263741]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf
Feb 20 04:31:56 localhost neutron_dhcp_agent[263741]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/de929a91-c460-4398-96e0-15a80685a485
Feb 20 04:31:56 localhost neutron_dhcp_agent[263741]: ++ cat /run_command
Feb 20 04:31:56 localhost neutron_dhcp_agent[263741]: + CMD=/usr/bin/neutron-dhcp-agent
Feb 20 04:31:56 localhost neutron_dhcp_agent[263741]: + ARGS=
Feb 20 04:31:56 localhost neutron_dhcp_agent[263741]: + sudo kolla_copy_cacerts
Feb 20 04:31:56 localhost neutron_dhcp_agent[263741]: + [[ ! -n '' ]]
Feb 20 04:31:56 localhost neutron_dhcp_agent[263741]: + . kolla_extend_start
Feb 20 04:31:56 localhost neutron_dhcp_agent[263741]: + echo 'Running command: '\''/usr/bin/neutron-dhcp-agent'\'''
Feb 20 04:31:56 localhost neutron_dhcp_agent[263741]: Running command: '/usr/bin/neutron-dhcp-agent'
Feb 20 04:31:56 localhost neutron_dhcp_agent[263741]: + umask 0022
Feb 20 04:31:56 localhost neutron_dhcp_agent[263741]: + exec /usr/bin/neutron-dhcp-agent
Feb 20 04:31:57 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:31:57.410 263745 INFO neutron.common.config [-] Logging enabled!
Feb 20 04:31:57 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:31:57.410 263745 INFO neutron.common.config [-] /usr/bin/neutron-dhcp-agent version 22.2.2.dev44
Feb 20 04:31:57 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:31:57.801 263745 INFO neutron.agent.dhcp.agent [-] Synchronizing state
Feb 20 04:31:57 localhost systemd[1]: session-58.scope: Deactivated successfully.
Feb 20 04:31:57 localhost systemd[1]: session-58.scope: Consumed 36.079s CPU time.
Feb 20 04:31:57 localhost systemd-logind[760]: Session 58 logged out. Waiting for processes to exit.
Feb 20 04:31:57 localhost systemd-logind[760]: Removed session 58.
Feb 20 04:31:58 localhost openstack_network_exporter[243776]: ERROR 09:31:58 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 04:31:58 localhost openstack_network_exporter[243776]:
Feb 20 04:31:58 localhost openstack_network_exporter[243776]: ERROR 09:31:58 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 04:31:58 localhost openstack_network_exporter[243776]:
Feb 20 04:31:58 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:31:58.811 263745 INFO neutron.agent.dhcp.agent [None req-c5d18c8c-04c7-4094-b37f-95879a4d8cf8 - - - - - -] All active networks have been fetched through RPC.
Feb 20 04:31:58 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:31:58.812 263745 INFO neutron.agent.dhcp.agent [-] Starting network 84efa4de-646c-469c-b16c-6ab7c3e948cf dhcp configuration
Feb 20 04:31:58 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:31:58.867 263745 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpfvta3wst/privsep.sock']
Feb 20 04:31:58 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:31:58.867 263745 INFO neutron.agent.dhcp.agent [-] Starting network de929a91-c460-4398-96e0-15a80685a485 dhcp configuration
Feb 20 04:31:59 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:31:59.490 263745 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Feb 20 04:31:59 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:31:59.384 263778 INFO oslo.privsep.daemon [-] privsep daemon starting
Feb 20 04:31:59 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:31:59.388 263778 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Feb 20 04:31:59 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:31:59.391 263778 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Feb 20 04:31:59 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:31:59.392 263778 INFO oslo.privsep.daemon [-] privsep daemon running as pid 263778
Feb 20 04:31:59 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:31:59.495 263745 WARNING oslo_privsep.priv_context [-] privsep daemon already running
Feb 20 04:32:00 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:32:00.027 263745 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmph58588ox/privsep.sock']
Feb 20 04:32:00 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:32:00.615 263745 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Feb 20 04:32:00 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:32:00.510 263788 INFO oslo.privsep.daemon [-] privsep daemon starting
Feb 20 04:32:00 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:32:00.515 263788 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Feb 20 04:32:00 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:32:00.518 263788 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Feb 20 04:32:00 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:32:00.519 263788 INFO oslo.privsep.daemon [-] privsep daemon running as pid 263788
Feb 20 04:32:00 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:32:00.620 263745 WARNING oslo_privsep.priv_context [-] privsep daemon already running
Feb 20 04:32:01 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:32:01.596 263745 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpslttoqhw/privsep.sock']
Feb 20 04:32:02 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:32:02.204 263745 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Feb 20 04:32:02 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:32:02.101 263804 INFO oslo.privsep.daemon [-] privsep daemon starting
Feb 20 04:32:02 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:32:02.107 263804 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Feb 20 04:32:02 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:32:02.111 263804 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Feb 20 04:32:02 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:32:02.112 263804 INFO oslo.privsep.daemon [-] privsep daemon running as pid 263804
Feb 20 04:32:02 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:32:02.207 263745 WARNING oslo_privsep.priv_context [-] privsep daemon already running
Feb 20 04:32:03 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:32:03.583 263745 INFO neutron.agent.linux.ip_lib [-] Device tap2016994f-6f cannot be used as it has no MAC address
Feb 20 04:32:03 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:32:03.588 263745 INFO neutron.agent.linux.ip_lib [-] Device tap5835cc2f-de cannot be used as it has no MAC address
Feb 20 04:32:03 localhost kernel: device tap5835cc2f-de entered promiscuous mode
Feb 20 04:32:03 localhost NetworkManager[5967]: [1771579923.6637] manager: (tap5835cc2f-de): new Generic device (/org/freedesktop/NetworkManager/Devices/13)
Feb 20 04:32:03 localhost ovn_controller[155916]: 2026-02-20T09:32:03Z|00025|binding|INFO|Claiming lport 5835cc2f-de21-4727-ae34-af2586f31970 for this chassis.
Feb 20 04:32:03 localhost ovn_controller[155916]: 2026-02-20T09:32:03Z|00026|binding|INFO|5835cc2f-de21-4727-ae34-af2586f31970: Claiming unknown
Feb 20 04:32:03 localhost systemd-udevd[263828]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 04:32:03 localhost kernel: device tap2016994f-6f entered promiscuous mode
Feb 20 04:32:03 localhost NetworkManager[5967]: [1771579923.6774] manager: (tap2016994f-6f): new Generic device (/org/freedesktop/NetworkManager/Devices/14)
Feb 20 04:32:03 localhost systemd-udevd[263831]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 04:32:03 localhost ovn_controller[155916]: 2026-02-20T09:32:03Z|00027|if_status|INFO|Not updating pb chassis for 2016994f-6fc5-4d1f-9bb8-eee5b0c59d46 now as sb is readonly
Feb 20 04:32:03 localhost ovn_metadata_agent[161761]: 2026-02-20 09:32:03.686 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.3/24', 'neutron:device_id': 'dhcp77c820b4-9646-5dae-aabe-b13a62b01d06-de929a91-c460-4398-96e0-15a80685a485', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-de929a91-c460-4398-96e0-15a80685a485', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '91bce661d685472eb3e7cacab17bf52a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ee1d7cd7-5f4f-4b75-a06c-f37c0ef97c77, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=5835cc2f-de21-4727-ae34-af2586f31970) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 04:32:03 localhost ovn_metadata_agent[161761]: 2026-02-20 09:32:03.688 161766 INFO neutron.agent.ovn.metadata.agent [-] Port 5835cc2f-de21-4727-ae34-af2586f31970 in datapath de929a91-c460-4398-96e0-15a80685a485 bound to our chassis
Feb 20 04:32:03 localhost ovn_controller[155916]: 2026-02-20T09:32:03Z|00028|ovn_bfd|INFO|Enabled BFD on interface ovn-0c414b-0
Feb 20 04:32:03 localhost ovn_controller[155916]: 2026-02-20T09:32:03Z|00029|ovn_bfd|INFO|Enabled BFD on interface ovn-2275c3-0
Feb 20 04:32:03 localhost ovn_controller[155916]: 2026-02-20T09:32:03Z|00030|ovn_bfd|INFO|Enabled BFD on interface ovn-2df8cc-0
Feb 20 04:32:03 localhost ovn_metadata_agent[161761]: 2026-02-20 09:32:03.692 161766 DEBUG neutron.agent.ovn.metadata.agent [-] Port 5112ea38-d1bb-4956-bcaf-34d70fd27a1f IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Feb 20 04:32:03 localhost ovn_metadata_agent[161761]: 2026-02-20 09:32:03.693 161766 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network de929a91-c460-4398-96e0-15a80685a485, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 04:32:03 localhost ovn_metadata_agent[161761]: 2026-02-20 09:32:03.695 161766 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpqmjsehn3/privsep.sock']
Feb 20 04:32:03 localhost journal[229367]: libvirt version: 11.10.0, package: 4.el9 (builder@centos.org, 2026-01-29-15:25:17, )
Feb 20 04:32:03 localhost journal[229367]: hostname: np0005625202.localdomain
Feb 20 04:32:03 localhost journal[229367]: ethtool ioctl error on tap5835cc2f-de: No such device
Feb 20 04:32:03 localhost journal[229367]: ethtool ioctl error on tap5835cc2f-de: No such device
Feb 20 04:32:03 localhost journal[229367]: ethtool ioctl error on tap5835cc2f-de: No such device
Feb 20 04:32:03 localhost journal[229367]: ethtool ioctl error on tap5835cc2f-de: No such device
Feb 20 04:32:03 localhost journal[229367]: ethtool ioctl error on tap5835cc2f-de: No such device
Feb 20 04:32:03 localhost journal[229367]: ethtool ioctl error on tap5835cc2f-de: No such device
Feb 20 04:32:03 localhost ovn_controller[155916]: 2026-02-20T09:32:03Z|00031|binding|INFO|Claiming lport 2016994f-6fc5-4d1f-9bb8-eee5b0c59d46 for this chassis.
Feb 20 04:32:03 localhost ovn_controller[155916]: 2026-02-20T09:32:03Z|00032|binding|INFO|2016994f-6fc5-4d1f-9bb8-eee5b0c59d46: Claiming unknown
Feb 20 04:32:03 localhost journal[229367]: ethtool ioctl error on tap5835cc2f-de: No such device
Feb 20 04:32:03 localhost journal[229367]: ethtool ioctl error on tap5835cc2f-de: No such device
Feb 20 04:32:03 localhost ovn_controller[155916]: 2026-02-20T09:32:03Z|00033|binding|INFO|Setting lport 5835cc2f-de21-4727-ae34-af2586f31970 ovn-installed in OVS
Feb 20 04:32:03 localhost ovn_controller[155916]: 2026-02-20T09:32:03Z|00034|binding|INFO|Setting lport 5835cc2f-de21-4727-ae34-af2586f31970 up in Southbound
Feb 20 04:32:03 localhost ovn_metadata_agent[161761]: 2026-02-20 09:32:03.760 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.122.172/24', 'neutron:device_id': 'dhcp77c820b4-9646-5dae-aabe-b13a62b01d06-84efa4de-646c-469c-b16c-6ab7c3e948cf', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84efa4de-646c-469c-b16c-6ab7c3e948cf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '91bce661d685472eb3e7cacab17bf52a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be632ef3-15dd-4221-8538-5c917f749b3c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=2016994f-6fc5-4d1f-9bb8-eee5b0c59d46) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 04:32:03 localhost ovn_controller[155916]: 2026-02-20T09:32:03Z|00035|binding|INFO|Setting lport 2016994f-6fc5-4d1f-9bb8-eee5b0c59d46 ovn-installed in OVS
Feb 20 04:32:03 localhost ovn_controller[155916]: 2026-02-20T09:32:03Z|00036|binding|INFO|Setting lport 2016994f-6fc5-4d1f-9bb8-eee5b0c59d46 up in Southbound
Feb 20 04:32:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.
Feb 20 04:32:04 localhost ovn_metadata_agent[161761]: 2026-02-20 09:32:04.349 161766 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Feb 20 04:32:04 localhost ovn_metadata_agent[161761]: 2026-02-20 09:32:04.350 161766 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpqmjsehn3/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Feb 20 04:32:04 localhost ovn_metadata_agent[161761]: 2026-02-20 09:32:04.215 263903 INFO oslo.privsep.daemon [-] privsep daemon starting
Feb 20 04:32:04 localhost ovn_metadata_agent[161761]: 2026-02-20 09:32:04.221 263903 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Feb 20 04:32:04 localhost ovn_metadata_agent[161761]: 2026-02-20 09:32:04.224 263903 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Feb 20 04:32:04 localhost ovn_metadata_agent[161761]: 2026-02-20 09:32:04.225 263903 INFO oslo.privsep.daemon [-] privsep daemon running as pid 263903
Feb 20 04:32:04 localhost ovn_metadata_agent[161761]: 2026-02-20 09:32:04.353 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[a8abfba7-9c63-4796-953d-22d5426a5c35]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 04:32:04 localhost podman[263910]: 2026-02-20 09:32:04.458800222 +0000 UTC m=+0.097218263 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Feb 20 04:32:04 localhost podman[263910]: 2026-02-20 09:32:04.467870852 +0000 UTC m=+0.106288863 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Feb 20 04:32:04 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully.
Feb 20 04:32:04 localhost podman[263975]:
Feb 20 04:32:04 localhost podman[263975]: 2026-02-20 09:32:04.827245444 +0000 UTC m=+0.125907694 container create 94da0b0cc13bf7d0e8b33b0cb7f35a314f286fc13a25f45aac3e637e1d362533 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-de929a91-c460-4398-96e0-15a80685a485, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 04:32:04 localhost ovn_metadata_agent[161761]: 2026-02-20 09:32:04.838 263903 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 04:32:04 localhost ovn_metadata_agent[161761]: 2026-02-20 09:32:04.838 263903 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 04:32:04 localhost ovn_metadata_agent[161761]: 2026-02-20 09:32:04.838 263903 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 04:32:04 localhost podman[263975]: 2026-02-20 09:32:04.744015464 +0000 UTC m=+0.042677754 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 04:32:04 localhost podman[263990]:
Feb 20 04:32:04 localhost podman[263990]: 2026-02-20 09:32:04.864310007 +0000 UTC m=+0.103656983 container create d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true)
Feb 20 04:32:04 localhost systemd[1]: Started libpod-conmon-94da0b0cc13bf7d0e8b33b0cb7f35a314f286fc13a25f45aac3e637e1d362533.scope.
Feb 20 04:32:04 localhost systemd[1]: Started libcrun container.
Feb 20 04:32:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/818f85c71f5041eb2b35d01506d68dbe4b5ad24e6e32fb4eb267ab31814373f9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 04:32:04 localhost systemd[1]: Started libpod-conmon-d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5.scope.
Feb 20 04:32:04 localhost podman[263990]: 2026-02-20 09:32:04.807354296 +0000 UTC m=+0.046701312 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 04:32:04 localhost podman[263975]: 2026-02-20 09:32:04.910817573 +0000 UTC m=+0.209479813 container init 94da0b0cc13bf7d0e8b33b0cb7f35a314f286fc13a25f45aac3e637e1d362533 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-de929a91-c460-4398-96e0-15a80685a485, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0)
Feb 20 04:32:04 localhost systemd[1]: Started libcrun container.
Feb 20 04:32:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b63028490bac5781db9dace53236d888ccca70ea5b2e4bffba5dc132ee9d38a1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 04:32:04 localhost podman[263975]: 2026-02-20 09:32:04.922874112 +0000 UTC m=+0.221536352 container start 94da0b0cc13bf7d0e8b33b0cb7f35a314f286fc13a25f45aac3e637e1d362533 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-de929a91-c460-4398-96e0-15a80685a485, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 20 04:32:04 localhost dnsmasq[264015]: started, version 2.85 cachesize 150
Feb 20 04:32:04 localhost dnsmasq[264015]: DNS service limited to local subnets
Feb 20 04:32:04 localhost dnsmasq[264015]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 04:32:04 localhost dnsmasq[264015]: warning: no upstream servers configured
Feb 20 04:32:04 localhost dnsmasq-dhcp[264015]: DHCP, static leases only on 192.168.0.0, lease time 1d
Feb 20 04:32:04 localhost dnsmasq[264015]: read /var/lib/neutron/dhcp/de929a91-c460-4398-96e0-15a80685a485/addn_hosts - 2 addresses
Feb 20 04:32:04 localhost dnsmasq-dhcp[264015]: read /var/lib/neutron/dhcp/de929a91-c460-4398-96e0-15a80685a485/host
Feb 20 04:32:04 localhost dnsmasq-dhcp[264015]: read /var/lib/neutron/dhcp/de929a91-c460-4398-96e0-15a80685a485/opts
Feb 20 04:32:04 localhost podman[263990]: 2026-02-20 09:32:04.933470014 +0000 UTC m=+0.172817020 container init d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true)
Feb 20 04:32:04 localhost ovn_metadata_agent[161761]: 2026-02-20 09:32:04.935 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[d0f389b6-a56f-4f38-b610-b1ea9ea0e656]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 04:32:04 localhost ovn_metadata_agent[161761]: 2026-02-20 09:32:04.937 161766 INFO neutron.agent.ovn.metadata.agent [-] Port 2016994f-6fc5-4d1f-9bb8-eee5b0c59d46 in datapath 84efa4de-646c-469c-b16c-6ab7c3e948cf unbound from our chassis
Feb 20 04:32:04 localhost ovn_metadata_agent[161761]: 2026-02-20 09:32:04.939 161766 DEBUG neutron.agent.ovn.metadata.agent [-] Port d67aba8b-191a-4d01-ae53-32c7e3bf0e0c IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Feb 20 04:32:04 localhost ovn_metadata_agent[161761]: 2026-02-20 09:32:04.940 161766 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 84efa4de-646c-469c-b16c-6ab7c3e948cf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 04:32:04 localhost ovn_metadata_agent[161761]: 2026-02-20 09:32:04.941 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[a2effe89-ecfe-4c03-b4c4-d3ee998b5384]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 04:32:04 localhost podman[263990]: 2026-02-20 09:32:04.942654497 +0000 UTC m=+0.182001504 container start d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Feb 20 04:32:04 localhost dnsmasq[264017]: started, version 2.85 cachesize 150
Feb 20 04:32:04 localhost dnsmasq[264017]: DNS service limited to local subnets
Feb 20 04:32:04 localhost dnsmasq[264017]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 04:32:04 localhost dnsmasq[264017]: warning: no upstream servers configured
Feb 20 04:32:04 localhost dnsmasq-dhcp[264017]: DHCP, static leases only on 192.168.122.0, lease time 1d
Feb 20 04:32:04 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 1 addresses
Feb 20 04:32:04 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host
Feb 20 04:32:04 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts
Feb 20 04:32:04 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:32:04.990 263745 INFO neutron.agent.dhcp.agent [None req-6cb2a659-55b9-4107-a8f2-2a86859c2725 - - - - - -] Finished network de929a91-c460-4398-96e0-15a80685a485 dhcp configuration
Feb 20 04:32:05 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:32:05.000 263745 INFO neutron.agent.dhcp.agent [None req-4f0f363c-4ca7-4e7e-b5c7-dd83a01ac345 - - - - - -] Finished network 84efa4de-646c-469c-b16c-6ab7c3e948cf dhcp configuration
Feb 20 04:32:05 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:32:05.000 263745 INFO neutron.agent.dhcp.agent [None req-c5d18c8c-04c7-4094-b37f-95879a4d8cf8 - - - - - -] Synchronizing state complete
Feb 20 04:32:05 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:32:05.054 263745 INFO neutron.agent.dhcp.agent [None req-c5d18c8c-04c7-4094-b37f-95879a4d8cf8 - - - - - -] DHCP agent started
Feb 20 04:32:05 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:32:05.868 263745 INFO neutron.agent.dhcp.agent [None req-ad789c73-24e3-4c6b-8419-c27d3fcf968b - - - - - -] DHCP configuration for ports {'5835cc2f-de21-4727-ae34-af2586f31970', '3323e11d-576a-42f3-bcca-e10425268e61', '191ff732-8d6f-4b10-b636-4686e43b5f3f', '0b0c160d-788a-4c79-ba76-76fa8a100a63', '0a8edc56-d3b6-44d0-8756-2809eb92bd14', '2016994f-6fc5-4d1f-9bb8-eee5b0c59d46', 'e7aa8e2a-27a6-452b-906c-21cea166b882', 'e90ced9b-729f-40fe-86c9-975463298c4c'} is completed
Feb 20 04:32:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:32:05.898 161766 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 04:32:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:32:05.900 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 04:32:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:32:05.900 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 04:32:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.
Feb 20 04:32:06 localhost podman[264018]: 2026-02-20 09:32:06.450427938 +0000 UTC m=+0.086389254 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.7, managed_by=edpm_ansible, release=1770267347, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z, name=ubi9/ubi-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, distribution-scope=public, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c)
Feb 20 04:32:06 localhost podman[264018]: 2026-02-20 09:32:06.471015555 +0000 UTC m=+0.106976901 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, architecture=x86_64, name=ubi9/ubi-minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-05T04:57:10Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products.
This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, release=1770267347, config_id=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=) Feb 20 04:32:06 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. Feb 20 04:32:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 04:32:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. 
Feb 20 04:32:08 localhost podman[264036]: 2026-02-20 09:32:08.460052674 +0000 UTC m=+0.095422755 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:32:08 localhost podman[264036]: 2026-02-20 09:32:08.50511452 +0000 UTC m=+0.140484621 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Feb 20 04:32:08 localhost podman[264037]: 2026-02-20 09:32:08.518071705 +0000 UTC m=+0.150209510 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 
'6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:32:08 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. 
Feb 20 04:32:08 localhost podman[264037]: 2026-02-20 09:32:08.551734618 +0000 UTC m=+0.183872423 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible) Feb 20 04:32:08 localhost systemd[1]: 
eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully.
Feb 20 04:32:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16532 DF PROTO=TCP SPT=60442 DPT=9102 SEQ=3968198872 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B080DFE0000000001030307)
Feb 20 04:32:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16533 DF PROTO=TCP SPT=60442 DPT=9102 SEQ=3968198872 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B08120D0000000001030307)
Feb 20 04:32:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30388 DF PROTO=TCP SPT=58792 DPT=9102 SEQ=2627847181 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B08140E0000000001030307)
Feb 20 04:32:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16534 DF PROTO=TCP SPT=60442 DPT=9102 SEQ=3968198872 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B081A0D0000000001030307)
Feb 20 04:32:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31405 DF PROTO=TCP SPT=51984 DPT=9102 SEQ=1003123831 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B081E0D0000000001030307)
Feb 20 04:32:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16535 DF PROTO=TCP SPT=60442 DPT=9102 SEQ=3968198872 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B0829CD0000000001030307)
Feb 20 04:32:16 localhost podman[241347]: time="2026-02-20T09:32:16Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 04:32:16 localhost podman[241347]: @ - - [20/Feb/2026:09:32:16 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 150997 "" "Go-http-client/1.1"
Feb 20 04:32:16 localhost podman[241347]: @ - - [20/Feb/2026:09:32:16 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17265 "" "Go-http-client/1.1"
Feb 20 04:32:17 localhost nova_compute[229929]: 2026-02-20 09:32:17.534 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 20 04:32:17 localhost nova_compute[229929]: 2026-02-20 09:32:17.535 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 20 04:32:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.
Feb 20 04:32:18 localhost podman[264163]: 2026-02-20 09:32:18.460123276 +0000 UTC m=+0.092207035 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 20 04:32:18 localhost podman[264163]: 2026-02-20 09:32:18.473348245 +0000 UTC m=+0.105431984 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 
'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 20 04:32:18 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully.
Feb 20 04:32:19 localhost nova_compute[229929]: 2026-02-20 09:32:19.232 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 20 04:32:19 localhost nova_compute[229929]: 2026-02-20 09:32:19.233 229933 DEBUG nova.compute.manager [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 20 04:32:19 localhost nova_compute[229929]: 2026-02-20 09:32:19.233 229933 DEBUG nova.compute.manager [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 20 04:32:19 localhost nova_compute[229929]: 2026-02-20 09:32:19.262 229933 DEBUG nova.compute.manager [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 20 04:32:19 localhost nova_compute[229929]: 2026-02-20 09:32:19.262 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 20 04:32:19 localhost nova_compute[229929]: 2026-02-20 09:32:19.263 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 20 04:32:20 localhost nova_compute[229929]: 2026-02-20 09:32:20.232 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 20 04:32:20 localhost nova_compute[229929]: 2026-02-20 09:32:20.233 229933 DEBUG nova.compute.manager [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 20 04:32:21 localhost nova_compute[229929]: 2026-02-20 09:32:21.233 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 20 04:32:22 localhost nova_compute[229929]: 2026-02-20 09:32:22.231 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 20 04:32:23 localhost nova_compute[229929]: 2026-02-20 09:32:23.227 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 20 04:32:23 localhost nova_compute[229929]: 2026-02-20 09:32:23.250 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 20 04:32:23 localhost nova_compute[229929]: 2026-02-20 09:32:23.269 229933 DEBUG oslo_concurrency.lockutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 20 04:32:23 localhost nova_compute[229929]: 2026-02-20 09:32:23.270 229933 DEBUG oslo_concurrency.lockutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Lock "compute_resources" acquired by
"nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:32:23 localhost nova_compute[229929]: 2026-02-20 09:32:23.270 229933 DEBUG oslo_concurrency.lockutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:32:23 localhost nova_compute[229929]: 2026-02-20 09:32:23.270 229933 DEBUG nova.compute.resource_tracker [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Auditing locally available compute resources for np0005625202.localdomain (node: np0005625202.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 20 04:32:23 localhost nova_compute[229929]: 2026-02-20 09:32:23.271 229933 DEBUG oslo_concurrency.processutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:32:23 localhost nova_compute[229929]: 2026-02-20 09:32:23.734 229933 DEBUG oslo_concurrency.processutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:32:23 localhost nova_compute[229929]: 2026-02-20 09:32:23.914 229933 WARNING nova.virt.libvirt.driver [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 20 04:32:23 localhost nova_compute[229929]: 2026-02-20 09:32:23.915 229933 DEBUG nova.compute.resource_tracker [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Hypervisor/Node resource view: name=np0005625202.localdomain free_ram=12457MB free_disk=41.83720779418945GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": 
"1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 20 04:32:23 localhost nova_compute[229929]: 2026-02-20 09:32:23.916 229933 DEBUG oslo_concurrency.lockutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:32:23 localhost nova_compute[229929]: 2026-02-20 09:32:23.916 229933 DEBUG oslo_concurrency.lockutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:32:23 localhost nova_compute[229929]: 2026-02-20 09:32:23.967 229933 DEBUG nova.compute.resource_tracker [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 20 04:32:23 localhost nova_compute[229929]: 2026-02-20 09:32:23.968 229933 DEBUG nova.compute.resource_tracker [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Final resource view: name=np0005625202.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 20 04:32:23 localhost nova_compute[229929]: 2026-02-20 09:32:23.982 229933 DEBUG 
oslo_concurrency.processutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:32:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16536 DF PROTO=TCP SPT=60442 DPT=9102 SEQ=3968198872 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B084A0D0000000001030307) Feb 20 04:32:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e. Feb 20 04:32:24 localhost nova_compute[229929]: 2026-02-20 09:32:24.418 229933 DEBUG oslo_concurrency.processutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:32:24 localhost podman[264228]: 2026-02-20 09:32:24.425499625 +0000 UTC m=+0.068316174 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 
'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 20 04:32:24 localhost nova_compute[229929]: 2026-02-20 09:32:24.428 229933 DEBUG nova.compute.provider_tree [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 20 04:32:24 localhost podman[264228]: 2026-02-20 09:32:24.437117951 +0000 UTC m=+0.079934490 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': 
{'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 20 04:32:24 localhost nova_compute[229929]: 2026-02-20 09:32:24.444 229933 DEBUG nova.scheduler.client.report [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Inventory has not changed for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 20 04:32:24 localhost nova_compute[229929]: 2026-02-20 09:32:24.447 229933 DEBUG nova.compute.resource_tracker [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Compute_service record updated for np0005625202.localdomain:np0005625202.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 20 04:32:24 localhost nova_compute[229929]: 2026-02-20 09:32:24.448 229933 DEBUG oslo_concurrency.lockutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.532s 
inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:32:24 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully. Feb 20 04:32:28 localhost openstack_network_exporter[243776]: ERROR 09:32:28 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 20 04:32:28 localhost openstack_network_exporter[243776]: Feb 20 04:32:28 localhost openstack_network_exporter[243776]: ERROR 09:32:28 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 20 04:32:28 localhost openstack_network_exporter[243776]: Feb 20 04:32:33 localhost ovn_controller[155916]: 2026-02-20T09:32:33Z|00037|memory_trim|INFO|Detected inactivity (last active 30006 ms ago): trimming memory Feb 20 04:32:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217. Feb 20 04:32:35 localhost podman[264253]: 2026-02-20 09:32:35.442103428 +0000 UTC m=+0.080908157 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20260127, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 
'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Feb 20 04:32:35 localhost podman[264253]: 2026-02-20 09:32:35.456408548 +0000 UTC m=+0.095213237 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': 
['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127) Feb 20 04:32:35 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully. Feb 20 04:32:36 localhost sshd[264272]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:32:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c. 
Feb 20 04:32:37 localhost podman[264274]: 2026-02-20 09:32:37.457202653 +0000 UTC m=+0.086147220 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, container_name=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, architecture=x86_64, vcs-type=git, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., build-date=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, vendor=Red Hat, Inc.) Feb 20 04:32:37 localhost podman[264274]: 2026-02-20 09:32:37.469637852 +0000 UTC m=+0.098582359 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, distribution-scope=public, vcs-type=git, container_name=openstack_network_exporter, name=ubi9/ubi-minimal, release=1770267347, config_id=openstack_network_exporter, managed_by=edpm_ansible, version=9.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) Feb 20 04:32:37 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. Feb 20 04:32:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14532 DF PROTO=TCP SPT=58588 DPT=9102 SEQ=27896571 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B08832E0000000001030307) Feb 20 04:32:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 04:32:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. 
Feb 20 04:32:39 localhost podman[264294]: 2026-02-20 09:32:39.451325938 +0000 UTC m=+0.082705377 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:32:39 localhost podman[264294]: 2026-02-20 09:32:39.535607395 +0000 UTC m=+0.166986784 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_controller, 
org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, org.label-schema.schema-version=1.0) Feb 20 04:32:39 localhost podman[264295]: 2026-02-20 09:32:39.539792249 +0000 UTC m=+0.167751004 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Feb 20 04:32:39 localhost podman[264295]: 2026-02-20 09:32:39.57394949 +0000 UTC m=+0.201908245 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': 
'/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Feb 20 04:32:39 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. Feb 20 04:32:39 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. 
Feb 20 04:32:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14533 DF PROTO=TCP SPT=58588 DPT=9102 SEQ=27896571 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B08874E0000000001030307) Feb 20 04:32:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16537 DF PROTO=TCP SPT=60442 DPT=9102 SEQ=3968198872 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B088A0D0000000001030307) Feb 20 04:32:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14534 DF PROTO=TCP SPT=58588 DPT=9102 SEQ=27896571 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B088F4E0000000001030307) Feb 20 04:32:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30389 DF PROTO=TCP SPT=58792 DPT=9102 SEQ=2627847181 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B08920E0000000001030307) Feb 20 04:32:43 localhost sshd[264336]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:32:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14535 DF PROTO=TCP SPT=58588 DPT=9102 SEQ=27896571 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B089F0D0000000001030307) Feb 20 04:32:46 localhost podman[241347]: time="2026-02-20T09:32:46Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 20 04:32:46 localhost podman[241347]: @ - - [20/Feb/2026:09:32:46 +0000] "GET 
/v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 150997 "" "Go-http-client/1.1" Feb 20 04:32:46 localhost podman[241347]: @ - - [20/Feb/2026:09:32:46 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17264 "" "Go-http-client/1.1" Feb 20 04:32:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1. Feb 20 04:32:49 localhost podman[264338]: 2026-02-20 09:32:49.451793781 +0000 UTC m=+0.087634871 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 20 04:32:49 localhost podman[264338]: 2026-02-20 09:32:49.464830116 +0000 UTC m=+0.100671256 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , 
managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 20 04:32:49 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully. Feb 20 04:32:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14536 DF PROTO=TCP SPT=58588 DPT=9102 SEQ=27896571 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B08C00E0000000001030307) Feb 20 04:32:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e. 
Feb 20 04:32:55 localhost podman[264362]: 2026-02-20 09:32:55.438224161 +0000 UTC m=+0.076489316 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 20 04:32:55 localhost podman[264362]: 2026-02-20 09:32:55.449716455 +0000 UTC m=+0.087981630 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, 
config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 20 04:32:55 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully. Feb 20 04:32:56 localhost sshd[264386]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:32:57 localhost systemd-logind[760]: New session 59 of user zuul. Feb 20 04:32:57 localhost systemd[1]: Started Session 59 of User zuul. 
Feb 20 04:32:58 localhost openstack_network_exporter[243776]: ERROR 09:32:58 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 20 04:32:58 localhost openstack_network_exporter[243776]: Feb 20 04:32:58 localhost openstack_network_exporter[243776]: ERROR 09:32:58 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 20 04:32:58 localhost openstack_network_exporter[243776]: Feb 20 04:32:58 localhost python3.9[264497]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 20 04:32:59 localhost python3.9[264609]: ansible-ansible.builtin.service_facts Invoked Feb 20 04:32:59 localhost network[264626]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Feb 20 04:32:59 localhost network[264627]: 'network-scripts' will be removed from distribution in near future. Feb 20 04:32:59 localhost network[264628]: It is advised to switch to 'NetworkManager' instead for network management. Feb 20 04:33:03 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 04:33:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217. Feb 20 04:33:05 localhost systemd[1]: tmp-crun.G3D6N8.mount: Deactivated successfully. 
Feb 20 04:33:05 localhost podman[264861]: 2026-02-20 09:33:05.887757236 +0000 UTC m=+0.117241677 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 20 04:33:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:33:05.898 161766 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 20 04:33:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:33:05.900 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 20 04:33:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:33:05.900 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 20 04:33:05 localhost podman[264861]: 2026-02-20 09:33:05.904740539 +0000 UTC m=+0.134224980 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 04:33:05 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully.
Feb 20 04:33:06 localhost python3.9[264860]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 20 04:33:07 localhost python3.9[264941]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 20 04:33:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.
Feb 20 04:33:08 localhost podman[264944]: 2026-02-20 09:33:08.451768196 +0000 UTC m=+0.085581394 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_id=openstack_network_exporter, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2026-02-05T04:57:10Z, io.buildah.version=1.33.7, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, version=9.7, name=ubi9/ubi-minimal, io.openshift.tags=minimal rhel9, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, architecture=x86_64, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb 20 04:33:08 localhost podman[264944]: 2026-02-20 09:33:08.491325075 +0000 UTC m=+0.125138263 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, release=1770267347, maintainer=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-type=git, config_id=openstack_network_exporter, vendor=Red Hat, Inc., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, build-date=2026-02-05T04:57:10Z, distribution-scope=public, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7)
Feb 20 04:33:08 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully.
Feb 20 04:33:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48228 DF PROTO=TCP SPT=43154 DPT=9102 SEQ=4188731694 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B08F85D0000000001030307)
Feb 20 04:33:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48229 DF PROTO=TCP SPT=43154 DPT=9102 SEQ=4188731694 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B08FC4D0000000001030307)
Feb 20 04:33:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.
Feb 20 04:33:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.
Feb 20 04:33:10 localhost podman[264965]: 2026-02-20 09:33:10.437571843 +0000 UTC m=+0.070296068 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 04:33:10 localhost podman[264965]: 2026-02-20 09:33:10.466676816 +0000 UTC m=+0.099401071 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 20 04:33:10 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully.
Feb 20 04:33:10 localhost podman[264964]: 2026-02-20 09:33:10.552018443 +0000 UTC m=+0.185511149 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 20 04:33:10 localhost podman[264964]: 2026-02-20 09:33:10.619973965 +0000 UTC m=+0.253466731 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 20 04:33:10 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully.
Feb 20 04:33:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14537 DF PROTO=TCP SPT=58588 DPT=9102 SEQ=27896571 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B09000D0000000001030307)
Feb 20 04:33:11 localhost python3.9[265116]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 04:33:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48230 DF PROTO=TCP SPT=43154 DPT=9102 SEQ=4188731694 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B09044D0000000001030307)
Feb 20 04:33:12 localhost python3.9[265226]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 04:33:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16538 DF PROTO=TCP SPT=60442 DPT=9102 SEQ=3968198872 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B09080D0000000001030307)
Feb 20 04:33:13 localhost python3.9[265337]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 04:33:15 localhost python3.9[265485]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:33:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48231 DF PROTO=TCP SPT=43154 DPT=9102 SEQ=4188731694 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B09140E0000000001030307)
Feb 20 04:33:16 localhost podman[241347]: time="2026-02-20T09:33:16Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 04:33:16 localhost podman[241347]: @ - - [20/Feb/2026:09:33:16 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 150997 "" "Go-http-client/1.1"
Feb 20 04:33:16 localhost podman[241347]: @ - - [20/Feb/2026:09:33:16 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17274 "" "Go-http-client/1.1"
Feb 20 04:33:16 localhost python3.9[265683]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 04:33:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:33:17.261 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:33:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:33:17.262 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:33:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:33:17.262 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:33:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:33:17.262 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:33:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:33:17.262 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:33:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:33:17.262 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:33:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:33:17.262 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:33:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:33:17.263 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:33:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:33:17.263 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:33:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:33:17.263 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:33:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:33:17.263 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:33:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:33:17.263 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:33:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:33:17.263 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:33:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:33:17.264 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:33:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:33:17.264 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:33:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:33:17.264 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:33:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:33:17.264 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:33:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:33:17.264 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:33:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:33:17.264 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:33:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:33:17.264 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:33:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:33:17.265 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:33:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:33:17.265 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:33:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:33:17.265 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:33:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:33:17.265 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:33:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:33:17.265 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:33:17 localhost nova_compute[229929]: 2026-02-20 09:33:17.449 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 20 04:33:17 localhost python3.9[265813]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 04:33:18 localhost nova_compute[229929]: 2026-02-20 09:33:18.233 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 20 04:33:18 localhost python3.9[265923]: ansible-ansible.builtin.service_facts Invoked
Feb 20 04:33:18 localhost network[265940]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 20 04:33:18 localhost network[265941]: 'network-scripts' will be removed from distribution in near future.
Feb 20 04:33:18 localhost network[265942]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 20 04:33:19 localhost nova_compute[229929]: 2026-02-20 09:33:19.232 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 20 04:33:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.
Feb 20 04:33:19 localhost podman[265950]: 2026-02-20 09:33:19.750870551 +0000 UTC m=+0.074756606 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Feb 20 04:33:19 localhost podman[265950]: 2026-02-20 09:33:19.760599459 +0000 UTC m=+0.084485524 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Feb 20 04:33:19 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully.
Feb 20 04:33:20 localhost nova_compute[229929]: 2026-02-20 09:33:20.232 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 20 04:33:20 localhost nova_compute[229929]: 2026-02-20 09:33:20.233 229933 DEBUG nova.compute.manager [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 20 04:33:20 localhost nova_compute[229929]: 2026-02-20 09:33:20.233 229933 DEBUG nova.compute.manager [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 20 04:33:20 localhost nova_compute[229929]: 2026-02-20 09:33:20.253 229933 DEBUG nova.compute.manager [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 20 04:33:20 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 04:33:21 localhost nova_compute[229929]: 2026-02-20 09:33:21.232 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 20 04:33:22 localhost nova_compute[229929]: 2026-02-20 09:33:22.231 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 20 04:33:22 localhost nova_compute[229929]: 2026-02-20 09:33:22.232 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 20 04:33:22 localhost nova_compute[229929]: 2026-02-20 09:33:22.232 229933 DEBUG nova.compute.manager [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping...
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 20 04:33:23 localhost nova_compute[229929]: 2026-02-20 09:33:23.233 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:33:23 localhost python3.9[266197]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Feb 20 04:33:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48232 DF PROTO=TCP SPT=43154 DPT=9102 SEQ=4188731694 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B09340D0000000001030307) Feb 20 04:33:24 localhost nova_compute[229929]: 2026-02-20 09:33:24.232 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:33:24 localhost nova_compute[229929]: 2026-02-20 09:33:24.257 229933 DEBUG oslo_concurrency.lockutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:33:24 localhost nova_compute[229929]: 2026-02-20 09:33:24.258 229933 DEBUG oslo_concurrency.lockutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:33:24 localhost nova_compute[229929]: 2026-02-20 09:33:24.258 229933 DEBUG oslo_concurrency.lockutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:33:24 localhost nova_compute[229929]: 2026-02-20 09:33:24.259 229933 DEBUG nova.compute.resource_tracker [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Auditing locally available compute resources for np0005625202.localdomain (node: np0005625202.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 20 04:33:24 localhost nova_compute[229929]: 2026-02-20 09:33:24.259 229933 DEBUG oslo_concurrency.processutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:33:24 localhost nova_compute[229929]: 2026-02-20 09:33:24.720 229933 DEBUG oslo_concurrency.processutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:33:24 localhost nova_compute[229929]: 2026-02-20 09:33:24.910 
229933 WARNING nova.virt.libvirt.driver [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 20 04:33:24 localhost nova_compute[229929]: 2026-02-20 09:33:24.912 229933 DEBUG nova.compute.resource_tracker [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Hypervisor/Node resource view: name=np0005625202.localdomain free_ram=12449MB free_disk=41.83720779418945GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", 
"vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 20 04:33:24 localhost nova_compute[229929]: 2026-02-20 09:33:24.913 229933 DEBUG oslo_concurrency.lockutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:33:24 localhost nova_compute[229929]: 2026-02-20 09:33:24.913 229933 DEBUG oslo_concurrency.lockutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:33:24 localhost nova_compute[229929]: 2026-02-20 09:33:24.983 229933 DEBUG nova.compute.resource_tracker [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 20 04:33:24 localhost nova_compute[229929]: 2026-02-20 09:33:24.985 229933 DEBUG nova.compute.resource_tracker [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Final resource view: name=np0005625202.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 20 04:33:25 localhost nova_compute[229929]: 2026-02-20 09:33:25.010 229933 DEBUG oslo_concurrency.processutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:33:25 localhost nova_compute[229929]: 2026-02-20 09:33:25.423 229933 DEBUG oslo_concurrency.processutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:33:25 localhost nova_compute[229929]: 2026-02-20 09:33:25.432 229933 DEBUG nova.compute.provider_tree [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 20 04:33:25 localhost nova_compute[229929]: 2026-02-20 09:33:25.457 229933 DEBUG nova.scheduler.client.report [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Inventory has not changed for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 20 04:33:25 localhost nova_compute[229929]: 2026-02-20 09:33:25.460 229933 DEBUG nova.compute.resource_tracker [None 
req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Compute_service record updated for np0005625202.localdomain:np0005625202.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 20 04:33:25 localhost nova_compute[229929]: 2026-02-20 09:33:25.460 229933 DEBUG oslo_concurrency.lockutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.547s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:33:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e. Feb 20 04:33:26 localhost podman[266244]: 2026-02-20 09:33:26.44776921 +0000 UTC m=+0.085845672 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 
'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 20 04:33:26 localhost podman[266244]: 2026-02-20 09:33:26.461872697 +0000 UTC m=+0.099949149 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 20 04:33:26 localhost systemd[1]: 
894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully. Feb 20 04:33:28 localhost openstack_network_exporter[243776]: ERROR 09:33:28 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 20 04:33:28 localhost openstack_network_exporter[243776]: Feb 20 04:33:28 localhost openstack_network_exporter[243776]: ERROR 09:33:28 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 20 04:33:28 localhost openstack_network_exporter[243776]: Feb 20 04:33:28 localhost python3.9[266377]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Feb 20 04:33:29 localhost python3.9[266487]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled Feb 20 04:33:29 localhost systemd-journald[48906]: Field hash table of /run/log/journal/01f46965e72fd8a157841feaa66c8d52/system.journal has a fill level at 75.1 (250 of 333 items), suggesting rotation. Feb 20 04:33:29 localhost systemd-journald[48906]: /run/log/journal/01f46965e72fd8a157841feaa66c8d52/system.journal: Journal header limits reached or header out-of-date, rotating. Feb 20 04:33:29 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Feb 20 04:33:29 localhost rsyslogd[759]: imjournal: journal files changed, reloading... 
[v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Feb 20 04:33:30 localhost python3.9[266598]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:33:30 localhost python3.9[266655]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/modules-load.d/dm-multipath.conf _original_basename=module-load.conf.j2 recurse=False state=file path=/etc/modules-load.d/dm-multipath.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:33:31 localhost python3.9[266765]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:33:32 localhost python3.9[266875]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 04:33:32 localhost python3.9[266986]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -rF /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 04:33:33 localhost python3.9[267097]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf 
follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 20 04:33:34 localhost python3.9[267209]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 04:33:35 localhost python3.9[267320]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:33:35 localhost python3.9[267430]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:33:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217. Feb 20 04:33:36 localhost systemd[1]: tmp-crun.Q03tuF.mount: Deactivated successfully. 
Feb 20 04:33:36 localhost podman[267540]: 2026-02-20 09:33:36.458128185 +0000 UTC m=+0.092472374 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_compute, 
container_name=ceilometer_agent_compute) Feb 20 04:33:36 localhost podman[267540]: 2026-02-20 09:33:36.467685648 +0000 UTC m=+0.102029867 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Feb 20 04:33:36 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully. Feb 20 04:33:36 localhost python3.9[267541]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:33:37 localhost python3.9[267669]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:33:37 localhost python3.9[267779]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:33:38 localhost python3.9[267889]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 20 04:33:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25571 DF PROTO=TCP 
SPT=56612 DPT=9102 SEQ=2555410258 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B096D8D0000000001030307) Feb 20 04:33:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c. Feb 20 04:33:39 localhost podman[268002]: 2026-02-20 09:33:39.334024271 +0000 UTC m=+0.080399482 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, vcs-type=git, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-05T04:57:10Z, 
version=9.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, container_name=openstack_network_exporter, managed_by=edpm_ansible, release=1770267347, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.buildah.version=1.33.7, config_id=openstack_network_exporter) Feb 20 04:33:39 localhost podman[268002]: 2026-02-20 09:33:39.35110251 +0000 UTC m=+0.097477721 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.buildah.version=1.33.7, name=ubi9/ubi-minimal, io.openshift.tags=minimal rhel9, build-date=2026-02-05T04:57:10Z, managed_by=edpm_ansible, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, version=9.7, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1770267347, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter) Feb 20 04:33:39 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. 
Feb 20 04:33:39 localhost python3.9[268001]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 04:33:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25572 DF PROTO=TCP SPT=56612 DPT=9102 SEQ=2555410258 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B09718D0000000001030307) Feb 20 04:33:40 localhost python3.9[268131]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 04:33:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48233 DF PROTO=TCP SPT=43154 DPT=9102 SEQ=4188731694 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B09740D0000000001030307) Feb 20 04:33:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 04:33:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. 
Feb 20 04:33:41 localhost podman[268133]: 2026-02-20 09:33:41.450422933 +0000 UTC m=+0.088123403 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:33:41 localhost podman[268133]: 2026-02-20 09:33:41.525466767 +0000 UTC m=+0.163167197 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 
'6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:33:41 localhost systemd[1]: tmp-crun.I6HqlY.mount: Deactivated successfully. 
Feb 20 04:33:41 localhost podman[268134]: 2026-02-20 09:33:41.539508853 +0000 UTC m=+0.174778496 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent) Feb 20 04:33:41 localhost systemd[1]: 
76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. Feb 20 04:33:41 localhost podman[268134]: 2026-02-20 09:33:41.546964439 +0000 UTC m=+0.182234092 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:33:41 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. Feb 20 04:33:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25573 DF PROTO=TCP SPT=56612 DPT=9102 SEQ=2555410258 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B09798D0000000001030307) Feb 20 04:33:42 localhost sshd[268263]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:33:42 localhost python3.9[268285]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Feb 20 04:33:43 localhost python3.9[268395]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled Feb 20 04:33:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14538 DF PROTO=TCP SPT=58588 DPT=9102 SEQ=27896571 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B097E0D0000000001030307) Feb 20 04:33:43 localhost python3.9[268505]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:33:44 localhost python3.9[268562]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/modules-load.d/nvme-fabrics.conf _original_basename=module-load.conf.j2 recurse=False state=file 
path=/etc/modules-load.d/nvme-fabrics.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:33:45 localhost python3.9[268672]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:33:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25574 DF PROTO=TCP SPT=56612 DPT=9102 SEQ=2555410258 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B09894D0000000001030307) Feb 20 04:33:46 localhost podman[241347]: time="2026-02-20T09:33:46Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 20 04:33:46 localhost podman[241347]: @ - - [20/Feb/2026:09:33:46 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 150997 "" "Go-http-client/1.1" Feb 20 04:33:46 localhost podman[241347]: @ - - [20/Feb/2026:09:33:46 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17266 "" "Go-http-client/1.1" Feb 20 04:33:46 localhost python3.9[268782]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ 
install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Feb 20 04:33:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1. Feb 20 04:33:50 localhost podman[268788]: 2026-02-20 09:33:50.006652056 +0000 UTC m=+0.076563557 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 20 04:33:50 localhost podman[268788]: 2026-02-20 09:33:50.041201985 +0000 UTC m=+0.111113476 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 20 04:33:50 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully. Feb 20 04:33:50 localhost python3.9[268915]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 20 04:33:51 localhost python3.9[269029]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:33:52 localhost python3.9[269139]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Feb 20 04:33:52 localhost systemd[1]: Reloading. Feb 20 04:33:52 localhost systemd-rc-local-generator[269164]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 04:33:52 localhost systemd-sysv-generator[269170]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 04:33:52 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:33:52 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 20 04:33:52 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:33:52 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:33:52 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 04:33:52 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 20 04:33:52 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:33:52 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:33:52 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:33:53 localhost python3.9[269283]: ansible-ansible.builtin.service_facts Invoked Feb 20 04:33:53 localhost network[269300]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Feb 20 04:33:53 localhost network[269301]: 'network-scripts' will be removed from distribution in near future. Feb 20 04:33:53 localhost network[269302]: It is advised to switch to 'NetworkManager' instead for network management. 
Feb 20 04:33:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25575 DF PROTO=TCP SPT=56612 DPT=9102 SEQ=2555410258 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B09AA0D0000000001030307) Feb 20 04:33:55 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 04:33:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e. Feb 20 04:33:56 localhost systemd[1]: tmp-crun.lzCTvD.mount: Deactivated successfully. Feb 20 04:33:56 localhost podman[269401]: 2026-02-20 09:33:56.624056108 +0000 UTC m=+0.099787546 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': 
True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 20 04:33:56 localhost podman[269401]: 2026-02-20 09:33:56.638723081 +0000 UTC m=+0.114454549 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 20 04:33:56 localhost systemd[1]: 
894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully. Feb 20 04:33:58 localhost openstack_network_exporter[243776]: ERROR 09:33:58 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 20 04:33:58 localhost openstack_network_exporter[243776]: Feb 20 04:33:58 localhost openstack_network_exporter[243776]: ERROR 09:33:58 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 20 04:33:58 localhost openstack_network_exporter[243776]: Feb 20 04:33:58 localhost python3.9[269556]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 04:33:59 localhost python3.9[269667]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 04:34:00 localhost python3.9[269778]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 04:34:01 localhost python3.9[269889]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 04:34:01 localhost sshd[269954]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:34:01 localhost python3.9[270001]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 04:34:02 localhost python3.9[270113]: ansible-ansible.builtin.systemd_service Invoked with 
enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 04:34:02 localhost sshd[270132]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:34:03 localhost python3.9[270226]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 04:34:04 localhost python3.9[270337]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 20 04:34:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:34:05.900 161766 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:34:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:34:05.901 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:34:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:34:05.901 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:34:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217. 
Feb 20 04:34:07 localhost podman[270410]: 2026-02-20 09:34:07.465052701 +0000 UTC m=+0.098834519 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:34:07 localhost podman[270410]: 2026-02-20 09:34:07.473998787 +0000 UTC m=+0.107780605 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 
9 Base Image, org.label-schema.vendor=CentOS) Feb 20 04:34:07 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully. Feb 20 04:34:07 localhost python3.9[270467]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:34:08 localhost python3.9[270577]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:34:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55991 DF PROTO=TCP SPT=48082 DPT=9102 SEQ=1641673507 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B09E2BD0000000001030307) Feb 20 04:34:08 localhost python3.9[270687]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:34:09 localhost systemd[1]: Started 
/usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c. Feb 20 04:34:09 localhost podman[270798]: 2026-02-20 09:34:09.486587965 +0000 UTC m=+0.088495904 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, build-date=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, architecture=x86_64, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, managed_by=edpm_ansible, 
container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, io.buildah.version=1.33.7, vcs-type=git, name=ubi9/ubi-minimal, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., version=9.7) Feb 20 04:34:09 localhost podman[270798]: 2026-02-20 09:34:09.504824817 +0000 UTC m=+0.106732736 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, name=ubi9/ubi-minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 
'11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, architecture=x86_64, io.openshift.expose-services=, container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, build-date=2026-02-05T04:57:10Z, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., release=1770267347, version=9.7, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9) Feb 20 04:34:09 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. Feb 20 04:34:09 localhost python3.9[270797]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:34:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55992 DF PROTO=TCP SPT=48082 DPT=9102 SEQ=1641673507 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B09E6CE0000000001030307) Feb 20 04:34:10 localhost python3.9[270927]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:34:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25576 DF PROTO=TCP SPT=56612 DPT=9102 SEQ=2555410258 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A3B09EA0D0000000001030307) Feb 20 04:34:10 localhost python3.9[271037]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:34:11 localhost python3.9[271147]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:34:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 04:34:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. 
Feb 20 04:34:11 localhost podman[271258]: 2026-02-20 09:34:11.957900236 +0000 UTC m=+0.089170142 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller) Feb 20 04:34:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55993 DF PROTO=TCP SPT=48082 DPT=9102 SEQ=1641673507 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B09EECD0000000001030307) Feb 20 04:34:12 localhost podman[271259]: 2026-02-20 09:34:12.034805721 +0000 UTC m=+0.160936516 
container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent) Feb 20 04:34:12 localhost podman[271258]: 2026-02-20 09:34:12.056819527 +0000 UTC m=+0.188089443 container exec_died 
76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller) Feb 20 04:34:12 localhost podman[271259]: 2026-02-20 09:34:12.06894969 +0000 UTC m=+0.195080525 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 
'6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Feb 20 04:34:12 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. 
Feb 20 04:34:12 localhost python3.9[271257]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:34:12 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. Feb 20 04:34:12 localhost python3.9[271412]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:34:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48234 DF PROTO=TCP SPT=43154 DPT=9102 SEQ=4188731694 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B09F20D0000000001030307) Feb 20 04:34:13 localhost nova_compute[229929]: 2026-02-20 09:34:13.232 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:34:13 localhost nova_compute[229929]: 2026-02-20 09:34:13.233 229933 DEBUG nova.compute.manager [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Cleaning up deleted instances _run_pending_deletes 
/usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Feb 20 04:34:13 localhost nova_compute[229929]: 2026-02-20 09:34:13.252 229933 DEBUG nova.compute.manager [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Feb 20 04:34:13 localhost nova_compute[229929]: 2026-02-20 09:34:13.253 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:34:13 localhost python3.9[271522]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:34:14 localhost python3.9[271632]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:34:14 localhost python3.9[271742]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None 
modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:34:15 localhost python3.9[271852]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:34:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55994 DF PROTO=TCP SPT=48082 DPT=9102 SEQ=1641673507 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B09FE8E0000000001030307) Feb 20 04:34:16 localhost podman[241347]: time="2026-02-20T09:34:16Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 20 04:34:16 localhost podman[241347]: @ - - [20/Feb/2026:09:34:16 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 150997 "" "Go-http-client/1.1" Feb 20 04:34:16 localhost podman[241347]: @ - - [20/Feb/2026:09:34:16 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17269 "" "Go-http-client/1.1" Feb 20 04:34:16 localhost sshd[271907]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:34:16 localhost nova_compute[229929]: 2026-02-20 09:34:16.266 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:34:16 localhost python3.9[271963]: 
ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:34:16 localhost nova_compute[229929]: 2026-02-20 09:34:16.718 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:34:17 localhost python3.9[272073]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:34:18 localhost python3.9[272252]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:34:19 localhost nova_compute[229929]: 2026-02-20 09:34:19.248 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:34:19 localhost python3.9[272380]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 04:34:20 localhost nova_compute[229929]: 2026-02-20 09:34:20.232 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:34:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1. Feb 20 04:34:20 localhost python3.9[272490]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Feb 20 04:34:20 localhost systemd[1]: tmp-crun.TqiQ24.mount: Deactivated successfully. 
Feb 20 04:34:20 localhost podman[272491]: 2026-02-20 09:34:20.455472058 +0000 UTC m=+0.092810263 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 20 04:34:20 localhost podman[272491]: 2026-02-20 09:34:20.492764623 +0000 UTC m=+0.130102798 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 20 04:34:20 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully. Feb 20 04:34:21 localhost nova_compute[229929]: 2026-02-20 09:34:21.232 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:34:21 localhost nova_compute[229929]: 2026-02-20 09:34:21.232 229933 DEBUG nova.compute.manager [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 20 04:34:21 localhost nova_compute[229929]: 2026-02-20 09:34:21.233 229933 DEBUG nova.compute.manager [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 20 04:34:21 localhost nova_compute[229929]: 2026-02-20 09:34:21.249 229933 DEBUG nova.compute.manager [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 20 04:34:21 localhost python3.9[272623]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Feb 20 04:34:21 localhost systemd[1]: Reloading. Feb 20 04:34:21 localhost systemd-rc-local-generator[272646]: /etc/rc.d/rc.local is not marked executable, skipping. 
Feb 20 04:34:21 localhost systemd-sysv-generator[272653]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 04:34:21 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:34:21 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 20 04:34:21 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:34:21 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:34:21 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 20 04:34:21 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 20 04:34:21 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:34:21 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:34:21 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:34:22 localhost nova_compute[229929]: 2026-02-20 09:34:22.231 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:34:22 localhost python3.9[272768]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 04:34:22 localhost python3.9[272879]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 04:34:23 localhost nova_compute[229929]: 2026-02-20 09:34:23.231 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:34:23 localhost nova_compute[229929]: 2026-02-20 09:34:23.232 229933 DEBUG 
oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:34:23 localhost nova_compute[229929]: 2026-02-20 09:34:23.233 229933 DEBUG nova.compute.manager [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 20 04:34:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55995 DF PROTO=TCP SPT=48082 DPT=9102 SEQ=1641673507 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B0A1E0D0000000001030307) Feb 20 04:34:24 localhost nova_compute[229929]: 2026-02-20 09:34:24.228 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:34:24 localhost python3.9[272990]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 04:34:25 localhost nova_compute[229929]: 2026-02-20 09:34:25.231 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:34:25 localhost python3.9[273101]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl 
reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 04:34:26 localhost nova_compute[229929]: 2026-02-20 09:34:26.232 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:34:26 localhost nova_compute[229929]: 2026-02-20 09:34:26.249 229933 DEBUG oslo_concurrency.lockutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:34:26 localhost nova_compute[229929]: 2026-02-20 09:34:26.250 229933 DEBUG oslo_concurrency.lockutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:34:26 localhost nova_compute[229929]: 2026-02-20 09:34:26.250 229933 DEBUG oslo_concurrency.lockutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:34:26 localhost nova_compute[229929]: 2026-02-20 09:34:26.251 229933 DEBUG nova.compute.resource_tracker [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Auditing locally available compute resources for np0005625202.localdomain (node: np0005625202.localdomain) update_available_resource 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 20 04:34:26 localhost nova_compute[229929]: 2026-02-20 09:34:26.251 229933 DEBUG oslo_concurrency.processutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:34:26 localhost nova_compute[229929]: 2026-02-20 09:34:26.717 229933 DEBUG oslo_concurrency.processutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:34:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e. Feb 20 04:34:26 localhost nova_compute[229929]: 2026-02-20 09:34:26.958 229933 WARNING nova.virt.libvirt.driver [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 20 04:34:26 localhost nova_compute[229929]: 2026-02-20 09:34:26.961 229933 DEBUG nova.compute.resource_tracker [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Hypervisor/Node resource view: name=np0005625202.localdomain free_ram=12494MB free_disk=41.83720779418945GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": 
"1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 20 04:34:26 localhost nova_compute[229929]: 2026-02-20 09:34:26.961 229933 DEBUG oslo_concurrency.lockutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:34:26 localhost nova_compute[229929]: 2026-02-20 09:34:26.962 229933 DEBUG oslo_concurrency.lockutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:34:27 localhost podman[273234]: 2026-02-20 09:34:27.006472778 +0000 UTC m=+0.104234801 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', 
'--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 20 04:34:27 localhost podman[273234]: 2026-02-20 09:34:27.016301703 +0000 UTC m=+0.114063686 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 20 04:34:27 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully. Feb 20 04:34:27 localhost nova_compute[229929]: 2026-02-20 09:34:27.063 229933 DEBUG nova.compute.resource_tracker [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 20 04:34:27 localhost nova_compute[229929]: 2026-02-20 09:34:27.064 229933 DEBUG nova.compute.resource_tracker [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Final resource view: name=np0005625202.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 20 04:34:27 localhost python3.9[273235]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 04:34:27 localhost nova_compute[229929]: 2026-02-20 09:34:27.116 229933 DEBUG nova.scheduler.client.report [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Refreshing inventories for resource provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Feb 20 04:34:27 
localhost nova_compute[229929]: 2026-02-20 09:34:27.161 229933 DEBUG nova.scheduler.client.report [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Updating ProviderTree inventory for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Feb 20 04:34:27 localhost nova_compute[229929]: 2026-02-20 09:34:27.162 229933 DEBUG nova.compute.provider_tree [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Updating inventory in ProviderTree for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Feb 20 04:34:27 localhost nova_compute[229929]: 2026-02-20 09:34:27.176 229933 DEBUG nova.scheduler.client.report [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Refreshing aggregate associations for resource provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Feb 20 04:34:27 localhost nova_compute[229929]: 2026-02-20 09:34:27.205 229933 DEBUG nova.scheduler.client.report [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - 
-] Refreshing trait associations for resource provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6, traits: COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AMD_SVM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_F16C,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE2,HW_CPU_X86_AESNI,HW_CPU_X86_SSE4A,COMPUTE_NODE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Feb 20 04:34:27 localhost nova_compute[229929]: 2026-02-20 09:34:27.217 229933 DEBUG oslo_concurrency.processutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:34:27 localhost 
nova_compute[229929]: 2026-02-20 09:34:27.681 229933 DEBUG oslo_concurrency.processutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:34:27 localhost nova_compute[229929]: 2026-02-20 09:34:27.688 229933 DEBUG nova.compute.provider_tree [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 20 04:34:27 localhost nova_compute[229929]: 2026-02-20 09:34:27.710 229933 DEBUG nova.scheduler.client.report [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Inventory has not changed for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 20 04:34:27 localhost nova_compute[229929]: 2026-02-20 09:34:27.712 229933 DEBUG nova.compute.resource_tracker [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Compute_service record updated for np0005625202.localdomain:np0005625202.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 20 04:34:27 localhost nova_compute[229929]: 2026-02-20 09:34:27.713 229933 DEBUG oslo_concurrency.lockutils [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Lock "compute_resources" "released" by 
"nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:34:27 localhost nova_compute[229929]: 2026-02-20 09:34:27.714 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:34:27 localhost nova_compute[229929]: 2026-02-20 09:34:27.714 229933 DEBUG nova.compute.manager [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Feb 20 04:34:28 localhost openstack_network_exporter[243776]: ERROR 09:34:28 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 20 04:34:28 localhost openstack_network_exporter[243776]: Feb 20 04:34:28 localhost openstack_network_exporter[243776]: ERROR 09:34:28 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 20 04:34:28 localhost openstack_network_exporter[243776]: Feb 20 04:34:28 localhost sshd[273392]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:34:28 localhost python3.9[273391]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 04:34:29 localhost python3.9[273504]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None 
creates=None removes=None stdin=None Feb 20 04:34:29 localhost python3.9[273615]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 04:34:31 localhost python3.9[273726]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 20 04:34:31 localhost python3.9[273836]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 20 04:34:31 localhost sshd[273854]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:34:32 localhost python3.9[273948]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 20 04:34:33 localhost python3.9[274058]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext 
setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 20 04:34:34 localhost python3.9[274168]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 20 04:34:34 localhost python3.9[274278]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Feb 20 04:34:35 localhost python3.9[274388]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Feb 20 04:34:35 localhost python3.9[274498]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None 
_diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Feb 20 04:34:36 localhost python3.9[274608]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Feb 20 04:34:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217. Feb 20 04:34:38 localhost systemd[1]: tmp-crun.Uv46H2.mount: Deactivated successfully. Feb 20 04:34:38 localhost podman[274626]: 2026-02-20 09:34:38.454958216 +0000 UTC m=+0.090751887 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': 
['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:34:38 localhost podman[274626]: 2026-02-20 09:34:38.468735857 +0000 UTC m=+0.104529568 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': 
['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute) Feb 20 04:34:38 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully. Feb 20 04:34:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29675 DF PROTO=TCP SPT=33612 DPT=9102 SEQ=1135383406 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B0A57ED0000000001030307) Feb 20 04:34:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29676 DF PROTO=TCP SPT=33612 DPT=9102 SEQ=1135383406 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B0A5C0E0000000001030307) Feb 20 04:34:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c. 
Feb 20 04:34:40 localhost podman[274646]: 2026-02-20 09:34:40.454498022 +0000 UTC m=+0.092795902 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, config_id=openstack_network_exporter, distribution-scope=public, io.openshift.expose-services=, release=1770267347, name=ubi9/ubi-minimal, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2026-02-05T04:57:10Z, version=9.7, maintainer=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., io.buildah.version=1.33.7, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter) Feb 20 04:34:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55996 DF PROTO=TCP SPT=48082 DPT=9102 SEQ=1641673507 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B0A5E0E0000000001030307) Feb 20 04:34:40 localhost podman[274646]: 2026-02-20 09:34:40.498304992 +0000 UTC m=+0.136602892 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, 
org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, version=9.7, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, name=ubi9/ubi-minimal, maintainer=Red Hat, Inc., release=1770267347, build-date=2026-02-05T04:57:10Z, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, distribution-scope=public) Feb 20 04:34:40 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. Feb 20 04:34:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29677 DF PROTO=TCP SPT=33612 DPT=9102 SEQ=1135383406 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B0A640D0000000001030307) Feb 20 04:34:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. 
Feb 20 04:34:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. Feb 20 04:34:42 localhost podman[274666]: 2026-02-20 09:34:42.458098428 +0000 UTC m=+0.084181820 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller) Feb 20 04:34:42 localhost podman[274667]: 2026-02-20 09:34:42.547678293 +0000 UTC m=+0.169699416 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, 
name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:34:42 localhost podman[274666]: 2026-02-20 09:34:42.556904781 +0000 UTC m=+0.182988173 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, 
org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Feb 20 04:34:42 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. 
Feb 20 04:34:42 localhost podman[274667]: 2026-02-20 09:34:42.579576442 +0000 UTC m=+0.201597575 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS) Feb 20 04:34:42 localhost systemd[1]: 
eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. Feb 20 04:34:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25577 DF PROTO=TCP SPT=56612 DPT=9102 SEQ=2555410258 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B0A680D0000000001030307) Feb 20 04:34:43 localhost python3.9[274801]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None Feb 20 04:34:44 localhost sshd[274820]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:34:44 localhost systemd-logind[760]: New session 60 of user zuul. Feb 20 04:34:45 localhost systemd[1]: Started Session 60 of User zuul. Feb 20 04:34:45 localhost systemd[1]: session-60.scope: Deactivated successfully. Feb 20 04:34:45 localhost systemd-logind[760]: Session 60 logged out. Waiting for processes to exit. Feb 20 04:34:45 localhost systemd-logind[760]: Removed session 60. 
Feb 20 04:34:45 localhost python3.9[274931]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:34:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29678 DF PROTO=TCP SPT=33612 DPT=9102 SEQ=1135383406 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B0A73CD0000000001030307) Feb 20 04:34:46 localhost podman[241347]: time="2026-02-20T09:34:46Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 20 04:34:46 localhost podman[241347]: @ - - [20/Feb/2026:09:34:46 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 150997 "" "Go-http-client/1.1" Feb 20 04:34:46 localhost podman[241347]: @ - - [20/Feb/2026:09:34:46 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17267 "" "Go-http-client/1.1" Feb 20 04:34:46 localhost python3.9[274986]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 20 04:34:46 localhost python3.9[275094]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:34:47 localhost python3.9[275180]: 
ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771580086.4343972-2355-41046891241156/.source _original_basename=ssh-config follow=False checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 20 04:34:47 localhost python3.9[275288]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:34:48 localhost python3.9[275374]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771580087.5291667-2355-230901990995186/.source.py _original_basename=nova_statedir_ownership.py follow=False checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 20 04:34:48 localhost python3.9[275482]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:34:49 localhost python3.9[275568]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771580088.548547-2355-234016218439364/.source _original_basename=run-on-host follow=False checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False 
force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 20 04:34:50 localhost python3.9[275676]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:34:51 localhost python3.9[275762]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771580090.1240568-2517-15939536548810/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=9126091aab7eef145bc487e7e4a566b4a9e47220 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 20 04:34:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1. 
Feb 20 04:34:51 localhost podman[275763]: 2026-02-20 09:34:51.472447546 +0000 UTC m=+0.097767197 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 20 04:34:51 localhost podman[275763]: 2026-02-20 09:34:51.484805418 +0000 UTC m=+0.110125049 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 20 04:34:51 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully. Feb 20 04:34:52 localhost python3.9[275895]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:34:52 localhost python3.9[276005]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:34:53 localhost python3.9[276115]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 20 04:34:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29679 DF PROTO=TCP SPT=33612 DPT=9102 SEQ=1135383406 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B0A940D0000000001030307) Feb 20 04:34:54 localhost python3.9[276227]: ansible-ansible.builtin.file Invoked with group=nova mode=0400 owner=nova path=/var/lib/nova/compute_id state=file recurse=False force=False follow=True 
modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:34:55 localhost python3.9[276335]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 20 04:34:56 localhost python3.9[276447]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:34:56 localhost sshd[276557]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:34:56 localhost python3.9[276559]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 20 04:34:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e. 
Feb 20 04:34:57 localhost podman[276631]: 2026-02-20 09:34:57.45158381 +0000 UTC m=+0.082209836 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 20 04:34:57 localhost podman[276631]: 2026-02-20 09:34:57.490051637 +0000 UTC m=+0.120677683 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, 
config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 20 04:34:57 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully. 
Feb 20 04:34:57 localhost python3.9[276690]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/nova_compute_init state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:34:58 localhost openstack_network_exporter[243776]: ERROR 09:34:58 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 20 04:34:58 localhost openstack_network_exporter[243776]: Feb 20 04:34:58 localhost openstack_network_exporter[243776]: ERROR 09:34:58 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 20 04:34:58 localhost openstack_network_exporter[243776]: Feb 20 04:35:00 localhost python3.9[276996]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/nova_compute_init config_pattern=*.json debug=False Feb 20 04:35:01 localhost python3.9[277106]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack Feb 20 04:35:03 localhost python3[277216]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/nova_compute_init config_id=nova_compute_init config_overrides={} config_patterns=*.json containers=['nova_compute_init'] log_base_path=/var/log/containers/stdouts debug=False Feb 20 04:35:03 localhost python3[277216]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd",#012 "Digest": "sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"#012 
],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2026-01-30T06:31:38.534497001Z",#012 "Config": {#012 "User": "nova",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20260127",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "b85d0548925081ae8c6bdd697658cec4",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 1214548351,#012 "VirtualSize": 1214548351,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/f4838a4ef132546976a08c48bf55f89a91b54cc7f0728a84d5c77d24ba7a8992/diff:/var/lib/containers/storage/overlay/1d7b7d3208029afb8b179e48c365354efe7c39d41194e42a7d13168820ab51ad/diff:/var/lib/containers/storage/overlay/1ad843ea4b31b05bcf49ccd6faa74bd0d6976ffabe60466fd78caf7ec41bf4ac/diff:/var/lib/containers/storage/overlay/57c9a356b8a6d9095c1e6bfd1bb5d3b87c9d1b944c2c5d8a1da6e61dd690c595/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/426448257cd6d6837b598e532a79ac3a86475cfca86b72c882b04ab6e3f65424/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/426448257cd6d6837b598e532a79ac3a86475cfca86b72c882b04ab6e3f65424/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 
"sha256:57c9a356b8a6d9095c1e6bfd1bb5d3b87c9d1b944c2c5d8a1da6e61dd690c595",#012 "sha256:315008a247098d7a6218ae8aaacc68c9c19036e3778f3bb6313e5d0200cfa613",#012 "sha256:d3142d7a25f00adc375557623676c786baeb2b8fec29945db7fe79212198a495",#012 "sha256:6cac2e473d63cf2a9b8ef2ea3f4fbc7fb780c57021c3588efd56da3aa8cf8843",#012 "sha256:927dd86a09392106af537557be80232b7e8ca154daa00857c24fe20f9e550a50"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20260127",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "b85d0548925081ae8c6bdd697658cec4",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "nova",#012 "History": [#012 {#012 "created": "2026-01-28T05:56:51.126388624Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:54935d5b0598cdb1451aeae3c8627aade8d55dcef2e876b35185c8e36be64256 in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-28T05:56:51.126459235Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20260127\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-28T05:56:53.726938221Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2026-01-30T06:10:18.890429494Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:18.890534417Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 
"empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:18.890553228Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:18.890570688Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:18.890616649Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:18.890659121Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:19.232761948Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:52.670543613Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 Feb 20 04:35:03 localhost podman[277266]: 2026-02-20 09:35:03.404367524 +0000 UTC m=+0.068173828 container remove 66af039b890df51100ccb41f4acf5517eb836b613e1f9e398f4f08e1ae1ca156 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, 
name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=nova_compute_init, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': 'ea4d4b46c7c04cab2c0f21f3438d6f73404b52ef41d7f6d09559733205477b71'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127) Feb 20 04:35:03 localhost python3[277216]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force nova_compute_init Feb 20 04:35:03 localhost podman[277279]: Feb 20 04:35:03 localhost podman[277279]: 2026-02-20 09:35:03.510282069 +0000 UTC m=+0.088003983 container create 29f11e275ca2f653c4911d7a399e43eccafc082e16d52ec9d06a475965db1dea (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t 
nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': '832c02321984b9faf87ea1e4ce5a0ef3adf9381e48f5c448f10d738061843ddd'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, config_id=nova_compute_init, container_name=nova_compute_init, org.label-schema.schema-version=1.0) Feb 20 04:35:03 localhost podman[277279]: 2026-02-20 09:35:03.468671458 +0000 UTC m=+0.046393402 image pull quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified Feb 20 04:35:03 localhost python3[277216]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --env EDPM_CONFIG_HASH=832c02321984b9faf87ea1e4ce5a0ef3adf9381e48f5c448f10d738061843ddd --label config_id=nova_compute_init --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': '832c02321984b9faf87ea1e4ce5a0ef3adf9381e48f5c448f10d738061843ddd'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': 
['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init Feb 20 04:35:04 localhost python3.9[277427]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 20 04:35:05 localhost python3.9[277537]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml Feb 20 04:35:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:35:05.901 161766 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:35:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:35:05.902 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:35:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:35:05.902 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by 
"neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:35:07 localhost python3.9[277647]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:35:07 localhost python3.9[277737]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771580106.638867-2988-127383962682734/.source.yaml _original_basename=.c6rqelq4 follow=False checksum=4d557a266f0e30e386f17a3d7c6078d564f9be8b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:35:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217. Feb 20 04:35:08 localhost systemd[1]: tmp-crun.UrYIJE.mount: Deactivated successfully. 
Feb 20 04:35:08 localhost podman[277848]: 2026-02-20 09:35:08.722718369 +0000 UTC m=+0.100917872 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, 
container_name=ceilometer_agent_compute) Feb 20 04:35:08 localhost podman[277848]: 2026-02-20 09:35:08.736733927 +0000 UTC m=+0.114933430 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Feb 20 04:35:08 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully. Feb 20 04:35:08 localhost python3.9[277847]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:35:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17700 DF PROTO=TCP SPT=52694 DPT=9102 SEQ=307837550 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B0ACD1E0000000001030307) Feb 20 04:35:09 localhost python3.9[277976]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Feb 20 04:35:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17701 DF PROTO=TCP SPT=52694 DPT=9102 SEQ=307837550 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B0AD10D0000000001030307) Feb 20 04:35:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c. 
Feb 20 04:35:10 localhost systemd[1]: tmp-crun.8BDU4E.mount: Deactivated successfully. Feb 20 04:35:10 localhost podman[278087]: 2026-02-20 09:35:10.664030566 +0000 UTC m=+0.106534344 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-05T04:57:10Z, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, managed_by=edpm_ansible, name=ubi9/ubi-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, version=9.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, release=1770267347, com.redhat.component=ubi9-minimal-container, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c) Feb 20 04:35:10 localhost podman[278087]: 2026-02-20 09:35:10.680851369 +0000 UTC m=+0.123355147 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c 
(image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.created=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.7, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.openshift.expose-services=, container_name=openstack_network_exporter, vcs-type=git, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Red Hat, Inc., distribution-scope=public, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2026-02-05T04:57:10Z) Feb 20 04:35:10 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. 
Feb 20 04:35:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29680 DF PROTO=TCP SPT=33612 DPT=9102 SEQ=1135383406 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B0AD40D0000000001030307) Feb 20 04:35:10 localhost python3.9[278086]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Feb 20 04:35:11 localhost python3.9[278161]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/nova_compute.json _original_basename=.es67gw_w recurse=False state=file path=/var/lib/kolla/config_files/nova_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:35:11 localhost python3.9[278269]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/nova_compute state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:35:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17702 DF PROTO=TCP SPT=52694 DPT=9102 SEQ=307837550 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B0AD90D0000000001030307) Feb 20 04:35:12 localhost sshd[278396]: main: sshd: 
ssh-rsa algorithm is disabled Feb 20 04:35:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55997 DF PROTO=TCP SPT=48082 DPT=9102 SEQ=1641673507 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B0ADC0E0000000001030307) Feb 20 04:35:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 04:35:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. Feb 20 04:35:13 localhost podman[278469]: 2026-02-20 09:35:13.009090346 +0000 UTC m=+0.094848888 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:35:13 localhost podman[278469]: 2026-02-20 09:35:13.04558576 +0000 UTC m=+0.131344342 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127) Feb 20 04:35:13 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. Feb 20 04:35:13 localhost podman[278468]: 2026-02-20 09:35:13.049602047 +0000 UTC m=+0.138797121 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20260127) Feb 20 04:35:13 localhost podman[278468]: 2026-02-20 09:35:13.147766133 +0000 UTC m=+0.236961227 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller) Feb 20 04:35:13 localhost systemd[1]: 
76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. Feb 20 04:35:14 localhost python3.9[278618]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/nova_compute config_pattern=*.json debug=False Feb 20 04:35:14 localhost python3.9[278728]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack Feb 20 04:35:15 localhost python3[278838]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/nova_compute config_id=nova_compute config_overrides={} config_patterns=*.json containers=['nova_compute'] log_base_path=/var/log/containers/stdouts debug=False Feb 20 04:35:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17703 DF PROTO=TCP SPT=52694 DPT=9102 SEQ=307837550 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B0AE8CD0000000001030307) Feb 20 04:35:16 localhost podman[241347]: time="2026-02-20T09:35:16Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 20 04:35:16 localhost podman[241347]: @ - - [20/Feb/2026:09:35:16 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 151003 "" "Go-http-client/1.1" Feb 20 04:35:16 localhost podman[241347]: @ - - [20/Feb/2026:09:35:16 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17271 "" "Go-http-client/1.1" Feb 20 04:35:16 localhost python3[278838]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd",#012 "Digest": "sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83",#012 "RepoTags": [#012 
"quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2026-01-30T06:31:38.534497001Z",#012 "Config": {#012 "User": "nova",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20260127",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "b85d0548925081ae8c6bdd697658cec4",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 1214548351,#012 "VirtualSize": 1214548351,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/f4838a4ef132546976a08c48bf55f89a91b54cc7f0728a84d5c77d24ba7a8992/diff:/var/lib/containers/storage/overlay/1d7b7d3208029afb8b179e48c365354efe7c39d41194e42a7d13168820ab51ad/diff:/var/lib/containers/storage/overlay/1ad843ea4b31b05bcf49ccd6faa74bd0d6976ffabe60466fd78caf7ec41bf4ac/diff:/var/lib/containers/storage/overlay/57c9a356b8a6d9095c1e6bfd1bb5d3b87c9d1b944c2c5d8a1da6e61dd690c595/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/426448257cd6d6837b598e532a79ac3a86475cfca86b72c882b04ab6e3f65424/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/426448257cd6d6837b598e532a79ac3a86475cfca86b72c882b04ab6e3f65424/work"#012 }#012 
},#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:57c9a356b8a6d9095c1e6bfd1bb5d3b87c9d1b944c2c5d8a1da6e61dd690c595",#012 "sha256:315008a247098d7a6218ae8aaacc68c9c19036e3778f3bb6313e5d0200cfa613",#012 "sha256:d3142d7a25f00adc375557623676c786baeb2b8fec29945db7fe79212198a495",#012 "sha256:6cac2e473d63cf2a9b8ef2ea3f4fbc7fb780c57021c3588efd56da3aa8cf8843",#012 "sha256:927dd86a09392106af537557be80232b7e8ca154daa00857c24fe20f9e550a50"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20260127",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "b85d0548925081ae8c6bdd697658cec4",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "nova",#012 "History": [#012 {#012 "created": "2026-01-28T05:56:51.126388624Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:54935d5b0598cdb1451aeae3c8627aade8d55dcef2e876b35185c8e36be64256 in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-28T05:56:51.126459235Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20260127\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-28T05:56:53.726938221Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2026-01-30T06:10:18.890429494Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:18.890534417Z",#012 
"created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:18.890553228Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:18.890570688Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:18.890616649Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:18.890659121Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:19.232761948Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2026-01-30T06:10:52.670543613Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 Feb 20 04:35:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:35:17.259 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:35:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:35:17.259 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:35:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:35:17.260 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:35:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:35:17.260 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:35:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:35:17.260 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:35:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:35:17.260 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:35:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:35:17.260 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:35:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:35:17.260 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 
04:35:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:35:17.260 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:35:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:35:17.261 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:35:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:35:17.261 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:35:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:35:17.261 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:35:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:35:17.261 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:35:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:35:17.261 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:35:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:35:17.261 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:35:17 localhost ceilometer_agent_compute[236653]: 
2026-02-20 09:35:17.261 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:35:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:35:17.262 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:35:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:35:17.262 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:35:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:35:17.262 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:35:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:35:17.262 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:35:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:35:17.262 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:35:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:35:17.262 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:35:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:35:17.262 12 DEBUG ceilometer.polling.manager [-] 
Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:35:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:35:17.263 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:35:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:35:17.263 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:35:17 localhost sshd[278900]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:35:18 localhost nova_compute[229929]: 2026-02-20 09:35:18.724 229933 DEBUG oslo_service.periodic_task [None req-3bc05979-6996-456d-920a-19a5037c8474 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:35:19 localhost nova_compute[229929]: 2026-02-20 09:35:19.730 229933 WARNING amqp [-] Received method (60, 30) during closing channel 1. 
This method will be ignored#033[00m Feb 20 04:35:19 localhost nova_compute[229929]: 2026-02-20 09:35:19.732 229933 DEBUG oslo_concurrency.lockutils [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Feb 20 04:35:19 localhost nova_compute[229929]: 2026-02-20 09:35:19.732 229933 DEBUG oslo_concurrency.lockutils [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Feb 20 04:35:19 localhost nova_compute[229929]: 2026-02-20 09:35:19.732 229933 DEBUG oslo_concurrency.lockutils [None req-c604cffd-6667-4ea8-b96c-76e86082213e - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Feb 20 04:35:19 localhost systemd[1]: tmp-crun.8tJ6QC.mount: Deactivated successfully. Feb 20 04:35:19 localhost podman[279011]: 2026-02-20 09:35:19.994481793 +0000 UTC m=+0.099561184 container exec 17bcd3d9a6436ea04160ed4c4e04d839e51c8678402dc4e38301f79a98b3dc30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202, RELEASE=main, io.buildah.version=1.42.2, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, 
GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, ceph=True, name=rhceph, GIT_CLEAN=True, release=1770267347, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Feb 20 04:35:20 localhost journal[229026]: End of file while reading data: Input/output error Feb 20 04:35:20 localhost systemd[1]: libpod-1f1a1047881a5842c37723180a39027a342e5d0e008eb86ed7846a54f5807c64.scope: Deactivated successfully. Feb 20 04:35:20 localhost systemd[1]: libpod-1f1a1047881a5842c37723180a39027a342e5d0e008eb86ed7846a54f5807c64.scope: Consumed 15.546s CPU time. Feb 20 04:35:20 localhost podman[278887]: 2026-02-20 09:35:20.107075578 +0000 UTC m=+3.853435278 container died 1f1a1047881a5842c37723180a39027a342e5d0e008eb86ed7846a54f5807c64 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-30a66d126d7b377a2bbf9bc0d57e51bdf55bb85eee6d379d83e3d3aa2e3d7293-ea4d4b46c7c04cab2c0f21f3438d6f73404b52ef41d7f6d09559733205477b71'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': 
['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, config_id=nova_compute) Feb 20 04:35:20 localhost podman[279011]: 2026-02-20 09:35:20.128128556 +0000 UTC m=+0.233207897 container exec_died 17bcd3d9a6436ea04160ed4c4e04d839e51c8678402dc4e38301f79a98b3dc30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, ceph=True, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, version=7, build-date=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , release=1770267347, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 
in a fully featured and supported base image., name=rhceph, io.openshift.expose-services=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, GIT_CLEAN=True, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 04:35:20 localhost podman[278887]: 2026-02-20 09:35:20.257269077 +0000 UTC m=+4.003628677 container cleanup 1f1a1047881a5842c37723180a39027a342e5d0e008eb86ed7846a54f5807c64 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=nova_compute, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-30a66d126d7b377a2bbf9bc0d57e51bdf55bb85eee6d379d83e3d3aa2e3d7293-ea4d4b46c7c04cab2c0f21f3438d6f73404b52ef41d7f6d09559733205477b71'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', 
'/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:35:20 localhost python3[278838]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman stop nova_compute Feb 20 04:35:20 localhost podman[279030]: 2026-02-20 09:35:20.420421874 +0000 UTC m=+0.310726677 container cleanup 1f1a1047881a5842c37723180a39027a342e5d0e008eb86ed7846a54f5807c64 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-30a66d126d7b377a2bbf9bc0d57e51bdf55bb85eee6d379d83e3d3aa2e3d7293-ea4d4b46c7c04cab2c0f21f3438d6f73404b52ef41d7f6d09559733205477b71'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', 
'/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=nova_compute, container_name=nova_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2) Feb 20 04:35:20 localhost podman[279076]: 2026-02-20 09:35:20.44289873 +0000 UTC m=+0.170376794 container remove 1f1a1047881a5842c37723180a39027a342e5d0e008eb86ed7846a54f5807c64 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-30a66d126d7b377a2bbf9bc0d57e51bdf55bb85eee6d379d83e3d3aa2e3d7293-ea4d4b46c7c04cab2c0f21f3438d6f73404b52ef41d7f6d09559733205477b71'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=nova_compute, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Feb 20 04:35:20 localhost python3[278838]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force nova_compute Feb 20 04:35:20 localhost podman[279106]: Error: no container with name or ID "nova_compute" found: no such container Feb 20 04:35:20 localhost systemd[1]: edpm_nova_compute.service: Control process exited, code=exited, status=125/n/a Feb 20 04:35:20 localhost podman[279108]: Feb 20 04:35:20 localhost podman[279108]: 2026-02-20 09:35:20.561476567 +0000 UTC m=+0.097102329 container create 2644442d638858330444a7f23905e4b2511f84851b204e4d490e757f3bec977e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=nova_compute, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-30a66d126d7b377a2bbf9bc0d57e51bdf55bb85eee6d379d83e3d3aa2e3d7293-832c02321984b9faf87ea1e4ce5a0ef3adf9381e48f5c448f10d738061843ddd'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', 
'/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute) Feb 20 04:35:20 localhost podman[279108]: 2026-02-20 09:35:20.516906265 +0000 UTC m=+0.052532037 image pull quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified Feb 20 04:35:20 localhost python3[278838]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-30a66d126d7b377a2bbf9bc0d57e51bdf55bb85eee6d379d83e3d3aa2e3d7293-832c02321984b9faf87ea1e4ce5a0ef3adf9381e48f5c448f10d738061843ddd --label config_id=nova_compute --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-30a66d126d7b377a2bbf9bc0d57e51bdf55bb85eee6d379d83e3d3aa2e3d7293-832c02321984b9faf87ea1e4ce5a0ef3adf9381e48f5c448f10d738061843ddd'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start Feb 20 04:35:20 localhost podman[279186]: error opening file `/run/crun/2644442d638858330444a7f23905e4b2511f84851b204e4d490e757f3bec977e/status`: No such file or directory Feb 20 04:35:20 localhost podman[279146]: 2026-02-20 09:35:20.674219985 +0000 UTC m=+0.129965994 container cleanup 
2644442d638858330444a7f23905e4b2511f84851b204e4d490e757f3bec977e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=nova_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=nova_compute, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-30a66d126d7b377a2bbf9bc0d57e51bdf55bb85eee6d379d83e3d3aa2e3d7293-832c02321984b9faf87ea1e4ce5a0ef3adf9381e48f5c448f10d738061843ddd'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Feb 20 04:35:20 localhost podman[279146]: nova_compute Feb 20 04:35:20 localhost systemd[1]: edpm_nova_compute.service: Failed with result 'exit-code'. 
Feb 20 04:35:20 localhost systemd[1]: Started libpod-conmon-2644442d638858330444a7f23905e4b2511f84851b204e4d490e757f3bec977e.scope. Feb 20 04:35:20 localhost systemd[1]: Started libcrun container. Feb 20 04:35:20 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11e71aef76232b9187f1b6c0522426897e147f31bbc545837c710df6c88ed985/merged/etc/multipath supports timestamps until 2038 (0x7fffffff) Feb 20 04:35:20 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11e71aef76232b9187f1b6c0522426897e147f31bbc545837c710df6c88ed985/merged/etc/nvme supports timestamps until 2038 (0x7fffffff) Feb 20 04:35:20 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11e71aef76232b9187f1b6c0522426897e147f31bbc545837c710df6c88ed985/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Feb 20 04:35:20 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11e71aef76232b9187f1b6c0522426897e147f31bbc545837c710df6c88ed985/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Feb 20 04:35:20 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11e71aef76232b9187f1b6c0522426897e147f31bbc545837c710df6c88ed985/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Feb 20 04:35:20 localhost podman[279156]: 2026-02-20 09:35:20.747751307 +0000 UTC m=+0.167437324 container init 2644442d638858330444a7f23905e4b2511f84851b204e4d490e757f3bec977e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 
'6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-30a66d126d7b377a2bbf9bc0d57e51bdf55bb85eee6d379d83e3d3aa2e3d7293-832c02321984b9faf87ea1e4ce5a0ef3adf9381e48f5c448f10d738061843ddd'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, config_id=nova_compute, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true) Feb 20 04:35:20 localhost podman[279156]: 2026-02-20 09:35:20.75898323 +0000 UTC m=+0.178669237 container start 2644442d638858330444a7f23905e4b2511f84851b204e4d490e757f3bec977e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=nova_compute, io.buildah.version=1.41.3, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-30a66d126d7b377a2bbf9bc0d57e51bdf55bb85eee6d379d83e3d3aa2e3d7293-832c02321984b9faf87ea1e4ce5a0ef3adf9381e48f5c448f10d738061843ddd'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20260127) Feb 20 04:35:20 localhost nova_compute[279195]: + sudo -E kolla_set_configs Feb 20 04:35:20 localhost python3[278838]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman start nova_compute Feb 20 04:35:20 localhost systemd[1]: edpm_nova_compute.service: Scheduled restart job, restart counter is at 1. Feb 20 04:35:20 localhost systemd[1]: Stopped nova_compute container. 
Feb 20 04:35:20 localhost nova_compute[279195]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Feb 20 04:35:20 localhost nova_compute[279195]: INFO:__main__:Validating config file Feb 20 04:35:20 localhost nova_compute[279195]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Feb 20 04:35:20 localhost nova_compute[279195]: INFO:__main__:Copying service configuration files Feb 20 04:35:20 localhost nova_compute[279195]: INFO:__main__:Deleting /etc/nova/nova.conf Feb 20 04:35:20 localhost nova_compute[279195]: INFO:__main__:Copying /var/lib/kolla/config_files/src/nova-blank.conf to /etc/nova/nova.conf Feb 20 04:35:20 localhost nova_compute[279195]: INFO:__main__:Setting permission for /etc/nova/nova.conf Feb 20 04:35:20 localhost nova_compute[279195]: INFO:__main__:Copying /var/lib/kolla/config_files/src/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf Feb 20 04:35:20 localhost nova_compute[279195]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf Feb 20 04:35:20 localhost nova_compute[279195]: INFO:__main__:Copying /var/lib/kolla/config_files/src/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf Feb 20 04:35:20 localhost nova_compute[279195]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf Feb 20 04:35:20 localhost nova_compute[279195]: INFO:__main__:Copying /var/lib/kolla/config_files/src/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Feb 20 04:35:20 localhost nova_compute[279195]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Feb 20 04:35:20 localhost nova_compute[279195]: INFO:__main__:Copying /var/lib/kolla/config_files/src/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf Feb 20 04:35:20 localhost nova_compute[279195]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf Feb 20 04:35:20 localhost nova_compute[279195]: 
INFO:__main__:Copying /var/lib/kolla/config_files/src/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf Feb 20 04:35:20 localhost nova_compute[279195]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf Feb 20 04:35:20 localhost nova_compute[279195]: INFO:__main__:Deleting /etc/ceph Feb 20 04:35:20 localhost systemd[1]: Starting nova_compute container... Feb 20 04:35:20 localhost nova_compute[279195]: INFO:__main__:Creating directory /etc/ceph Feb 20 04:35:20 localhost nova_compute[279195]: INFO:__main__:Setting permission for /etc/ceph Feb 20 04:35:20 localhost nova_compute[279195]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring Feb 20 04:35:20 localhost nova_compute[279195]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Feb 20 04:35:20 localhost nova_compute[279195]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceph/ceph.conf to /etc/ceph/ceph.conf Feb 20 04:35:20 localhost nova_compute[279195]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Feb 20 04:35:20 localhost nova_compute[279195]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey Feb 20 04:35:20 localhost nova_compute[279195]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey Feb 20 04:35:20 localhost nova_compute[279195]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Feb 20 04:35:20 localhost nova_compute[279195]: INFO:__main__:Deleting /var/lib/nova/.ssh/config Feb 20 04:35:20 localhost nova_compute[279195]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ssh-config to /var/lib/nova/.ssh/config Feb 20 04:35:20 localhost nova_compute[279195]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Feb 20 04:35:20 localhost nova_compute[279195]: INFO:__main__:Deleting /usr/sbin/iscsiadm Feb 20 04:35:20 
localhost nova_compute[279195]: INFO:__main__:Copying /var/lib/kolla/config_files/src/run-on-host to /usr/sbin/iscsiadm Feb 20 04:35:20 localhost nova_compute[279195]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm Feb 20 04:35:20 localhost nova_compute[279195]: INFO:__main__:Writing out command to execute Feb 20 04:35:20 localhost nova_compute[279195]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Feb 20 04:35:20 localhost nova_compute[279195]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Feb 20 04:35:20 localhost nova_compute[279195]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ Feb 20 04:35:20 localhost nova_compute[279195]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Feb 20 04:35:20 localhost nova_compute[279195]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Feb 20 04:35:20 localhost nova_compute[279195]: ++ cat /run_command Feb 20 04:35:20 localhost nova_compute[279195]: + CMD=nova-compute Feb 20 04:35:20 localhost nova_compute[279195]: + ARGS= Feb 20 04:35:20 localhost nova_compute[279195]: + sudo kolla_copy_cacerts Feb 20 04:35:20 localhost nova_compute[279195]: + [[ ! -n '' ]] Feb 20 04:35:20 localhost nova_compute[279195]: + . kolla_extend_start Feb 20 04:35:20 localhost nova_compute[279195]: + echo 'Running command: '\''nova-compute'\''' Feb 20 04:35:20 localhost nova_compute[279195]: Running command: 'nova-compute' Feb 20 04:35:20 localhost nova_compute[279195]: + umask 0022 Feb 20 04:35:20 localhost nova_compute[279195]: + exec nova-compute Feb 20 04:35:20 localhost systemd[1]: Started nova_compute container. Feb 20 04:35:20 localhost systemd[1]: tmp-crun.zVTlkv.mount: Deactivated successfully. Feb 20 04:35:20 localhost systemd[1]: var-lib-containers-storage-overlay-6a05ec5658aa743546307022d9fca1129054ee3b9d31d4b797362fdf17ebc9ce-merged.mount: Deactivated successfully. 
Feb 20 04:35:20 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1f1a1047881a5842c37723180a39027a342e5d0e008eb86ed7846a54f5807c64-userdata-shm.mount: Deactivated successfully. Feb 20 04:35:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1. Feb 20 04:35:21 localhost podman[279288]: 2026-02-20 09:35:21.641110167 +0000 UTC m=+0.089426911 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 20 04:35:21 localhost podman[279288]: 2026-02-20 09:35:21.676244394 +0000 UTC m=+0.124561098 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 
'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 20 04:35:21 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully. Feb 20 04:35:22 localhost nova_compute[279195]: 2026-02-20 09:35:22.510 279199 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Feb 20 04:35:22 localhost nova_compute[279195]: 2026-02-20 09:35:22.510 279199 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Feb 20 04:35:22 localhost nova_compute[279195]: 2026-02-20 09:35:22.511 279199 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Feb 20 04:35:22 localhost nova_compute[279195]: 2026-02-20 09:35:22.511 279199 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m Feb 20 04:35:22 localhost nova_compute[279195]: 2026-02-20 09:35:22.621 279199 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:35:22 localhost nova_compute[279195]: 2026-02-20 09:35:22.634 279199 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:35:22 localhost 
nova_compute[279195]: 2026-02-20 09:35:22.634 279199 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m Feb 20 04:35:23 localhost python3.9[279425]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.092 279199 INFO nova.virt.driver [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.207 279199 INFO nova.compute.provider_config [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.217 279199 DEBUG oslo_concurrency.lockutils [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.217 279199 DEBUG oslo_concurrency.lockutils [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.217 279199 DEBUG oslo_concurrency.lockutils [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.218 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - 
- - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.218 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.218 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.218 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.218 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.218 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.218 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] allow_resize_to_same_host = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.218 
279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] arq_binding_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.219 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] backdoor_port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.219 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] backdoor_socket = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.219 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] block_device_allocate_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.219 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.219 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cert = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.219 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] compute_driver = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.219 279199 DEBUG oslo_service.service [None 
req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] compute_monitors = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.220 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] config_dir = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.220 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] config_drive_format = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.220 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] config_file = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.220 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.220 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] console_host = np0005625202.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.220 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] control_exchange = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.220 279199 DEBUG oslo_service.service [None 
req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cpu_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.220 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.221 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.221 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.221 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] default_availability_zone = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.221 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] default_ephemeral_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.221 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 
'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.221 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] default_schedule_zone = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.222 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] disk_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.222 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] enable_new_services = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.222 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] enabled_apis = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.222 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] enabled_ssl_apis = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.222 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] flat_injected = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.222 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] force_config_drive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.222 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] force_raw_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.222 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.223 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.223 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] host = np0005625202.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.223 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] initial_cpu_allocation_ratio = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.223 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] initial_disk_allocation_ratio = 0.9 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.223 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] initial_ram_allocation_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.223 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] injected_network_template = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.223 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] instance_build_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.224 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] instance_delete_interval = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.224 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.224 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] instance_name_template = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.224 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] instance_usage_audit = False 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.224 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] instance_usage_audit_period = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.224 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.224 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] instances_path = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.225 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.225 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.225 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] live_migration_retry_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.225 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] log_config_append = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.225 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.225 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.225 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.226 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.226 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.226 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.226 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] log_rotation_type = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost 
nova_compute[279195]: 2026-02-20 09:35:23.226 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.226 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.226 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.226 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.227 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.227 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] long_rpc_timeout = 1800 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.227 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] max_concurrent_builds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.227 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.227 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] max_concurrent_snapshots = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.227 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] max_local_block_devices = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.227 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] max_logfile_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.227 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] max_logfile_size_mb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.228 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 
04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.228 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] metadata_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.228 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] metadata_listen_port = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.228 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] metadata_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.228 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] migrate_max_retries = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.228 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] mkisofs_cmd = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.228 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] my_block_storage_ip = 192.168.122.106 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.229 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] my_ip = 192.168.122.106 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.229 279199 DEBUG 
oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] network_allocate_retries = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.229 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.229 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] osapi_compute_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.229 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] osapi_compute_listen_port = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.229 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] osapi_compute_unique_server_name_scope = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.229 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] osapi_compute_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.229 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] password_length = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.230 279199 DEBUG oslo_service.service [None 
req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] periodic_enable = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.230 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] periodic_fuzzy_delay = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.230 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] pointer_model = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.230 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] preallocate_images = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.230 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.230 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] pybasedir = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.230 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] ram_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.230 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] rate_limit_burst = 0 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.231 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.231 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.231 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] reboot_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.231 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] reclaim_instance_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.231 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] record = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.231 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] reimage_timeout_per_gb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.231 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] report_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost 
nova_compute[279195]: 2026-02-20 09:35:23.232 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] rescue_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.232 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] reserved_host_cpus = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.232 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] reserved_host_disk_mb = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.232 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] reserved_host_memory_mb = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.232 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] reserved_huge_pages = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.232 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] resize_confirm_window = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.232 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] resize_fs_using_block_device = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.233 279199 DEBUG oslo_service.service [None 
req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.233 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] rootwrap_config = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.233 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] rpc_response_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.233 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] run_external_periodic_tasks = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.233 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.233 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.233 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.234 279199 DEBUG oslo_service.service [None 
req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.234 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] service_down_time = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.234 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] servicegroup_driver = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.234 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] shelved_offload_time = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.234 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] shelved_poll_interval = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.234 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.234 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] source_is_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.235 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] ssl_only = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.235 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] state_path = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.235 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] sync_power_state_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.235 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] sync_power_state_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.235 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.235 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] tempdir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.235 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] timeout_nbd = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.235 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost 
nova_compute[279195]: 2026-02-20 09:35:23.236 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] update_resources_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.236 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] use_cow_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.236 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.236 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.236 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.236 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] use_rootwrap_daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.236 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.236 279199 DEBUG oslo_service.service [None 
req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.237 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] vcpu_pin_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.237 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] vif_plugging_is_fatal = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.237 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] vif_plugging_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.237 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] virt_mkfs = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.237 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] volume_usage_poll_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.237 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.237 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] web = /usr/share/spice-html5 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.238 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.238 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_concurrency.lock_path = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.238 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.238 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.238 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.238 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.238 279199 DEBUG oslo_service.service [None 
req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.239 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] api.auth_strategy = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.239 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] api.compute_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.239 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.239 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] api.dhcp_domain = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.239 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] api.enable_instance_password = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.240 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] api.glance_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.240 
279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.240 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.240 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.240 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.240 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] api.local_metadata_per_cell = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.240 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] api.max_limit = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.241 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] api.metadata_cache_expiration = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 
09:35:23.241 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] api.neutron_default_tenant_id = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.241 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] api.use_forwarded_for = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.241 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] api.use_neutron_default_nets = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.241 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.241 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.241 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.242 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] api.vendordata_dynamic_ssl_certfile = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 
09:35:23.242 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.242 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] api.vendordata_jsonfile_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.242 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] api.vendordata_providers = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.242 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cache.backend = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.242 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cache.backend_argument = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.242 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cache.config_prefix = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.243 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cache.dead_timeout = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.243 279199 DEBUG 
oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cache.debug_cache_backend = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.243 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cache.enable_retry_client = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.243 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cache.enable_socket_keepalive = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.243 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cache.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.243 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cache.expiration_time = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.243 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.244 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cache.hashclient_retry_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.244 279199 DEBUG oslo_service.service [None 
req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cache.memcache_dead_retry = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.244 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cache.memcache_password = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.244 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.244 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.244 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cache.memcache_pool_maxsize = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.244 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.244 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cache.memcache_sasl_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.245 279199 DEBUG oslo_service.service [None 
req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cache.memcache_servers = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.245 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cache.memcache_socket_timeout = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.245 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cache.memcache_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.245 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cache.proxies = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.245 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cache.retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.245 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cache.retry_delay = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.245 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cache.socket_keepalive_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.246 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] 
cache.socket_keepalive_idle = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.246 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.246 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cache.tls_allowed_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.246 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cache.tls_cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.246 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cache.tls_certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.246 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cache.tls_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.246 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cache.tls_keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.246 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cinder.auth_section = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.247 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cinder.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.247 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cinder.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.247 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cinder.catalog_info = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.247 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cinder.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.247 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cinder.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.247 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cinder.cross_az_attach = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.247 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cinder.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 
04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.248 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cinder.endpoint_template = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.248 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cinder.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.248 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cinder.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.248 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cinder.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.248 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cinder.os_region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.248 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cinder.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.248 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cinder.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.249 279199 DEBUG 
oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.249 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] compute.cpu_dedicated_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.249 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] compute.cpu_shared_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.249 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.249 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.249 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.249 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.249 279199 
DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.250 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.250 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.250 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.250 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] compute.vmdk_allowed_types = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.250 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] conductor.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.250 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] console.allowed_origins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 
localhost nova_compute[279195]: 2026-02-20 09:35:23.250 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] console.ssl_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.251 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] console.ssl_minimum_version = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.251 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] consoleauth.token_ttl = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.251 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cyborg.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.251 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cyborg.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.251 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cyborg.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.251 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cyborg.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.251 279199 DEBUG 
oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cyborg.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.252 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cyborg.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.252 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cyborg.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.252 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cyborg.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.252 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cyborg.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.252 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cyborg.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.252 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cyborg.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.252 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] 
cyborg.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.253 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cyborg.service_type = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.253 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cyborg.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.253 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cyborg.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.253 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.253 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cyborg.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.253 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cyborg.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.254 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] cyborg.version = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.254 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.254 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.254 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.254 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.254 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.254 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.255 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] database.db_inc_retry_interval = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.255 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.255 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.255 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.255 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.255 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.255 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.255 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m 
Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.256 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.256 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.256 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.256 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.256 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.256 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.256 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] api_database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost 
Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.257 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] api_database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.257-09:35:23.277 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609, one entry per option below:
  api_database.connection_debug = 0
  api_database.connection_parameters =
  api_database.connection_recycle_time = 3600
  api_database.connection_trace = False
  api_database.db_inc_retry_interval = True
  api_database.db_max_retries = 20
  api_database.db_max_retry_interval = 10
  api_database.db_retry_interval = 1
  api_database.max_overflow = 50
  api_database.max_pool_size = 5
  api_database.max_retries = 10
  api_database.mysql_enable_ndb = False
  api_database.mysql_sql_mode = TRADITIONAL
  api_database.mysql_wsrep_sync_wait = None
  api_database.pool_timeout = None
  api_database.retry_interval = 10
  api_database.slave_connection = ****
  api_database.sqlite_synchronous = True
  devices.enabled_mdev_types = []
  ephemeral_storage_encryption.cipher = aes-xts-plain64
  ephemeral_storage_encryption.enabled = False
  ephemeral_storage_encryption.key_size = 512
  glance.api_servers = None
  glance.cafile = None
  glance.certfile = None
  glance.collect_timing = False
  glance.connect_retries = None
  glance.connect_retry_delay = None
  glance.debug = False
  glance.default_trusted_certificate_ids = []
  glance.enable_certificate_validation = False
  glance.enable_rbd_download = False
  glance.endpoint_override = None
  glance.insecure = False
  glance.keyfile = None
  glance.max_version = None
  glance.min_version = None
  glance.num_retries = 3
  glance.rbd_ceph_conf =
  glance.rbd_connect_timeout = 5
  glance.rbd_pool =
  glance.rbd_user =
  glance.region_name = regionOne
  glance.service_name = None
  glance.service_type = image
  glance.split_loggers = False
  glance.status_code_retries = None
  glance.status_code_retry_delay = None
  glance.timeout = None
  glance.valid_interfaces = ['internal']
  glance.verify_glance_signatures = False
  glance.version = None
  guestfs.debug = False
  hyperv.config_drive_cdrom = False
  hyperv.config_drive_inject_password = False
  hyperv.dynamic_memory_ratio = 1.0
  hyperv.enable_instance_metrics_collection = False
  hyperv.enable_remotefx = False
  hyperv.instances_path_share =
  hyperv.iscsi_initiator_list = []
  hyperv.limit_cpu_features = False
  hyperv.mounted_disk_query_retry_count = 10
  hyperv.mounted_disk_query_retry_interval = 5
  hyperv.power_state_check_timeframe = 60
  hyperv.power_state_event_polling_interval = 2
  hyperv.qemu_img_cmd = qemu-img.exe
  hyperv.use_multipath_io = False
  hyperv.volume_attach_retry_count = 10
  hyperv.volume_attach_retry_interval = 5
  hyperv.vswitch_name = None
  hyperv.wait_soft_reboot_seconds = 60
  mks.enabled = False
  mks.mksproxy_base_url = http://127.0.0.1:6090/
  image_cache.manager_interval = 2400
  image_cache.precache_concurrency = 1
  image_cache.remove_unused_base_images = True
  image_cache.remove_unused_original_minimum_age_seconds = 86400
  image_cache.remove_unused_resized_minimum_age_seconds = 3600
  image_cache.subdirectory_name = _base
  ironic.api_max_retries = 60
  ironic.api_retry_interval = 2
  ironic.auth_section = None
  ironic.auth_type = None
  ironic.cafile = None
  ironic.certfile = None
  ironic.collect_timing = False
  ironic.connect_retries = None
  ironic.connect_retry_delay = None
  ironic.endpoint_override = None
  ironic.insecure = False
  ironic.keyfile = None
  ironic.max_version = None
  ironic.min_version = None
  ironic.partition_key = None
  ironic.peer_list = []
  ironic.region_name = None
  ironic.serial_console_state_timeout = 10
  ironic.service_name = None
  ironic.service_type = baremetal
  ironic.split_loggers = False
  ironic.status_code_retries = None
  ironic.status_code_retry_delay = None
  ironic.timeout = None
  ironic.valid_interfaces = ['internal', 'public']
  ironic.version = None
  key_manager.backend = barbican
  key_manager.fixed_key = ****
  barbican.auth_endpoint = http://localhost/identity/v3
  barbican.barbican_api_version = None
  barbican.barbican_endpoint = None
  barbican.barbican_endpoint_type = internal
  barbican.barbican_region_name = regionOne
  barbican.cafile = None
  barbican.certfile = None
  barbican.collect_timing = False
  barbican.insecure = False
  barbican.keyfile = None
  barbican.number_of_retries = 60
  barbican.retry_delay = 1
  barbican.send_service_user_token = False
  barbican.split_loggers = False
  barbican.timeout = None
  barbican.verify_ssl = True
  barbican.verify_ssl_path = None
  barbican_service_user.auth_section = None
  barbican_service_user.auth_type = None
  barbican_service_user.cafile = None
  barbican_service_user.certfile = None
  barbican_service_user.collect_timing = False
  barbican_service_user.insecure = False
  barbican_service_user.keyfile = None
  barbican_service_user.split_loggers = False
  barbican_service_user.timeout = None
  vault.approle_role_id = None
  vault.approle_secret_id = None
  vault.cafile = None
  vault.certfile = None
  vault.collect_timing = False
  vault.insecure = False
  vault.keyfile = None
  vault.kv_mountpoint = secret
  vault.kv_version = 2
  vault.namespace = None
  vault.root_token_id = None
  vault.split_loggers = False
  vault.ssl_ca_crt_file = None
  vault.timeout = None
  vault.use_ssl = False
  vault.vault_url = http://127.0.0.1:8200
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.278 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] keystone.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.278 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] keystone.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.278 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] keystone.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.278 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] keystone.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.278 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] keystone.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.278 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] keystone.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.278 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] keystone.insecure = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.279 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] keystone.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.279 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] keystone.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.279 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] keystone.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.279 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] keystone.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.279 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] keystone.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.279 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] keystone.service_type = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.279 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] keystone.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 
04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.279 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] keystone.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.280 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.280 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] keystone.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.280 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] keystone.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.280 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] keystone.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.280 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.connection_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.280 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.cpu_mode = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 
2026-02-20 09:35:23.280 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.cpu_model_extra_flags = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.281 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.cpu_models = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.281 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.281 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.281 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.cpu_power_management = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.281 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.281 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 
2026-02-20 09:35:23.281 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.device_detach_timeout = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.281 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.disk_cachemodes = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.282 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.disk_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.282 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.enabled_perf_events = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.282 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.file_backed_memory = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.282 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.gid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.282 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.hw_disk_discard = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.282 279199 DEBUG oslo_service.service [None 
req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.hw_machine_type = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.282 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.images_rbd_ceph_conf = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.283 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.283 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.283 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.283 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.images_rbd_pool = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.283 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.images_type = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.283 279199 DEBUG 
oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.images_volume_group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.283 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.inject_key = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.283 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.inject_partition = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.284 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.284 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.iscsi_iface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.284 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.iser_use_multipath = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.284 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.284 279199 DEBUG oslo_service.service [None 
req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.284 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.284 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.285 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.285 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.285 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.285 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.285 
279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.live_migration_scheme = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.285 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.285 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.286 279199 WARNING oslo_config.cfg [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal ( Feb 20 04:35:23 localhost nova_compute[279195]: live_migration_uri is deprecated for removal in favor of two other options that Feb 20 04:35:23 localhost nova_compute[279195]: allow to change live migration scheme and target URI: ``live_migration_scheme`` Feb 20 04:35:23 localhost nova_compute[279195]: and ``live_migration_inbound_addr`` respectively. Feb 20 04:35:23 localhost nova_compute[279195]: ). 
Its value may be silently ignored in the future.#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.286 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.live_migration_uri = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.286 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.live_migration_with_native_tls = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.286 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.max_queues = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.286 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.286 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.nfs_mount_options = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.287 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.nfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.287 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] 
libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.287 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.num_iser_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.287 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.287 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.287 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.num_pcie_ports = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.287 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.num_volume_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.287 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.pmem_namespaces = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.288 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.quobyte_client_cfg = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.288 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.288 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.288 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.288 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.288 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.rbd_secret_uuid = a8557ee9-b55d-5519-942c-cf8f6172f1d8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.288 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.rbd_user = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.289 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] 
libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.289 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.289 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.rescue_image_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.289 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.rescue_kernel_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.289 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.rescue_ramdisk_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.289 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.rng_dev_path = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.289 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.rx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.290 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.smbfs_mount_options = 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.290 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.290 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.snapshot_compression = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.290 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.snapshot_image_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.291 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.snapshots_directory = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.291 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.291 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.swtpm_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.291 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.swtpm_group 
= tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.292 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.swtpm_user = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.292 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.sysinfo_serial = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.292 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.tx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.292 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.uid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.292 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.292 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.virt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.292 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.volume_clear = zero log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.293 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.volume_clear_size = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.293 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.volume_use_multipath = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.293 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.vzstorage_cache_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.293 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.vzstorage_log_path = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.293 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.vzstorage_mount_group = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.293 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.vzstorage_mount_opts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.293 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.vzstorage_mount_perms = 0770 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.294 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.294 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.vzstorage_mount_user = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.294 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.294 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] neutron.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.294 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] neutron.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.294 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] neutron.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.294 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] neutron.certfile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.295 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] neutron.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.295 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] neutron.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.295 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] neutron.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.295 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] neutron.default_floating_pool = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.295 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] neutron.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.295 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.295 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] neutron.http_retries = 3 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.296 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] neutron.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.296 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] neutron.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.296 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] neutron.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.296 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.296 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] neutron.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.296 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] neutron.ovs_bridge = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.296 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] neutron.physnets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 
localhost nova_compute[279195]: 2026-02-20 09:35:23.296 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] neutron.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.297 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.297 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] neutron.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.297 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] neutron.service_type = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.297 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] neutron.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.297 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] neutron.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.297 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 
09:35:23.297 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] neutron.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.298 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] neutron.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.298 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] neutron.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.298 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.298 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] notifications.default_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.298 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.298 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.298 279199 
DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.299 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] pci.alias = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.299 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] pci.device_spec = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.299 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] pci.report_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.299 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.299 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] placement.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.299 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] placement.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.300 279199 DEBUG 
oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.300 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.300 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.300 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] placement.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.300 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] placement.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.300 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] placement.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.300 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] placement.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.301 279199 DEBUG oslo_service.service [None 
req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] placement.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.301 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] placement.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.301 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] placement.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.301 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.301 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.301 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] placement.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.302 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] placement.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.302 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] placement.password = 
**** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.302 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] placement.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.302 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] placement.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.302 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] placement.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.302 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] placement.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.302 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] placement.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.303 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] placement.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.303 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] placement.service_type = placement log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.303 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.303 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] placement.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.303 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.303 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] placement.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.303 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.303 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] placement.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.304 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] placement.user_domain_id = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.304 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] placement.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.304 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] placement.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.304 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] placement.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.304 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] placement.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.304 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] placement.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.304 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] quota.cores = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.305 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m 
Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.305 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] quota.driver = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.305 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.305 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.305 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] quota.injected_files = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.305 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] quota.instances = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.305 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] quota.key_pairs = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.306 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] quota.metadata_items = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 
09:35:23.306 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] quota.ram = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.306 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] quota.recheck_quota = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.306 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] quota.server_group_members = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.306 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] quota.server_groups = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.306 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] rdp.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.307 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] rdp.html5_proxy_base_url = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.307 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.307 279199 DEBUG oslo_service.service [None 
req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.307 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.307 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.307 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] scheduler.max_attempts = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.307 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.308 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.308 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 
2026-02-20 09:35:23.308 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.308 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.308 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] scheduler.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.308 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.308 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.309 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.309 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.310 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.310 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.310 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.310 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.311 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.311 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.311 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.311 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.311 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.311 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.311 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] 
filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.312 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.312 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.312 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.312 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.312 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.312 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.313 279199 DEBUG 
oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.313 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.313 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.313 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] metrics.required = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.313 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] metrics.weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.313 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] metrics.weight_of_unavailable = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.313 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] metrics.weight_setting = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 
2026-02-20 09:35:23.314 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] serial_console.base_url = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.314 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] serial_console.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.314 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] serial_console.port_range = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.314 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.314 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.314 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.315 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost 
nova_compute[279195]: 2026-02-20 09:35:23.315 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] service_user.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.315 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.315 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.315 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.315 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.315 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.316 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.316 
279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.316 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.316 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] spice.agent_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.316 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] spice.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.316 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] spice.html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.317 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] spice.html5proxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.317 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] spice.html5proxy_port = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.317 279199 DEBUG oslo_service.service 
[None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] spice.image_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.317 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] spice.jpeg_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.317 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] spice.playback_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.317 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] spice.server_listen = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.317 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.318 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] spice.streaming_mode = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.318 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] spice.zlib_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.318 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd 
- - - - - -] upgrade_levels.baseapi = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.318 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] upgrade_levels.cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.318 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] upgrade_levels.compute = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.319 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] upgrade_levels.conductor = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.319 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] upgrade_levels.scheduler = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.319 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.319 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.319 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] 
vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.319 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.319 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.319 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.320 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.320 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.320 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.320 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd 
- - - - - -] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.320 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.320 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] vmware.cache_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.320 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] vmware.cluster_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.321 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] vmware.connection_pool_size = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.321 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] vmware.console_delay_seconds = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.321 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] vmware.datastore_regex = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.321 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] vmware.host_ip = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.321 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.321 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.321 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] vmware.host_username = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.321 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.322 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] vmware.integration_bridge = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.322 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] vmware.maximum_objects = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.322 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] vmware.pbm_default_policy = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 
04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.322 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] vmware.pbm_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.322 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] vmware.pbm_wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.322 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] vmware.serial_log_dir = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.323 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] vmware.serial_port_proxy_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.323 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.323 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.323 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] vmware.use_linked_clone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 
2026-02-20 09:35:23.323 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] vmware.vnc_keymap = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.323 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] vmware.vnc_port = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.324 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] vmware.vnc_port_total = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.324 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] vnc.auth_schemes = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.324 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] vnc.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.324 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] vnc.novncproxy_base_url = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.324 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] vnc.novncproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 
09:35:23.325 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] vnc.novncproxy_port = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.325 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] vnc.server_listen = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.325 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] vnc.server_proxyclient_address = 192.168.122.106 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.325 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] vnc.vencrypt_ca_certs = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.325 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] vnc.vencrypt_client_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.325 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] vnc.vencrypt_client_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.326 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.326 279199 DEBUG 
oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.326 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.326 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.326 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.326 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] workarounds.disable_rootwrap = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.327 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.327 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 
localhost nova_compute[279195]: 2026-02-20 09:35:23.327 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.327 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.327 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.327 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.327 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.327 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.328 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.328 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.328 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.328 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.328 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.328 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.328 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.329 279199 DEBUG oslo_service.service [None 
req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] wsgi.api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.329 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] wsgi.client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.329 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] wsgi.default_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.329 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] wsgi.keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.329 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] wsgi.max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.329 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] wsgi.secure_proxy_ssl_header = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.329 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] wsgi.ssl_ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.330 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] 
wsgi.ssl_cert_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.330 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] wsgi.ssl_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.330 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] wsgi.tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.330 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] wsgi.wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.330 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] zvm.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.330 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] zvm.cloud_connector_url = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.330 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] zvm.image_tmp_path = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.330 279199 DEBUG oslo_service.service [None 
req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] zvm.reachable_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.331 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.331 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_policy.enforce_scope = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.331 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.331 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.331 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.331 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.331 279199 DEBUG 
oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.332 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.332 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.332 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.332 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.332 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.332 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] remote_debug.host = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost 
nova_compute[279195]: 2026-02-20 09:35:23.332 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] remote_debug.port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.333 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.333 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.333 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.333 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.333 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.333 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.333 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.333 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.334 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.334 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.334 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.334 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.334 279199 DEBUG oslo_service.service [None 
req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.334 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.334 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.335 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.335 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.335 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.335 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost 
nova_compute[279195]: 2026-02-20 09:35:23.335 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.335 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.335 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.336 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.336 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.336 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.336 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_messaging_rabbit.ssl = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.336 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.336 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.336 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.337 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.337 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.337 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.337 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_messaging_notifications.retry = -1 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.337 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.337 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.337 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_limit.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.338 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_limit.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.338 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_limit.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.338 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_limit.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.338 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] 
oslo_limit.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.338 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_limit.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.338 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_limit.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.338 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.339 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_limit.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.339 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.339 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_limit.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.339 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_limit.domain_name = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.339 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_limit.endpoint_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.339 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_limit.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.339 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_limit.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.339 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_limit.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.340 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_limit.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.340 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_limit.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.340 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_limit.password = **** log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.340 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_limit.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.340 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.340 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_limit.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.340 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_limit.project_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.340 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_limit.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.341 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_limit.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.341 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_limit.service_type = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.341 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_limit.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.341 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.341 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.341 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_limit.system_scope = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.341 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_limit.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.342 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_limit.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.342 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_limit.user_domain_id = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.342 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_limit.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.342 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_limit.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.342 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_limit.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.342 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_limit.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.342 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_limit.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.342 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.343 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.343 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] oslo_reports.log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.343 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.343 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.343 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.343 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.343 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.344 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - 
- - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.344 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.344 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] vif_plug_ovs_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.344 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.344 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.344 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.344 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] vif_plug_ovs_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.344 279199 DEBUG oslo_service.service [None 
req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.345 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.345 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] os_vif_linux_bridge.iptables_bottom_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.345 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.345 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] os_vif_linux_bridge.iptables_top_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.345 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.345 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] os_vif_linux_bridge.use_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.345 
279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.346 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] os_vif_ovs.isolate_vif = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.346 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] os_vif_ovs.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.346 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] os_vif_ovs.ovs_vsctl_timeout = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.346 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] os_vif_ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.346 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] os_vif_ovs.ovsdb_interface = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.346 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] os_vif_ovs.per_port_bridge = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.346 279199 DEBUG 
oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] os_brick.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.347 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.347 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.347 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] privsep_osbrick.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.347 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] privsep_osbrick.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.347 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.347 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] privsep_osbrick.logger_name = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.347 279199 DEBUG 
oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.347 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] privsep_osbrick.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.348 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] nova_sys_admin.capabilities = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.348 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] nova_sys_admin.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.348 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] nova_sys_admin.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.348 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] nova_sys_admin.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.348 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.348 279199 DEBUG 
oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] nova_sys_admin.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.348 279199 DEBUG oslo_service.service [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.349 279199 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260127144738.eaa65f0.el9)#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.368 279199 INFO nova.virt.node [None req-ca223cf1-0d32-476d-9e71-539102afd5b1 - - - - - -] Determined node identity 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 from /var/lib/nova/compute_id#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.369 279199 DEBUG nova.virt.libvirt.host [None req-ca223cf1-0d32-476d-9e71-539102afd5b1 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.369 279199 DEBUG nova.virt.libvirt.host [None req-ca223cf1-0d32-476d-9e71-539102afd5b1 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.370 279199 DEBUG nova.virt.libvirt.host [None req-ca223cf1-0d32-476d-9e71-539102afd5b1 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.370 279199 DEBUG nova.virt.libvirt.host [None req-ca223cf1-0d32-476d-9e71-539102afd5b1 - - - - - -] Connecting to libvirt: 
qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.384 279199 DEBUG nova.virt.libvirt.host [None req-ca223cf1-0d32-476d-9e71-539102afd5b1 - - - - - -] Registering for lifecycle events _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.388 279199 DEBUG nova.virt.libvirt.host [None req-ca223cf1-0d32-476d-9e71-539102afd5b1 - - - - - -] Registering for connection events: _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.389 279199 INFO nova.virt.libvirt.driver [None req-ca223cf1-0d32-476d-9e71-539102afd5b1 - - - - - -] Connection event '1' reason 'None'#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.398 279199 INFO nova.virt.libvirt.host [None req-ca223cf1-0d32-476d-9e71-539102afd5b1 - - - - - -] Libvirt host capabilities
[multi-line libvirt capabilities XML follows; its markup was stripped by the log capture, leaving only repeated syslog prefixes. Recoverable values: host uuid 61530aa3-6295-40fa-9f19-edfd227b2bca; arch x86_64; CPU model EPYC-Rome-v4, vendor AMD; migration transports tcp and rdma; NUMA cell memory 16116612 KiB with page counts 4029153 / 0 / 0; secmodels selinux (doi 0, base labels system_u:system_r:svirt_t:s0 and system_u:system_r:svirt_tcg_t:s0) and dac (doi 0, labels +107:+107); hvm guest support for 32-bit and 64-bit via emulator /usr/libexec/qemu-kvm with machine types pc-i440fx-rhel7.6.0 (canonical pc) and pc-q35-rhel7.6.0 through pc-q35-rhel9.8.0 (canonical q35), maximum 710 vCPUs not recoverable]
#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.408 279199 DEBUG nova.virt.libvirt.volume.mount [None req-ca223cf1-0d32-476d-9e71-539102afd5b1 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.411 279199 DEBUG nova.virt.libvirt.host [None req-ca223cf1-0d32-476d-9e71-539102afd5b1 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.418 279199 DEBUG nova.virt.libvirt.host [None req-ca223cf1-0d32-476d-9e71-539102afd5b1 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
[multi-line libvirt domainCapabilities XML follows; its markup was likewise stripped by the log capture. Recoverable values: emulator /usr/libexec/qemu-kvm; domain type kvm; machine pc-q35-rhel9.8.0; arch i686; firmware loader /usr/share/OVMF/OVMF_CODE.secboot.fd with loader types rom and pflash, readonly yes/no, secure no, and on/off feature enums; host-model CPU EPYC-Rome, vendor AMD; supported custom CPU models include 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1 through Broadwell-v4, Cascadelake-Server, Cascadelake-Server-noTSX, Cascadelake-Server-v1 through Cascadelake-Server-v5, ClearwaterForest, ClearwaterForest-v1, Conroe, Conroe-v1, Cooperlake, Cooperlake-v1, ... (list continues)]
localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Cooperlake-v2 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Denverton Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Denverton-v1 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 
04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Denverton-v2 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Denverton-v3 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Dhyana Feb 20 04:35:23 localhost nova_compute[279195]: Dhyana-v1 Feb 20 04:35:23 localhost nova_compute[279195]: Dhyana-v2 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: EPYC Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Genoa Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 
localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Genoa-v1 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Genoa-v2 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 
localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-IBPB Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Milan Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Milan-v1 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 
04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Milan-v2 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Milan-v3 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Rome Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: 
Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Rome-v1 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Rome-v2 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Rome-v3 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Rome-v4 Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Rome-v5 Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Turin Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 
20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Turin-v1 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 
localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-v1 Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-v2 Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-v3 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-v4 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-v5 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: GraniteRapids Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost 
nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: GraniteRapids-v1 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost 
nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 
04:35:23 localhost nova_compute[279195]: GraniteRapids-v2 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 
localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: GraniteRapids-v3 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost 
nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Haswell Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Haswell-IBRS Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost 
Feb 20 04:35:23 localhost nova_compute[279195]: Haswell-noTSX
Feb 20 04:35:23 localhost nova_compute[279195]: Haswell-noTSX-IBRS
Feb 20 04:35:23 localhost nova_compute[279195]: Haswell-v1
Feb 20 04:35:23 localhost nova_compute[279195]: Haswell-v2
Feb 20 04:35:23 localhost nova_compute[279195]: Haswell-v3
Feb 20 04:35:23 localhost nova_compute[279195]: Haswell-v4
Feb 20 04:35:23 localhost nova_compute[279195]: Icelake-Server
Feb 20 04:35:23 localhost nova_compute[279195]: Icelake-Server-noTSX
Feb 20 04:35:23 localhost nova_compute[279195]: Icelake-Server-v1
Feb 20 04:35:23 localhost nova_compute[279195]: Icelake-Server-v2
Feb 20 04:35:23 localhost nova_compute[279195]: Icelake-Server-v3
Feb 20 04:35:23 localhost nova_compute[279195]: Icelake-Server-v4
Feb 20 04:35:23 localhost nova_compute[279195]: Icelake-Server-v5
Feb 20 04:35:23 localhost nova_compute[279195]: Icelake-Server-v6
Feb 20 04:35:23 localhost nova_compute[279195]: Icelake-Server-v7
Feb 20 04:35:23 localhost nova_compute[279195]: IvyBridge
Feb 20 04:35:23 localhost nova_compute[279195]: IvyBridge-IBRS
Feb 20 04:35:23 localhost nova_compute[279195]: IvyBridge-v1
Feb 20 04:35:23 localhost nova_compute[279195]: IvyBridge-v2
Feb 20 04:35:23 localhost nova_compute[279195]: KnightsMill
Feb 20 04:35:23 localhost nova_compute[279195]: KnightsMill-v1
Feb 20 04:35:23 localhost nova_compute[279195]: Nehalem
Feb 20 04:35:23 localhost nova_compute[279195]: Nehalem-IBRS
Feb 20 04:35:23 localhost nova_compute[279195]: Nehalem-v1
Feb 20 04:35:23 localhost nova_compute[279195]: Nehalem-v2
Feb 20 04:35:23 localhost nova_compute[279195]: Opteron_G1
Feb 20 04:35:23 localhost nova_compute[279195]: Opteron_G1-v1
Feb 20 04:35:23 localhost nova_compute[279195]: Opteron_G2
Feb 20 04:35:23 localhost nova_compute[279195]: Opteron_G2-v1
Feb 20 04:35:23 localhost nova_compute[279195]: Opteron_G3
Feb 20 04:35:23 localhost nova_compute[279195]: Opteron_G3-v1
Feb 20 04:35:23 localhost nova_compute[279195]: Opteron_G4
Feb 20 04:35:23 localhost nova_compute[279195]: Opteron_G4-v1
Feb 20 04:35:23 localhost nova_compute[279195]: Opteron_G5
Feb 20 04:35:23 localhost nova_compute[279195]: Opteron_G5-v1
Feb 20 04:35:23 localhost nova_compute[279195]: Penryn
Feb 20 04:35:23 localhost nova_compute[279195]: Penryn-v1
Feb 20 04:35:23 localhost nova_compute[279195]: SandyBridge
Feb 20 04:35:23 localhost nova_compute[279195]: SandyBridge-IBRS
Feb 20 04:35:23 localhost nova_compute[279195]: SandyBridge-v1
Feb 20 04:35:23 localhost nova_compute[279195]: SandyBridge-v2
Feb 20 04:35:23 localhost nova_compute[279195]: SapphireRapids
Feb 20 04:35:23 localhost nova_compute[279195]: SapphireRapids-v1
Feb 20 04:35:23 localhost nova_compute[279195]: SapphireRapids-v2
Feb 20 04:35:23 localhost nova_compute[279195]: SapphireRapids-v3
Feb 20 04:35:23 localhost nova_compute[279195]: SapphireRapids-v4
Feb 20 04:35:23 localhost nova_compute[279195]: SierraForest
Feb 20 04:35:23 localhost nova_compute[279195]: SierraForest-v1
Feb 20 04:35:23 localhost nova_compute[279195]: SierraForest-v2
Feb 20 04:35:23 localhost nova_compute[279195]: SierraForest-v3
Feb 20 04:35:23 localhost nova_compute[279195]: Skylake-Client
Feb 20 04:35:23 localhost nova_compute[279195]: Skylake-Client-IBRS
Feb 20 04:35:23 localhost nova_compute[279195]: Skylake-Client-noTSX-IBRS
Feb 20 04:35:23 localhost nova_compute[279195]: Skylake-Client-v1
Feb 20 04:35:23 localhost nova_compute[279195]: Skylake-Client-v2
Feb 20 04:35:23 localhost nova_compute[279195]: Skylake-Client-v3
Feb 20 04:35:23 localhost nova_compute[279195]: Skylake-Client-v4
Feb 20 04:35:23 localhost nova_compute[279195]: Skylake-Server
Feb 20 04:35:23 localhost nova_compute[279195]: Skylake-Server-IBRS
Feb 20 04:35:23 localhost nova_compute[279195]: Skylake-Server-noTSX-IBRS
Feb 20 04:35:23 localhost nova_compute[279195]: Skylake-Server-v1
Feb 20 04:35:23 localhost nova_compute[279195]: Skylake-Server-v2
Feb 20 04:35:23 localhost nova_compute[279195]: Skylake-Server-v3
Feb 20 04:35:23 localhost nova_compute[279195]: Skylake-Server-v4
Feb 20 04:35:23 localhost nova_compute[279195]: Skylake-Server-v5
Feb 20 04:35:23 localhost nova_compute[279195]: Snowridge
04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Snowridge-v1 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Snowridge-v2 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Snowridge-v3 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Snowridge-v4 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost 
nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Westmere Feb 20 04:35:23 localhost nova_compute[279195]: Westmere-IBRS Feb 20 04:35:23 localhost nova_compute[279195]: Westmere-v1 Feb 20 04:35:23 localhost nova_compute[279195]: Westmere-v2 Feb 20 04:35:23 localhost nova_compute[279195]: athlon Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: athlon-v1 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: core2duo Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: core2duo-v1 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: coreduo Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: coreduo-v1 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: kvm32 Feb 20 04:35:23 localhost nova_compute[279195]: kvm32-v1 Feb 20 04:35:23 localhost nova_compute[279195]: kvm64 Feb 20 04:35:23 localhost nova_compute[279195]: kvm64-v1 Feb 20 04:35:23 localhost nova_compute[279195]: n270 Feb 20 
04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: n270-v1 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: pentium Feb 20 04:35:23 localhost nova_compute[279195]: pentium-v1 Feb 20 04:35:23 localhost nova_compute[279195]: pentium2 Feb 20 04:35:23 localhost nova_compute[279195]: pentium2-v1 Feb 20 04:35:23 localhost nova_compute[279195]: pentium3 Feb 20 04:35:23 localhost nova_compute[279195]: pentium3-v1 Feb 20 04:35:23 localhost nova_compute[279195]: phenom Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: phenom-v1 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: qemu32 Feb 20 04:35:23 localhost nova_compute[279195]: qemu32-v1 Feb 20 04:35:23 localhost nova_compute[279195]: qemu64 Feb 20 04:35:23 localhost nova_compute[279195]: qemu64-v1 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: file Feb 20 04:35:23 localhost nova_compute[279195]: anonymous Feb 20 04:35:23 localhost nova_compute[279195]: memfd Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost 
nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: disk Feb 20 04:35:23 localhost nova_compute[279195]: cdrom Feb 20 04:35:23 localhost nova_compute[279195]: floppy Feb 20 04:35:23 localhost nova_compute[279195]: lun Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: fdc Feb 20 04:35:23 localhost nova_compute[279195]: scsi Feb 20 04:35:23 localhost nova_compute[279195]: virtio Feb 20 04:35:23 localhost nova_compute[279195]: usb Feb 20 04:35:23 localhost nova_compute[279195]: sata Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: virtio Feb 20 04:35:23 localhost nova_compute[279195]: virtio-transitional Feb 20 04:35:23 localhost nova_compute[279195]: virtio-non-transitional Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: vnc Feb 20 04:35:23 localhost nova_compute[279195]: egl-headless Feb 20 04:35:23 localhost nova_compute[279195]: dbus Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: subsystem Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: default Feb 20 04:35:23 localhost nova_compute[279195]: mandatory Feb 20 04:35:23 localhost nova_compute[279195]: requisite Feb 20 04:35:23 localhost nova_compute[279195]: optional Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost 
nova_compute[279195]: usb Feb 20 04:35:23 localhost nova_compute[279195]: pci Feb 20 04:35:23 localhost nova_compute[279195]: scsi Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: virtio Feb 20 04:35:23 localhost nova_compute[279195]: virtio-transitional Feb 20 04:35:23 localhost nova_compute[279195]: virtio-non-transitional Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: random Feb 20 04:35:23 localhost nova_compute[279195]: egd Feb 20 04:35:23 localhost nova_compute[279195]: builtin Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: path Feb 20 04:35:23 localhost nova_compute[279195]: handle Feb 20 04:35:23 localhost nova_compute[279195]: virtiofs Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: tpm-tis Feb 20 04:35:23 localhost nova_compute[279195]: tpm-crb Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: emulator Feb 20 04:35:23 localhost nova_compute[279195]: external Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: 2.0 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 
04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: usb Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: pty Feb 20 04:35:23 localhost nova_compute[279195]: unix Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: qemu Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: builtin Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: default Feb 20 04:35:23 localhost nova_compute[279195]: passt Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: isa Feb 20 04:35:23 localhost nova_compute[279195]: hyperv Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: null Feb 20 04:35:23 localhost nova_compute[279195]: vc Feb 20 04:35:23 localhost nova_compute[279195]: pty Feb 20 04:35:23 localhost nova_compute[279195]: dev Feb 20 04:35:23 localhost nova_compute[279195]: file Feb 20 04:35:23 localhost nova_compute[279195]: pipe Feb 20 
04:35:23 localhost nova_compute[279195]: stdio Feb 20 04:35:23 localhost nova_compute[279195]: udp Feb 20 04:35:23 localhost nova_compute[279195]: tcp Feb 20 04:35:23 localhost nova_compute[279195]: unix Feb 20 04:35:23 localhost nova_compute[279195]: qemu-vdagent Feb 20 04:35:23 localhost nova_compute[279195]: dbus Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: relaxed Feb 20 04:35:23 localhost nova_compute[279195]: vapic Feb 20 04:35:23 localhost nova_compute[279195]: spinlocks Feb 20 04:35:23 localhost nova_compute[279195]: vpindex Feb 20 04:35:23 localhost nova_compute[279195]: runtime Feb 20 04:35:23 localhost nova_compute[279195]: synic Feb 20 04:35:23 localhost nova_compute[279195]: stimer Feb 20 04:35:23 localhost nova_compute[279195]: reset Feb 20 04:35:23 localhost nova_compute[279195]: vendor_id Feb 20 04:35:23 localhost nova_compute[279195]: frequencies Feb 20 04:35:23 localhost nova_compute[279195]: reenlightenment Feb 20 04:35:23 localhost nova_compute[279195]: tlbflush Feb 20 04:35:23 localhost nova_compute[279195]: ipi Feb 20 04:35:23 localhost nova_compute[279195]: avic Feb 20 04:35:23 localhost nova_compute[279195]: emsr_bitmap Feb 20 04:35:23 
localhost nova_compute[279195]: xmm_input Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: 4095 Feb 20 04:35:23 localhost nova_compute[279195]: on Feb 20 04:35:23 localhost nova_compute[279195]: off Feb 20 04:35:23 localhost nova_compute[279195]: off Feb 20 04:35:23 localhost nova_compute[279195]: Linux KVM Hv Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.427 279199 DEBUG nova.virt.libvirt.host [None req-ca223cf1-0d32-476d-9e71-539102afd5b1 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: /usr/libexec/qemu-kvm Feb 20 04:35:23 localhost nova_compute[279195]: kvm Feb 20 04:35:23 localhost nova_compute[279195]: pc-i440fx-rhel7.6.0 Feb 20 04:35:23 localhost nova_compute[279195]: i686 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: /usr/share/OVMF/OVMF_CODE.secboot.fd Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: rom Feb 20 04:35:23 localhost nova_compute[279195]: pflash Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: yes Feb 20 04:35:23 localhost nova_compute[279195]: 
no Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: no Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: on Feb 20 04:35:23 localhost nova_compute[279195]: off Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: on Feb 20 04:35:23 localhost nova_compute[279195]: off Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Rome Feb 20 04:35:23 localhost nova_compute[279195]: AMD Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost 
nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: 486 Feb 20 04:35:23 localhost nova_compute[279195]: 486-v1 Feb 20 04:35:23 localhost nova_compute[279195]: Broadwell Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Broadwell-IBRS Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Broadwell-noTSX Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Broadwell-noTSX-IBRS Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Broadwell-v1 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 
localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Broadwell-v2 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Broadwell-v3 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Broadwell-v4 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Cascadelake-Server Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Cascadelake-Server-noTSX Feb 20 
04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Cascadelake-Server-v1 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Cascadelake-Server-v2 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost 
nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Cascadelake-Server-v3 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Cascadelake-Server-v4 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Cascadelake-Server-v5 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost 
Feb 20 04:35:23 localhost nova_compute[279195]: ClearwaterForest
Feb 20 04:35:23 localhost nova_compute[279195]: ClearwaterForest-v1
Feb 20 04:35:23 localhost nova_compute[279195]: Conroe
Feb 20 04:35:23 localhost nova_compute[279195]: Conroe-v1
Feb 20 04:35:23 localhost nova_compute[279195]: Cooperlake
Feb 20 04:35:23 localhost nova_compute[279195]: Cooperlake-v1
Feb 20 04:35:23 localhost nova_compute[279195]: Cooperlake-v2
Feb 20 04:35:23 localhost nova_compute[279195]: Denverton
Feb 20 04:35:23 localhost nova_compute[279195]: Denverton-v1
Feb 20 04:35:23 localhost nova_compute[279195]: Denverton-v2
Feb 20 04:35:23 localhost nova_compute[279195]: Denverton-v3
Feb 20 04:35:23 localhost nova_compute[279195]: Dhyana
Feb 20 04:35:23 localhost nova_compute[279195]: Dhyana-v1
Feb 20 04:35:23 localhost nova_compute[279195]: Dhyana-v2
Feb 20 04:35:23 localhost nova_compute[279195]: EPYC
Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Genoa
Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Genoa-v1
Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Genoa-v2
Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-IBPB
Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Milan
Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Milan-v1
Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Milan-v2
Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Milan-v3
Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Rome
Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Rome-v1
Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Rome-v2
Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Rome-v3
Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Rome-v4
Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Rome-v5
Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Turin
Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Turin-v1
Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-v1
Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-v2
Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-v3
Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-v4
Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-v5
Feb 20 04:35:23 localhost nova_compute[279195]: GraniteRapids
Feb 20 04:35:23 localhost nova_compute[279195]: GraniteRapids-v1
Feb 20 04:35:23 localhost nova_compute[279195]: GraniteRapids-v2
Feb 20 04:35:23 localhost nova_compute[279195]: GraniteRapids-v3
Feb 20 04:35:23 localhost nova_compute[279195]: Haswell
Feb 20 04:35:23 localhost nova_compute[279195]: Haswell-IBRS
Feb 20 04:35:23 localhost nova_compute[279195]: Haswell-noTSX
Feb 20 04:35:23 localhost nova_compute[279195]: Haswell-noTSX-IBRS
Feb 20 04:35:23 localhost nova_compute[279195]: Haswell-v1
Feb 20 04:35:23 localhost nova_compute[279195]: Haswell-v2
Feb 20 04:35:23 localhost nova_compute[279195]: Haswell-v3
Feb 20 04:35:23 localhost nova_compute[279195]: Haswell-v4
Feb 20 04:35:23 localhost nova_compute[279195]: Icelake-Server
Feb 20 04:35:23 localhost nova_compute[279195]: Icelake-Server-noTSX
Feb 20 04:35:23 localhost nova_compute[279195]: Icelake-Server-v1
Feb 20 04:35:23 localhost nova_compute[279195]: Icelake-Server-v2
Feb 20 04:35:23 localhost nova_compute[279195]: Icelake-Server-v3
Feb 20 04:35:23 localhost nova_compute[279195]: Icelake-Server-v4
Feb 20 04:35:23 localhost nova_compute[279195]: Icelake-Server-v5
Icelake-Server-v6 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Icelake-Server-v7 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost 
nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: IvyBridge Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: IvyBridge-IBRS Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: IvyBridge-v1 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: IvyBridge-v2 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: KnightsMill Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost 
nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: KnightsMill-v1 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Nehalem Feb 20 04:35:23 localhost nova_compute[279195]: Nehalem-IBRS Feb 20 04:35:23 localhost nova_compute[279195]: Nehalem-v1 Feb 20 04:35:23 localhost nova_compute[279195]: Nehalem-v2 Feb 20 04:35:23 localhost nova_compute[279195]: Opteron_G1 Feb 20 04:35:23 localhost nova_compute[279195]: Opteron_G1-v1 Feb 20 04:35:23 localhost nova_compute[279195]: Opteron_G2 Feb 20 04:35:23 localhost nova_compute[279195]: Opteron_G2-v1 Feb 20 04:35:23 localhost nova_compute[279195]: Opteron_G3 Feb 20 04:35:23 localhost nova_compute[279195]: Opteron_G3-v1 Feb 20 04:35:23 localhost nova_compute[279195]: Opteron_G4 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Opteron_G4-v1 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Opteron_G5 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost 
nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Opteron_G5-v1 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Penryn Feb 20 04:35:23 localhost nova_compute[279195]: Penryn-v1 Feb 20 04:35:23 localhost nova_compute[279195]: SandyBridge Feb 20 04:35:23 localhost nova_compute[279195]: SandyBridge-IBRS Feb 20 04:35:23 localhost nova_compute[279195]: SandyBridge-v1 Feb 20 04:35:23 localhost nova_compute[279195]: SandyBridge-v2 Feb 20 04:35:23 localhost nova_compute[279195]: SapphireRapids Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: 
Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: SapphireRapids-v1 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 
04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: SapphireRapids-v2 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 
localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: SapphireRapids-v3 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost 
nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: SapphireRapids-v4 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost 
nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: SierraForest Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost 
nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: SierraForest-v1 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost 
nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: SierraForest-v2 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: SierraForest-v3 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost 
nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Skylake-Client Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost 
nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Skylake-Client-IBRS Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Skylake-Client-noTSX-IBRS Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Skylake-Client-v1 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Skylake-Client-v2 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Skylake-Client-v3 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Skylake-Client-v4 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost 
nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Skylake-Server Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Skylake-Server-IBRS Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Skylake-Server-noTSX-IBRS Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost 
Feb 20 04:35:23 localhost nova_compute[279195]: [libvirt domainCapabilities XML logged line-by-line; markup lost in capture, repeated empty syslog prefixes collapsed. Recoverable values, in original order:]
Feb 20 04:35:23 localhost nova_compute[279195]:   CPU models: Skylake-Server-v1 Skylake-Server-v2 Skylake-Server-v3 Skylake-Server-v4 Skylake-Server-v5 Snowridge Snowridge-v1 Snowridge-v2 Snowridge-v3 Snowridge-v4 Westmere Westmere-IBRS Westmere-v1 Westmere-v2 athlon athlon-v1 core2duo core2duo-v1 coreduo coreduo-v1 kvm32 kvm32-v1 kvm64 kvm64-v1 n270 n270-v1 pentium pentium-v1 pentium2 pentium2-v1 pentium3 pentium3-v1 phenom phenom-v1 qemu32 qemu32-v1 qemu64 qemu64-v1
Feb 20 04:35:23 localhost nova_compute[279195]:   memory backing: file anonymous memfd
Feb 20 04:35:23 localhost nova_compute[279195]:   disk devices/buses/models: disk cdrom floppy lun | ide fdc scsi virtio usb sata | virtio virtio-transitional virtio-non-transitional
Feb 20 04:35:23 localhost nova_compute[279195]:   graphics types: vnc egl-headless dbus
Feb 20 04:35:23 localhost nova_compute[279195]:   hostdev: subsystem | default mandatory requisite optional | usb pci scsi
Feb 20 04:35:23 localhost nova_compute[279195]:   rng: virtio virtio-transitional virtio-non-transitional | random egd builtin
Feb 20 04:35:23 localhost nova_compute[279195]:   filesystem: path handle virtiofs
Feb 20 04:35:23 localhost nova_compute[279195]:   tpm: tpm-tis tpm-crb | emulator external | 2.0
Feb 20 04:35:23 localhost nova_compute[279195]:   redirdev bus: usb | channel types: pty unix | qemu | builtin | interface backend: default passt | panic models: isa hyperv
Feb 20 04:35:23 localhost nova_compute[279195]:   console/serial types: null vc pty dev file pipe stdio udp tcp unix qemu-vdagent dbus
Feb 20 04:35:23 localhost nova_compute[279195]:   hyperv enlightenments: relaxed vapic spinlocks vpindex runtime synic stimer reset vendor_id frequencies reenlightenment tlbflush ipi avic emsr_bitmap xmm_input | 4095 on off off Linux KVM Hv
Feb 20 04:35:23 localhost nova_compute[279195]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.501 279199 DEBUG nova.virt.libvirt.host [None req-ca223cf1-0d32-476d-9e71-539102afd5b1 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.506 279199 DEBUG nova.virt.libvirt.host [None req-ca223cf1-0d32-476d-9e71-539102afd5b1 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Feb 20 04:35:23 localhost nova_compute[279195]:   emulator/domain/machine/arch: /usr/libexec/qemu-kvm | kvm | pc-q35-rhel9.8.0 | x86_64
Feb 20 04:35:23 localhost nova_compute[279195]:   firmware: efi | /usr/share/edk2/ovmf/OVMF_CODE.secboot.fd
Feb 20 04:35:23 localhost nova_compute[279195]: [q35 domainCapabilities, continued; markup lost in capture, repeated empty syslog prefixes collapsed. Recoverable values, in original order:]
Feb 20 04:35:23 localhost nova_compute[279195]:   firmware images: /usr/share/edk2/ovmf/OVMF_CODE.fd /usr/share/edk2/ovmf/OVMF.amdsev.fd /usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd | rom pflash | yes no | yes no | on off | on off
Feb 20 04:35:23 localhost nova_compute[279195]:   host CPU model/vendor: EPYC-Rome | AMD
Feb 20 04:35:23 localhost nova_compute[279195]:   CPU models: 486 486-v1 Broadwell Broadwell-IBRS Broadwell-noTSX Broadwell-noTSX-IBRS Broadwell-v1 Broadwell-v2 Broadwell-v3 Broadwell-v4 Cascadelake-Server Cascadelake-Server-noTSX Cascadelake-Server-v1 Cascadelake-Server-v2 Cascadelake-Server-v3 Cascadelake-Server-v4 Cascadelake-Server-v5 ClearwaterForest ClearwaterForest-v1 Conroe Conroe-v1 Cooperlake Cooperlake-v1 Cooperlake-v2 Denverton Denverton-v1 Denverton-v2 Denverton-v3 Dhyana Dhyana-v1 Dhyana-v2 EPYC EPYC-Genoa
20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Genoa-v1 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 
localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Genoa-v2 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-IBPB Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Milan Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 
04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Milan-v1 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Milan-v2 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Milan-v3 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost 
nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Rome Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Rome-v1 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Rome-v2 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Rome-v3 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Rome-v4 Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Rome-v5 Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Turin Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 
localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Turin-v1 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost 
nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-v1 Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-v2 Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-v3 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-v4 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-v5 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: GraniteRapids Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 
20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost 
nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: GraniteRapids-v1 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost 
nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: GraniteRapids-v2 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost 
nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: GraniteRapids-v3 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost 
nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Haswell Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: 
Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Haswell-IBRS Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Haswell-noTSX Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Haswell-noTSX-IBRS Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Haswell-v1 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Haswell-v2 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Haswell-v3 
Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Haswell-v4 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Icelake-Server Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Icelake-Server-noTSX Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost 
Feb 20 04:35:23 localhost nova_compute[279195]: Icelake-Server-v1
Feb 20 04:35:23 localhost nova_compute[279195]: Icelake-Server-v2
Feb 20 04:35:23 localhost nova_compute[279195]: Icelake-Server-v3
Feb 20 04:35:23 localhost nova_compute[279195]: Icelake-Server-v4
Feb 20 04:35:23 localhost nova_compute[279195]: Icelake-Server-v5
Feb 20 04:35:23 localhost nova_compute[279195]: Icelake-Server-v6
Feb 20 04:35:23 localhost nova_compute[279195]: Icelake-Server-v7
Feb 20 04:35:23 localhost nova_compute[279195]: IvyBridge
Feb 20 04:35:23 localhost nova_compute[279195]: IvyBridge-IBRS
Feb 20 04:35:23 localhost nova_compute[279195]: IvyBridge-v1
Feb 20 04:35:23 localhost nova_compute[279195]: IvyBridge-v2
Feb 20 04:35:23 localhost nova_compute[279195]: KnightsMill
Feb 20 04:35:23 localhost nova_compute[279195]: KnightsMill-v1
Feb 20 04:35:23 localhost nova_compute[279195]: Nehalem
Feb 20 04:35:23 localhost nova_compute[279195]: Nehalem-IBRS
Feb 20 04:35:23 localhost nova_compute[279195]: Nehalem-v1
Feb 20 04:35:23 localhost nova_compute[279195]: Nehalem-v2
Feb 20 04:35:23 localhost nova_compute[279195]: Opteron_G1
Feb 20 04:35:23 localhost nova_compute[279195]: Opteron_G1-v1
Feb 20 04:35:23 localhost nova_compute[279195]: Opteron_G2
Feb 20 04:35:23 localhost nova_compute[279195]: Opteron_G2-v1
Feb 20 04:35:23 localhost nova_compute[279195]: Opteron_G3
Feb 20 04:35:23 localhost nova_compute[279195]: Opteron_G3-v1
Feb 20 04:35:23 localhost nova_compute[279195]: Opteron_G4
Feb 20 04:35:23 localhost nova_compute[279195]: Opteron_G4-v1
Feb 20 04:35:23 localhost nova_compute[279195]: Opteron_G5
Feb 20 04:35:23 localhost nova_compute[279195]: Opteron_G5-v1
Feb 20 04:35:23 localhost nova_compute[279195]: Penryn
Feb 20 04:35:23 localhost nova_compute[279195]: Penryn-v1
Feb 20 04:35:23 localhost nova_compute[279195]: SandyBridge
Feb 20 04:35:23 localhost nova_compute[279195]: SandyBridge-IBRS
Feb 20 04:35:23 localhost nova_compute[279195]: SandyBridge-v1
Feb 20 04:35:23 localhost nova_compute[279195]: SandyBridge-v2
Feb 20 04:35:23 localhost nova_compute[279195]: SapphireRapids
Feb 20 04:35:23 localhost nova_compute[279195]: SapphireRapids-v1
Feb 20 04:35:23 localhost nova_compute[279195]: SapphireRapids-v2
Feb 20 04:35:23 localhost nova_compute[279195]: SapphireRapids-v3
Feb 20 04:35:23 localhost nova_compute[279195]: SapphireRapids-v4
Feb 20 04:35:23 localhost nova_compute[279195]: SierraForest
Feb 20 04:35:23 localhost nova_compute[279195]: SierraForest-v1
Feb 20 04:35:23 localhost nova_compute[279195]: SierraForest-v2
Feb 20 04:35:23 localhost nova_compute[279195]: SierraForest-v3
Feb 20 04:35:23 localhost nova_compute[279195]: Skylake-Client
Feb 20 04:35:23 localhost nova_compute[279195]: Skylake-Client-IBRS
Feb 20 04:35:23 localhost nova_compute[279195]: Skylake-Client-noTSX-IBRS
Feb 20 04:35:23 localhost nova_compute[279195]: Skylake-Client-v1
Feb 20 04:35:23 localhost nova_compute[279195]: Skylake-Client-v2
Feb 20 04:35:23 localhost nova_compute[279195]: Skylake-Client-v3
Feb 20 04:35:23 localhost nova_compute[279195]: Skylake-Client-v4
Feb 20 04:35:23 localhost nova_compute[279195]: Skylake-Server
Feb 20 04:35:23 localhost nova_compute[279195]: Skylake-Server-IBRS
Feb 20 04:35:23 localhost nova_compute[279195]: Skylake-Server-noTSX-IBRS
Feb 20 04:35:23 localhost nova_compute[279195]: Skylake-Server-v1
Feb 20 04:35:23 localhost nova_compute[279195]: Skylake-Server-v2
Feb 20 04:35:23 localhost nova_compute[279195]: Skylake-Server-v3
Feb 20 04:35:23 localhost nova_compute[279195]: Skylake-Server-v4
Feb 20 04:35:23 localhost nova_compute[279195]: Skylake-Server-v5
Feb 20 04:35:23 localhost nova_compute[279195]: Snowridge
Feb 20 04:35:23 localhost nova_compute[279195]: Snowridge-v1
Feb 20 04:35:23 localhost nova_compute[279195]: Snowridge-v2
Feb 20 04:35:23 localhost nova_compute[279195]: Snowridge-v3
Feb 20 04:35:23 localhost nova_compute[279195]: Snowridge-v4
Feb 20 04:35:23 localhost nova_compute[279195]: Westmere
Feb 20 04:35:23 localhost nova_compute[279195]: Westmere-IBRS
Feb 20 04:35:23 localhost nova_compute[279195]: Westmere-v1
Feb 20 04:35:23 localhost nova_compute[279195]: Westmere-v2
Feb 20 04:35:23 localhost nova_compute[279195]: athlon
Feb 20 04:35:23 localhost nova_compute[279195]: athlon-v1
Feb 20 04:35:23 localhost nova_compute[279195]: core2duo
Feb 20 04:35:23 localhost nova_compute[279195]: core2duo-v1
Feb 20 04:35:23 localhost nova_compute[279195]: coreduo
Feb 20 04:35:23 localhost nova_compute[279195]: coreduo-v1
nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: kvm32 Feb 20 04:35:23 localhost nova_compute[279195]: kvm32-v1 Feb 20 04:35:23 localhost nova_compute[279195]: kvm64 Feb 20 04:35:23 localhost nova_compute[279195]: kvm64-v1 Feb 20 04:35:23 localhost nova_compute[279195]: n270 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: n270-v1 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: pentium Feb 20 04:35:23 localhost nova_compute[279195]: pentium-v1 Feb 20 04:35:23 localhost nova_compute[279195]: pentium2 Feb 20 04:35:23 localhost nova_compute[279195]: pentium2-v1 Feb 20 04:35:23 localhost nova_compute[279195]: pentium3 Feb 20 04:35:23 localhost nova_compute[279195]: pentium3-v1 Feb 20 04:35:23 localhost nova_compute[279195]: phenom Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: phenom-v1 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: qemu32 Feb 20 04:35:23 localhost nova_compute[279195]: qemu32-v1 Feb 20 04:35:23 localhost nova_compute[279195]: qemu64 Feb 20 04:35:23 localhost nova_compute[279195]: qemu64-v1 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost 
nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: file Feb 20 04:35:23 localhost nova_compute[279195]: anonymous Feb 20 04:35:23 localhost nova_compute[279195]: memfd Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: disk Feb 20 04:35:23 localhost nova_compute[279195]: cdrom Feb 20 04:35:23 localhost nova_compute[279195]: floppy Feb 20 04:35:23 localhost nova_compute[279195]: lun Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: fdc Feb 20 04:35:23 localhost nova_compute[279195]: scsi Feb 20 04:35:23 localhost nova_compute[279195]: virtio Feb 20 04:35:23 localhost nova_compute[279195]: usb Feb 20 04:35:23 localhost nova_compute[279195]: sata Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: virtio Feb 20 04:35:23 localhost nova_compute[279195]: virtio-transitional Feb 20 04:35:23 localhost nova_compute[279195]: virtio-non-transitional Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: vnc Feb 20 04:35:23 localhost nova_compute[279195]: egl-headless Feb 20 04:35:23 localhost nova_compute[279195]: dbus Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: subsystem Feb 20 04:35:23 localhost nova_compute[279195]: 
Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: default Feb 20 04:35:23 localhost nova_compute[279195]: mandatory Feb 20 04:35:23 localhost nova_compute[279195]: requisite Feb 20 04:35:23 localhost nova_compute[279195]: optional Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: usb Feb 20 04:35:23 localhost nova_compute[279195]: pci Feb 20 04:35:23 localhost nova_compute[279195]: scsi Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: virtio Feb 20 04:35:23 localhost nova_compute[279195]: virtio-transitional Feb 20 04:35:23 localhost nova_compute[279195]: virtio-non-transitional Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: random Feb 20 04:35:23 localhost nova_compute[279195]: egd Feb 20 04:35:23 localhost nova_compute[279195]: builtin Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: path Feb 20 04:35:23 localhost nova_compute[279195]: handle Feb 20 04:35:23 localhost nova_compute[279195]: virtiofs Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: tpm-tis Feb 20 04:35:23 localhost nova_compute[279195]: tpm-crb Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 
localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: emulator Feb 20 04:35:23 localhost nova_compute[279195]: external Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: 2.0 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: usb Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: pty Feb 20 04:35:23 localhost nova_compute[279195]: unix Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: qemu Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: builtin Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: default Feb 20 04:35:23 localhost nova_compute[279195]: passt Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: isa Feb 20 04:35:23 localhost nova_compute[279195]: hyperv Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 
localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: null Feb 20 04:35:23 localhost nova_compute[279195]: vc Feb 20 04:35:23 localhost nova_compute[279195]: pty Feb 20 04:35:23 localhost nova_compute[279195]: dev Feb 20 04:35:23 localhost nova_compute[279195]: file Feb 20 04:35:23 localhost nova_compute[279195]: pipe Feb 20 04:35:23 localhost nova_compute[279195]: stdio Feb 20 04:35:23 localhost nova_compute[279195]: udp Feb 20 04:35:23 localhost nova_compute[279195]: tcp Feb 20 04:35:23 localhost nova_compute[279195]: unix Feb 20 04:35:23 localhost nova_compute[279195]: qemu-vdagent Feb 20 04:35:23 localhost nova_compute[279195]: dbus Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: relaxed Feb 20 04:35:23 localhost nova_compute[279195]: vapic Feb 20 04:35:23 localhost nova_compute[279195]: spinlocks Feb 20 04:35:23 localhost nova_compute[279195]: vpindex Feb 20 04:35:23 localhost nova_compute[279195]: runtime Feb 20 04:35:23 localhost nova_compute[279195]: synic Feb 20 04:35:23 localhost nova_compute[279195]: stimer Feb 20 04:35:23 localhost nova_compute[279195]: reset Feb 20 04:35:23 
localhost nova_compute[279195]: vendor_id Feb 20 04:35:23 localhost nova_compute[279195]: frequencies Feb 20 04:35:23 localhost nova_compute[279195]: reenlightenment Feb 20 04:35:23 localhost nova_compute[279195]: tlbflush Feb 20 04:35:23 localhost nova_compute[279195]: ipi Feb 20 04:35:23 localhost nova_compute[279195]: avic Feb 20 04:35:23 localhost nova_compute[279195]: emsr_bitmap Feb 20 04:35:23 localhost nova_compute[279195]: xmm_input Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: 4095 Feb 20 04:35:23 localhost nova_compute[279195]: on Feb 20 04:35:23 localhost nova_compute[279195]: off Feb 20 04:35:23 localhost nova_compute[279195]: off Feb 20 04:35:23 localhost nova_compute[279195]: Linux KVM Hv Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.582 279199 DEBUG nova.virt.libvirt.host [None req-ca223cf1-0d32-476d-9e71-539102afd5b1 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: /usr/libexec/qemu-kvm Feb 20 04:35:23 localhost nova_compute[279195]: kvm Feb 20 04:35:23 localhost nova_compute[279195]: pc-i440fx-rhel7.6.0 Feb 20 04:35:23 localhost nova_compute[279195]: x86_64 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost 
nova_compute[279195]: /usr/share/OVMF/OVMF_CODE.secboot.fd Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: rom Feb 20 04:35:23 localhost nova_compute[279195]: pflash Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: yes Feb 20 04:35:23 localhost nova_compute[279195]: no Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: no Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: on Feb 20 04:35:23 localhost nova_compute[279195]: off Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: on Feb 20 04:35:23 localhost nova_compute[279195]: off Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Rome Feb 20 04:35:23 localhost nova_compute[279195]: AMD Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 
localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: 486 Feb 20 04:35:23 localhost nova_compute[279195]: 486-v1 Feb 20 04:35:23 localhost nova_compute[279195]: Broadwell Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Broadwell-IBRS Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Broadwell-noTSX Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Broadwell-noTSX-IBRS Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 
localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Broadwell-v1 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Broadwell-v2 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Broadwell-v3 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Broadwell-v4 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Cascadelake-Server Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 
localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Cascadelake-Server-noTSX Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Cascadelake-Server-v1 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Cascadelake-Server-v2 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost 
nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Cascadelake-Server-v3 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Cascadelake-Server-v4 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: 
Cascadelake-Server-v5 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: ClearwaterForest Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost 
nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: ClearwaterForest-v1 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost 
Feb 20 04:35:23 localhost nova_compute[279195]: Conroe
Feb 20 04:35:23 localhost nova_compute[279195]: Conroe-v1
Feb 20 04:35:23 localhost nova_compute[279195]: Cooperlake
Feb 20 04:35:23 localhost nova_compute[279195]: Cooperlake-v1
Feb 20 04:35:23 localhost nova_compute[279195]: Cooperlake-v2
Feb 20 04:35:23 localhost nova_compute[279195]: Denverton
Feb 20 04:35:23 localhost nova_compute[279195]: Denverton-v1
Feb 20 04:35:23 localhost nova_compute[279195]: Denverton-v2
Feb 20 04:35:23 localhost nova_compute[279195]: Denverton-v3
Feb 20 04:35:23 localhost nova_compute[279195]: Dhyana
Feb 20 04:35:23 localhost nova_compute[279195]: Dhyana-v1
Feb 20 04:35:23 localhost nova_compute[279195]: Dhyana-v2
Feb 20 04:35:23 localhost nova_compute[279195]: EPYC
Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Genoa
Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Genoa-v1
Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Genoa-v2
Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-IBPB
Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Milan
Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Milan-v1
Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Milan-v2
Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Milan-v3
Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Rome
Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Rome-v1
Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Rome-v2
Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Rome-v3
Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Rome-v4
Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Rome-v5
Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Turin
Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-Turin-v1
Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-v1
Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-v2
Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-v3
Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-v4
Feb 20 04:35:23 localhost nova_compute[279195]: EPYC-v5
Feb 20 04:35:23 localhost nova_compute[279195]: GraniteRapids
Feb 20 04:35:23 localhost nova_compute[279195]: GraniteRapids-v1
Feb 20 04:35:23 localhost nova_compute[279195]: GraniteRapids-v2
Feb 20 04:35:23 localhost nova_compute[279195]: GraniteRapids-v3
Feb 20 04:35:23 localhost nova_compute[279195]: Haswell
Feb 20 04:35:23 localhost nova_compute[279195]: Haswell-IBRS
Feb 20 04:35:23 localhost nova_compute[279195]: Haswell-noTSX
Feb 20 04:35:23 localhost nova_compute[279195]: Haswell-noTSX-IBRS
Feb 20 04:35:23 localhost nova_compute[279195]: Haswell-v1
Feb 20 04:35:23 localhost nova_compute[279195]: Haswell-v2
Feb 20 04:35:23 localhost nova_compute[279195]: Haswell-v3
Feb 20 04:35:23 localhost nova_compute[279195]: Haswell-v4
Feb 20 04:35:23 localhost nova_compute[279195]: Icelake-Server
Feb 20 04:35:23 localhost nova_compute[279195]: Icelake-Server-noTSX
Feb 20 04:35:23 localhost nova_compute[279195]: Icelake-Server-v1
Feb 20 04:35:23 localhost nova_compute[279195]: Icelake-Server-v2
Feb 20 04:35:23 localhost nova_compute[279195]: Icelake-Server-v3
Feb 20 04:35:23 localhost nova_compute[279195]: Icelake-Server-v4
Feb 20 04:35:23 localhost nova_compute[279195]: Icelake-Server-v5
Feb 20 04:35:23 localhost nova_compute[279195]: Icelake-Server-v6
Feb 20 04:35:23 localhost nova_compute[279195]: Icelake-Server-v7
Feb 20 04:35:23 localhost nova_compute[279195]: IvyBridge
Feb 20 04:35:23 localhost nova_compute[279195]: IvyBridge-IBRS
Feb 20 04:35:23 localhost nova_compute[279195]: IvyBridge-v1
Feb 20 04:35:23 localhost nova_compute[279195]: IvyBridge-v2
Feb 20 04:35:23 localhost nova_compute[279195]: KnightsMill
nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: KnightsMill-v1 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Nehalem Feb 20 04:35:23 localhost nova_compute[279195]: Nehalem-IBRS Feb 20 04:35:23 localhost nova_compute[279195]: Nehalem-v1 Feb 20 04:35:23 localhost nova_compute[279195]: Nehalem-v2 Feb 20 04:35:23 localhost nova_compute[279195]: Opteron_G1 Feb 20 04:35:23 localhost nova_compute[279195]: Opteron_G1-v1 Feb 20 04:35:23 localhost nova_compute[279195]: Opteron_G2 Feb 20 04:35:23 localhost nova_compute[279195]: Opteron_G2-v1 Feb 20 04:35:23 localhost nova_compute[279195]: Opteron_G3 Feb 20 04:35:23 localhost nova_compute[279195]: Opteron_G3-v1 Feb 20 04:35:23 localhost nova_compute[279195]: Opteron_G4 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Opteron_G4-v1 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost 
nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Opteron_G5 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Opteron_G5-v1 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Penryn Feb 20 04:35:23 localhost nova_compute[279195]: Penryn-v1 Feb 20 04:35:23 localhost nova_compute[279195]: SandyBridge Feb 20 04:35:23 localhost nova_compute[279195]: SandyBridge-IBRS Feb 20 04:35:23 localhost nova_compute[279195]: SandyBridge-v1 Feb 20 04:35:23 localhost nova_compute[279195]: SandyBridge-v2 Feb 20 04:35:23 localhost nova_compute[279195]: SapphireRapids Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost 
nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: SapphireRapids-v1 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost 
nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: SapphireRapids-v2 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost 
nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: SapphireRapids-v3 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost 
nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: SapphireRapids-v4 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost 
nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: SierraForest Feb 20 04:35:23 localhost 
nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: SierraForest-v1 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost 
nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: SierraForest-v2 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost 
nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: SierraForest-v3 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost 
nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Skylake-Client Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Skylake-Client-IBRS Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Skylake-Client-noTSX-IBRS Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Skylake-Client-v1 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Skylake-Client-v2 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 
04:35:23 localhost nova_compute[279195]: Skylake-Client-v3 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Skylake-Client-v4 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Skylake-Server Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Skylake-Server-IBRS Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 
localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Skylake-Server-noTSX-IBRS Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Skylake-Server-v1 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Skylake-Server-v2 Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost nova_compute[279195]: Feb 20 04:35:23 localhost 
nova_compute[279195]: [libvirt domain-capabilities XML dump; markup lost in log capture, per-wrapped-line journald prefixes collapsed. Recoverable values: CPU models Skylake-Server-v3, Skylake-Server-v4, Skylake-Server-v5, Snowridge, Snowridge-v1, Snowridge-v2, Snowridge-v3, Snowridge-v4, Westmere, Westmere-IBRS, Westmere-v1, Westmere-v2, athlon, athlon-v1, core2duo, core2duo-v1, coreduo, coreduo-v1, kvm32, kvm32-v1, kvm64, kvm64-v1, n270, n270-v1, pentium, pentium-v1, pentium2, pentium2-v1, pentium3, pentium3-v1, phenom, phenom-v1, qemu32, qemu32-v1, qemu64, qemu64-v1; memory backing: file, anonymous, memfd; disk devices: disk, cdrom, floppy, lun; disk buses: ide, fdc, scsi, virtio, usb, sata; virtio models: virtio, virtio-transitional, virtio-non-transitional; graphics: vnc, egl-headless, dbus; hostdev: subsystem (startup policy: default, mandatory, requisite, optional; types: usb, pci, scsi); rng backends: random, egd, builtin; filesystem drivers: path, handle, virtiofs; TPM models: tpm-tis, tpm-crb (backends: emulator, external; version 2.0); redirdev bus: usb; channel types: pty, unix; further backend tokens: qemu, builtin; interface backends: default, passt; panic models: isa, hyperv; console/serial types: null, vc, pty, dev, file, pipe, stdio, udp, tcp, unix, qemu-vdagent, dbus; Hyper-V enlightenments: relaxed, vapic, spinlocks, vpindex, runtime, synic, stimer, reset, vendor_id, frequencies, reenlightenment, tlbflush, ipi, avic, emsr_bitmap, xmm_input; max vcpus 4095; assorted on/off feature flags; vendor string "Linux KVM Hv"] _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.647 279199 DEBUG nova.virt.libvirt.host [None req-ca223cf1-0d32-476d-9e71-539102afd5b1 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.647 279199 INFO nova.virt.libvirt.host [None req-ca223cf1-0d32-476d-9e71-539102afd5b1 - - - - - -] Secure Boot support detected#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.652 279199 INFO nova.virt.libvirt.driver [None req-ca223cf1-0d32-476d-9e71-539102afd5b1 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.653 279199 INFO
nova.virt.libvirt.driver [None req-ca223cf1-0d32-476d-9e71-539102afd5b1 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.665 279199 DEBUG nova.virt.libvirt.driver [None req-ca223cf1-0d32-476d-9e71-539102afd5b1 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.693 279199 INFO nova.virt.node [None req-ca223cf1-0d32-476d-9e71-539102afd5b1 - - - - - -] Determined node identity 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 from /var/lib/nova/compute_id#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.710 279199 DEBUG nova.compute.manager [None req-ca223cf1-0d32-476d-9e71-539102afd5b1 - - - - - -] Verified node 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 matches my host np0005625202.localdomain _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.739 279199 INFO nova.compute.manager [None req-ca223cf1-0d32-476d-9e71-539102afd5b1 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.816 279199 DEBUG oslo_concurrency.lockutils [None req-ca223cf1-0d32-476d-9e71-539102afd5b1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.816 279199 DEBUG oslo_concurrency.lockutils [None req-ca223cf1-0d32-476d-9e71-539102afd5b1 - - - - - -] Lock "compute_resources" acquired by 
"nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.817 279199 DEBUG oslo_concurrency.lockutils [None req-ca223cf1-0d32-476d-9e71-539102afd5b1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.817 279199 DEBUG nova.compute.resource_tracker [None req-ca223cf1-0d32-476d-9e71-539102afd5b1 - - - - - -] Auditing locally available compute resources for np0005625202.localdomain (node: np0005625202.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 20 04:35:23 localhost nova_compute[279195]: 2026-02-20 09:35:23.818 279199 DEBUG oslo_concurrency.processutils [None req-ca223cf1-0d32-476d-9e71-539102afd5b1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:35:23 localhost python3.9[279558]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:35:24 localhost python3.9[279633]: ansible-stat Invoked with path=/etc/systemd/system/edpm_nova_compute_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Feb 20 
04:35:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17704 DF PROTO=TCP SPT=52694 DPT=9102 SEQ=307837550 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B0B0A0E0000000001030307) Feb 20 04:35:24 localhost nova_compute[279195]: 2026-02-20 09:35:24.665 279199 DEBUG oslo_concurrency.processutils [None req-ca223cf1-0d32-476d-9e71-539102afd5b1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.847s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:35:24 localhost nova_compute[279195]: 2026-02-20 09:35:24.893 279199 WARNING nova.virt.libvirt.driver [None req-ca223cf1-0d32-476d-9e71-539102afd5b1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 20 04:35:24 localhost nova_compute[279195]: 2026-02-20 09:35:24.895 279199 DEBUG nova.compute.resource_tracker [None req-ca223cf1-0d32-476d-9e71-539102afd5b1 - - - - - -] Hypervisor/Node resource view: name=np0005625202.localdomain free_ram=12515MB free_disk=41.83720779418945GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": 
"pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 20 04:35:24 localhost nova_compute[279195]: 2026-02-20 09:35:24.895 279199 DEBUG oslo_concurrency.lockutils [None req-ca223cf1-0d32-476d-9e71-539102afd5b1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:35:24 localhost nova_compute[279195]: 2026-02-20 09:35:24.896 279199 DEBUG oslo_concurrency.lockutils [None req-ca223cf1-0d32-476d-9e71-539102afd5b1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:35:24 localhost python3.9[279744]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771580124.373206-3312-189672392469486/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:35:24 localhost nova_compute[279195]: 2026-02-20 09:35:24.973 279199 DEBUG nova.compute.resource_tracker [None req-ca223cf1-0d32-476d-9e71-539102afd5b1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 20 04:35:24 localhost nova_compute[279195]: 2026-02-20 09:35:24.974 279199 DEBUG nova.compute.resource_tracker [None req-ca223cf1-0d32-476d-9e71-539102afd5b1 - - - - - -] Final resource view: name=np0005625202.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 20 04:35:25 localhost nova_compute[279195]: 2026-02-20 09:35:25.022 279199 DEBUG nova.scheduler.client.report [None req-ca223cf1-0d32-476d-9e71-539102afd5b1 - - - - - -] Refreshing inventories for resource provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Feb 20 04:35:25 localhost nova_compute[279195]: 2026-02-20 09:35:25.091 279199 DEBUG nova.scheduler.client.report [None req-ca223cf1-0d32-476d-9e71-539102afd5b1 - - - - - -] Updating ProviderTree inventory for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 from _refresh_and_get_inventory using data: 
{'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Feb 20 04:35:25 localhost nova_compute[279195]: 2026-02-20 09:35:25.091 279199 DEBUG nova.compute.provider_tree [None req-ca223cf1-0d32-476d-9e71-539102afd5b1 - - - - - -] Updating inventory in ProviderTree for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Feb 20 04:35:25 localhost nova_compute[279195]: 2026-02-20 09:35:25.109 279199 DEBUG nova.scheduler.client.report [None req-ca223cf1-0d32-476d-9e71-539102afd5b1 - - - - - -] Refreshing aggregate associations for resource provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Feb 20 04:35:25 localhost nova_compute[279195]: 2026-02-20 09:35:25.148 279199 DEBUG nova.scheduler.client.report [None req-ca223cf1-0d32-476d-9e71-539102afd5b1 - - - - - -] Refreshing trait associations for resource provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6, traits: 
COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SVM,COMPUTE_NODE,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_RESCUE_BFV,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SHA,HW_CPU_X86_SSE42,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,HW_CPU_X86_SSE4A,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AESNI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE,HW_CPU_X86_MMX,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_USB _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Feb 20 04:35:25 localhost nova_compute[279195]: 2026-02-20 09:35:25.165 279199 DEBUG oslo_concurrency.processutils [None req-ca223cf1-0d32-476d-9e71-539102afd5b1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:35:25 localhost nova_compute[279195]: 2026-02-20 09:35:25.604 279199 DEBUG oslo_concurrency.processutils [None 
req-ca223cf1-0d32-476d-9e71-539102afd5b1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:35:25 localhost nova_compute[279195]: 2026-02-20 09:35:25.610 279199 DEBUG nova.virt.libvirt.host [None req-ca223cf1-0d32-476d-9e71-539102afd5b1 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N Feb 20 04:35:25 localhost nova_compute[279195]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m Feb 20 04:35:25 localhost nova_compute[279195]: 2026-02-20 09:35:25.610 279199 INFO nova.virt.libvirt.host [None req-ca223cf1-0d32-476d-9e71-539102afd5b1 - - - - - -] kernel doesn't support AMD SEV#033[00m Feb 20 04:35:25 localhost nova_compute[279195]: 2026-02-20 09:35:25.612 279199 DEBUG nova.compute.provider_tree [None req-ca223cf1-0d32-476d-9e71-539102afd5b1 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 20 04:35:25 localhost nova_compute[279195]: 2026-02-20 09:35:25.612 279199 DEBUG nova.virt.libvirt.driver [None req-ca223cf1-0d32-476d-9e71-539102afd5b1 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m Feb 20 04:35:25 localhost nova_compute[279195]: 2026-02-20 09:35:25.642 279199 DEBUG nova.scheduler.client.report [None req-ca223cf1-0d32-476d-9e71-539102afd5b1 - - - - - -] Inventory has not changed for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 
1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 20 04:35:25 localhost nova_compute[279195]: 2026-02-20 09:35:25.676 279199 DEBUG nova.compute.resource_tracker [None req-ca223cf1-0d32-476d-9e71-539102afd5b1 - - - - - -] Compute_service record updated for np0005625202.localdomain:np0005625202.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 20 04:35:25 localhost nova_compute[279195]: 2026-02-20 09:35:25.677 279199 DEBUG oslo_concurrency.lockutils [None req-ca223cf1-0d32-476d-9e71-539102afd5b1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.781s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 04:35:25 localhost nova_compute[279195]: 2026-02-20 09:35:25.677 279199 DEBUG nova.service [None req-ca223cf1-0d32-476d-9e71-539102afd5b1 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Feb 20 04:35:25 localhost nova_compute[279195]: 2026-02-20 09:35:25.703 279199 DEBUG nova.service [None req-ca223cf1-0d32-476d-9e71-539102afd5b1 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Feb 20 04:35:25 localhost nova_compute[279195]: 2026-02-20 09:35:25.704 279199 DEBUG nova.servicegroup.drivers.db [None req-ca223cf1-0d32-476d-9e71-539102afd5b1 - - - - - -] DB_Driver: join new ServiceGroup member np0005625202.localdomain to the compute group, service = join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Feb 20 04:35:25 localhost python3.9[279821]: ansible-systemd Invoked with state=started name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 04:35:27 localhost python3.9[279931]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb 20 04:35:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.
Feb 20 04:35:27 localhost podman[279949]: 2026-02-20 09:35:27.970856022 +0000 UTC m=+0.067542432 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Feb 20 04:35:27 localhost podman[279949]: 2026-02-20 09:35:27.979774513 +0000 UTC m=+0.076460923 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 20 04:35:27 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully.
Feb 20 04:35:28 localhost openstack_network_exporter[243776]: ERROR 09:35:28 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 04:35:28 localhost openstack_network_exporter[243776]:
Feb 20 04:35:28 localhost openstack_network_exporter[243776]: ERROR 09:35:28 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 04:35:28 localhost openstack_network_exporter[243776]:
Feb 20 04:35:29 localhost python3.9[280064]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 04:35:29 localhost python3.9[280154]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771580128.498574-3435-95411824704468/.source.yaml _original_basename=.5smpkku0 follow=False checksum=1398ce19331de48b62372cc81e1a3aaab78c97b5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 04:35:30 localhost python3.9[280262]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 04:35:31 localhost python3.9[280370]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 04:35:33 localhost python3.9[280478]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 04:35:34 localhost python3.9[280588]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Feb 20 04:35:34 localhost systemd-journald[48906]: Field hash table of /run/log/journal/01f46965e72fd8a157841feaa66c8d52/system.journal has a fill level at 106.3 (354 of 333 items), suggesting rotation.
Feb 20 04:35:34 localhost systemd-journald[48906]: /run/log/journal/01f46965e72fd8a157841feaa66c8d52/system.journal: Journal header limits reached or header out-of-date, rotating.
Feb 20 04:35:34 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Feb 20 04:35:34 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Feb 20 04:35:35 localhost python3.9[280722]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 20 04:35:35 localhost systemd[1]: Stopping nova_compute container...
Feb 20 04:35:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60589 DF PROTO=TCP SPT=49688 DPT=9102 SEQ=2667105016 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B0B424D0000000001030307)
Feb 20 04:35:38 localhost nova_compute[279195]: 2026-02-20 09:35:38.919 279199 WARNING amqp [-] Received method (60, 30) during closing channel 1. This method will be ignored
Feb 20 04:35:38 localhost nova_compute[279195]: 2026-02-20 09:35:38.922 279199 DEBUG oslo_concurrency.lockutils [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 20 04:35:38 localhost nova_compute[279195]: 2026-02-20 09:35:38.922 279199 DEBUG oslo_concurrency.lockutils [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 20 04:35:38 localhost nova_compute[279195]: 2026-02-20 09:35:38.922 279199 DEBUG oslo_concurrency.lockutils [None req-d7939d9a-f340-4b68-971c-4e392620a5dd - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 20 04:35:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.
Feb 20 04:35:39 localhost podman[280740]: 2026-02-20 09:35:39.195569256 +0000 UTC m=+0.084729754 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 20 04:35:39 localhost podman[280740]: 2026-02-20 09:35:39.207320723 +0000 UTC m=+0.096481171 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_compute)
Feb 20 04:35:39 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully.
Feb 20 04:35:39 localhost journal[229026]: End of file while reading data: Input/output error
Feb 20 04:35:39 localhost systemd[1]: libpod-2644442d638858330444a7f23905e4b2511f84851b204e4d490e757f3bec977e.scope: Deactivated successfully.
Feb 20 04:35:39 localhost systemd[1]: libpod-2644442d638858330444a7f23905e4b2511f84851b204e4d490e757f3bec977e.scope: Consumed 3.758s CPU time.
Feb 20 04:35:39 localhost podman[280726]: 2026-02-20 09:35:39.295447669 +0000 UTC m=+3.386156183 container died 2644442d638858330444a7f23905e4b2511f84851b204e4d490e757f3bec977e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=nova_compute, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-30a66d126d7b377a2bbf9bc0d57e51bdf55bb85eee6d379d83e3d3aa2e3d7293-832c02321984b9faf87ea1e4ce5a0ef3adf9381e48f5c448f10d738061843ddd'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 20 04:35:39 localhost systemd[1]: tmp-crun.PTGcXn.mount: Deactivated successfully.
Feb 20 04:35:39 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2644442d638858330444a7f23905e4b2511f84851b204e4d490e757f3bec977e-userdata-shm.mount: Deactivated successfully.
Feb 20 04:35:39 localhost podman[280726]: 2026-02-20 09:35:39.371446787 +0000 UTC m=+3.462155251 container cleanup 2644442d638858330444a7f23905e4b2511f84851b204e4d490e757f3bec977e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-30a66d126d7b377a2bbf9bc0d57e51bdf55bb85eee6d379d83e3d3aa2e3d7293-832c02321984b9faf87ea1e4ce5a0ef3adf9381e48f5c448f10d738061843ddd'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=nova_compute, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Feb 20 04:35:39 localhost podman[280726]: nova_compute
Feb 20 04:35:39 localhost podman[280760]: 2026-02-20 09:35:39.383992556 +0000 UTC m=+0.086489543 container cleanup 2644442d638858330444a7f23905e4b2511f84851b204e4d490e757f3bec977e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, config_id=nova_compute, container_name=nova_compute, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-30a66d126d7b377a2bbf9bc0d57e51bdf55bb85eee6d379d83e3d3aa2e3d7293-832c02321984b9faf87ea1e4ce5a0ef3adf9381e48f5c448f10d738061843ddd'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Feb 20 04:35:39 localhost systemd[1]: libpod-conmon-2644442d638858330444a7f23905e4b2511f84851b204e4d490e757f3bec977e.scope: Deactivated successfully.
Feb 20 04:35:39 localhost podman[280787]: error opening file `/run/crun/2644442d638858330444a7f23905e4b2511f84851b204e4d490e757f3bec977e/status`: No such file or directory
Feb 20 04:35:39 localhost podman[280775]: 2026-02-20 09:35:39.484481154 +0000 UTC m=+0.072627358 container cleanup 2644442d638858330444a7f23905e4b2511f84851b204e4d490e757f3bec977e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=nova_compute, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, container_name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-30a66d126d7b377a2bbf9bc0d57e51bdf55bb85eee6d379d83e3d3aa2e3d7293-832c02321984b9faf87ea1e4ce5a0ef3adf9381e48f5c448f10d738061843ddd'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Feb 20 04:35:39 localhost podman[280775]: nova_compute
Feb 20 04:35:39 localhost systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Feb 20 04:35:39 localhost systemd[1]: Stopped nova_compute container.
Feb 20 04:35:39 localhost systemd[1]: Starting nova_compute container...
Feb 20 04:35:39 localhost systemd[1]: Started libcrun container.
Feb 20 04:35:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11e71aef76232b9187f1b6c0522426897e147f31bbc545837c710df6c88ed985/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Feb 20 04:35:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11e71aef76232b9187f1b6c0522426897e147f31bbc545837c710df6c88ed985/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Feb 20 04:35:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11e71aef76232b9187f1b6c0522426897e147f31bbc545837c710df6c88ed985/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb 20 04:35:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11e71aef76232b9187f1b6c0522426897e147f31bbc545837c710df6c88ed985/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Feb 20 04:35:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11e71aef76232b9187f1b6c0522426897e147f31bbc545837c710df6c88ed985/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Feb 20 04:35:39 localhost podman[280789]: 2026-02-20 09:35:39.623894822 +0000 UTC m=+0.111851876 container init 2644442d638858330444a7f23905e4b2511f84851b204e4d490e757f3bec977e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-30a66d126d7b377a2bbf9bc0d57e51bdf55bb85eee6d379d83e3d3aa2e3d7293-832c02321984b9faf87ea1e4ce5a0ef3adf9381e48f5c448f10d738061843ddd'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=nova_compute, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 20 04:35:39 localhost podman[280789]: 2026-02-20 09:35:39.633712947 +0000 UTC m=+0.121670001 container start 2644442d638858330444a7f23905e4b2511f84851b204e4d490e757f3bec977e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-30a66d126d7b377a2bbf9bc0d57e51bdf55bb85eee6d379d83e3d3aa2e3d7293-832c02321984b9faf87ea1e4ce5a0ef3adf9381e48f5c448f10d738061843ddd'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=nova_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Feb 20 04:35:39 localhost podman[280789]: nova_compute
Feb 20 04:35:39 localhost nova_compute[280804]: + sudo -E kolla_set_configs
Feb 20 04:35:39 localhost systemd[1]: Started nova_compute container.
Feb 20 04:35:39 localhost nova_compute[280804]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb 20 04:35:39 localhost nova_compute[280804]: INFO:__main__:Validating config file
Feb 20 04:35:39 localhost nova_compute[280804]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb 20 04:35:39 localhost nova_compute[280804]: INFO:__main__:Copying service configuration files
Feb 20 04:35:39 localhost nova_compute[280804]: INFO:__main__:Deleting /etc/nova/nova.conf
Feb 20 04:35:39 localhost nova_compute[280804]: INFO:__main__:Copying /var/lib/kolla/config_files/src/nova-blank.conf to /etc/nova/nova.conf
Feb 20 04:35:39 localhost nova_compute[280804]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Feb 20 04:35:39 localhost nova_compute[280804]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Feb 20 04:35:39 localhost nova_compute[280804]: INFO:__main__:Copying /var/lib/kolla/config_files/src/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Feb 20 04:35:39 localhost nova_compute[280804]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Feb 20 04:35:39 localhost nova_compute[280804]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb 20 04:35:39 localhost nova_compute[280804]: INFO:__main__:Copying /var/lib/kolla/config_files/src/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb 20 04:35:39 localhost nova_compute[280804]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb 20 04:35:39 localhost nova_compute[280804]: INFO:__main__:Deleting /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Feb 20 04:35:39 localhost nova_compute[280804]: INFO:__main__:Copying /var/lib/kolla/config_files/src/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Feb 20 04:35:39 localhost nova_compute[280804]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Feb 20 04:35:39 localhost nova_compute[280804]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Feb 20 04:35:39 localhost nova_compute[280804]: INFO:__main__:Copying /var/lib/kolla/config_files/src/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Feb 20 04:35:39 localhost nova_compute[280804]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Feb 20 04:35:39 localhost nova_compute[280804]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 20 04:35:39 localhost nova_compute[280804]: INFO:__main__:Copying /var/lib/kolla/config_files/src/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 20 04:35:39 localhost nova_compute[280804]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 20 04:35:39 localhost nova_compute[280804]: INFO:__main__:Deleting /etc/ceph
Feb 20 04:35:39 localhost nova_compute[280804]: INFO:__main__:Creating directory /etc/ceph
Feb 20 04:35:39 localhost nova_compute[280804]: INFO:__main__:Setting permission for /etc/ceph
Feb 20 04:35:39 localhost nova_compute[280804]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Feb 20 04:35:39 localhost nova_compute[280804]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Feb 20 04:35:39 localhost nova_compute[280804]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceph/ceph.conf to /etc/ceph/ceph.conf
Feb 20 04:35:39 localhost nova_compute[280804]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Feb 20 04:35:39 localhost nova_compute[280804]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Feb 20 04:35:39 localhost nova_compute[280804]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Feb 20 04:35:39 localhost nova_compute[280804]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb 20 04:35:39 localhost nova_compute[280804]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Feb 20 04:35:39 localhost nova_compute[280804]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ssh-config to /var/lib/nova/.ssh/config
Feb 20 04:35:39 localhost nova_compute[280804]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb 20 04:35:39 localhost nova_compute[280804]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Feb 20 04:35:39 localhost nova_compute[280804]: INFO:__main__:Copying /var/lib/kolla/config_files/src/run-on-host to /usr/sbin/iscsiadm
Feb 20 04:35:39 localhost nova_compute[280804]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Feb 20 04:35:39 localhost nova_compute[280804]: INFO:__main__:Writing out command to execute
Feb 20 04:35:39 localhost nova_compute[280804]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Feb 20 04:35:39 localhost nova_compute[280804]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Feb 20 04:35:39 localhost nova_compute[280804]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Feb 20 04:35:39 localhost nova_compute[280804]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb 20 04:35:39 localhost nova_compute[280804]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb 20 04:35:39 localhost nova_compute[280804]: ++ cat /run_command
Feb 20 04:35:39 localhost nova_compute[280804]: + CMD=nova-compute
Feb 20 04:35:39 localhost nova_compute[280804]: + ARGS=
Feb 20 04:35:39 localhost nova_compute[280804]: + sudo kolla_copy_cacerts
Feb 20 04:35:39 localhost nova_compute[280804]: + [[ ! -n '' ]]
Feb 20 04:35:39 localhost nova_compute[280804]: + . kolla_extend_start
Feb 20 04:35:39 localhost nova_compute[280804]: Running command: 'nova-compute'
Feb 20 04:35:39 localhost nova_compute[280804]: + echo 'Running command: '\''nova-compute'\'''
Feb 20 04:35:39 localhost nova_compute[280804]: + umask 0022
Feb 20 04:35:39 localhost nova_compute[280804]: + exec nova-compute
Feb 20 04:35:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60590 DF PROTO=TCP SPT=49688 DPT=9102 SEQ=2667105016 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B0B464D0000000001030307)
Feb 20 04:35:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17705 DF PROTO=TCP SPT=52694 DPT=9102 SEQ=307837550 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B0B4A0E0000000001030307)
Feb 20 04:35:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.
Feb 20 04:35:41 localhost nova_compute[280804]: 2026-02-20 09:35:41.362 280808 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Feb 20 04:35:41 localhost nova_compute[280804]: 2026-02-20 09:35:41.363 280808 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Feb 20 04:35:41 localhost nova_compute[280804]: 2026-02-20 09:35:41.363 280808 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Feb 20 04:35:41 localhost nova_compute[280804]: 2026-02-20 09:35:41.363 280808 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Feb 20 04:35:41 localhost podman[280890]: 2026-02-20 09:35:41.45878682 +0000 UTC m=+0.095034612 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, architecture=x86_64, vcs-type=git, managed_by=edpm_ansible, version=9.7, distribution-scope=public, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9/ubi-minimal, io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z, release=1770267347, io.openshift.expose-services=, build-date=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Feb 20 04:35:41 localhost podman[280890]: 2026-02-20 09:35:41.471681488 +0000 UTC m=+0.107929300 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, container_name=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=9.7, org.opencontainers.image.created=2026-02-05T04:57:10Z, name=ubi9/ubi-minimal, maintainer=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_id=openstack_network_exporter, vendor=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, build-date=2026-02-05T04:57:10Z, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1770267347, io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Feb 20 04:35:41 localhost nova_compute[280804]: 2026-02-20 09:35:41.478 280808 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 20 04:35:41 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully.
Feb 20 04:35:41 localhost nova_compute[280804]: 2026-02-20 09:35:41.502 280808 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 20 04:35:41 localhost nova_compute[280804]: 2026-02-20 09:35:41.502 280808 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Feb 20 04:35:41 localhost python3.9[280949]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Feb 20 04:35:41 localhost nova_compute[280804]: 2026-02-20 09:35:41.882 280808 INFO nova.virt.driver [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Feb 20 04:35:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:f4:f9:b0 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60591 DF PROTO=TCP SPT=49688 DPT=9102 SEQ=2667105016 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A3B0B4E4D0000000001030307)
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.003 280808 INFO nova.compute.provider_config [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Feb 20 04:35:42 localhost systemd[1]: Started libpod-conmon-29f11e275ca2f653c4911d7a399e43eccafc082e16d52ec9d06a475965db1dea.scope.
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.011 280808 DEBUG oslo_concurrency.lockutils [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.012 280808 DEBUG oslo_concurrency.lockutils [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.012 280808 DEBUG oslo_concurrency.lockutils [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.012 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.012 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.013 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.013 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.013 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.013 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.013 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] allow_resize_to_same_host = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.013 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] arq_binding_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.013 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] backdoor_port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.013 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] backdoor_socket = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.014 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] block_device_allocate_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.014 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.014 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cert = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.014 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] compute_driver = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.014 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] compute_monitors = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.014 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] config_dir = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.015 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] config_drive_format = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.015 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] config_file = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.015 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.015 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] console_host = np0005625202.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.015 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] control_exchange = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.015 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cpu_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.015 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.015 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.016 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.016 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] default_availability_zone = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.016 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] default_ephemeral_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.016 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.016 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] default_schedule_zone = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.016 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] disk_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.016 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] enable_new_services = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.017 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] enabled_apis = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.017 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] enabled_ssl_apis = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.017 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] flat_injected = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.017 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] force_config_drive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.017 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] force_raw_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.017 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.017 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.018 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] host = np0005625202.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.018 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] initial_cpu_allocation_ratio = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.018 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] initial_disk_allocation_ratio = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.018 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] initial_ram_allocation_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.018 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] injected_network_template = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.018 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] instance_build_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.019 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] instance_delete_interval = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.019 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.019 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] instance_name_template = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.019 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] instance_usage_audit = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.019 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] instance_usage_audit_period = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.019 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.019 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] instances_path = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.020 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.020 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.020 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] live_migration_retry_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.020 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.020 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.020 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.020 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.020 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.021 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.021 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.021 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] log_rotation_type = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.021 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.021 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.021 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.021 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.021 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.022 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] long_rpc_timeout = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.022 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] max_concurrent_builds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.022 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.022 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] max_concurrent_snapshots = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.022 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] max_local_block_devices = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.022 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] max_logfile_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.023 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] max_logfile_size_mb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.023 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.023 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] metadata_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.023 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] metadata_listen_port = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.023 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] metadata_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 20
04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.023 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] migrate_max_retries = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.023 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] mkisofs_cmd = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.024 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] my_block_storage_ip = 192.168.122.106 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.024 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] my_ip = 192.168.122.106 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.024 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] network_allocate_retries = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.024 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.024 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] osapi_compute_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost 
nova_compute[280804]: 2026-02-20 09:35:42.024 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] osapi_compute_listen_port = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.024 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] osapi_compute_unique_server_name_scope = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.025 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] osapi_compute_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.025 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] password_length = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.025 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] periodic_enable = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.025 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] periodic_fuzzy_delay = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.025 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] pointer_model = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.025 280808 DEBUG 
oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] preallocate_images = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.025 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.026 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] pybasedir = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.026 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] ram_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.026 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.026 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.026 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.026 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - 
- -] reboot_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.026 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] reclaim_instance_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.026 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] record = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.027 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] reimage_timeout_per_gb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.027 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] report_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.027 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] rescue_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.027 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] reserved_host_cpus = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.027 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] reserved_host_disk_mb = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 
04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.027 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] reserved_host_memory_mb = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.027 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] reserved_huge_pages = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.028 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] resize_confirm_window = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.028 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] resize_fs_using_block_device = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.028 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.028 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] rootwrap_config = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.028 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] rpc_response_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 
2026-02-20 09:35:42.028 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] run_external_periodic_tasks = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.028 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.028 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.029 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.029 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.029 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] service_down_time = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.029 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] servicegroup_driver = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.029 280808 
DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] shelved_offload_time = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.029 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] shelved_poll_interval = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.029 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.030 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] source_is_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.030 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] ssl_only = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.030 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] state_path = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.030 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] sync_power_state_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.030 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] 
sync_power_state_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.030 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.030 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] tempdir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.030 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] timeout_nbd = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.031 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.031 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] update_resources_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.031 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] use_cow_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.031 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 
04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.031 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.031 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.031 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] use_rootwrap_daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.032 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.032 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.032 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vcpu_pin_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.032 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vif_plugging_is_fatal = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.032 280808 DEBUG oslo_service.service [None 
req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vif_plugging_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.032 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] virt_mkfs = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.032 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] volume_usage_poll_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.032 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.033 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] web = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.033 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.033 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_concurrency.lock_path = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.033 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] 
oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.033 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.033 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.033 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.034 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.034 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] api.auth_strategy = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.034 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] api.compute_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.034 280808 DEBUG oslo_service.service 
[None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.034 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] api.dhcp_domain = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.034 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] api.enable_instance_password = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.034 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] api.glance_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.035 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.035 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.035 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost 
nova_compute[280804]: 2026-02-20 09:35:42.035 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.035 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] api.local_metadata_per_cell = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.035 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] api.max_limit = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.035 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] api.metadata_cache_expiration = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.035 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] api.neutron_default_tenant_id = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.036 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] api.use_forwarded_for = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.036 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] api.use_neutron_default_nets = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 
2026-02-20 09:35:42.036 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.036 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.036 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.036 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] api.vendordata_dynamic_ssl_certfile = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.036 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.037 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] api.vendordata_jsonfile_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.037 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] api.vendordata_providers = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost 
nova_compute[280804]: 2026-02-20 09:35:42.037 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cache.backend = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.037 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cache.backend_argument = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.037 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cache.config_prefix = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.037 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cache.dead_timeout = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.037 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cache.debug_cache_backend = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.038 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cache.enable_retry_client = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.038 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cache.enable_socket_keepalive = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.038 
280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cache.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.038 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cache.expiration_time = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.038 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.038 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cache.hashclient_retry_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.038 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cache.memcache_dead_retry = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost systemd[1]: Started libcrun container. 
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.038 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cache.memcache_password = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.039 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.039 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.039 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cache.memcache_pool_maxsize = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.039 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.039 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cache.memcache_sasl_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.039 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cache.memcache_servers = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m 
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.039 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cache.memcache_socket_timeout = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.040 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cache.memcache_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.040 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cache.proxies = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.040 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cache.retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.040 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cache.retry_delay = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.040 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cache.socket_keepalive_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.040 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cache.socket_keepalive_idle = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.040 280808 
DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.040 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cache.tls_allowed_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.041 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cache.tls_cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.041 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cache.tls_certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.041 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cache.tls_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.041 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cache.tls_keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.041 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cinder.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.041 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - 
- - -] cinder.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.041 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cinder.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.042 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cinder.catalog_info = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.042 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cinder.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.042 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cinder.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.042 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cinder.cross_az_attach = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.042 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cinder.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.042 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cinder.endpoint_template = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.042 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cinder.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.043 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cinder.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.043 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cinder.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.043 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cinder.os_region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.043 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cinder.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.043 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cinder.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.043 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m 
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.043 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] compute.cpu_dedicated_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.043 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] compute.cpu_shared_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.044 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.044 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.044 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.044 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.044 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.044 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.044 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.045 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.045 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] compute.vmdk_allowed_types = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.045 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] conductor.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.045 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] console.allowed_origins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.045 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] 
console.ssl_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.045 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] console.ssl_minimum_version = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.045 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] consoleauth.token_ttl = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.046 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cyborg.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.046 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cyborg.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.046 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cyborg.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.046 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cyborg.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.046 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cyborg.connect_retry_delay = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.046 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cyborg.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.046 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cyborg.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.046 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cyborg.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.047 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cyborg.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.047 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cyborg.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f390caecfa51997bf831f419da2a1bdf21d0cbd950e5f16193a8e1e43e37e62d/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff) Feb 20 04:35:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f390caecfa51997bf831f419da2a1bdf21d0cbd950e5f16193a8e1e43e37e62d/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.047 280808 DEBUG 
oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cyborg.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.047 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cyborg.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.047 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cyborg.service_type = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.047 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cyborg.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.047 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cyborg.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f390caecfa51997bf831f419da2a1bdf21d0cbd950e5f16193a8e1e43e37e62d/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff) Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.048 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.048 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] 
cyborg.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.048 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cyborg.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.048 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] cyborg.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.048 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.048 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.048 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.048 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.049 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] database.connection_recycle_time = 3600 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.049 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.049 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.049 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.049 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.049 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.050 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.050 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] database.max_pool_size = 5 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.050 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.050 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.050 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.050 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.050 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.050 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.051 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] database.slave_connection = **** log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.051 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.051 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] api_database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.051 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] api_database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.051 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] api_database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.051 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] api_database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.051 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.052 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] api_database.connection_trace = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.052 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.052 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] api_database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.052 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.052 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.052 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] api_database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.052 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] api_database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.053 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] api_database.max_retries = 10 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.053 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] api_database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.053 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] api_database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.053 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.053 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] api_database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.053 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] api_database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.053 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] api_database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.054 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] api_database.sqlite_synchronous = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.054 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] devices.enabled_mdev_types = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.054 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.054 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.054 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.054 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] glance.api_servers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.054 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] glance.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.055 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] glance.certfile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.055 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] glance.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.055 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] glance.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.055 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] glance.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.055 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] glance.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.055 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.056 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.056 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] glance.enable_rbd_download = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.056 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] glance.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.056 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] glance.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.056 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] glance.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.056 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] glance.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.056 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] glance.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.057 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] glance.num_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.057 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] glance.rbd_ceph_conf = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost 
nova_compute[280804]: 2026-02-20 09:35:42.057 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] glance.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost podman[280975]: 2026-02-20 09:35:42.056751059 +0000 UTC m=+0.150071507 container init 29f11e275ca2f653c4911d7a399e43eccafc082e16d52ec9d06a475965db1dea (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute_init, config_id=nova_compute_init, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': '832c02321984b9faf87ea1e4ce5a0ef3adf9381e48f5c448f10d738061843ddd'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.057 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] glance.rbd_pool = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost 
nova_compute[280804]: 2026-02-20 09:35:42.057 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] glance.rbd_user = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.058 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] glance.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.058 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] glance.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.058 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] glance.service_type = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.058 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] glance.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.058 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] glance.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.058 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.058 280808 DEBUG 
oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] glance.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.059 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] glance.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.059 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.059 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] glance.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.059 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] guestfs.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.059 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] hyperv.config_drive_cdrom = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.060 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.060 280808 DEBUG oslo_service.service [None 
req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] hyperv.dynamic_memory_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.060 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.060 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] hyperv.enable_remotefx = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.060 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] hyperv.instances_path_share = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.060 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] hyperv.iscsi_initiator_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.060 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] hyperv.limit_cpu_features = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.061 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.061 280808 DEBUG oslo_service.service [None 
req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.061 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.061 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.061 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] hyperv.qemu_img_cmd = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.062 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] hyperv.use_multipath_io = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.062 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.062 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.062 280808 DEBUG oslo_service.service [None 
req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] hyperv.vswitch_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.062 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.062 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] mks.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.063 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] mks.mksproxy_base_url = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.063 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] image_cache.manager_interval = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.063 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.063 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.063 280808 DEBUG oslo_service.service [None 
req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.063 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.063 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] image_cache.subdirectory_name = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.064 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] ironic.api_max_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.064 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] ironic.api_retry_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.064 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.064 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.064 280808 DEBUG oslo_service.service [None 
req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] ironic.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.064 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] ironic.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.065 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] ironic.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.065 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] ironic.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.065 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.065 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.065 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] ironic.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.065 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] ironic.keyfile = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.065 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] ironic.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.066 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.066 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] ironic.partition_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.066 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] ironic.peer_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.066 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.066 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.066 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] ironic.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m 
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.066 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] ironic.service_type = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.067 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.067 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] ironic.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.067 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.067 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.067 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] ironic.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.067 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 
2026-02-20 09:35:42.068 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] key_manager.backend = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.068 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] key_manager.fixed_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.068 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] barbican.auth_endpoint = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.068 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] barbican.barbican_api_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.068 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] barbican.barbican_endpoint = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.068 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.068 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] barbican.barbican_region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 
2026-02-20 09:35:42.069 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] barbican.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.069 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] barbican.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.069 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] barbican.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.069 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] barbican.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.069 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] barbican.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.070 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] barbican.number_of_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.070 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] barbican.retry_delay = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost podman[280975]: 2026-02-20 09:35:42.070293463 +0000 UTC m=+0.163613911 container start 
29f11e275ca2f653c4911d7a399e43eccafc082e16d52ec9d06a475965db1dea (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=nova_compute_init, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=nova_compute_init, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': '832c02321984b9faf87ea1e4ce5a0ef3adf9381e48f5c448f10d738061843ddd'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible) Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.070 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.070 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] barbican.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.070 280808 DEBUG oslo_service.service 
[None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] barbican.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.070 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] barbican.verify_ssl = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.071 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] barbican.verify_ssl_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.071 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.071 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.071 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] barbican_service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.071 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.071 280808 DEBUG oslo_service.service [None 
req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.071 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.072 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] barbican_service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.072 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.072 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] barbican_service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.072 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vault.approle_role_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.072 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vault.approle_secret_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.072 280808 DEBUG oslo_service.service [None 
req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vault.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.072 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vault.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.073 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vault.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.073 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vault.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.073 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vault.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.073 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vault.kv_mountpoint = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.073 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vault.kv_version = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.073 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vault.namespace = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.073 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vault.root_token_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.073 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vault.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.074 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vault.ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.074 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vault.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.074 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vault.use_ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.074 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vault.vault_url = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.074 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] keystone.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 
localhost nova_compute[280804]: 2026-02-20 09:35:42.074 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] keystone.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.074 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] keystone.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.075 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] keystone.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.075 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] keystone.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.075 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] keystone.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.075 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] keystone.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost python3.9[280949]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.075 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] keystone.keyfile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.075 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] keystone.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.075 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] keystone.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.076 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] keystone.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.076 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] keystone.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.076 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] keystone.service_type = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.076 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] keystone.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.076 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] keystone.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m 
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.076 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.076 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] keystone.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.077 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] keystone.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.077 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] keystone.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.077 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.connection_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.077 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.cpu_mode = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.077 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.cpu_model_extra_flags = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 
2026-02-20 09:35:42.077 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.cpu_models = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.077 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.078 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.078 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.cpu_power_management = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.078 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.078 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.078 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.device_detach_timeout = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 
2026-02-20 09:35:42.078 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.disk_cachemodes = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.078 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.disk_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.079 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.enabled_perf_events = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.079 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.file_backed_memory = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.079 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.gid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.079 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.hw_disk_discard = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.079 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.hw_machine_type = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.079 280808 DEBUG oslo_service.service [None 
req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.images_rbd_ceph_conf = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.079 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.080 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.080 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.080 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.images_rbd_pool = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.080 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.images_type = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.080 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.images_volume_group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.080 280808 DEBUG 
oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.inject_key = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.080 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.inject_partition = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.081 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.081 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.iscsi_iface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.081 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.iser_use_multipath = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.081 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.081 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.081 280808 DEBUG oslo_service.service [None 
req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.081 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.082 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.082 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.082 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.082 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.082 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.live_migration_scheme = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.082 280808 DEBUG 
oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.082 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.083 280808 WARNING oslo_config.cfg [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal ( Feb 20 04:35:42 localhost nova_compute[280804]: live_migration_uri is deprecated for removal in favor of two other options that Feb 20 04:35:42 localhost nova_compute[280804]: allow to change live migration scheme and target URI: ``live_migration_scheme`` Feb 20 04:35:42 localhost nova_compute[280804]: and ``live_migration_inbound_addr`` respectively. Feb 20 04:35:42 localhost nova_compute[280804]: ). 
Its value may be silently ignored in the future.#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.083 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.live_migration_uri = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.083 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.live_migration_with_native_tls = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.083 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.max_queues = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.083 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.083 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.nfs_mount_options = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.084 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.nfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.084 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] 
libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.084 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.num_iser_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.084 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.084 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.084 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.num_pcie_ports = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.084 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.num_volume_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.085 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.pmem_namespaces = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.085 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.quobyte_client_cfg = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.085 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.085 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.085 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.085 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.085 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.rbd_secret_uuid = a8557ee9-b55d-5519-942c-cf8f6172f1d8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.086 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.rbd_user = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.086 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] 
libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.086 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.086 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.rescue_image_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.086 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.rescue_kernel_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.086 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.rescue_ramdisk_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.086 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.rng_dev_path = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.087 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.rx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.087 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.smbfs_mount_options = 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.087 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.087 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.snapshot_compression = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.087 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.snapshot_image_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.087 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.snapshots_directory = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.088 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.088 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.swtpm_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.088 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.swtpm_group 
= tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.088 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.swtpm_user = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.088 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.sysinfo_serial = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.088 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.tx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.088 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.uid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.089 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.089 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.virt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.089 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.volume_clear = zero log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.089 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.volume_clear_size = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.089 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.volume_use_multipath = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.089 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.vzstorage_cache_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.089 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.vzstorage_log_path = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.090 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.vzstorage_mount_group = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.090 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.vzstorage_mount_opts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.090 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.vzstorage_mount_perms = 0770 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.090 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.090 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.vzstorage_mount_user = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.090 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.090 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] neutron.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.091 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] neutron.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.091 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] neutron.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.091 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] neutron.certfile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.091 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] neutron.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.091 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] neutron.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.092 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] neutron.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.092 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] neutron.default_floating_pool = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.092 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] neutron.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.092 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.092 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] neutron.http_retries = 3 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.092 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] neutron.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.092 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] neutron.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.093 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] neutron.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.093 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.093 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] neutron.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.093 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] neutron.ovs_bridge = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.093 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] neutron.physnets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 
localhost nova_compute[280804]: 2026-02-20 09:35:42.093 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] neutron.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.093 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.094 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] neutron.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.094 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] neutron.service_type = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.094 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] neutron.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.094 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] neutron.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.094 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 
09:35:42.094 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] neutron.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.095 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] neutron.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.095 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] neutron.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.095 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.095 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] notifications.default_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.095 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.095 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.095 280808 
DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.096 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] pci.alias = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.096 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] pci.device_spec = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.096 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] pci.report_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.096 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.096 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] placement.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.096 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] placement.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.096 280808 DEBUG 
oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.097 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.097 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.097 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] placement.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.097 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] placement.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.097 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] placement.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.097 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] placement.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.097 280808 DEBUG oslo_service.service [None 
req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] placement.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.098 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] placement.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.098 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] placement.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.098 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.098 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.098 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] placement.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.098 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] placement.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.098 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] placement.password = 
**** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.099 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] placement.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.099 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] placement.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.099 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] placement.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.099 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] placement.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.099 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] placement.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.099 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] placement.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.099 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] placement.service_type = placement log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.100 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.100 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] placement.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.100 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.100 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] placement.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.100 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.100 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] placement.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.100 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] placement.user_domain_id = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.101 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] placement.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.101 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] placement.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.101 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] placement.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.101 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] placement.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.101 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] placement.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.101 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] quota.cores = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.101 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m 
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.102 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] quota.driver = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.102 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.102 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.102 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] quota.injected_files = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.102 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] quota.instances = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.102 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] quota.key_pairs = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.102 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] quota.metadata_items = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 
09:35:42.102 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] quota.ram = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.103 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] quota.recheck_quota = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.103 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] quota.server_group_members = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.103 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] quota.server_groups = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.103 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] rdp.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.103 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] rdp.html5_proxy_base_url = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.103 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.104 280808 DEBUG oslo_service.service [None 
req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.104 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.104 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.104 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] scheduler.max_attempts = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.104 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.104 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.104 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 
2026-02-20 09:35:42.105 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.105 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.105 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] scheduler.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.105 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.105 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.105 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.105 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.106 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.106 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.106 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.106 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.106 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.106 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.106 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.107 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.107 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.107 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.107 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] 
filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.107 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.107 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.107 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.108 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.108 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.108 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.108 280808 DEBUG 
oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.108 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.108 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.108 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] metrics.required = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.109 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] metrics.weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.109 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] metrics.weight_of_unavailable = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.109 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] metrics.weight_setting = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 
2026-02-20 09:35:42.109 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] serial_console.base_url = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.109 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] serial_console.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.109 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] serial_console.port_range = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.109 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.110 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.110 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.110 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost 
nova_compute[280804]: 2026-02-20 09:35:42.110 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] service_user.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.110 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.110 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.110 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.111 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.111 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.111 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.111 
280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.111 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.111 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] spice.agent_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.111 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] spice.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.112 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] spice.html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.112 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] spice.html5proxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.112 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] spice.html5proxy_port = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.112 280808 DEBUG oslo_service.service 
[None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] spice.image_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.112 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] spice.jpeg_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.112 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] spice.playback_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.112 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] spice.server_listen = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.112 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.113 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] spice.streaming_mode = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.113 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] spice.zlib_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.113 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e 
- - - - - -] upgrade_levels.baseapi = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.113 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] upgrade_levels.cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.113 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] upgrade_levels.compute = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.113 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] upgrade_levels.conductor = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.113 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] upgrade_levels.scheduler = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.114 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.114 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.114 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] 
vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.114 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.114 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.114 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.114 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.114 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.115 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.115 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e 
- - - - - -] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.115 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.115 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vmware.cache_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.115 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vmware.cluster_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.115 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vmware.connection_pool_size = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.115 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vmware.console_delay_seconds = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.115 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vmware.datastore_regex = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.116 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vmware.host_ip = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.116 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.116 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.116 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vmware.host_username = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.116 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.116 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vmware.integration_bridge = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.116 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vmware.maximum_objects = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.116 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vmware.pbm_default_policy = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 
04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.117 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vmware.pbm_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.117 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vmware.pbm_wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.117 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vmware.serial_log_dir = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.117 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vmware.serial_port_proxy_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.117 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.117 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.117 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vmware.use_linked_clone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 
2026-02-20 09:35:42.118 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vmware.vnc_keymap = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.118 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vmware.vnc_port = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.118 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vmware.vnc_port_total = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.118 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vnc.auth_schemes = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.118 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vnc.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.118 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vnc.novncproxy_base_url = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.119 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vnc.novncproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 
09:35:42.119 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vnc.novncproxy_port = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.119 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vnc.server_listen = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.119 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vnc.server_proxyclient_address = 192.168.122.106 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.119 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vnc.vencrypt_ca_certs = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.119 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vnc.vencrypt_client_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.119 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vnc.vencrypt_client_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.120 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.120 280808 DEBUG 
oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.120 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.120 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.120 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.120 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] workarounds.disable_rootwrap = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.120 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.121 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 
localhost nova_compute[280804]: 2026-02-20 09:35:42.121 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.121 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.121 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.121 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.121 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.121 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.122 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.122 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.122 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.122 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.122 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.122 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.122 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.123 280808 DEBUG oslo_service.service [None 
req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] wsgi.api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.123 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] wsgi.client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.123 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] wsgi.default_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.123 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] wsgi.keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.123 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] wsgi.max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.123 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] wsgi.secure_proxy_ssl_header = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.123 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] wsgi.ssl_ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.124 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] 
wsgi.ssl_cert_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.124 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] wsgi.ssl_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.124 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] wsgi.tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.124 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] wsgi.wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.124 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] zvm.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.124 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] zvm.cloud_connector_url = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.124 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] zvm.image_tmp_path = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.125 280808 DEBUG oslo_service.service [None 
req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] zvm.reachable_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.125 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.125 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_policy.enforce_scope = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.125 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.125 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.125 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.126 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.126 280808 DEBUG 
oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.126 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.126 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.126 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.126 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.126 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.127 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] remote_debug.host = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost 
nova_compute[280804]: 2026-02-20 09:35:42.127 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] remote_debug.port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.127 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.127 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.127 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.127 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.127 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.128 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.128 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.128 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.128 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.128 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.128 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.128 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.129 280808 DEBUG oslo_service.service [None 
req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.129 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.129 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.129 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.129 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute_init[280995]: INFO:nova_statedir:Applying nova statedir ownership Feb 20 04:35:42 localhost nova_compute_init[280995]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436 Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.129 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute_init[280995]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: 
/var/lib/nova/ Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.129 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute_init[280995]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436 Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.130 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute_init[280995]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0 Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.130 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute_init[280995]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/ Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.130 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute_init[280995]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436 Feb 20 04:35:42 localhost nova_compute_init[280995]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0 Feb 20 04:35:42 localhost nova_compute_init[280995]: 
INFO:nova_statedir:Checking uid: 0 gid: 0 path: /var/lib/nova/delay-nova-compute Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.130 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute_init[280995]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ Feb 20 04:35:42 localhost nova_compute_init[280995]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436 Feb 20 04:35:42 localhost nova_compute_init[280995]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0 Feb 20 04:35:42 localhost nova_compute_init[280995]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.130 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute_init[280995]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config Feb 20 04:35:42 localhost nova_compute_init[280995]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/ Feb 20 04:35:42 localhost nova_compute_init[280995]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache already 42436:42436 Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.130 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute_init[280995]: INFO:nova_statedir:Setting 
selinux context of /var/lib/nova/.cache to system_u:object_r:container_file_t:s0 Feb 20 04:35:42 localhost nova_compute_init[280995]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/ Feb 20 04:35:42 localhost nova_compute_init[280995]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache/python-entrypoints already 42436:42436 Feb 20 04:35:42 localhost nova_compute_init[280995]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache/python-entrypoints to system_u:object_r:container_file_t:s0 Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.130 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.131 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute_init[280995]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/fc52238ffcbdcb325c6bf3fe6412477fc4bdb6cd9151f39289b74f25e08e0db9 Feb 20 04:35:42 localhost nova_compute_init[280995]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/d301d14069645d8c23fee2987984776b3e88a570e1aa96d6cf3e31fa880385fd Feb 20 04:35:42 localhost nova_compute_init[280995]: INFO:nova_statedir:Nova statedir ownership complete Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.131 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 
09:35:42.131 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.131 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.131 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.131 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.131 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.132 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.132 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 
localhost nova_compute[280804]: 2026-02-20 09:35:42.132 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_limit.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.132 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_limit.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.132 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_limit.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.132 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_limit.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.132 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_limit.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.133 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_limit.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.133 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_limit.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost 
nova_compute[280804]: 2026-02-20 09:35:42.133 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.133 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_limit.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.133 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.133 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_limit.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.133 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_limit.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.134 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_limit.endpoint_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.134 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_limit.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 
09:35:42.134 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_limit.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.134 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_limit.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.134 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_limit.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.134 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_limit.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.134 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_limit.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.135 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_limit.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.135 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.135 280808 DEBUG oslo_service.service [None 
req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_limit.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.135 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_limit.project_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.135 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_limit.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.135 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_limit.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.135 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_limit.service_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.136 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_limit.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.136 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.136 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] 
oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.136 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_limit.system_scope = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.136 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_limit.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.136 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_limit.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.136 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_limit.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.137 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_limit.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.137 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_limit.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.137 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_limit.username = nova log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.137 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_limit.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.137 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_limit.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.137 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.137 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost systemd[1]: libpod-29f11e275ca2f653c4911d7a399e43eccafc082e16d52ec9d06a475965db1dea.scope: Deactivated successfully. 
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.138 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] oslo_reports.log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.138 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.138 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.138 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.138 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.138 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.138 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vif_plug_linux_bridge_privileged.user = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.139 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.139 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vif_plug_ovs_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.139 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.139 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.139 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.139 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] vif_plug_ovs_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.139 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] 
os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.140 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.140 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] os_vif_linux_bridge.iptables_bottom_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.140 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.140 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] os_vif_linux_bridge.iptables_top_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.140 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.140 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] os_vif_linux_bridge.use_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.140 280808 DEBUG oslo_service.service [None 
req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.141 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] os_vif_ovs.isolate_vif = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.141 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] os_vif_ovs.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.141 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] os_vif_ovs.ovs_vsctl_timeout = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.141 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] os_vif_ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.141 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] os_vif_ovs.ovsdb_interface = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.141 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] os_vif_ovs.per_port_bridge = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.142 280808 DEBUG oslo_service.service [None 
req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] os_brick.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.142 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.142 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.142 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] privsep_osbrick.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.142 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] privsep_osbrick.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.142 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.142 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] privsep_osbrick.logger_name = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.143 280808 DEBUG oslo_service.service [None 
req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.143 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] privsep_osbrick.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.143 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] nova_sys_admin.capabilities = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.143 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] nova_sys_admin.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.143 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] nova_sys_admin.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.143 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] nova_sys_admin.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost podman[280996]: 2026-02-20 09:35:42.143733353 +0000 UTC m=+0.054119690 container died 29f11e275ca2f653c4911d7a399e43eccafc082e16d52ec9d06a475965db1dea (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 
'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': '832c02321984b9faf87ea1e4ce5a0ef3adf9381e48f5c448f10d738061843ddd'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, config_id=nova_compute_init, container_name=nova_compute_init, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.143 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.143 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] nova_sys_admin.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.144 280808 DEBUG oslo_service.service [None req-e0d4ee1a-75cb-4970-81fc-5a9bd2707c3e - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.144 280808 
INFO nova.service [-] Starting compute node (version 27.5.2-0.20260127144738.eaa65f0.el9)#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.160 280808 INFO nova.virt.node [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] Determined node identity 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 from /var/lib/nova/compute_id#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.160 280808 DEBUG nova.virt.libvirt.host [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.161 280808 DEBUG nova.virt.libvirt.host [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.161 280808 DEBUG nova.virt.libvirt.host [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.161 280808 DEBUG nova.virt.libvirt.host [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.172 280808 DEBUG nova.virt.libvirt.host [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] Registering for lifecycle events _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.175 280808 DEBUG nova.virt.libvirt.host [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] Registering for connection 
events: _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.175 280808 INFO nova.virt.libvirt.driver [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] Connection event '1' reason 'None'#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.182 280808 INFO nova.virt.libvirt.host [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] Libvirt host capabilities [libvirt host capabilities XML elided: the markup was stripped in this capture, leaving only element values across repeated syslog prefixes; recoverable values: host UUID 61530aa3-6295-40fa-9f19-edfd227b2bca, arch x86_64, CPU model EPYC-Rome-v4 (vendor AMD), migration transports tcp and rdma, memory 16116612 KiB (4029153 pages), secmodel selinux (labels system_u:system_r:svirt_t:s0 and system_u:system_r:svirt_tcg_t:s0) and secmodel dac (+107:+107), hvm guest support for 32-bit and 64-bit via emulator /usr/libexec/qemu-kvm with machine types pc-i440fx-rhel7.6.0 (canonical: pc) and pc-q35-rhel7.6.0 through pc-q35-rhel9.8.0 (canonical: q35)]#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.190 280808 DEBUG nova.virt.libvirt.host [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] Getting domain capabilities for i686 via
machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.191 280808 DEBUG nova.virt.libvirt.volume.mount [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.199 280808 DEBUG nova.virt.libvirt.host [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35: [libvirt domain capabilities XML elided: the markup was stripped in this capture, leaving only element values across repeated syslog prefixes; recoverable values: emulator /usr/libexec/qemu-kvm, domain type kvm, machine pc-q35-rhel9.8.0, arch i686, OVMF firmware /usr/share/OVMF/OVMF_CODE.secboot.fd with loader types rom and pflash, host CPU model EPYC-Rome (vendor AMD), supported named CPU models including 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1 through Broadwell-v4, Cascadelake-Server, and Cascadelake-Server-noTSX; record truncated mid-stream in this capture]
localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Cascadelake-Server-v1 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Cascadelake-Server-v2 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Cascadelake-Server-v3 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost 
nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Cascadelake-Server-v4 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Cascadelake-Server-v5 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: ClearwaterForest Feb 20 04:35:42 localhost 
nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 
04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: ClearwaterForest-v1 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 
localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Conroe Feb 20 04:35:42 localhost nova_compute[280804]: Conroe-v1 Feb 20 04:35:42 localhost nova_compute[280804]: Cooperlake Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Cooperlake-v1 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 
20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Cooperlake-v2 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Denverton Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Denverton-v1 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Denverton-v2 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Denverton-v3 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Dhyana Feb 20 
04:35:42 localhost nova_compute[280804]: Dhyana-v1 Feb 20 04:35:42 localhost nova_compute[280804]: Dhyana-v2 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: EPYC Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-Genoa Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-Genoa-v1 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost 
nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-Genoa-v2 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost 
nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-IBPB Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-Milan Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-Milan-v1 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-Milan-v2 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 
04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-Milan-v3 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-Rome Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-Rome-v1 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-Rome-v2 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-Rome-v3 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 
localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-Rome-v4 Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-Rome-v5 Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-Turin Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 
04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-Turin-v1 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 
localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-v1 Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-v2 Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-v3 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-v4 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-v5 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: GraniteRapids Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost 
nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: GraniteRapids-v1 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost 
Feb 20 04:35:42 localhost nova_compute[280804]: [libvirt CPU capability output; markup lost in extraction, recoverable model names:] GraniteRapids-v2 GraniteRapids-v3 Haswell Haswell-IBRS Haswell-noTSX Haswell-noTSX-IBRS Haswell-v1 Haswell-v2 Haswell-v3 Haswell-v4 Icelake-Server Icelake-Server-noTSX Icelake-Server-v1 Icelake-Server-v2 Icelake-Server-v3 Icelake-Server-v4 Icelake-Server-v5 Icelake-Server-v6 Icelake-Server-v7 IvyBridge IvyBridge-IBRS IvyBridge-v1 IvyBridge-v2 KnightsMill KnightsMill-v1 Nehalem Nehalem-IBRS Nehalem-v1 Nehalem-v2 Opteron_G1 Opteron_G1-v1 Opteron_G2 Opteron_G2-v1 Opteron_G3 Opteron_G3-v1 Opteron_G4 Opteron_G4-v1 Opteron_G5 Opteron_G5-v1 Penryn Penryn-v1 SandyBridge SandyBridge-IBRS SandyBridge-v1 SandyBridge-v2 SapphireRapids SapphireRapids-v1 SapphireRapids-v2 SapphireRapids-v3 SapphireRapids-v4 SierraForest SierraForest-v1 SierraForest-v2 SierraForest-v3
nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost podman[281007]: 2026-02-20 09:35:42.239138785 +0000 UTC m=+0.085067135 container cleanup 29f11e275ca2f653c4911d7a399e43eccafc082e16d52ec9d06a475965db1dea (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.schema-version=1.0, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': '832c02321984b9faf87ea1e4ce5a0ef3adf9381e48f5c448f10d738061843ddd'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=nova_compute_init, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 
04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Skylake-Client Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Skylake-Client-IBRS Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Skylake-Client-noTSX-IBRS Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Skylake-Client-v1 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 
04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Skylake-Client-v2 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Skylake-Client-v3 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Skylake-Client-v4 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Skylake-Server Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: 
Skylake-Server-IBRS Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Skylake-Server-noTSX-IBRS Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Skylake-Server-v1 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: 
Skylake-Server-v2 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Skylake-Server-v3 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Skylake-Server-v4 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost systemd[1]: libpod-conmon-29f11e275ca2f653c4911d7a399e43eccafc082e16d52ec9d06a475965db1dea.scope: Deactivated successfully. 
Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Skylake-Server-v5 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Snowridge Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Snowridge-v1 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost 
nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Snowridge-v2 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Snowridge-v3 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Snowridge-v4 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Westmere Feb 20 04:35:42 localhost nova_compute[280804]: Westmere-IBRS Feb 20 04:35:42 localhost nova_compute[280804]: Westmere-v1 Feb 20 04:35:42 localhost nova_compute[280804]: Westmere-v2 Feb 20 04:35:42 localhost nova_compute[280804]: athlon Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 
04:35:42 localhost nova_compute[280804]: athlon-v1 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: core2duo Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: core2duo-v1 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: coreduo Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: coreduo-v1 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: kvm32 Feb 20 04:35:42 localhost nova_compute[280804]: kvm32-v1 Feb 20 04:35:42 localhost nova_compute[280804]: kvm64 Feb 20 04:35:42 localhost nova_compute[280804]: kvm64-v1 Feb 20 04:35:42 localhost nova_compute[280804]: n270 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: n270-v1 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: pentium Feb 20 04:35:42 localhost nova_compute[280804]: pentium-v1 Feb 20 04:35:42 localhost nova_compute[280804]: pentium2 Feb 20 04:35:42 localhost nova_compute[280804]: pentium2-v1 Feb 20 04:35:42 localhost nova_compute[280804]: pentium3 Feb 20 04:35:42 localhost 
nova_compute[280804]: pentium3-v1 Feb 20 04:35:42 localhost nova_compute[280804]: phenom Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: phenom-v1 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: qemu32 Feb 20 04:35:42 localhost nova_compute[280804]: qemu32-v1 Feb 20 04:35:42 localhost nova_compute[280804]: qemu64 Feb 20 04:35:42 localhost nova_compute[280804]: qemu64-v1 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: file Feb 20 04:35:42 localhost nova_compute[280804]: anonymous Feb 20 04:35:42 localhost nova_compute[280804]: memfd Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: disk Feb 20 04:35:42 localhost nova_compute[280804]: cdrom Feb 20 04:35:42 localhost nova_compute[280804]: floppy Feb 20 04:35:42 localhost nova_compute[280804]: lun Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: fdc Feb 20 04:35:42 localhost nova_compute[280804]: scsi Feb 20 04:35:42 localhost nova_compute[280804]: virtio Feb 20 04:35:42 localhost nova_compute[280804]: usb Feb 20 04:35:42 localhost nova_compute[280804]: sata Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 
localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: virtio Feb 20 04:35:42 localhost nova_compute[280804]: virtio-transitional Feb 20 04:35:42 localhost nova_compute[280804]: virtio-non-transitional Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: vnc Feb 20 04:35:42 localhost nova_compute[280804]: egl-headless Feb 20 04:35:42 localhost nova_compute[280804]: dbus Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: subsystem Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: default Feb 20 04:35:42 localhost nova_compute[280804]: mandatory Feb 20 04:35:42 localhost nova_compute[280804]: requisite Feb 20 04:35:42 localhost nova_compute[280804]: optional Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: usb Feb 20 04:35:42 localhost nova_compute[280804]: pci Feb 20 04:35:42 localhost nova_compute[280804]: scsi Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: virtio Feb 20 04:35:42 localhost nova_compute[280804]: virtio-transitional Feb 20 04:35:42 localhost nova_compute[280804]: virtio-non-transitional Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 
04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: random Feb 20 04:35:42 localhost nova_compute[280804]: egd Feb 20 04:35:42 localhost nova_compute[280804]: builtin Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: path Feb 20 04:35:42 localhost nova_compute[280804]: handle Feb 20 04:35:42 localhost nova_compute[280804]: virtiofs Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: tpm-tis Feb 20 04:35:42 localhost nova_compute[280804]: tpm-crb Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: emulator Feb 20 04:35:42 localhost nova_compute[280804]: external Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: 2.0 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: usb Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: pty Feb 20 04:35:42 localhost nova_compute[280804]: unix Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost 
nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: qemu Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: builtin Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: default Feb 20 04:35:42 localhost nova_compute[280804]: passt Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: isa Feb 20 04:35:42 localhost nova_compute[280804]: hyperv Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: null Feb 20 04:35:42 localhost nova_compute[280804]: vc Feb 20 04:35:42 localhost nova_compute[280804]: pty Feb 20 04:35:42 localhost nova_compute[280804]: dev Feb 20 04:35:42 localhost nova_compute[280804]: file Feb 20 04:35:42 localhost nova_compute[280804]: pipe Feb 20 04:35:42 localhost nova_compute[280804]: stdio Feb 20 04:35:42 localhost nova_compute[280804]: udp Feb 20 04:35:42 localhost nova_compute[280804]: tcp Feb 20 04:35:42 localhost nova_compute[280804]: unix Feb 20 04:35:42 localhost nova_compute[280804]: qemu-vdagent Feb 20 04:35:42 localhost nova_compute[280804]: dbus Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 
20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: relaxed Feb 20 04:35:42 localhost nova_compute[280804]: vapic Feb 20 04:35:42 localhost nova_compute[280804]: spinlocks Feb 20 04:35:42 localhost nova_compute[280804]: vpindex Feb 20 04:35:42 localhost nova_compute[280804]: runtime Feb 20 04:35:42 localhost nova_compute[280804]: synic Feb 20 04:35:42 localhost nova_compute[280804]: stimer Feb 20 04:35:42 localhost nova_compute[280804]: reset Feb 20 04:35:42 localhost nova_compute[280804]: vendor_id Feb 20 04:35:42 localhost nova_compute[280804]: frequencies Feb 20 04:35:42 localhost nova_compute[280804]: reenlightenment Feb 20 04:35:42 localhost nova_compute[280804]: tlbflush Feb 20 04:35:42 localhost nova_compute[280804]: ipi Feb 20 04:35:42 localhost nova_compute[280804]: avic Feb 20 04:35:42 localhost nova_compute[280804]: emsr_bitmap Feb 20 04:35:42 localhost nova_compute[280804]: xmm_input Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: 4095 Feb 20 04:35:42 localhost nova_compute[280804]: on Feb 20 04:35:42 localhost nova_compute[280804]: off Feb 20 04:35:42 localhost nova_compute[280804]: off Feb 20 04:35:42 localhost nova_compute[280804]: Linux KVM Hv Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 
04:35:42 localhost nova_compute[280804]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.208 280808 DEBUG nova.virt.libvirt.host [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: /usr/libexec/qemu-kvm Feb 20 04:35:42 localhost nova_compute[280804]: kvm Feb 20 04:35:42 localhost nova_compute[280804]: pc-i440fx-rhel7.6.0 Feb 20 04:35:42 localhost nova_compute[280804]: i686 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: /usr/share/OVMF/OVMF_CODE.secboot.fd Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: rom Feb 20 04:35:42 localhost nova_compute[280804]: pflash Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: yes Feb 20 04:35:42 localhost nova_compute[280804]: no Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: no Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: on Feb 20 04:35:42 localhost nova_compute[280804]: off Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 
localhost nova_compute[280804]: on off EPYC-Rome AMD 486 486-v1 Broadwell Broadwell-IBRS Broadwell-noTSX Broadwell-noTSX-IBRS Broadwell-v1 Broadwell-v2 Broadwell-v3 Broadwell-v4 Cascadelake-Server Cascadelake-Server-noTSX Cascadelake-Server-v1 Cascadelake-Server-v2 Cascadelake-Server-v3 Cascadelake-Server-v4 Cascadelake-Server-v5 ClearwaterForest ClearwaterForest-v1 Conroe Conroe-v1 Cooperlake Cooperlake-v1 Cooperlake-v2 Denverton Denverton-v1 Denverton-v2 Denverton-v3 Dhyana Dhyana-v1 Dhyana-v2 EPYC EPYC-Genoa EPYC-Genoa-v1 EPYC-Genoa-v2 EPYC-IBPB EPYC-Milan EPYC-Milan-v1 EPYC-Milan-v2 EPYC-Milan-v3 EPYC-Rome EPYC-Rome-v1 EPYC-Rome-v2 EPYC-Rome-v3 EPYC-Rome-v4 EPYC-Rome-v5 EPYC-Turin EPYC-Turin-v1 EPYC-v1 EPYC-v2 EPYC-v3 EPYC-v4 EPYC-v5 GraniteRapids GraniteRapids-v1 GraniteRapids-v2 GraniteRapids-v3 Haswell Haswell-IBRS Haswell-noTSX Haswell-noTSX-IBRS
nova_compute[280804]: Haswell-v1 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Haswell-v2 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Haswell-v3 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Haswell-v4 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Icelake-Server Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost 
nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Icelake-Server-noTSX Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Icelake-Server-v1 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 
localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Icelake-Server-v2 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Icelake-Server-v3 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 
04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Icelake-Server-v4 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 
localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Icelake-Server-v5 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Icelake-Server-v6 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 
04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Icelake-Server-v7 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: IvyBridge Feb 20 
04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: IvyBridge-IBRS Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: IvyBridge-v1 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: IvyBridge-v2 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: KnightsMill Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: KnightsMill-v1 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Nehalem Feb 20 
04:35:42 localhost nova_compute[280804]: Nehalem-IBRS Feb 20 04:35:42 localhost nova_compute[280804]: Nehalem-v1 Feb 20 04:35:42 localhost nova_compute[280804]: Nehalem-v2 Feb 20 04:35:42 localhost nova_compute[280804]: Opteron_G1 Feb 20 04:35:42 localhost nova_compute[280804]: Opteron_G1-v1 Feb 20 04:35:42 localhost nova_compute[280804]: Opteron_G2 Feb 20 04:35:42 localhost nova_compute[280804]: Opteron_G2-v1 Feb 20 04:35:42 localhost nova_compute[280804]: Opteron_G3 Feb 20 04:35:42 localhost nova_compute[280804]: Opteron_G3-v1 Feb 20 04:35:42 localhost nova_compute[280804]: Opteron_G4 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Opteron_G4-v1 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Opteron_G5 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Opteron_G5-v1 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Penryn Feb 20 04:35:42 localhost nova_compute[280804]: Penryn-v1 Feb 20 04:35:42 localhost nova_compute[280804]: SandyBridge Feb 20 04:35:42 localhost nova_compute[280804]: SandyBridge-IBRS Feb 20 04:35:42 localhost nova_compute[280804]: SandyBridge-v1 Feb 20 04:35:42 localhost nova_compute[280804]: 
SandyBridge-v2 Feb 20 04:35:42 localhost nova_compute[280804]: SapphireRapids Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: 
Feb 20 04:35:42 localhost nova_compute[280804]: SapphireRapids-v1 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 
04:35:42 localhost nova_compute[280804]: SapphireRapids-v2 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 
localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: SapphireRapids-v3 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost 
nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: SapphireRapids-v4 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost 
nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: SierraForest Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost 
Feb 20 04:35:42 localhost nova_compute[280804]: [libvirt domainCapabilities XML dump; element tags were lost in log capture — recoverable values below, grouped by inferred capability section]
Feb 20 04:35:42 localhost nova_compute[280804]: CPU models (continued): SierraForest-v1, SierraForest-v2, SierraForest-v3, Skylake-Client, Skylake-Client-IBRS, Skylake-Client-noTSX-IBRS, Skylake-Client-v1, Skylake-Client-v2, Skylake-Client-v3, Skylake-Client-v4, Skylake-Server, Skylake-Server-IBRS, Skylake-Server-noTSX-IBRS, Skylake-Server-v1, Skylake-Server-v2, Skylake-Server-v3, Skylake-Server-v4, Skylake-Server-v5, Snowridge, Snowridge-v1, Snowridge-v2, Snowridge-v3, Snowridge-v4, Westmere, Westmere-IBRS, Westmere-v1, Westmere-v2, athlon, athlon-v1, core2duo, core2duo-v1, coreduo, coreduo-v1, kvm32, kvm32-v1, kvm64, kvm64-v1, n270, n270-v1, pentium, pentium-v1, pentium2, pentium2-v1, pentium3, pentium3-v1, phenom, phenom-v1, qemu32, qemu32-v1, qemu64, qemu64-v1
Feb 20 04:35:42 localhost nova_compute[280804]: memory backing: file, anonymous, memfd
Feb 20 04:35:42 localhost nova_compute[280804]: disk devices: disk, cdrom, floppy, lun; buses: ide, fdc, scsi, virtio, usb, sata; models: virtio, virtio-transitional, virtio-non-transitional
Feb 20 04:35:42 localhost nova_compute[280804]: graphics: vnc, egl-headless, dbus
Feb 20 04:35:42 localhost nova_compute[280804]: hostdev: subsystem; startup policies: default, mandatory, requisite, optional; subsystem types: usb, pci, scsi; models: virtio, virtio-transitional, virtio-non-transitional
Feb 20 04:35:42 localhost nova_compute[280804]: rng backends: random, egd, builtin
Feb 20 04:35:42 localhost nova_compute[280804]: filesystem: path, handle, virtiofs
Feb 20 04:35:42 localhost nova_compute[280804]: tpm models: tpm-tis, tpm-crb; backends: emulator, external; version: 2.0
Feb 20 04:35:42 localhost nova_compute[280804]: redirdev bus: usb; channel types: pty, unix
Feb 20 04:35:42 localhost nova_compute[280804]: crypto backends: qemu, builtin
Feb 20 04:35:42 localhost nova_compute[280804]: interface backends: default, passt
Feb 20 04:35:42 localhost nova_compute[280804]: panic models: isa, hyperv
Feb 20 04:35:42 localhost nova_compute[280804]: console/serial types: null, vc, pty, dev, file, pipe, stdio, udp, tcp, unix, qemu-vdagent, dbus
Feb 20 04:35:42 localhost nova_compute[280804]: hyperv features: relaxed, vapic, spinlocks, vpindex, runtime, synic, stimer, reset, vendor_id, frequencies, reenlightenment, tlbflush, ipi, avic, emsr_bitmap, xmm_input; additional values: 4095, on, off, off, Linux KVM Hv
Feb 20 04:35:42 localhost nova_compute[280804]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.261 280808 DEBUG nova.virt.libvirt.host [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.271 280808 DEBUG nova.virt.libvirt.host [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Feb 20 04:35:42 localhost nova_compute[280804]: [libvirt domainCapabilities XML dump; element tags were lost in log capture — recoverable values below, grouped by inferred capability section]
Feb 20 04:35:42 localhost nova_compute[280804]: emulator path: /usr/libexec/qemu-kvm; domain: kvm; machine: pc-q35-rhel9.8.0; arch: x86_64
Feb 20 04:35:42 localhost nova_compute[280804]: firmware: efi; loader values: /usr/share/edk2/ovmf/OVMF_CODE.secboot.fd, /usr/share/edk2/ovmf/OVMF_CODE.fd, /usr/share/edk2/ovmf/OVMF.amdsev.fd, /usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd; loader types: rom, pflash; readonly: yes, no; secure: yes, no; additional on/off values: on, off, on, off
Feb 20 04:35:42 localhost nova_compute[280804]: host CPU model: EPYC-Rome; vendor: AMD
Feb 20 04:35:42 localhost nova_compute[280804]: CPU models: 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1, Broadwell-v2, Broadwell-v3, Broadwell-v4, Cascadelake-Server, Cascadelake-Server-noTSX, Cascadelake-Server-v1, Cascadelake-Server-v2, Cascadelake-Server-v3, Cascadelake-Server-v4, Cascadelake-Server-v5 [list continues]
localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: ClearwaterForest Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost 
nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: ClearwaterForest-v1 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost 
nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Conroe Feb 20 04:35:42 localhost nova_compute[280804]: Conroe-v1 Feb 20 04:35:42 localhost nova_compute[280804]: Cooperlake Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Cooperlake-v1 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 
localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Cooperlake-v2 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Denverton Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Denverton-v1 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Denverton-v2 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost 
nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Denverton-v3 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Dhyana Feb 20 04:35:42 localhost nova_compute[280804]: Dhyana-v1 Feb 20 04:35:42 localhost nova_compute[280804]: Dhyana-v2 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: EPYC Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-Genoa Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: 
Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-Genoa-v1 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-Genoa-v2 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 
20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-IBPB Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-Milan Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-Milan-v1 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: 
Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-Milan-v2 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-Milan-v3 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-Rome Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-Rome-v1 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-Rome-v2 Feb 20 04:35:42 
localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-Rome-v3 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-Rome-v4 Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-Rome-v5 Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-Turin Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost 
nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-Turin-v1 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost 
nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-v1 Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-v2 Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-v3 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-v4 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-v5 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: GraniteRapids Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 
20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: GraniteRapids-v1 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 
localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: GraniteRapids-v2 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost 
Feb 20 04:35:42 localhost nova_compute[280804]: [supported CPU models reported by libvirt; repeated log-prefix noise removed] GraniteRapids-v3 Haswell Haswell-IBRS Haswell-noTSX Haswell-noTSX-IBRS Haswell-v1 Haswell-v2 Haswell-v3 Haswell-v4 Icelake-Server Icelake-Server-noTSX Icelake-Server-v1 Icelake-Server-v2 Icelake-Server-v3 Icelake-Server-v4 Icelake-Server-v5 Icelake-Server-v6 Icelake-Server-v7 IvyBridge IvyBridge-IBRS IvyBridge-v1 IvyBridge-v2 KnightsMill KnightsMill-v1 Nehalem Nehalem-IBRS Nehalem-v1 Nehalem-v2 Opteron_G1 Opteron_G1-v1 Opteron_G2 Opteron_G2-v1 Opteron_G3 Opteron_G3-v1 Opteron_G4 Opteron_G4-v1 Opteron_G5 Opteron_G5-v1 Penryn Penryn-v1 SandyBridge SandyBridge-IBRS SandyBridge-v1 SandyBridge-v2 SapphireRapids SapphireRapids-v1 SapphireRapids-v2 SapphireRapids-v3 SapphireRapids-v4 SierraForest SierraForest-v1 SierraForest-v2 SierraForest-v3 Skylake-Client Skylake-Client-IBRS Skylake-Client-noTSX-IBRS Skylake-Client-v1 Skylake-Client-v2 Skylake-Client-v3 Skylake-Client-v4
nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Skylake-Server Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Skylake-Server-IBRS Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Skylake-Server-noTSX-IBRS Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost 
nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Skylake-Server-v1 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Skylake-Server-v2 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Skylake-Server-v3 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 
20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Skylake-Server-v4 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Skylake-Server-v5 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Snowridge Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Snowridge-v1 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 
localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Snowridge-v2 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Snowridge-v3 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Snowridge-v4 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Westmere Feb 20 04:35:42 localhost nova_compute[280804]: 
Westmere-IBRS Feb 20 04:35:42 localhost nova_compute[280804]: Westmere-v1 Feb 20 04:35:42 localhost nova_compute[280804]: Westmere-v2 Feb 20 04:35:42 localhost nova_compute[280804]: athlon Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: athlon-v1 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: core2duo Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: core2duo-v1 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: coreduo Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: coreduo-v1 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: kvm32 Feb 20 04:35:42 localhost nova_compute[280804]: kvm32-v1 Feb 20 04:35:42 localhost nova_compute[280804]: kvm64 Feb 20 04:35:42 localhost nova_compute[280804]: kvm64-v1 Feb 20 04:35:42 localhost nova_compute[280804]: n270 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: n270-v1 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost 
nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: pentium Feb 20 04:35:42 localhost nova_compute[280804]: pentium-v1 Feb 20 04:35:42 localhost nova_compute[280804]: pentium2 Feb 20 04:35:42 localhost nova_compute[280804]: pentium2-v1 Feb 20 04:35:42 localhost nova_compute[280804]: pentium3 Feb 20 04:35:42 localhost nova_compute[280804]: pentium3-v1 Feb 20 04:35:42 localhost nova_compute[280804]: phenom Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: phenom-v1 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: qemu32 Feb 20 04:35:42 localhost nova_compute[280804]: qemu32-v1 Feb 20 04:35:42 localhost nova_compute[280804]: qemu64 Feb 20 04:35:42 localhost nova_compute[280804]: qemu64-v1 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: file Feb 20 04:35:42 localhost nova_compute[280804]: anonymous Feb 20 04:35:42 localhost nova_compute[280804]: memfd Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: disk Feb 20 04:35:42 localhost nova_compute[280804]: cdrom Feb 20 04:35:42 localhost nova_compute[280804]: floppy Feb 20 04:35:42 localhost nova_compute[280804]: lun Feb 20 04:35:42 localhost 
nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: fdc Feb 20 04:35:42 localhost nova_compute[280804]: scsi Feb 20 04:35:42 localhost nova_compute[280804]: virtio Feb 20 04:35:42 localhost nova_compute[280804]: usb Feb 20 04:35:42 localhost nova_compute[280804]: sata Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: virtio Feb 20 04:35:42 localhost nova_compute[280804]: virtio-transitional Feb 20 04:35:42 localhost nova_compute[280804]: virtio-non-transitional Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: vnc Feb 20 04:35:42 localhost nova_compute[280804]: egl-headless Feb 20 04:35:42 localhost nova_compute[280804]: dbus Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: subsystem Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: default Feb 20 04:35:42 localhost nova_compute[280804]: mandatory Feb 20 04:35:42 localhost nova_compute[280804]: requisite Feb 20 04:35:42 localhost nova_compute[280804]: optional Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: usb Feb 20 04:35:42 localhost nova_compute[280804]: pci Feb 20 04:35:42 localhost nova_compute[280804]: scsi Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost 
nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: virtio Feb 20 04:35:42 localhost nova_compute[280804]: virtio-transitional Feb 20 04:35:42 localhost nova_compute[280804]: virtio-non-transitional Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: random Feb 20 04:35:42 localhost nova_compute[280804]: egd Feb 20 04:35:42 localhost nova_compute[280804]: builtin Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: path Feb 20 04:35:42 localhost nova_compute[280804]: handle Feb 20 04:35:42 localhost nova_compute[280804]: virtiofs Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: tpm-tis Feb 20 04:35:42 localhost nova_compute[280804]: tpm-crb Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: emulator Feb 20 04:35:42 localhost nova_compute[280804]: external Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: 2.0 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: usb Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 
localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: pty Feb 20 04:35:42 localhost nova_compute[280804]: unix Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: qemu Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: builtin Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: default Feb 20 04:35:42 localhost nova_compute[280804]: passt Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: isa Feb 20 04:35:42 localhost nova_compute[280804]: hyperv Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: null Feb 20 04:35:42 localhost nova_compute[280804]: vc Feb 20 04:35:42 localhost nova_compute[280804]: pty Feb 20 04:35:42 localhost nova_compute[280804]: dev Feb 20 04:35:42 localhost nova_compute[280804]: file Feb 20 04:35:42 localhost nova_compute[280804]: pipe Feb 20 04:35:42 localhost nova_compute[280804]: stdio Feb 20 04:35:42 localhost nova_compute[280804]: udp Feb 20 04:35:42 localhost nova_compute[280804]: tcp Feb 20 04:35:42 localhost nova_compute[280804]: unix Feb 20 04:35:42 localhost nova_compute[280804]: 
qemu-vdagent Feb 20 04:35:42 localhost nova_compute[280804]: dbus Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: relaxed Feb 20 04:35:42 localhost nova_compute[280804]: vapic Feb 20 04:35:42 localhost nova_compute[280804]: spinlocks Feb 20 04:35:42 localhost nova_compute[280804]: vpindex Feb 20 04:35:42 localhost nova_compute[280804]: runtime Feb 20 04:35:42 localhost nova_compute[280804]: synic Feb 20 04:35:42 localhost nova_compute[280804]: stimer Feb 20 04:35:42 localhost nova_compute[280804]: reset Feb 20 04:35:42 localhost nova_compute[280804]: vendor_id Feb 20 04:35:42 localhost nova_compute[280804]: frequencies Feb 20 04:35:42 localhost nova_compute[280804]: reenlightenment Feb 20 04:35:42 localhost nova_compute[280804]: tlbflush Feb 20 04:35:42 localhost nova_compute[280804]: ipi Feb 20 04:35:42 localhost nova_compute[280804]: avic Feb 20 04:35:42 localhost nova_compute[280804]: emsr_bitmap Feb 20 04:35:42 localhost nova_compute[280804]: xmm_input Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: 4095 Feb 20 04:35:42 localhost nova_compute[280804]: on Feb 20 
04:35:42 localhost nova_compute[280804]: off Feb 20 04:35:42 localhost nova_compute[280804]: off Feb 20 04:35:42 localhost nova_compute[280804]: Linux KVM Hv Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Feb 20 04:35:42 localhost nova_compute[280804]: 2026-02-20 09:35:42.358 280808 DEBUG nova.virt.libvirt.host [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: /usr/libexec/qemu-kvm Feb 20 04:35:42 localhost nova_compute[280804]: kvm Feb 20 04:35:42 localhost nova_compute[280804]: pc-i440fx-rhel7.6.0 Feb 20 04:35:42 localhost nova_compute[280804]: x86_64 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: /usr/share/OVMF/OVMF_CODE.secboot.fd Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: rom Feb 20 04:35:42 localhost nova_compute[280804]: pflash Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: yes Feb 20 04:35:42 localhost nova_compute[280804]: no Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: no Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: 
Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: on Feb 20 04:35:42 localhost nova_compute[280804]: off Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: on Feb 20 04:35:42 localhost nova_compute[280804]: off Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-Rome Feb 20 04:35:42 localhost nova_compute[280804]: AMD Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 
20 04:35:42 localhost nova_compute[280804]: 486 Feb 20 04:35:42 localhost nova_compute[280804]: 486-v1 Feb 20 04:35:42 localhost nova_compute[280804]: Broadwell Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Broadwell-IBRS Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Broadwell-noTSX Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Broadwell-noTSX-IBRS Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Broadwell-v1 Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Feb 20 04:35:42 localhost nova_compute[280804]: Broadwell-v2 Feb 20 04:35:42 localhost 
Feb 20 04:35:42 localhost nova_compute[280804]: Broadwell-v3
Feb 20 04:35:42 localhost nova_compute[280804]: Broadwell-v4
Feb 20 04:35:42 localhost nova_compute[280804]: Cascadelake-Server
Feb 20 04:35:42 localhost nova_compute[280804]: Cascadelake-Server-noTSX
Feb 20 04:35:42 localhost nova_compute[280804]: Cascadelake-Server-v1
Feb 20 04:35:42 localhost nova_compute[280804]: Cascadelake-Server-v2
Feb 20 04:35:42 localhost nova_compute[280804]: Cascadelake-Server-v3
Feb 20 04:35:42 localhost nova_compute[280804]: Cascadelake-Server-v4
Feb 20 04:35:42 localhost nova_compute[280804]: Cascadelake-Server-v5
Feb 20 04:35:42 localhost nova_compute[280804]: ClearwaterForest
Feb 20 04:35:42 localhost nova_compute[280804]: ClearwaterForest-v1
Feb 20 04:35:42 localhost nova_compute[280804]: Conroe
Feb 20 04:35:42 localhost nova_compute[280804]: Conroe-v1
Feb 20 04:35:42 localhost nova_compute[280804]: Cooperlake
Feb 20 04:35:42 localhost nova_compute[280804]: Cooperlake-v1
Feb 20 04:35:42 localhost nova_compute[280804]: Cooperlake-v2
Feb 20 04:35:42 localhost nova_compute[280804]: Denverton
Feb 20 04:35:42 localhost nova_compute[280804]: Denverton-v1
Feb 20 04:35:42 localhost nova_compute[280804]: Denverton-v2
Feb 20 04:35:42 localhost nova_compute[280804]: Denverton-v3
Feb 20 04:35:42 localhost nova_compute[280804]: Dhyana
Feb 20 04:35:42 localhost nova_compute[280804]: Dhyana-v1
Feb 20 04:35:42 localhost nova_compute[280804]: Dhyana-v2
Feb 20 04:35:42 localhost nova_compute[280804]: EPYC
Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-Genoa
Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-Genoa-v1
Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-Genoa-v2
Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-IBPB
Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-Milan
Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-Milan-v1
Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-Milan-v2
Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-Milan-v3
Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-Rome
Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-Rome-v1
Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-Rome-v2
Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-Rome-v3
Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-Rome-v4
Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-Rome-v5
Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-Turin
Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-Turin-v1
Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-v1
Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-v2
Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-v3
Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-v4
Feb 20 04:35:42 localhost nova_compute[280804]: EPYC-v5
Feb 20 04:35:42 localhost nova_compute[280804]: GraniteRapids
Feb 20 04:35:42 localhost systemd[1]: var-lib-containers-storage-overlay-f390caecfa51997bf831f419da2a1bdf21d0cbd950e5f16193a8e1e43e37e62d-merged.mount: Deactivated successfully.
Feb 20 04:35:42 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-29f11e275ca2f653c4911d7a399e43eccafc082e16d52ec9d06a475965db1dea-userdata-shm.mount: Deactivated successfully.
Feb 20 04:35:42 localhost nova_compute[280804]: GraniteRapids-v1
Feb 20 04:35:42 localhost nova_compute[280804]: GraniteRapids-v2
Feb 20 04:39:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.
Feb 20 04:39:15 localhost systemd[1]: tmp-crun.9eqILM.mount: Deactivated successfully.
Feb 20 04:39:15 localhost podman[282790]: 2026-02-20 09:39:15.458595522 +0000 UTC m=+0.095965143 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0)
Feb 20 04:39:15 localhost podman[282790]: 2026-02-20 09:39:15.495947726 +0000 UTC m=+0.133317377 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 20 04:39:15 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully.
Feb 20 04:39:15 localhost rsyslogd[759]: imjournal: 1816 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Feb 20 04:39:16 localhost podman[241347]: time="2026-02-20T09:39:16Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 04:39:16 localhost podman[241347]: @ - - [20/Feb/2026:09:39:16 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 150996 "" "Go-http-client/1.1"
Feb 20 04:39:16 localhost podman[241347]: @ - - [20/Feb/2026:09:39:16 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17285 "" "Go-http-client/1.1"
Feb 20 04:39:16 localhost sshd[282809]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:39:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:39:17.260 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:39:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:39:17.260 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:39:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:39:17.260 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:39:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:39:17.261 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:39:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:39:17.261 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:39:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:39:17.261 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:39:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:39:17.261 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:39:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:39:17.261 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:39:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:39:17.261 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:39:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:39:17.261 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:39:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:39:17.262 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:39:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:39:17.262 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:39:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:39:17.262 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:39:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:39:17.262 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:39:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:39:17.262 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:39:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:39:17.262 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:39:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:39:17.262 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:39:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:39:17.263 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:39:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:39:17.263 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:39:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:39:17.263 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:39:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:39:17.263 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:39:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:39:17.263 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:39:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:39:17.263 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:39:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:39:17.263 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:39:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:39:17.264 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 
Feb 20 04:39:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.
Feb 20 04:39:17 localhost podman[282810]: 2026-02-20 09:39:17.442292801 +0000 UTC m=+0.080551783 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.buildah.version=1.33.7, version=9.7, container_name=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, release=1770267347, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, architecture=x86_64, build-date=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, maintainer=Red Hat, Inc., distribution-scope=public, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb 20 04:39:17 localhost podman[282810]: 2026-02-20 09:39:17.481094323 +0000 UTC m=+0.119353335 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, build-date=2026-02-05T04:57:10Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, com.redhat.component=ubi9-minimal-container, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, name=ubi9/ubi-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, container_name=openstack_network_exporter, io.openshift.expose-services=, io.buildah.version=1.33.7, release=1770267347, config_id=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9)
Feb 20 04:39:17 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully.
Feb 20 04:39:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.
Feb 20 04:39:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.
Feb 20 04:39:20 localhost systemd[1]: tmp-crun.lZF5UI.mount: Deactivated successfully.
Feb 20 04:39:20 localhost podman[282886]: 2026-02-20 09:39:20.459279081 +0000 UTC m=+0.096048725 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Feb 20 04:39:20 localhost podman[282886]: 2026-02-20 09:39:20.495258838 +0000 UTC m=+0.132028672 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 20 04:39:20 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully.
Feb 20 04:39:20 localhost podman[282887]: 2026-02-20 09:39:20.546643014 +0000 UTC m=+0.181829567 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Feb 20 04:39:20 localhost podman[282887]: 2026-02-20 09:39:20.551278038 +0000 UTC m=+0.186464571 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Feb 20 04:39:20 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully.
Feb 20 04:39:22 localhost sshd[282929]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:39:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.
Feb 20 04:39:27 localhost systemd[1]: tmp-crun.RjiNHN.mount: Deactivated successfully.
Feb 20 04:39:27 localhost podman[283021]: 2026-02-20 09:39:27.449285236 +0000 UTC m=+0.088224166 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 20 04:39:27 localhost podman[283021]: 2026-02-20 09:39:27.457174155 +0000 UTC m=+0.096113075 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 20 04:39:27 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully.
Feb 20 04:39:28 localhost openstack_network_exporter[243776]: ERROR 09:39:28 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 04:39:28 localhost openstack_network_exporter[243776]:
Feb 20 04:39:28 localhost openstack_network_exporter[243776]: ERROR 09:39:28 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 04:39:28 localhost openstack_network_exporter[243776]:
Feb 20 04:39:30 localhost systemd[1]: session-61.scope: Deactivated successfully.
Feb 20 04:39:30 localhost systemd-logind[760]: Session 61 logged out. Waiting for processes to exit.
Feb 20 04:39:30 localhost systemd-logind[760]: Removed session 61.
Feb 20 04:39:31 localhost sshd[283045]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:39:32 localhost podman[283123]:
Feb 20 04:39:32 localhost podman[283123]: 2026-02-20 09:39:32.697915908 +0000 UTC m=+0.065565256 container create 2ef36e5d2b9fdfc83e4095a9361e85a3c83cc801e1b57a624909f28686c91331 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_pasteur, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, vendor=Red Hat, Inc., RELEASE=main, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, GIT_CLEAN=True, distribution-scope=public, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, ceph=True, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, io.openshift.expose-services=)
Feb 20 04:39:32 localhost systemd[1]: Started libpod-conmon-2ef36e5d2b9fdfc83e4095a9361e85a3c83cc801e1b57a624909f28686c91331.scope.
Feb 20 04:39:32 localhost systemd[1]: Started libcrun container.
Feb 20 04:39:32 localhost podman[283123]: 2026-02-20 09:39:32.670682743 +0000 UTC m=+0.038332051 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 04:39:32 localhost podman[283123]: 2026-02-20 09:39:32.772780179 +0000 UTC m=+0.140429477 container init 2ef36e5d2b9fdfc83e4095a9361e85a3c83cc801e1b57a624909f28686c91331 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_pasteur, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, version=7, io.openshift.tags=rhceph ceph, build-date=2026-02-09T10:25:24Z, release=1770267347, io.buildah.version=1.42.2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.openshift.expose-services=, vendor=Red Hat, Inc., ceph=True, name=rhceph, architecture=x86_64, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, RELEASE=main, distribution-scope=public, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 04:39:32 localhost podman[283123]: 2026-02-20 09:39:32.785727303 +0000 UTC m=+0.153376571 container start 2ef36e5d2b9fdfc83e4095a9361e85a3c83cc801e1b57a624909f28686c91331 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_pasteur, release=1770267347, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, vcs-type=git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_CLEAN=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, GIT_BRANCH=main)
Feb 20 04:39:32 localhost podman[283123]: 2026-02-20 09:39:32.78602065 +0000 UTC m=+0.153669988 container attach 2ef36e5d2b9fdfc83e4095a9361e85a3c83cc801e1b57a624909f28686c91331 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_pasteur, com.redhat.component=rhceph-container, architecture=x86_64, description=Red Hat Ceph Storage 7, version=7, io.openshift.expose-services=, maintainer=Guillaume Abrioux , vcs-type=git, release=1770267347, org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, RELEASE=main, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, ceph=True, GIT_CLEAN=True)
Feb 20 04:39:32 localhost recursing_pasteur[283138]: 167 167
Feb 20 04:39:32 localhost systemd[1]: libpod-2ef36e5d2b9fdfc83e4095a9361e85a3c83cc801e1b57a624909f28686c91331.scope: Deactivated successfully.
Feb 20 04:39:32 localhost podman[283123]: 2026-02-20 09:39:32.789827092 +0000 UTC m=+0.157476420 container died 2ef36e5d2b9fdfc83e4095a9361e85a3c83cc801e1b57a624909f28686c91331 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_pasteur, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, architecture=x86_64, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, release=1770267347, io.buildah.version=1.42.2, description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_BRANCH=main, vcs-type=git, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers)
Feb 20 04:39:32 localhost podman[283143]: 2026-02-20 09:39:32.887437438 +0000 UTC m=+0.089307606 container remove 2ef36e5d2b9fdfc83e4095a9361e85a3c83cc801e1b57a624909f28686c91331 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_pasteur, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, release=1770267347, io.buildah.version=1.42.2, version=7, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64)
Feb 20 04:39:32 localhost systemd[1]: libpod-conmon-2ef36e5d2b9fdfc83e4095a9361e85a3c83cc801e1b57a624909f28686c91331.scope: Deactivated successfully.
Feb 20 04:39:32 localhost systemd[1]: Reloading.
Feb 20 04:39:33 localhost systemd-rc-local-generator[283186]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 04:39:33 localhost systemd-sysv-generator[283189]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 04:39:33 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:39:33 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 20 04:39:33 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:39:33 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:39:33 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 04:39:33 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 20 04:39:33 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:39:33 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:39:33 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:39:33 localhost systemd[1]: var-lib-containers-storage-overlay-fc94cf048bfc9a6b1af2281f674e3250edcdbdbe52cec2515c7b4a600097e9dd-merged.mount: Deactivated successfully. Feb 20 04:39:33 localhost systemd[1]: Reloading. Feb 20 04:39:33 localhost systemd-sysv-generator[283229]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 04:39:33 localhost systemd-rc-local-generator[283226]: /etc/rc.d/rc.local is not marked executable, skipping. 
Feb 20 04:39:33 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:39:33 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 20 04:39:33 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:39:33 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:39:33 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 04:39:33 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 20 04:39:33 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:39:33 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:39:33 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:39:33 localhost systemd[1]: Starting Ceph mds.mds.np0005625202.akhmop for a8557ee9-b55d-5519-942c-cf8f6172f1d8... 
Feb 20 04:39:34 localhost podman[283288]: Feb 20 04:39:34 localhost podman[283288]: 2026-02-20 09:39:34.077027206 +0000 UTC m=+0.085036412 container create f72a43f164c5df8bcf9297d3e9386604c3b3fff8aeab1ea676ae6030f07ac97c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mds-mds-np0005625202-akhmop, org.opencontainers.image.created=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2) Feb 20 04:39:34 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd51db68b88dc2c948ce4382bf62a2a3c8b2510d3456cb1fd3bbafd714f3eb79/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Feb 20 04:39:34 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd51db68b88dc2c948ce4382bf62a2a3c8b2510d3456cb1fd3bbafd714f3eb79/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Feb 20 04:39:34 localhost kernel: 
xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd51db68b88dc2c948ce4382bf62a2a3c8b2510d3456cb1fd3bbafd714f3eb79/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Feb 20 04:39:34 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd51db68b88dc2c948ce4382bf62a2a3c8b2510d3456cb1fd3bbafd714f3eb79/merged/var/lib/ceph/mds/ceph-mds.np0005625202.akhmop supports timestamps until 2038 (0x7fffffff) Feb 20 04:39:34 localhost podman[283288]: 2026-02-20 09:39:34.038733157 +0000 UTC m=+0.046742423 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 04:39:34 localhost podman[283288]: 2026-02-20 09:39:34.140769432 +0000 UTC m=+0.148778638 container init f72a43f164c5df8bcf9297d3e9386604c3b3fff8aeab1ea676ae6030f07ac97c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mds-mds-np0005625202-akhmop, io.openshift.expose-services=, distribution-scope=public, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, release=1770267347, vcs-type=git, GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.42.2, architecture=x86_64, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, RELEASE=main, version=7, 
org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph) Feb 20 04:39:34 localhost podman[283288]: 2026-02-20 09:39:34.148484497 +0000 UTC m=+0.156493703 container start f72a43f164c5df8bcf9297d3e9386604c3b3fff8aeab1ea676ae6030f07ac97c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mds-mds-np0005625202-akhmop, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, ceph=True, architecture=x86_64, name=rhceph, build-date=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., vcs-type=git, RELEASE=main, io.openshift.expose-services=, io.buildah.version=1.42.2, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, release=1770267347, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7) Feb 20 04:39:34 localhost bash[283288]: f72a43f164c5df8bcf9297d3e9386604c3b3fff8aeab1ea676ae6030f07ac97c Feb 20 04:39:34 localhost systemd[1]: Started Ceph mds.mds.np0005625202.akhmop for a8557ee9-b55d-5519-942c-cf8f6172f1d8. 
Feb 20 04:39:34 localhost ceph-mds[283306]: set uid:gid to 167:167 (ceph:ceph) Feb 20 04:39:34 localhost ceph-mds[283306]: ceph version 18.2.1-381.el9cp (984f410e2a30899deb131725765b62212b1621db) reef (stable), process ceph-mds, pid 2 Feb 20 04:39:34 localhost ceph-mds[283306]: main not setting numa affinity Feb 20 04:39:34 localhost ceph-mds[283306]: pidfile_write: ignore empty --pid-file Feb 20 04:39:34 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mds-mds-np0005625202-akhmop[283302]: starting mds.mds.np0005625202.akhmop at Feb 20 04:39:34 localhost ceph-mds[283306]: mds.mds.np0005625202.akhmop Updating MDS map to version 9 from mon.0 Feb 20 04:39:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e. Feb 20 04:39:34 localhost podman[283325]: 2026-02-20 09:39:34.447238792 +0000 UTC m=+0.083271005 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 20 04:39:34 localhost podman[283325]: 2026-02-20 09:39:34.460814223 +0000 UTC m=+0.096846476 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus 
Authors , managed_by=edpm_ansible) Feb 20 04:39:34 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully. Feb 20 04:39:35 localhost ceph-mds[283306]: mds.mds.np0005625202.akhmop Updating MDS map to version 10 from mon.0 Feb 20 04:39:35 localhost ceph-mds[283306]: mds.mds.np0005625202.akhmop Monitors have assigned me to become a standby. Feb 20 04:39:35 localhost systemd[1]: tmp-crun.ZKmwHO.mount: Deactivated successfully. Feb 20 04:39:35 localhost podman[283470]: 2026-02-20 09:39:35.807571812 +0000 UTC m=+0.088873625 container exec 17bcd3d9a6436ea04160ed4c4e04d839e51c8678402dc4e38301f79a98b3dc30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202, name=rhceph, GIT_BRANCH=main, ceph=True, release=1770267347, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, GIT_CLEAN=True, RELEASE=main, architecture=x86_64, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.42.2, io.openshift.tags=rhceph ceph, version=7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc.) 
Feb 20 04:39:35 localhost podman[283470]: 2026-02-20 09:39:35.91879113 +0000 UTC m=+0.200092913 container exec_died 17bcd3d9a6436ea04160ed4c4e04d839e51c8678402dc4e38301f79a98b3dc30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., release=1770267347, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, distribution-scope=public, build-date=2026-02-09T10:25:24Z, RELEASE=main, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=) Feb 20 04:39:41 localhost nova_compute[280804]: 2026-02-20 09:39:41.511 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:39:41 localhost nova_compute[280804]: 2026-02-20 09:39:41.513 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache 
/usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 20 04:39:41 localhost nova_compute[280804]: 2026-02-20 09:39:41.513 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 20 04:39:41 localhost nova_compute[280804]: 2026-02-20 09:39:41.532 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 20 04:39:42 localhost nova_compute[280804]: 2026-02-20 09:39:42.526 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:39:43 localhost nova_compute[280804]: 2026-02-20 09:39:43.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:39:43 localhost nova_compute[280804]: 2026-02-20 09:39:43.511 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:39:43 localhost nova_compute[280804]: 2026-02-20 09:39:43.532 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:39:43 localhost nova_compute[280804]: 2026-02-20 09:39:43.533 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:39:43 localhost nova_compute[280804]: 2026-02-20 09:39:43.533 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:39:43 localhost nova_compute[280804]: 2026-02-20 09:39:43.534 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Auditing locally available compute resources for np0005625202.localdomain (node: np0005625202.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 20 04:39:43 localhost nova_compute[280804]: 2026-02-20 09:39:43.534 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:39:43 localhost nova_compute[280804]: 2026-02-20 09:39:43.976 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:39:44 localhost nova_compute[280804]: 2026-02-20 09:39:44.189 
280808 WARNING nova.virt.libvirt.driver [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 20 04:39:44 localhost nova_compute[280804]: 2026-02-20 09:39:44.191 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Hypervisor/Node resource view: name=np0005625202.localdomain free_ram=12469MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", 
"vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 20 04:39:44 localhost nova_compute[280804]: 2026-02-20 09:39:44.192 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:39:44 localhost nova_compute[280804]: 2026-02-20 09:39:44.192 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:39:44 localhost nova_compute[280804]: 2026-02-20 09:39:44.245 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 20 04:39:44 localhost nova_compute[280804]: 2026-02-20 09:39:44.246 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Final resource view: name=np0005625202.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 20 04:39:44 localhost nova_compute[280804]: 2026-02-20 09:39:44.266 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:39:44 localhost sshd[283699]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:39:44 localhost nova_compute[280804]: 2026-02-20 09:39:44.715 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:39:44 localhost nova_compute[280804]: 2026-02-20 09:39:44.721 280808 DEBUG nova.compute.provider_tree [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 20 04:39:44 localhost nova_compute[280804]: 2026-02-20 09:39:44.734 280808 DEBUG nova.scheduler.client.report [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Inventory has not changed for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 20 04:39:44 localhost nova_compute[280804]: 
2026-02-20 09:39:44.735 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Compute_service record updated for np0005625202.localdomain:np0005625202.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 20 04:39:44 localhost nova_compute[280804]: 2026-02-20 09:39:44.735 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.543s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:39:45 localhost nova_compute[280804]: 2026-02-20 09:39:45.735 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:39:45 localhost nova_compute[280804]: 2026-02-20 09:39:45.736 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:39:45 localhost nova_compute[280804]: 2026-02-20 09:39:45.737 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:39:45 localhost nova_compute[280804]: 2026-02-20 09:39:45.737 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:39:45 localhost nova_compute[280804]: 2026-02-20 09:39:45.738 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 20 04:39:46 localhost podman[241347]: time="2026-02-20T09:39:46Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 20 04:39:46 localhost podman[241347]: @ - - [20/Feb/2026:09:39:46 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153201 "" "Go-http-client/1.1" Feb 20 04:39:46 localhost podman[241347]: @ - - [20/Feb/2026:09:39:46 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17763 "" "Go-http-client/1.1" Feb 20 04:39:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217. 
Feb 20 04:39:46 localhost podman[283703]: 2026-02-20 09:39:46.447932963 +0000 UTC m=+0.084210040 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
io.buildah.version=1.41.3, tcib_managed=true) Feb 20 04:39:46 localhost podman[283703]: 2026-02-20 09:39:46.458993248 +0000 UTC m=+0.095270345 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
org.label-schema.license=GPLv2, managed_by=edpm_ansible) Feb 20 04:39:46 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully. Feb 20 04:39:46 localhost nova_compute[280804]: 2026-02-20 09:39:46.511 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:39:46 localhost nova_compute[280804]: 2026-02-20 09:39:46.511 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:39:47 localhost ceph-osd[31981]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 20 04:39:47 localhost ceph-osd[31981]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.1 total, 600.0 interval#012Cumulative writes: 5111 writes, 22K keys, 5111 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5111 writes, 672 syncs, 7.61 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 38 writes, 74 keys, 38 commit groups, 1.0 writes per commit group, ingest: 0.04 MB, 0.00 MB/s#012Interval WAL: 38 writes, 19 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Feb 20 04:39:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c. 
Feb 20 04:39:48 localhost podman[283722]: 2026-02-20 09:39:48.441506705 +0000 UTC m=+0.080742629 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, name=ubi9/ubi-minimal, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=9.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2026-02-05T04:57:10Z, maintainer=Red Hat, Inc., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.buildah.version=1.33.7, container_name=openstack_network_exporter, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c) Feb 20 04:39:48 localhost podman[283722]: 2026-02-20 09:39:48.460976682 +0000 UTC m=+0.100212556 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 
'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, architecture=x86_64, name=ubi9/ubi-minimal, version=9.7, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-05T04:57:10Z, distribution-scope=public, managed_by=edpm_ansible, io.openshift.expose-services=, config_id=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, org.opencontainers.image.created=2026-02-05T04:57:10Z, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers) Feb 20 04:39:48 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. Feb 20 04:39:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 04:39:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. Feb 20 04:39:51 localhost podman[283741]: 2026-02-20 09:39:51.425100566 +0000 UTC m=+0.063939252 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Feb 20 04:39:51 localhost podman[283741]: 2026-02-20 09:39:51.429744209 +0000 UTC m=+0.068582905 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3) Feb 20 04:39:51 localhost systemd[1]: tmp-crun.nGLdi2.mount: Deactivated successfully. Feb 20 04:39:51 localhost podman[283740]: 2026-02-20 09:39:51.450171383 +0000 UTC m=+0.089042850 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:39:51 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. Feb 20 04:39:51 localhost podman[283740]: 2026-02-20 09:39:51.514996167 +0000 UTC m=+0.153867604 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3) Feb 20 04:39:51 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. Feb 20 04:39:51 localhost ceph-osd[32921]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 20 04:39:51 localhost ceph-osd[32921]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.1 total, 600.0 interval#012Cumulative writes: 5619 writes, 24K keys, 5619 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5619 writes, 793 syncs, 7.09 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 106 writes, 354 keys, 106 commit groups, 1.0 writes per commit group, ingest: 0.55 MB, 0.00 MB/s#012Interval WAL: 106 writes, 43 syncs, 2.47 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Feb 20 04:39:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1. 
Feb 20 04:39:57 localhost podman[283782]: 2026-02-20 09:39:57.993139829 +0000 UTC m=+0.085607327 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 20 04:39:58 localhost podman[283782]: 2026-02-20 09:39:58.004828941 +0000 UTC m=+0.097296449 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 
'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 20 04:39:58 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully. Feb 20 04:39:58 localhost openstack_network_exporter[243776]: ERROR 09:39:58 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 20 04:39:58 localhost openstack_network_exporter[243776]: Feb 20 04:39:58 localhost openstack_network_exporter[243776]: ERROR 09:39:58 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 20 04:39:58 localhost openstack_network_exporter[243776]: Feb 20 04:40:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e. Feb 20 04:40:05 localhost podman[283805]: 2026-02-20 09:40:05.432060565 +0000 UTC m=+0.073471455 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 20 04:40:05 localhost podman[283805]: 2026-02-20 09:40:05.469756317 +0000 UTC m=+0.111167177 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', 
'/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 20 04:40:05 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully. Feb 20 04:40:05 localhost sshd[283828]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:40:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:40:05.906 161766 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:40:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:40:05.906 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:40:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:40:05.907 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:40:08 localhost sshd[283830]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:40:08 localhost systemd[1]: session-62.scope: Deactivated successfully. Feb 20 04:40:08 localhost systemd[1]: session-62.scope: Consumed 1.279s CPU time. Feb 20 04:40:08 localhost systemd-logind[760]: Session 62 logged out. Waiting for processes to exit. Feb 20 04:40:08 localhost systemd-logind[760]: Removed session 62. 
Feb 20 04:40:16 localhost podman[241347]: time="2026-02-20T09:40:16Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 20 04:40:16 localhost podman[241347]: @ - - [20/Feb/2026:09:40:16 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153201 "" "Go-http-client/1.1" Feb 20 04:40:16 localhost podman[241347]: @ - - [20/Feb/2026:09:40:16 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17765 "" "Go-http-client/1.1" Feb 20 04:40:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217. Feb 20 04:40:17 localhost podman[283850]: 2026-02-20 09:40:17.44349083 +0000 UTC m=+0.081888089 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 
'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Feb 20 04:40:17 localhost podman[283850]: 2026-02-20 09:40:17.458284234 +0000 UTC m=+0.096681423 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127) Feb 20 04:40:17 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully. Feb 20 04:40:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c. Feb 20 04:40:18 localhost systemd[1]: Stopping User Manager for UID 1003... Feb 20 04:40:18 localhost systemd[282475]: Activating special unit Exit the Session... Feb 20 04:40:18 localhost systemd[282475]: Stopped target Main User Target. Feb 20 04:40:18 localhost systemd[282475]: Stopped target Basic System. Feb 20 04:40:18 localhost systemd[282475]: Stopped target Paths. Feb 20 04:40:18 localhost systemd[282475]: Stopped target Sockets. Feb 20 04:40:18 localhost systemd[282475]: Stopped target Timers. Feb 20 04:40:18 localhost systemd[282475]: Stopped Mark boot as successful after the user session has run 2 minutes. Feb 20 04:40:18 localhost systemd[282475]: Stopped Daily Cleanup of User's Temporary Directories. Feb 20 04:40:18 localhost systemd[282475]: Closed D-Bus User Message Bus Socket. Feb 20 04:40:18 localhost systemd[282475]: Stopped Create User's Volatile Files and Directories. 
Feb 20 04:40:18 localhost systemd[282475]: Removed slice User Application Slice. Feb 20 04:40:18 localhost systemd[282475]: Reached target Shutdown. Feb 20 04:40:18 localhost systemd[282475]: Finished Exit the Session. Feb 20 04:40:18 localhost systemd[282475]: Reached target Exit the Session. Feb 20 04:40:18 localhost systemd[1]: user@1003.service: Deactivated successfully. Feb 20 04:40:18 localhost systemd[1]: Stopped User Manager for UID 1003. Feb 20 04:40:18 localhost systemd[1]: Stopping User Runtime Directory /run/user/1003... Feb 20 04:40:18 localhost systemd[1]: run-user-1003.mount: Deactivated successfully. Feb 20 04:40:18 localhost systemd[1]: user-runtime-dir@1003.service: Deactivated successfully. Feb 20 04:40:18 localhost systemd[1]: Stopped User Runtime Directory /run/user/1003. Feb 20 04:40:18 localhost systemd[1]: Removed slice User Slice of UID 1003. Feb 20 04:40:18 localhost systemd[1]: user-1003.slice: Consumed 1.662s CPU time. Feb 20 04:40:18 localhost podman[283871]: 2026-02-20 09:40:18.710029136 +0000 UTC m=+0.094567446 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, io.buildah.version=1.33.7, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2026-02-05T04:57:10Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-05T04:57:10Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., version=9.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, distribution-scope=public, io.openshift.tags=minimal rhel9) Feb 20 04:40:18 localhost podman[283871]: 2026-02-20 09:40:18.722361624 +0000 UTC m=+0.106899944 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9/ubi-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, distribution-scope=public, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., org.opencontainers.image.created=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, io.buildah.version=1.33.7, vendor=Red Hat, Inc., io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1770267347, version=9.7) Feb 20 04:40:18 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. Feb 20 04:40:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 04:40:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. Feb 20 04:40:22 localhost systemd[1]: tmp-crun.1IACYg.mount: Deactivated successfully. 
Feb 20 04:40:22 localhost podman[283893]: 2026-02-20 09:40:22.444429396 +0000 UTC m=+0.074860622 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible) Feb 20 04:40:22 localhost 
podman[283893]: 2026-02-20 09:40:22.450905048 +0000 UTC m=+0.081336324 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS) Feb 20 04:40:22 localhost podman[283892]: 2026-02-20 09:40:22.459183088 +0000 UTC 
m=+0.088296129 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:40:22 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. 
Feb 20 04:40:22 localhost podman[283892]: 2026-02-20 09:40:22.565780493 +0000 UTC m=+0.194893514 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2) Feb 20 04:40:22 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. 
Feb 20 04:40:23 localhost podman[284030]: Feb 20 04:40:23 localhost podman[284030]: 2026-02-20 09:40:23.491805712 +0000 UTC m=+0.084934770 container create f050a52f327bd5600a4f82e21856781f811b115ae894ccc1eddc5ee7ee8580d3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=modest_mahavira, ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.openshift.tags=rhceph ceph, distribution-scope=public, architecture=x86_64) Feb 20 04:40:23 localhost systemd[1]: Started libpod-conmon-f050a52f327bd5600a4f82e21856781f811b115ae894ccc1eddc5ee7ee8580d3.scope. Feb 20 04:40:23 localhost podman[284030]: 2026-02-20 09:40:23.457660233 +0000 UTC m=+0.050789341 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 04:40:23 localhost systemd[1]: Started libcrun container. 
Feb 20 04:40:23 localhost podman[284030]: 2026-02-20 09:40:23.581801696 +0000 UTC m=+0.174930734 container init f050a52f327bd5600a4f82e21856781f811b115ae894ccc1eddc5ee7ee8580d3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=modest_mahavira, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, ceph=True, build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, name=rhceph, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Feb 20 04:40:23 localhost systemd[1]: tmp-crun.5xC1VW.mount: Deactivated successfully. 
Feb 20 04:40:23 localhost podman[284030]: 2026-02-20 09:40:23.592454638 +0000 UTC m=+0.185583706 container start f050a52f327bd5600a4f82e21856781f811b115ae894ccc1eddc5ee7ee8580d3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=modest_mahavira, version=7, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, maintainer=Guillaume Abrioux , ceph=True, io.openshift.expose-services=, distribution-scope=public, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, release=1770267347, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2026-02-09T10:25:24Z, architecture=x86_64) Feb 20 04:40:23 localhost podman[284030]: 2026-02-20 09:40:23.592754937 +0000 UTC m=+0.185884005 container attach f050a52f327bd5600a4f82e21856781f811b115ae894ccc1eddc5ee7ee8580d3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=modest_mahavira, GIT_BRANCH=main, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, version=7, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, 
org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, release=1770267347, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, architecture=x86_64, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7) Feb 20 04:40:23 localhost modest_mahavira[284045]: 167 167 Feb 20 04:40:23 localhost systemd[1]: libpod-f050a52f327bd5600a4f82e21856781f811b115ae894ccc1eddc5ee7ee8580d3.scope: Deactivated successfully. 
Feb 20 04:40:23 localhost podman[284030]: 2026-02-20 09:40:23.596092496 +0000 UTC m=+0.189221544 container died f050a52f327bd5600a4f82e21856781f811b115ae894ccc1eddc5ee7ee8580d3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=modest_mahavira, version=7, io.openshift.tags=rhceph ceph, architecture=x86_64, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, GIT_CLEAN=True, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, name=rhceph, ceph=True, GIT_BRANCH=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7) Feb 20 04:40:23 localhost podman[284050]: 2026-02-20 09:40:23.676458882 +0000 UTC m=+0.069189310 container remove f050a52f327bd5600a4f82e21856781f811b115ae894ccc1eddc5ee7ee8580d3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=modest_mahavira, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, ceph=True, distribution-scope=public, release=1770267347, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , RELEASE=main, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_BRANCH=main, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2026-02-09T10:25:24Z) Feb 20 04:40:23 localhost systemd[1]: libpod-conmon-f050a52f327bd5600a4f82e21856781f811b115ae894ccc1eddc5ee7ee8580d3.scope: Deactivated successfully. Feb 20 04:40:23 localhost podman[284072]: Feb 20 04:40:23 localhost podman[284072]: 2026-02-20 09:40:23.898253352 +0000 UTC m=+0.077662517 container create 872878a44572cf586c15680e92d705c3c74d0cee803cb7766c33439a513faf04 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_hermann, com.redhat.component=rhceph-container, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.42.2, io.openshift.expose-services=, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat Ceph Storage 7, architecture=x86_64, 
GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, release=1770267347, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, CEPH_POINT_RELEASE=, GIT_BRANCH=main) Feb 20 04:40:23 localhost systemd[1]: Started libpod-conmon-872878a44572cf586c15680e92d705c3c74d0cee803cb7766c33439a513faf04.scope. Feb 20 04:40:23 localhost systemd[1]: Started libcrun container. Feb 20 04:40:23 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43b31ba8328e83842c48c821c633666e6cc7b0708ce64bb7a9c7db74b89b422e/merged/rootfs supports timestamps until 2038 (0x7fffffff) Feb 20 04:40:23 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43b31ba8328e83842c48c821c633666e6cc7b0708ce64bb7a9c7db74b89b422e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Feb 20 04:40:23 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43b31ba8328e83842c48c821c633666e6cc7b0708ce64bb7a9c7db74b89b422e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Feb 20 04:40:23 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43b31ba8328e83842c48c821c633666e6cc7b0708ce64bb7a9c7db74b89b422e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Feb 20 04:40:23 localhost podman[284072]: 2026-02-20 09:40:23.964781751 +0000 UTC m=+0.144190906 container init 872878a44572cf586c15680e92d705c3c74d0cee803cb7766c33439a513faf04 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_hermann, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
io.openshift.tags=rhceph ceph, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.42.2, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , build-date=2026-02-09T10:25:24Z, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, ceph=True, version=7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, release=1770267347, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=) Feb 20 04:40:23 localhost podman[284072]: 2026-02-20 09:40:23.869554088 +0000 UTC m=+0.048963263 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 04:40:23 localhost podman[284072]: 2026-02-20 09:40:23.972676721 +0000 UTC m=+0.152085866 container start 872878a44572cf586c15680e92d705c3c74d0cee803cb7766c33439a513faf04 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_hermann, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, CEPH_POINT_RELEASE=, GIT_BRANCH=main, name=rhceph, vcs-type=git, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, distribution-scope=public, build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., version=7, maintainer=Guillaume Abrioux , 
org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, release=1770267347, io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Feb 20 04:40:23 localhost podman[284072]: 2026-02-20 09:40:23.972966138 +0000 UTC m=+0.152375323 container attach 872878a44572cf586c15680e92d705c3c74d0cee803cb7766c33439a513faf04 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_hermann, io.buildah.version=1.42.2, io.openshift.expose-services=, version=7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, distribution-scope=public, architecture=x86_64, build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2026-02-09T10:25:24Z, name=rhceph, GIT_CLEAN=True, RELEASE=main, io.openshift.tags=rhceph ceph) Feb 20 04:40:24 localhost systemd[1]: 
var-lib-containers-storage-overlay-8a1fc674440a82df1f15caec7c8b963b23068c4f751d8674a0510f275f5a9ffc-merged.mount: Deactivated successfully.
Feb 20 04:40:24 localhost boring_hermann[284088]: [
Feb 20 04:40:24 localhost boring_hermann[284088]:     {
Feb 20 04:40:24 localhost boring_hermann[284088]:         "available": false,
Feb 20 04:40:24 localhost boring_hermann[284088]:         "ceph_device": false,
Feb 20 04:40:24 localhost boring_hermann[284088]:         "device_id": "QEMU_DVD-ROM_QM00001",
Feb 20 04:40:24 localhost boring_hermann[284088]:         "lsm_data": {},
Feb 20 04:40:24 localhost boring_hermann[284088]:         "lvs": [],
Feb 20 04:40:24 localhost boring_hermann[284088]:         "path": "/dev/sr0",
Feb 20 04:40:24 localhost boring_hermann[284088]:         "rejected_reasons": [
Feb 20 04:40:24 localhost boring_hermann[284088]:             "Has a FileSystem",
Feb 20 04:40:24 localhost boring_hermann[284088]:             "Insufficient space (<5GB)"
Feb 20 04:40:24 localhost boring_hermann[284088]:         ],
Feb 20 04:40:24 localhost boring_hermann[284088]:         "sys_api": {
Feb 20 04:40:24 localhost boring_hermann[284088]:             "actuators": null,
Feb 20 04:40:24 localhost boring_hermann[284088]:             "device_nodes": "sr0",
Feb 20 04:40:24 localhost boring_hermann[284088]:             "human_readable_size": "482.00 KB",
Feb 20 04:40:24 localhost boring_hermann[284088]:             "id_bus": "ata",
Feb 20 04:40:24 localhost boring_hermann[284088]:             "model": "QEMU DVD-ROM",
Feb 20 04:40:24 localhost boring_hermann[284088]:             "nr_requests": "2",
Feb 20 04:40:24 localhost boring_hermann[284088]:             "partitions": {},
Feb 20 04:40:24 localhost boring_hermann[284088]:             "path": "/dev/sr0",
Feb 20 04:40:24 localhost boring_hermann[284088]:             "removable": "1",
Feb 20 04:40:24 localhost boring_hermann[284088]:             "rev": "2.5+",
Feb 20 04:40:24 localhost boring_hermann[284088]:             "ro": "0",
Feb 20 04:40:24 localhost boring_hermann[284088]:             "rotational": "1",
Feb 20 04:40:24 localhost boring_hermann[284088]:             "sas_address": "",
Feb 20 04:40:24 localhost boring_hermann[284088]:             "sas_device_handle": "",
Feb 20 04:40:24 localhost boring_hermann[284088]:             "scheduler_mode": "mq-deadline",
Feb 20 04:40:24 localhost boring_hermann[284088]:             "sectors": 0,
Feb 20 04:40:24 localhost boring_hermann[284088]:             "sectorsize": "2048",
Feb 20 04:40:24 localhost boring_hermann[284088]:             "size": 493568.0,
Feb 20 04:40:24 localhost boring_hermann[284088]:             "support_discard": "0",
Feb 20 04:40:24 localhost boring_hermann[284088]:             "type": "disk",
Feb 20 04:40:24 localhost boring_hermann[284088]:             "vendor": "QEMU"
Feb 20 04:40:24 localhost boring_hermann[284088]:         }
Feb 20 04:40:24 localhost boring_hermann[284088]:     }
Feb 20 04:40:24 localhost boring_hermann[284088]: ]
Feb 20 04:40:24 localhost systemd[1]: libpod-872878a44572cf586c15680e92d705c3c74d0cee803cb7766c33439a513faf04.scope: Deactivated successfully.
Feb 20 04:40:24 localhost podman[284072]: 2026-02-20 09:40:24.923821718 +0000 UTC m=+1.103230843 container died 872878a44572cf586c15680e92d705c3c74d0cee803cb7766c33439a513faf04 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_hermann, vcs-type=git, version=7, release=1770267347, description=Red Hat Ceph Storage 7, distribution-scope=public, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, com.redhat.component=rhceph-container, ceph=True, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, GIT_BRANCH=main, vendor=Red Hat, Inc., GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, org.opencontainers.image.created=2026-02-09T10:25:24Z)
Feb 20 04:40:25 localhost systemd[1]: var-lib-containers-storage-overlay-43b31ba8328e83842c48c821c633666e6cc7b0708ce64bb7a9c7db74b89b422e-merged.mount: Deactivated successfully.
Feb 20 04:40:25 localhost podman[285953]: 2026-02-20 09:40:25.028247765 +0000 UTC m=+0.087928939 container remove 872878a44572cf586c15680e92d705c3c74d0cee803cb7766c33439a513faf04 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_hermann, RELEASE=main, ceph=True, GIT_BRANCH=main, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, version=7, description=Red Hat Ceph Storage 7, name=rhceph, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git)
Feb 20 04:40:25 localhost systemd[1]: libpod-conmon-872878a44572cf586c15680e92d705c3c74d0cee803cb7766c33439a513faf04.scope: Deactivated successfully.
Feb 20 04:40:25 localhost sshd[285968]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:40:28 localhost openstack_network_exporter[243776]: ERROR 09:40:28 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 04:40:28 localhost openstack_network_exporter[243776]:
Feb 20 04:40:28 localhost openstack_network_exporter[243776]: ERROR 09:40:28 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 04:40:28 localhost openstack_network_exporter[243776]:
Feb 20 04:40:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.
Feb 20 04:40:28 localhost podman[286008]: 2026-02-20 09:40:28.455858886 +0000 UTC m=+0.091551586 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Feb 20 04:40:28 localhost podman[286008]: 2026-02-20 09:40:28.489351736 +0000 UTC m=+0.125044426 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 20 04:40:28 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully.
Feb 20 04:40:28 localhost sshd[286031]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:40:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.
Feb 20 04:40:36 localhost podman[286033]: 2026-02-20 09:40:36.440438862 +0000 UTC m=+0.079242369 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 20 04:40:36 localhost podman[286033]: 2026-02-20 09:40:36.451872246 +0000 UTC m=+0.090675783 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 20 04:40:36 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully.
Feb 20 04:40:41 localhost nova_compute[280804]: 2026-02-20 09:40:41.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:40:41 localhost nova_compute[280804]: 2026-02-20 09:40:41.511 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Feb 20 04:40:41 localhost nova_compute[280804]: 2026-02-20 09:40:41.527 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Feb 20 04:40:41 localhost nova_compute[280804]: 2026-02-20 09:40:41.528 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:40:41 localhost nova_compute[280804]: 2026-02-20 09:40:41.528 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Feb 20 04:40:41 localhost nova_compute[280804]: 2026-02-20 09:40:41.539 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:40:42 localhost nova_compute[280804]: 2026-02-20 09:40:42.547 280808 DEBUG 
oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:40:42 localhost nova_compute[280804]: 2026-02-20 09:40:42.547 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 20 04:40:42 localhost nova_compute[280804]: 2026-02-20 09:40:42.548 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 20 04:40:42 localhost nova_compute[280804]: 2026-02-20 09:40:42.570 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 20 04:40:43 localhost nova_compute[280804]: 2026-02-20 09:40:43.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:40:43 localhost nova_compute[280804]: 2026-02-20 09:40:43.527 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:40:43 localhost nova_compute[280804]: 2026-02-20 09:40:43.528 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:40:43 localhost nova_compute[280804]: 2026-02-20 09:40:43.529 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:40:43 localhost nova_compute[280804]: 2026-02-20 09:40:43.529 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Auditing locally available compute resources for np0005625202.localdomain (node: np0005625202.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 20 04:40:43 localhost nova_compute[280804]: 2026-02-20 
09:40:43.530 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:40:43 localhost nova_compute[280804]: 2026-02-20 09:40:43.974 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:40:44 localhost nova_compute[280804]: 2026-02-20 09:40:44.153 280808 WARNING nova.virt.libvirt.driver [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 20 04:40:44 localhost nova_compute[280804]: 2026-02-20 09:40:44.154 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Hypervisor/Node resource view: name=np0005625202.localdomain free_ram=12489MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", 
"product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 20 04:40:44 localhost nova_compute[280804]: 2026-02-20 09:40:44.155 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:40:44 localhost nova_compute[280804]: 2026-02-20 09:40:44.155 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:40:44 localhost nova_compute[280804]: 2026-02-20 09:40:44.327 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 20 04:40:44 localhost nova_compute[280804]: 2026-02-20 09:40:44.328 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Final resource view: name=np0005625202.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 20 04:40:44 localhost nova_compute[280804]: 2026-02-20 09:40:44.417 280808 DEBUG nova.scheduler.client.report [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Refreshing inventories for resource provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Feb 20 04:40:44 localhost nova_compute[280804]: 2026-02-20 09:40:44.508 280808 DEBUG nova.scheduler.client.report [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Updating ProviderTree inventory for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Feb 20 04:40:44 localhost nova_compute[280804]: 
2026-02-20 09:40:44.509 280808 DEBUG nova.compute.provider_tree [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Updating inventory in ProviderTree for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Feb 20 04:40:44 localhost nova_compute[280804]: 2026-02-20 09:40:44.540 280808 DEBUG nova.scheduler.client.report [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Refreshing aggregate associations for resource provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Feb 20 04:40:44 localhost nova_compute[280804]: 2026-02-20 09:40:44.565 280808 DEBUG nova.scheduler.client.report [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Refreshing trait associations for resource provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6, traits: 
COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_1_2,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE2,HW_CPU_X86_AVX2,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_CLMUL,HW_CPU_X86_F16C,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_FMA3,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_ABM,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Feb 20 04:40:44 localhost nova_compute[280804]: 2026-02-20 09:40:44.584 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:40:45 localhost nova_compute[280804]: 2026-02-20 09:40:45.067 280808 DEBUG oslo_concurrency.processutils [None 
req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:40:45 localhost nova_compute[280804]: 2026-02-20 09:40:45.074 280808 DEBUG nova.compute.provider_tree [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 20 04:40:45 localhost nova_compute[280804]: 2026-02-20 09:40:45.092 280808 DEBUG nova.scheduler.client.report [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Inventory has not changed for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 20 04:40:45 localhost nova_compute[280804]: 2026-02-20 09:40:45.095 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Compute_service record updated for np0005625202.localdomain:np0005625202.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 20 04:40:45 localhost nova_compute[280804]: 2026-02-20 09:40:45.095 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.940s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:40:46 localhost nova_compute[280804]: 2026-02-20 09:40:46.099 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:40:46 localhost nova_compute[280804]: 2026-02-20 09:40:46.100 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:40:46 localhost nova_compute[280804]: 2026-02-20 09:40:46.101 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:40:46 localhost podman[241347]: time="2026-02-20T09:40:46Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 20 04:40:46 localhost podman[241347]: @ - - [20/Feb/2026:09:40:46 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153201 "" "Go-http-client/1.1" Feb 20 04:40:46 localhost podman[241347]: @ - - [20/Feb/2026:09:40:46 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17778 "" "Go-http-client/1.1" Feb 20 04:40:46 localhost nova_compute[280804]: 2026-02-20 09:40:46.512 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:40:46 localhost 
nova_compute[280804]: 2026-02-20 09:40:46.513 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:40:46 localhost nova_compute[280804]: 2026-02-20 09:40:46.513 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 20 04:40:47 localhost nova_compute[280804]: 2026-02-20 09:40:47.512 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:40:47 localhost nova_compute[280804]: 2026-02-20 09:40:47.513 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:40:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217. 
Feb 20 04:40:48 localhost podman[286100]: 2026-02-20 09:40:48.448175933 +0000 UTC m=+0.086478874 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, 
org.label-schema.vendor=CentOS, tcib_managed=true) Feb 20 04:40:48 localhost podman[286100]: 2026-02-20 09:40:48.462028375 +0000 UTC m=+0.100331336 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, 
config_id=ceilometer_agent_compute, io.buildah.version=1.41.3) Feb 20 04:40:48 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully. Feb 20 04:40:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c. Feb 20 04:40:49 localhost podman[286119]: 2026-02-20 09:40:49.438875985 +0000 UTC m=+0.077512845 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, distribution-scope=public, maintainer=Red Hat, Inc., release=1770267347, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, build-date=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., name=ubi9/ubi-minimal, config_id=openstack_network_exporter, container_name=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-05T04:57:10Z, architecture=x86_64, version=9.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 20 04:40:49 localhost podman[286119]: 2026-02-20 09:40:49.456989612 +0000 UTC m=+0.095626472 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, 
config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, name=ubi9/ubi-minimal, distribution-scope=public, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, managed_by=edpm_ansible, build-date=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, version=9.7, io.openshift.expose-services=, 
io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 04:40:49 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. Feb 20 04:40:51 localhost sshd[286137]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:40:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. Feb 20 04:40:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 04:40:52 localhost podman[286156]: 2026-02-20 09:40:52.652290929 +0000 UTC m=+0.091104721 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 
'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS) Feb 20 04:40:52 localhost podman[286156]: 2026-02-20 09:40:52.664046555 +0000 UTC m=+0.102860307 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:40:52 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. 
Feb 20 04:40:52 localhost podman[286192]: 2026-02-20 09:40:52.751110065 +0000 UTC m=+0.088385456 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller) Feb 20 04:40:52 localhost podman[286192]: 2026-02-20 09:40:52.85842485 +0000 UTC m=+0.195700271 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:40:52 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. 
Feb 20 04:40:54 localhost sshd[286266]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:40:57 localhost podman[286383]: Feb 20 04:40:57 localhost podman[286383]: 2026-02-20 09:40:57.116715261 +0000 UTC m=+0.074701759 container create ee152f7fd39541c239e3f9bd1a53dbe0036b4722dc1d68d2cb9fdde61d83faec (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_taussig, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, distribution-scope=public, release=1770267347, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, io.buildah.version=1.42.2, GIT_CLEAN=True, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, ceph=True, vcs-type=git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , version=7) Feb 20 04:40:57 localhost systemd[1]: Started libpod-conmon-ee152f7fd39541c239e3f9bd1a53dbe0036b4722dc1d68d2cb9fdde61d83faec.scope. Feb 20 04:40:57 localhost systemd[1]: Started libcrun container. 
Feb 20 04:40:57 localhost podman[286383]: 2026-02-20 09:40:57.17991789 +0000 UTC m=+0.137904398 container init ee152f7fd39541c239e3f9bd1a53dbe0036b4722dc1d68d2cb9fdde61d83faec (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_taussig, vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, GIT_CLEAN=True, io.openshift.expose-services=, io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, name=rhceph, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, version=7) Feb 20 04:40:57 localhost podman[286383]: 2026-02-20 09:40:57.086111589 +0000 UTC m=+0.044098107 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 04:40:57 localhost podman[286383]: 2026-02-20 09:40:57.190617578 +0000 UTC m=+0.148604086 container start ee152f7fd39541c239e3f9bd1a53dbe0036b4722dc1d68d2cb9fdde61d83faec (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_taussig, distribution-scope=public, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , 
GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, vcs-type=git, GIT_BRANCH=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2026-02-09T10:25:24Z, release=1770267347, GIT_CLEAN=True, name=rhceph, vendor=Red Hat, Inc., version=7, io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, RELEASE=main, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64) Feb 20 04:40:57 localhost podman[286383]: 2026-02-20 09:40:57.190948287 +0000 UTC m=+0.148934785 container attach ee152f7fd39541c239e3f9bd1a53dbe0036b4722dc1d68d2cb9fdde61d83faec (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_taussig, architecture=x86_64, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, vcs-type=git, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.created=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, com.redhat.component=rhceph-container) Feb 20 04:40:57 localhost systemd[1]: libpod-ee152f7fd39541c239e3f9bd1a53dbe0036b4722dc1d68d2cb9fdde61d83faec.scope: Deactivated successfully. Feb 20 04:40:57 localhost flamboyant_taussig[286398]: 167 167 Feb 20 04:40:57 localhost podman[286383]: 2026-02-20 09:40:57.194928663 +0000 UTC m=+0.152915221 container died ee152f7fd39541c239e3f9bd1a53dbe0036b4722dc1d68d2cb9fdde61d83faec (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_taussig, description=Red Hat Ceph Storage 7, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, vcs-type=git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, ceph=True, vendor=Red Hat, Inc., version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1770267347, GIT_CLEAN=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers) Feb 20 
04:40:57 localhost podman[286403]: 2026-02-20 09:40:57.287559763 +0000 UTC m=+0.083081453 container remove ee152f7fd39541c239e3f9bd1a53dbe0036b4722dc1d68d2cb9fdde61d83faec (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_taussig, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., ceph=True, build-date=2026-02-09T10:25:24Z, name=rhceph, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, io.buildah.version=1.42.2, vcs-type=git, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, architecture=x86_64, io.openshift.expose-services=, RELEASE=main, distribution-scope=public, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 04:40:57 localhost systemd[1]: libpod-conmon-ee152f7fd39541c239e3f9bd1a53dbe0036b4722dc1d68d2cb9fdde61d83faec.scope: Deactivated successfully. Feb 20 04:40:57 localhost systemd[1]: Reloading. Feb 20 04:40:57 localhost systemd-rc-local-generator[286443]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 04:40:57 localhost systemd-sysv-generator[286449]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Feb 20 04:40:57 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:40:57 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 20 04:40:57 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:40:57 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:40:57 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 04:40:57 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 20 04:40:57 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:40:57 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:40:57 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:40:57 localhost systemd[1]: var-lib-containers-storage-overlay-f4a31f99f9cad2cc026a6047821b3ae947ea43204336b940e656449ad9ccdb8b-merged.mount: Deactivated successfully. Feb 20 04:40:57 localhost systemd[1]: Reloading. Feb 20 04:40:57 localhost systemd-sysv-generator[286489]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 04:40:57 localhost systemd-rc-local-generator[286486]: /etc/rc.d/rc.local is not marked executable, skipping. 
Feb 20 04:40:57 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:40:57 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 20 04:40:57 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:40:57 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:40:57 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 04:40:57 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 20 04:40:57 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:40:57 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:40:57 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:40:58 localhost systemd[1]: Starting Ceph mgr.np0005625202.arwxwo for a8557ee9-b55d-5519-942c-cf8f6172f1d8... 
Feb 20 04:40:58 localhost openstack_network_exporter[243776]: ERROR 09:40:58 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 20 04:40:58 localhost openstack_network_exporter[243776]: Feb 20 04:40:58 localhost openstack_network_exporter[243776]: ERROR 09:40:58 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 20 04:40:58 localhost openstack_network_exporter[243776]: Feb 20 04:40:58 localhost podman[286547]: Feb 20 04:40:58 localhost podman[286547]: 2026-02-20 09:40:58.46405899 +0000 UTC m=+0.077938685 container create ea9b41e1afbbd1f6954445614ef7496fd138ccad3979da346f09ef779dbcf6aa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, distribution-scope=public, RELEASE=main, vcs-type=git, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, maintainer=Guillaume Abrioux , vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, architecture=x86_64, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container) Feb 20 04:40:58 localhost kernel: xfs filesystem being remounted at 
/var/lib/containers/storage/overlay/57808eb2b34a0d8757e4edcf6ed8089a612471a644d894c5faadb27303e202b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Feb 20 04:40:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57808eb2b34a0d8757e4edcf6ed8089a612471a644d894c5faadb27303e202b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Feb 20 04:40:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57808eb2b34a0d8757e4edcf6ed8089a612471a644d894c5faadb27303e202b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Feb 20 04:40:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57808eb2b34a0d8757e4edcf6ed8089a612471a644d894c5faadb27303e202b3/merged/var/lib/ceph/mgr/ceph-np0005625202.arwxwo supports timestamps until 2038 (0x7fffffff) Feb 20 04:40:58 localhost podman[286547]: 2026-02-20 09:40:58.52466335 +0000 UTC m=+0.138543045 container init ea9b41e1afbbd1f6954445614ef7496fd138ccad3979da346f09ef779dbcf6aa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo, distribution-scope=public, architecture=x86_64, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., release=1770267347, GIT_BRANCH=main, io.buildah.version=1.42.2, maintainer=Guillaume Abrioux , io.openshift.expose-services=, io.k8s.description=Red Hat Ceph 
Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, name=rhceph, build-date=2026-02-09T10:25:24Z, vcs-type=git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14) Feb 20 04:40:58 localhost podman[286547]: 2026-02-20 09:40:58.431339811 +0000 UTC m=+0.045219516 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 04:40:58 localhost podman[286547]: 2026-02-20 09:40:58.531901585 +0000 UTC m=+0.145781290 container start ea9b41e1afbbd1f6954445614ef7496fd138ccad3979da346f09ef779dbcf6aa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_BRANCH=main, distribution-scope=public, com.redhat.component=rhceph-container, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, version=7, vcs-type=git, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1770267347, maintainer=Guillaume Abrioux ) Feb 20 04:40:58 localhost bash[286547]: 
ea9b41e1afbbd1f6954445614ef7496fd138ccad3979da346f09ef779dbcf6aa Feb 20 04:40:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1. Feb 20 04:40:58 localhost systemd[1]: Started Ceph mgr.np0005625202.arwxwo for a8557ee9-b55d-5519-942c-cf8f6172f1d8. Feb 20 04:40:58 localhost ceph-mgr[286565]: set uid:gid to 167:167 (ceph:ceph) Feb 20 04:40:58 localhost ceph-mgr[286565]: ceph version 18.2.1-381.el9cp (984f410e2a30899deb131725765b62212b1621db) reef (stable), process ceph-mgr, pid 2 Feb 20 04:40:58 localhost ceph-mgr[286565]: pidfile_write: ignore empty --pid-file Feb 20 04:40:58 localhost ceph-mgr[286565]: mgr[py] Loading python module 'alerts' Feb 20 04:40:58 localhost podman[286566]: 2026-02-20 09:40:58.648808557 +0000 UTC m=+0.089229299 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 20 04:40:58 localhost podman[286566]: 2026-02-20 09:40:58.684887437 +0000 UTC m=+0.125308219 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 
(image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 20 04:40:58 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully. 
Feb 20 04:40:58 localhost ceph-mgr[286565]: mgr[py] Module alerts has missing NOTIFY_TYPES member Feb 20 04:40:58 localhost ceph-mgr[286565]: mgr[py] Loading python module 'balancer' Feb 20 04:40:58 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:40:58.699+0000 7f0d6863c140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member Feb 20 04:40:58 localhost ceph-mgr[286565]: mgr[py] Module balancer has missing NOTIFY_TYPES member Feb 20 04:40:58 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:40:58.768+0000 7f0d6863c140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member Feb 20 04:40:58 localhost ceph-mgr[286565]: mgr[py] Loading python module 'cephadm' Feb 20 04:40:59 localhost ceph-mgr[286565]: mgr[py] Loading python module 'crash' Feb 20 04:40:59 localhost ceph-mgr[286565]: mgr[py] Module crash has missing NOTIFY_TYPES member Feb 20 04:40:59 localhost ceph-mgr[286565]: mgr[py] Loading python module 'dashboard' Feb 20 04:40:59 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:40:59.458+0000 7f0d6863c140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member Feb 20 04:40:59 localhost ceph-mgr[286565]: mgr[py] Loading python module 'devicehealth' Feb 20 04:41:00 localhost ceph-mgr[286565]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member Feb 20 04:41:00 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:41:00.016+0000 7f0d6863c140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member Feb 20 04:41:00 localhost ceph-mgr[286565]: mgr[py] Loading python module 'diskprediction_local' Feb 20 04:41:00 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. 
This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. Feb 20 04:41:00 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. Feb 20 04:41:00 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: from numpy import show_config as show_numpy_config Feb 20 04:41:00 localhost ceph-mgr[286565]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member Feb 20 04:41:00 localhost ceph-mgr[286565]: mgr[py] Loading python module 'influx' Feb 20 04:41:00 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:41:00.153+0000 7f0d6863c140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member Feb 20 04:41:00 localhost ceph-mgr[286565]: mgr[py] Module influx has missing NOTIFY_TYPES member Feb 20 04:41:00 localhost ceph-mgr[286565]: mgr[py] Loading python module 'insights' Feb 20 04:41:00 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:41:00.213+0000 7f0d6863c140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member Feb 20 04:41:00 localhost ceph-mgr[286565]: mgr[py] Loading python module 'iostat' Feb 20 04:41:00 localhost ceph-mgr[286565]: mgr[py] Module iostat has missing NOTIFY_TYPES member Feb 20 04:41:00 localhost ceph-mgr[286565]: mgr[py] Loading python module 'k8sevents' Feb 20 04:41:00 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:41:00.325+0000 7f0d6863c140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member Feb 20 04:41:00 localhost ceph-mgr[286565]: mgr[py] Loading python module 'localpool' Feb 20 04:41:00 localhost ceph-mgr[286565]: mgr[py] Loading python 
module 'mds_autoscaler' Feb 20 04:41:00 localhost ceph-mgr[286565]: mgr[py] Loading python module 'mirroring' Feb 20 04:41:00 localhost ceph-mgr[286565]: mgr[py] Loading python module 'nfs' Feb 20 04:41:01 localhost ceph-mgr[286565]: mgr[py] Module nfs has missing NOTIFY_TYPES member Feb 20 04:41:01 localhost ceph-mgr[286565]: mgr[py] Loading python module 'orchestrator' Feb 20 04:41:01 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:41:01.104+0000 7f0d6863c140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member Feb 20 04:41:01 localhost ceph-mgr[286565]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member Feb 20 04:41:01 localhost ceph-mgr[286565]: mgr[py] Loading python module 'osd_perf_query' Feb 20 04:41:01 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:41:01.254+0000 7f0d6863c140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member Feb 20 04:41:01 localhost ceph-mgr[286565]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member Feb 20 04:41:01 localhost ceph-mgr[286565]: mgr[py] Loading python module 'osd_support' Feb 20 04:41:01 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:41:01.320+0000 7f0d6863c140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member Feb 20 04:41:01 localhost ceph-mgr[286565]: mgr[py] Module osd_support has missing NOTIFY_TYPES member Feb 20 04:41:01 localhost ceph-mgr[286565]: mgr[py] Loading python module 'pg_autoscaler' Feb 20 04:41:01 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:41:01.380+0000 7f0d6863c140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member Feb 20 04:41:01 localhost ceph-mgr[286565]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member Feb 20 04:41:01 localhost ceph-mgr[286565]: mgr[py] Loading python module 'progress' Feb 20 04:41:01 localhost 
ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:41:01.449+0000 7f0d6863c140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member Feb 20 04:41:01 localhost ceph-mgr[286565]: mgr[py] Module progress has missing NOTIFY_TYPES member Feb 20 04:41:01 localhost ceph-mgr[286565]: mgr[py] Loading python module 'prometheus' Feb 20 04:41:01 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:41:01.510+0000 7f0d6863c140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member Feb 20 04:41:01 localhost ceph-mgr[286565]: mgr[py] Module prometheus has missing NOTIFY_TYPES member Feb 20 04:41:01 localhost ceph-mgr[286565]: mgr[py] Loading python module 'rbd_support' Feb 20 04:41:01 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:41:01.811+0000 7f0d6863c140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member Feb 20 04:41:01 localhost ceph-mgr[286565]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member Feb 20 04:41:01 localhost ceph-mgr[286565]: mgr[py] Loading python module 'restful' Feb 20 04:41:01 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:41:01.895+0000 7f0d6863c140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member Feb 20 04:41:02 localhost ceph-mgr[286565]: mgr[py] Loading python module 'rgw' Feb 20 04:41:02 localhost ceph-mgr[286565]: mgr[py] Module rgw has missing NOTIFY_TYPES member Feb 20 04:41:02 localhost ceph-mgr[286565]: mgr[py] Loading python module 'rook' Feb 20 04:41:02 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:41:02.235+0000 7f0d6863c140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member Feb 20 04:41:02 localhost ceph-mgr[286565]: mgr[py] Module rook has missing NOTIFY_TYPES member Feb 20 04:41:02 localhost ceph-mgr[286565]: mgr[py] Loading python module 'selftest' Feb 20 
04:41:02 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:41:02.623+0000 7f0d6863c140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member Feb 20 04:41:02 localhost ceph-mgr[286565]: mgr[py] Module selftest has missing NOTIFY_TYPES member Feb 20 04:41:02 localhost ceph-mgr[286565]: mgr[py] Loading python module 'snap_schedule' Feb 20 04:41:02 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:41:02.683+0000 7f0d6863c140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member Feb 20 04:41:02 localhost ceph-mgr[286565]: mgr[py] Loading python module 'stats' Feb 20 04:41:02 localhost ceph-mgr[286565]: mgr[py] Loading python module 'status' Feb 20 04:41:02 localhost ceph-mgr[286565]: mgr[py] Module status has missing NOTIFY_TYPES member Feb 20 04:41:02 localhost ceph-mgr[286565]: mgr[py] Loading python module 'telegraf' Feb 20 04:41:02 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:41:02.871+0000 7f0d6863c140 -1 mgr[py] Module status has missing NOTIFY_TYPES member Feb 20 04:41:02 localhost ceph-mgr[286565]: mgr[py] Module telegraf has missing NOTIFY_TYPES member Feb 20 04:41:02 localhost ceph-mgr[286565]: mgr[py] Loading python module 'telemetry' Feb 20 04:41:02 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:41:02.932+0000 7f0d6863c140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member Feb 20 04:41:03 localhost ceph-mgr[286565]: mgr[py] Module telemetry has missing NOTIFY_TYPES member Feb 20 04:41:03 localhost ceph-mgr[286565]: mgr[py] Loading python module 'test_orchestrator' Feb 20 04:41:03 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:41:03.061+0000 7f0d6863c140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member Feb 20 04:41:03 localhost ceph-mgr[286565]: mgr[py] Module test_orchestrator has 
missing NOTIFY_TYPES member Feb 20 04:41:03 localhost ceph-mgr[286565]: mgr[py] Loading python module 'volumes' Feb 20 04:41:03 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:41:03.207+0000 7f0d6863c140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member Feb 20 04:41:03 localhost ceph-mgr[286565]: mgr[py] Module volumes has missing NOTIFY_TYPES member Feb 20 04:41:03 localhost ceph-mgr[286565]: mgr[py] Loading python module 'zabbix' Feb 20 04:41:03 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:41:03.415+0000 7f0d6863c140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member Feb 20 04:41:03 localhost ceph-mgr[286565]: mgr[py] Module zabbix has missing NOTIFY_TYPES member Feb 20 04:41:03 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:41:03.483+0000 7f0d6863c140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member Feb 20 04:41:03 localhost ceph-mgr[286565]: ms_deliver_dispatch: unhandled message 0x5628569bf1e0 mon_map magic: 0 from mon.1 v2:172.18.0.105:3300/0 Feb 20 04:41:03 localhost ceph-mgr[286565]: client.0 ms_handle_reset on v2:172.18.0.103:6800/1308191220 Feb 20 04:41:04 localhost podman[286744]: 2026-02-20 09:41:04.868053583 +0000 UTC m=+0.090993947 container exec 17bcd3d9a6436ea04160ed4c4e04d839e51c8678402dc4e38301f79a98b3dc30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, RELEASE=main, version=7, name=rhceph, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, release=1770267347, io.buildah.version=1.42.2, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, GIT_CLEAN=True) Feb 20 04:41:05 localhost podman[286744]: 2026-02-20 09:41:05.001758418 +0000 UTC m=+0.224698792 container exec_died 17bcd3d9a6436ea04160ed4c4e04d839e51c8678402dc4e38301f79a98b3dc30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202, CEPH_POINT_RELEASE=, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, distribution-scope=public, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, ceph=True, version=7, com.redhat.component=rhceph-container, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 
7 on RHEL 9, release=1770267347, io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , GIT_CLEAN=True, vendor=Red Hat, Inc.) Feb 20 04:41:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:41:05.907 161766 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:41:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:41:05.908 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:41:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:41:05.908 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:41:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e. 
Feb 20 04:41:07 localhost podman[286901]: 2026-02-20 09:41:07.443785214 +0000 UTC m=+0.084719048 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 20 04:41:07 localhost podman[286901]: 2026-02-20 09:41:07.454295497 +0000 UTC m=+0.095229321 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': 
['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 20 04:41:07 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully. 
Feb 20 04:41:16 localhost podman[241347]: time="2026-02-20T09:41:16Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 20 04:41:16 localhost podman[241347]: @ - - [20/Feb/2026:09:41:16 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155393 "" "Go-http-client/1.1" Feb 20 04:41:16 localhost podman[241347]: @ - - [20/Feb/2026:09:41:16 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18256 "" "Go-http-client/1.1" Feb 20 04:41:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:41:17.260 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:41:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:41:17.261 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:41:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:41:17.261 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:41:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:41:17.261 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:41:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:41:17.261 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:41:17 localhost 
ceilometer_agent_compute[236653]: 2026-02-20 09:41:17.262 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:41:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:41:17.262 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:41:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:41:17.262 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:41:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:41:17.262 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:41:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:41:17.262 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:41:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:41:17.262 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:41:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:41:17.263 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:41:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:41:17.263 
12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:41:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:41:17.263 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:41:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:41:17.263 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:41:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:41:17.263 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:41:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:41:17.263 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:41:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:41:17.263 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:41:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:41:17.263 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:41:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:41:17.264 12 DEBUG ceilometer.polling.manager [-] Skip 
pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:41:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:41:17.264 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:41:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:41:17.264 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:41:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:41:17.264 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:41:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:41:17.264 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:41:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:41:17.264 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:41:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217. 
Feb 20 04:41:19 localhost podman[287600]: 2026-02-20 09:41:19.443837039 +0000 UTC m=+0.081363160 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:41:19 localhost podman[287600]: 2026-02-20 09:41:19.461553355 +0000 UTC m=+0.099079486 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, managed_by=edpm_ansible, tcib_managed=true, 
org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:41:19 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully. Feb 20 04:41:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c. Feb 20 04:41:20 localhost podman[287620]: 2026-02-20 09:41:20.404370439 +0000 UTC m=+0.052775279 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, managed_by=edpm_ansible, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, architecture=x86_64, org.opencontainers.image.created=2026-02-05T04:57:10Z, maintainer=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., version=9.7, com.redhat.component=ubi9-minimal-container, build-date=2026-02-05T04:57:10Z, name=ubi9/ubi-minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Feb 20 04:41:20 localhost podman[287620]: 2026-02-20 09:41:20.440195702 +0000 UTC m=+0.088600592 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, io.buildah.version=1.33.7, managed_by=edpm_ansible, name=ubi9/ubi-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2026-02-05T04:57:10Z, vcs-type=git, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, release=1770267347, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-05T04:57:10Z, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c) Feb 20 04:41:20 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. Feb 20 04:41:20 localhost ceph-mgr[286565]: ms_deliver_dispatch: unhandled message 0x5628569bf1e0 mon_map magic: 0 from mon.1 v2:172.18.0.105:3300/0 Feb 20 04:41:23 localhost sshd[287641]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:41:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 04:41:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. Feb 20 04:41:23 localhost podman[287643]: 2026-02-20 09:41:23.424903087 +0000 UTC m=+0.064943307 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 
'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Feb 20 04:41:23 localhost podman[287643]: 2026-02-20 09:41:23.459175759 +0000 UTC m=+0.099216009 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible) Feb 20 04:41:23 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. Feb 20 04:41:23 localhost podman[287644]: 2026-02-20 09:41:23.550062431 +0000 UTC m=+0.186296598 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base 
Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent) Feb 20 04:41:23 localhost podman[287644]: 2026-02-20 09:41:23.579997656 +0000 UTC m=+0.216231793 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:41:23 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. Feb 20 04:41:26 localhost podman[287763]: Feb 20 04:41:26 localhost podman[287763]: 2026-02-20 09:41:26.385856434 +0000 UTC m=+0.077174896 container create f54fb9d31d04ac75cfec2ee880e14e445d3f63637524aef1be541ac303c37307 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_moore, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, GIT_BRANCH=main, io.buildah.version=1.42.2, distribution-scope=public, release=1770267347, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, GIT_CLEAN=True, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2026-02-09T10:25:24Z, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , version=7, com.redhat.component=rhceph-container) Feb 20 04:41:26 localhost systemd[1]: Started libpod-conmon-f54fb9d31d04ac75cfec2ee880e14e445d3f63637524aef1be541ac303c37307.scope. Feb 20 04:41:26 localhost systemd[1]: Started libcrun container. 
Feb 20 04:41:26 localhost podman[287763]: 2026-02-20 09:41:26.353902904 +0000 UTC m=+0.045221376 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 04:41:26 localhost podman[287763]: 2026-02-20 09:41:26.454313284 +0000 UTC m=+0.145631736 container init f54fb9d31d04ac75cfec2ee880e14e445d3f63637524aef1be541ac303c37307 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_moore, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, ceph=True, maintainer=Guillaume Abrioux , GIT_CLEAN=True, vcs-type=git, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_BRANCH=main, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public) Feb 20 04:41:26 localhost podman[287763]: 2026-02-20 09:41:26.466682196 +0000 UTC m=+0.158000658 container start f54fb9d31d04ac75cfec2ee880e14e445d3f63637524aef1be541ac303c37307 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_moore, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.openshift.expose-services=, distribution-scope=public, 
vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, RELEASE=main, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, io.buildah.version=1.42.2, name=rhceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, maintainer=Guillaume Abrioux , build-date=2026-02-09T10:25:24Z) Feb 20 04:41:26 localhost amazing_moore[287778]: 167 167 Feb 20 04:41:26 localhost systemd[1]: libpod-f54fb9d31d04ac75cfec2ee880e14e445d3f63637524aef1be541ac303c37307.scope: Deactivated successfully. 
Feb 20 04:41:26 localhost podman[287763]: 2026-02-20 09:41:26.467333854 +0000 UTC m=+0.158652336 container attach f54fb9d31d04ac75cfec2ee880e14e445d3f63637524aef1be541ac303c37307 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_moore, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, vcs-type=git, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, architecture=x86_64, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.openshift.expose-services=, distribution-scope=public, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.42.2, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, GIT_BRANCH=main) Feb 20 04:41:26 localhost podman[287763]: 2026-02-20 09:41:26.471597478 +0000 UTC m=+0.162915930 container died f54fb9d31d04ac75cfec2ee880e14e445d3f63637524aef1be541ac303c37307 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_moore, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, ceph=True, CEPH_POINT_RELEASE=, build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, architecture=x86_64, io.openshift.expose-services=, name=rhceph, GIT_CLEAN=True, 
io.openshift.tags=rhceph ceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.42.2, release=1770267347, description=Red Hat Ceph Storage 7, distribution-scope=public, org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 04:41:26 localhost podman[287783]: 2026-02-20 09:41:26.56798936 +0000 UTC m=+0.089007784 container remove f54fb9d31d04ac75cfec2ee880e14e445d3f63637524aef1be541ac303c37307 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_moore, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, GIT_BRANCH=main, ceph=True, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2026-02-09T10:25:24Z, release=1770267347, version=7, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, 
com.redhat.component=rhceph-container, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64) Feb 20 04:41:26 localhost systemd[1]: libpod-conmon-f54fb9d31d04ac75cfec2ee880e14e445d3f63637524aef1be541ac303c37307.scope: Deactivated successfully. Feb 20 04:41:26 localhost podman[287799]: Feb 20 04:41:26 localhost podman[287799]: 2026-02-20 09:41:26.681872111 +0000 UTC m=+0.078697867 container create ed2e8112cb4707a57a29b36c08fccd22c291d89552f678f03371c4ee21628c6a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_noether, GIT_BRANCH=main, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_CLEAN=True, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, RELEASE=main, architecture=x86_64, build-date=2026-02-09T10:25:24Z, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14) Feb 20 04:41:26 localhost systemd[1]: Started libpod-conmon-ed2e8112cb4707a57a29b36c08fccd22c291d89552f678f03371c4ee21628c6a.scope. 
Feb 20 04:41:26 localhost systemd[1]: Started libcrun container. Feb 20 04:41:26 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c1d12dc7c956adf404e9a2cb2602eb28e9ede94109a17f84c7ce4af4aac55f1/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff) Feb 20 04:41:26 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c1d12dc7c956adf404e9a2cb2602eb28e9ede94109a17f84c7ce4af4aac55f1/merged/tmp/config supports timestamps until 2038 (0x7fffffff) Feb 20 04:41:26 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c1d12dc7c956adf404e9a2cb2602eb28e9ede94109a17f84c7ce4af4aac55f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Feb 20 04:41:26 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c1d12dc7c956adf404e9a2cb2602eb28e9ede94109a17f84c7ce4af4aac55f1/merged/var/lib/ceph/mon/ceph-np0005625202 supports timestamps until 2038 (0x7fffffff) Feb 20 04:41:26 localhost podman[287799]: 2026-02-20 09:41:26.748871092 +0000 UTC m=+0.145696848 container init ed2e8112cb4707a57a29b36c08fccd22c291d89552f678f03371c4ee21628c6a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_noether, GIT_CLEAN=True, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, RELEASE=main, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, 
summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, CEPH_POINT_RELEASE=, release=1770267347, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, version=7, distribution-scope=public) Feb 20 04:41:26 localhost podman[287799]: 2026-02-20 09:41:26.651409562 +0000 UTC m=+0.048235378 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 04:41:26 localhost podman[287799]: 2026-02-20 09:41:26.758569443 +0000 UTC m=+0.155395209 container start ed2e8112cb4707a57a29b36c08fccd22c291d89552f678f03371c4ee21628c6a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_noether, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhceph ceph, release=1770267347, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, vcs-type=git, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, GIT_CLEAN=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main) Feb 20 04:41:26 localhost 
podman[287799]: 2026-02-20 09:41:26.75882446 +0000 UTC m=+0.155650246 container attach ed2e8112cb4707a57a29b36c08fccd22c291d89552f678f03371c4ee21628c6a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_noether, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, distribution-scope=public, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, ceph=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_CLEAN=True, RELEASE=main, io.openshift.tags=rhceph ceph, release=1770267347, architecture=x86_64, build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14) Feb 20 04:41:26 localhost systemd[1]: libpod-ed2e8112cb4707a57a29b36c08fccd22c291d89552f678f03371c4ee21628c6a.scope: Deactivated successfully. 
Feb 20 04:41:26 localhost podman[287799]: 2026-02-20 09:41:26.862447556 +0000 UTC m=+0.259273342 container died ed2e8112cb4707a57a29b36c08fccd22c291d89552f678f03371c4ee21628c6a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_noether, com.redhat.component=rhceph-container, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, vcs-type=git, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., ceph=True, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, io.openshift.expose-services=) Feb 20 04:41:26 localhost podman[287840]: 2026-02-20 09:41:26.955281591 +0000 UTC m=+0.082554071 container remove ed2e8112cb4707a57a29b36c08fccd22c291d89552f678f03371c4ee21628c6a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_noether, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, distribution-scope=public, com.redhat.component=rhceph-container, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.openshift.expose-services=, version=7, io.buildah.version=1.42.2, 
GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, name=rhceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , ceph=True, build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, vcs-type=git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 20 04:41:26 localhost systemd[1]: libpod-conmon-ed2e8112cb4707a57a29b36c08fccd22c291d89552f678f03371c4ee21628c6a.scope: Deactivated successfully. Feb 20 04:41:27 localhost systemd[1]: Reloading. Feb 20 04:41:27 localhost systemd-rc-local-generator[287881]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 04:41:27 localhost systemd-sysv-generator[287884]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Feb 20 04:41:27 localhost ceph-mgr[286565]: ms_deliver_dispatch: unhandled message 0x5628569bef20 mon_map magic: 0 from mon.1 v2:172.18.0.105:3300/0 Feb 20 04:41:27 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:41:27 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 20 04:41:27 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:41:27 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:41:27 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 04:41:27 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 20 04:41:27 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:41:27 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:41:27 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:41:27 localhost systemd[1]: var-lib-containers-storage-overlay-69dc7de7997b71bde11943977def5e1ab490122a9d3422547a062a1d2a30be77-merged.mount: Deactivated successfully. Feb 20 04:41:27 localhost systemd[1]: Reloading. Feb 20 04:41:27 localhost systemd-rc-local-generator[287921]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 04:41:27 localhost systemd-sysv-generator[287926]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. 
Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Feb 20 04:41:27 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:41:27 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 20 04:41:27 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:41:27 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:41:27 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 04:41:27 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 20 04:41:27 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:41:27 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:41:27 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:41:27 localhost systemd[1]: Starting Ceph mon.np0005625202 for a8557ee9-b55d-5519-942c-cf8f6172f1d8... 
Feb 20 04:41:28 localhost podman[287984]: Feb 20 04:41:28 localhost podman[287984]: 2026-02-20 09:41:28.106238671 +0000 UTC m=+0.059109570 container create 29aac7b9c05c78e0de9f028449b60b7edbb2dd4f67b10bbf243ea89d24959273 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mon-np0005625202, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, GIT_BRANCH=main, io.buildah.version=1.42.2, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, build-date=2026-02-09T10:25:24Z, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, io.openshift.tags=rhceph ceph, distribution-scope=public, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, release=1770267347, maintainer=Guillaume Abrioux , name=rhceph) Feb 20 04:41:28 localhost openstack_network_exporter[243776]: ERROR 09:41:28 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 20 04:41:28 localhost openstack_network_exporter[243776]: Feb 20 04:41:28 localhost openstack_network_exporter[243776]: ERROR 09:41:28 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 20 04:41:28 localhost openstack_network_exporter[243776]: Feb 20 04:41:28 localhost kernel: xfs filesystem being remounted at 
/var/lib/containers/storage/overlay/fafcb32677e5b477d44815a1775d6515f3325caf8ed1a33a7ee0ad3a9a456b2a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Feb 20 04:41:28 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fafcb32677e5b477d44815a1775d6515f3325caf8ed1a33a7ee0ad3a9a456b2a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Feb 20 04:41:28 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fafcb32677e5b477d44815a1775d6515f3325caf8ed1a33a7ee0ad3a9a456b2a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Feb 20 04:41:28 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fafcb32677e5b477d44815a1775d6515f3325caf8ed1a33a7ee0ad3a9a456b2a/merged/var/lib/ceph/mon/ceph-np0005625202 supports timestamps until 2038 (0x7fffffff) Feb 20 04:41:28 localhost podman[287984]: 2026-02-20 09:41:28.087250841 +0000 UTC m=+0.040121740 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 04:41:28 localhost podman[287984]: 2026-02-20 09:41:28.189983833 +0000 UTC m=+0.142854732 container init 29aac7b9c05c78e0de9f028449b60b7edbb2dd4f67b10bbf243ea89d24959273 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mon-np0005625202, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-09T10:25:24Z, RELEASE=main, distribution-scope=public, version=7, vcs-type=git, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , 
com.redhat.component=rhceph-container, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, ceph=True) Feb 20 04:41:28 localhost podman[287984]: 2026-02-20 09:41:28.203009842 +0000 UTC m=+0.155880741 container start 29aac7b9c05c78e0de9f028449b60b7edbb2dd4f67b10bbf243ea89d24959273 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mon-np0005625202, GIT_CLEAN=True, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, architecture=x86_64, ceph=True, com.redhat.component=rhceph-container, build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , release=1770267347, io.buildah.version=1.42.2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7) Feb 20 04:41:28 localhost bash[287984]: 
29aac7b9c05c78e0de9f028449b60b7edbb2dd4f67b10bbf243ea89d24959273 Feb 20 04:41:28 localhost systemd[1]: Started Ceph mon.np0005625202 for a8557ee9-b55d-5519-942c-cf8f6172f1d8. Feb 20 04:41:28 localhost ceph-mon[288002]: set uid:gid to 167:167 (ceph:ceph) Feb 20 04:41:28 localhost ceph-mon[288002]: ceph version 18.2.1-381.el9cp (984f410e2a30899deb131725765b62212b1621db) reef (stable), process ceph-mon, pid 2 Feb 20 04:41:28 localhost ceph-mon[288002]: pidfile_write: ignore empty --pid-file Feb 20 04:41:28 localhost ceph-mon[288002]: load: jerasure load: lrc Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: RocksDB version: 7.9.2 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Git sha 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Compile date 2026-02-06 00:00:00 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: DB SUMMARY Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: DB Session ID: DGD9G3SA2V4RN5JSQNL3 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: CURRENT file: CURRENT Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: IDENTITY file: IDENTITY Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: MANIFEST file: MANIFEST-000005 size: 59 Bytes Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: SST files in /var/lib/ceph/mon/ceph-np0005625202/store.db dir, Total Num: 0, files: Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-np0005625202/store.db: 000004.log size: 886 ; Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.error_if_exists: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.create_if_missing: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.paranoid_checks: 1 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.flush_verify_memtable_count: 1 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: 
Options.verify_sst_unique_id_in_manifest: 1 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.env: 0x5633d65f6a20 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.fs: PosixFileSystem Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.info_log: 0x5633d7d12d20 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.max_file_opening_threads: 16 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.statistics: (nil) Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.use_fsync: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.max_log_file_size: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.max_manifest_file_size: 1073741824 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.log_file_time_to_roll: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.keep_log_file_num: 1000 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.recycle_log_file_num: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.allow_fallocate: 1 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.allow_mmap_reads: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.allow_mmap_writes: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.use_direct_reads: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.create_missing_column_families: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.db_log_dir: Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.wal_dir: Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.table_cache_numshardbits: 6 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.WAL_ttl_seconds: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.WAL_size_limit_MB: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 Feb 20 
04:41:28 localhost ceph-mon[288002]: rocksdb: Options.manifest_preallocation_size: 4194304 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.is_fd_close_on_exec: 1 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.advise_random_on_open: 1 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.db_write_buffer_size: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.write_buffer_manager: 0x5633d7d23540 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.access_hint_on_compaction_start: 1 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.random_access_max_buffer_size: 1048576 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.use_adaptive_mutex: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.rate_limiter: (nil) Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.wal_recovery_mode: 2 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.enable_thread_tracking: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.enable_pipelined_write: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.unordered_write: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.allow_concurrent_memtable_write: 1 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.write_thread_max_yield_usec: 100 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.write_thread_slow_yield_usec: 3 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.row_cache: None Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.wal_filter: None Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.avoid_flush_during_recovery: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.allow_ingest_behind: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: 
rocksdb: Options.two_write_queues: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.manual_wal_flush: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.wal_compression: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.atomic_flush: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.persist_stats_to_disk: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.write_dbid_to_manifest: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.log_readahead_size: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.file_checksum_gen_factory: Unknown Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.best_efforts_recovery: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.max_bgerror_resume_count: 2147483647 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.allow_data_in_errors: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.db_host_id: __hostname__ Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.enforce_single_del_contracts: true Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.max_background_jobs: 2 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.max_background_compactions: -1 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.max_subcompactions: 1 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.avoid_flush_during_shutdown: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.writable_file_max_buffer_size: 1048576 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.delayed_write_rate : 16777216 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.max_total_wal_size: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.delete_obsolete_files_period_micros: 
21600000000 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.stats_dump_period_sec: 600 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.stats_persist_period_sec: 600 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.stats_history_buffer_size: 1048576 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.max_open_files: -1 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.bytes_per_sync: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.wal_bytes_per_sync: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.strict_bytes_per_sync: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.compaction_readahead_size: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.max_background_flushes: -1 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Compression algorithms supported: Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: #011kZSTD supported: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: #011kXpressCompression supported: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: #011kBZip2Compression supported: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: #011kZSTDNotFinalCompression supported: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: #011kLZ4Compression supported: 1 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: #011kZlibCompression supported: 1 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: #011kLZ4HCCompression supported: 1 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: #011kSnappyCompression supported: 1 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Fast CRC32 supported: Supported on x86 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: DMutex implementation: pthread_mutex_t Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-np0005625202/store.db/MANIFEST-000005 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: 
[db/column_family.cc:630] --------------- Options for column family [default]: Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.comparator: leveldb.BytewiseComparator Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.merge_operator: Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.compaction_filter: None Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.compaction_filter_factory: None Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.sst_partitioner_factory: None Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.memtable_factory: SkipListFactory Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.table_factory: BlockBasedTable Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633d7d12980)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x5633d7d0f350#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Feb 20 
04:41:28 localhost ceph-mon[288002]: rocksdb: Options.write_buffer_size: 33554432 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.max_write_buffer_number: 2 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.compression: NoCompression Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.bottommost_compression: Disabled Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.prefix_extractor: nullptr Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.num_levels: 7 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.min_write_buffer_number_to_merge: 1 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.bottommost_compression_opts.level: 32767 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.bottommost_compression_opts.enabled: false Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.compression_opts.window_bits: -14 Feb 20 
04:41:28 localhost ceph-mon[288002]: rocksdb: Options.compression_opts.level: 32767 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.compression_opts.strategy: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.compression_opts.parallel_threads: 1 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.compression_opts.enabled: false Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.level0_file_num_compaction_trigger: 4 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.level0_stop_writes_trigger: 36 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.target_file_size_base: 67108864 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.target_file_size_multiplier: 1 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.max_bytes_for_level_base: 268435456 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Feb 20 04:41:28 localhost 
ceph-mon[288002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.max_compaction_bytes: 1677721600 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.arena_block_size: 1048576 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.disable_auto_compactions: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.compaction_style: kCompactionStyleLevel Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 
1073741824 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.table_properties_collectors: Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.inplace_update_support: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.inplace_update_num_locks: 10000 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.memtable_whole_key_filtering: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.memtable_huge_page_size: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.bloom_locality: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.max_successive_merges: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.optimize_filters_for_hits: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.paranoid_file_checks: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.force_consistency_checks: 1 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.report_bg_io_stats: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.ttl: 2592000 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.periodic_compaction_seconds: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.preclude_last_level_data_seconds: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.preserve_internal_time_seconds: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.enable_blob_files: false Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.min_blob_size: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.blob_file_size: 268435456 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.blob_compression_type: NoCompression Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.enable_blob_garbage_collection: false Feb 20 
04:41:28 localhost ceph-mon[288002]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.blob_compaction_readahead_size: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.blob_file_starting_level: 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-np0005625202/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: ac9d4b37-b337-41da-8ec8-28a5abace8ee Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580488240400, "job": 1, "event": "recovery_started", "wal_files": [4]} Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580488242529, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 2012, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 898, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 776, "raw_average_value_size": 155, "num_data_blocks": 1, "num_entries": 5, 
"num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580488, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ac9d4b37-b337-41da-8ec8-28a5abace8ee", "db_session_id": "DGD9G3SA2V4RN5JSQNL3", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}} Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580488242672, "job": 1, "event": "recovery_finished"} Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: [db/version_set.cc:5047] Creating manifest 10 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5633d7d36e00 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: DB pointer 0x5633d7e2c000 Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 20 04:41:28 localhost ceph-mon[288002]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 
MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 1/0 1.96 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.9 0.00 0.00 1 0.002 0 0 0.0 0.0#012 Sum 1/0 1.96 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.9 0.00 0.00 1 0.002 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.9 0.00 0.00 1 0.002 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.9 0.00 0.00 1 0.002 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.14 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 
0.00 GB write, 0.14 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5633d7d0f350#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 2.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] ** Feb 20 04:41:28 localhost ceph-mon[288002]: mon.np0005625202 does not exist in monmap, will attempt to join an existing cluster Feb 20 04:41:28 localhost ceph-mon[288002]: using public_addr v2:172.18.0.106:0/0 -> [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] Feb 20 04:41:28 localhost ceph-mon[288002]: starting mon.np0005625202 rank -1 at public addrs [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] at bind addrs [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] mon_data /var/lib/ceph/mon/ceph-np0005625202 fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8 Feb 20 04:41:28 localhost ceph-mon[288002]: mon.np0005625202@-1(???) e0 preinit fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8 Feb 20 04:41:28 localhost ceph-mon[288002]: mon.np0005625202@-1(synchronizing) e5 sync_obtain_latest_monmap Feb 20 04:41:28 localhost ceph-mon[288002]: mon.np0005625202@-1(synchronizing) e5 sync_obtain_latest_monmap obtained monmap e5 Feb 20 04:41:28 localhost systemd[1]: tmp-crun.PjdO0T.mount: Deactivated successfully. 
Feb 20 04:41:28 localhost ceph-mon[288002]: mon.np0005625202@-1(synchronizing).mds e17 new map Feb 20 04:41:28 localhost ceph-mon[288002]: mon.np0005625202@-1(synchronizing).mds e17 print_map#012e17#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#01116#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-02-20T07:58:28.398421+0000#012modified#0112026-02-20T09:40:14.722031+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#01183#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=26854}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[6]#012metadata_pool#0117#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 26854 members: 26854#012[mds.mds.np0005625203.zsrwgk{0:26854} state up:active seq 13 addr [v2:172.18.0.107:6808/3334119751,v1:172.18.0.107:6809/3334119751] compat {c=[1],r=[1],i=[17ff]}]#012 #012 #012Standby daemons:#012 #012[mds.mds.np0005625202.akhmop{-1:17124} state up:standby seq 1 addr [v2:172.18.0.106:6808/3865978972,v1:172.18.0.106:6809/3865978972] compat {c=[1],r=[1],i=[17ff]}]#012[mds.mds.np0005625204.wnsphl{-1:26848} state up:standby seq 1 addr 
[v2:172.18.0.108:6808/2508223371,v1:172.18.0.108:6809/2508223371] compat {c=[1],r=[1],i=[17ff]}] Feb 20 04:41:28 localhost ceph-mon[288002]: mon.np0005625202@-1(synchronizing).osd e84 crush map has features 3314933000852226048, adjusting msgr requires Feb 20 04:41:28 localhost ceph-mon[288002]: mon.np0005625202@-1(synchronizing).osd e84 crush map has features 288514051259236352, adjusting msgr requires Feb 20 04:41:28 localhost ceph-mon[288002]: mon.np0005625202@-1(synchronizing).osd e84 crush map has features 288514051259236352, adjusting msgr requires Feb 20 04:41:28 localhost ceph-mon[288002]: mon.np0005625202@-1(synchronizing).osd e84 crush map has features 288514051259236352, adjusting msgr requires Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:28 localhost ceph-mon[288002]: Added label mgr to host np0005625202.localdomain Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:28 localhost ceph-mon[288002]: Added label mgr to host np0005625203.localdomain Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:28 localhost ceph-mon[288002]: Added label mgr to host np0005625204.localdomain Feb 20 04:41:28 localhost 
ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:28 localhost ceph-mon[288002]: Saving service mgr spec with placement label:mgr Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625202.arwxwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.np0005625202.arwxwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished Feb 20 04:41:28 localhost ceph-mon[288002]: Deploying daemon mgr.np0005625202.arwxwo on np0005625202.localdomain Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 
172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625203.lonygy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.np0005625203.lonygy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished Feb 20 04:41:28 localhost ceph-mon[288002]: Deploying daemon mgr.np0005625203.lonygy on np0005625203.localdomain Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:28 localhost ceph-mon[288002]: Added label mon to host np0005625199.localdomain Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:28 localhost ceph-mon[288002]: Added label _admin to host np0005625199.localdomain Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625204.exgrzx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.np0005625204.exgrzx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished Feb 20 04:41:28 localhost ceph-mon[288002]: Deploying 
daemon mgr.np0005625204.exgrzx on np0005625204.localdomain Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:28 localhost ceph-mon[288002]: Added label mon to host np0005625200.localdomain Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:28 localhost ceph-mon[288002]: Added label _admin to host np0005625200.localdomain Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:28 localhost ceph-mon[288002]: Added label mon to host np0005625201.localdomain Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' 
entity='mgr.np0005625199.ileebh' Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:28 localhost ceph-mon[288002]: Added label _admin to host np0005625201.localdomain Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:28 localhost ceph-mon[288002]: Added label mon to host np0005625202.localdomain Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 04:41:28 localhost ceph-mon[288002]: Added label _admin to host np0005625202.localdomain Feb 20 04:41:28 localhost ceph-mon[288002]: Updating np0005625202.localdomain:/etc/ceph/ceph.conf Feb 20 04:41:28 localhost ceph-mon[288002]: Updating 
np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:28 localhost ceph-mon[288002]: Added label mon to host np0005625203.localdomain Feb 20 04:41:28 localhost ceph-mon[288002]: Updating np0005625202.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:28 localhost ceph-mon[288002]: Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:28 localhost ceph-mon[288002]: Added label _admin to host np0005625203.localdomain Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 04:41:28 localhost ceph-mon[288002]: Updating np0005625203.localdomain:/etc/ceph/ceph.conf Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:28 localhost ceph-mon[288002]: Added label mon to host np0005625204.localdomain Feb 20 04:41:28 localhost ceph-mon[288002]: Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:41:28 localhost ceph-mon[288002]: Updating 
np0005625203.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:28 localhost ceph-mon[288002]: Added label _admin to host np0005625204.localdomain Feb 20 04:41:28 localhost ceph-mon[288002]: Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:28 localhost ceph-mon[288002]: Updating np0005625204.localdomain:/etc/ceph/ceph.conf Feb 20 04:41:28 localhost ceph-mon[288002]: Saving service mon spec with placement label:mon Feb 20 04:41:28 localhost ceph-mon[288002]: Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:41:28 localhost ceph-mon[288002]: Updating np0005625204.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 20 04:41:28 localhost ceph-mon[288002]: Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:28 localhost ceph-mon[288002]: 
from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 20 04:41:28 localhost ceph-mon[288002]: Deploying daemon mon.np0005625204 on np0005625204.localdomain Feb 20 04:41:28 localhost ceph-mon[288002]: mon.np0005625201 calling monitor election Feb 20 04:41:28 localhost ceph-mon[288002]: mon.np0005625199 calling monitor election Feb 20 04:41:28 localhost ceph-mon[288002]: mon.np0005625200 calling monitor election Feb 20 04:41:28 localhost ceph-mon[288002]: mon.np0005625204 calling monitor election Feb 20 04:41:28 localhost ceph-mon[288002]: mon.np0005625199 is new leader, mons np0005625199,np0005625201,np0005625200,np0005625204 in quorum (ranks 0,1,2,3) Feb 20 04:41:28 localhost ceph-mon[288002]: overall HEALTH_OK Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:28 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 20 04:41:28 localhost ceph-mon[288002]: Deploying daemon mon.np0005625202 on np0005625202.localdomain Feb 20 04:41:28 localhost ceph-mon[288002]: mon.np0005625202@-1(synchronizing).paxosservice(auth 1..34) refresh upgraded, format 0 -> 3 Feb 20 04:41:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1. 
Feb 20 04:41:29 localhost podman[288041]: 2026-02-20 09:41:29.433376256 +0000 UTC m=+0.070940447 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 20 04:41:29 localhost podman[288041]: 2026-02-20 09:41:29.447420674 +0000 UTC m=+0.084984875 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 20 04:41:29 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully. Feb 20 04:41:31 localhost sshd[288063]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:41:32 localhost ceph-mgr[286565]: ms_deliver_dispatch: unhandled message 0x5628569bf1e0 mon_map magic: 0 from mon.1 v2:172.18.0.105:3300/0 Feb 20 04:41:34 localhost systemd[1]: tmp-crun.yNQHgO.mount: Deactivated successfully. Feb 20 04:41:34 localhost podman[288190]: 2026-02-20 09:41:34.35164578 +0000 UTC m=+0.103150984 container exec 17bcd3d9a6436ea04160ed4c4e04d839e51c8678402dc4e38301f79a98b3dc30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202, distribution-scope=public, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, io.buildah.version=1.42.2, io.openshift.expose-services=, version=7, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, vcs-type=git, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, ceph=True, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc.) Feb 20 04:41:34 localhost ceph-mon[288002]: mon.np0005625202@-1(probing) e6 my rank is now 5 (was -1) Feb 20 04:41:34 localhost ceph-mon[288002]: log_channel(cluster) log [INF] : mon.np0005625202 calling monitor election Feb 20 04:41:34 localhost ceph-mon[288002]: paxos.5).electionLogic(0) init, first boot, initializing epoch at 1 Feb 20 04:41:34 localhost ceph-mon[288002]: mon.np0005625202@5(electing) e6 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 20 04:41:34 localhost podman[288190]: 2026-02-20 09:41:34.488038657 +0000 UTC m=+0.239543861 container exec_died 17bcd3d9a6436ea04160ed4c4e04d839e51c8678402dc4e38301f79a98b3dc30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-09T10:25:24Z, GIT_CLEAN=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, distribution-scope=public, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1770267347, vcs-type=git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, version=7, io.k8s.description=Red Hat Ceph 
Storage 7, architecture=x86_64) Feb 20 04:41:37 localhost ceph-mon[288002]: mon.np0005625202@5(electing) e6 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 20 04:41:37 localhost ceph-mon[288002]: mon.np0005625202@5(peon) e6 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code} Feb 20 04:41:37 localhost ceph-mon[288002]: mon.np0005625202@5(peon) e6 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout} Feb 20 04:41:37 localhost ceph-mon[288002]: mon.np0005625200 calling monitor election Feb 20 04:41:37 localhost ceph-mon[288002]: mon.np0005625201 calling monitor election Feb 20 04:41:37 localhost ceph-mon[288002]: mon.np0005625199 calling monitor election Feb 20 04:41:37 localhost ceph-mon[288002]: mon.np0005625204 calling monitor election Feb 20 04:41:37 localhost ceph-mon[288002]: mon.np0005625199 is new leader, mons np0005625199,np0005625201,np0005625200,np0005625204 in quorum (ranks 0,1,2,3) Feb 20 04:41:37 localhost ceph-mon[288002]: Health check failed: 1/5 mons down, quorum np0005625199,np0005625201,np0005625200,np0005625204 (MON_DOWN) Feb 20 04:41:37 localhost ceph-mon[288002]: Health detail: HEALTH_WARN 1/5 mons down, quorum np0005625199,np0005625201,np0005625200,np0005625204 Feb 20 04:41:37 localhost ceph-mon[288002]: [WRN] MON_DOWN: 1/5 mons down, quorum np0005625199,np0005625201,np0005625200,np0005625204 Feb 20 04:41:37 localhost ceph-mon[288002]: mon.np0005625203 (rank 4) addr [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] is down (out of quorum) Feb 20 04:41:37 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' 
entity='mgr.np0005625199.ileebh' Feb 20 04:41:37 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:37 localhost ceph-mon[288002]: mon.np0005625202@5(peon) e6 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 20 04:41:37 localhost ceph-mon[288002]: mgrc update_daemon_metadata mon.np0005625202 metadata {addrs=[v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.1-381.el9cp (984f410e2a30899deb131725765b62212b1621db) reef (stable),ceph_version_short=18.2.1-381.el9cp,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=np0005625202.localdomain,container_image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=rhel,distro_description=Red Hat Enterprise Linux 9.7 (Plow),distro_version=9.7,hostname=np0005625202.localdomain,kernel_description=#1 SMP PREEMPT_DYNAMIC Wed Apr 12 10:45:03 EDT 2023,kernel_version=5.14.0-284.11.1.el9_2.x86_64,mem_swap_kb=1048572,mem_total_kb=16116612,os=Linux} Feb 20 04:41:37 localhost ceph-mon[288002]: mon.np0005625203 calling monitor election Feb 20 04:41:37 localhost ceph-mon[288002]: mon.np0005625201 calling monitor election Feb 20 04:41:37 localhost ceph-mon[288002]: mon.np0005625199 calling monitor election Feb 20 04:41:37 localhost ceph-mon[288002]: mon.np0005625204 calling monitor election Feb 20 04:41:37 localhost ceph-mon[288002]: mon.np0005625200 calling monitor election Feb 20 04:41:37 localhost ceph-mon[288002]: mon.np0005625203 calling monitor election Feb 20 04:41:37 localhost ceph-mon[288002]: mon.np0005625202 calling monitor election Feb 20 04:41:37 localhost ceph-mon[288002]: mon.np0005625199 is new leader, mons np0005625199,np0005625201,np0005625200,np0005625204,np0005625203,np0005625202 in quorum (ranks 0,1,2,3,4,5) Feb 20 04:41:37 
localhost ceph-mon[288002]: Health check cleared: MON_DOWN (was: 1/5 mons down, quorum np0005625199,np0005625201,np0005625200,np0005625204) Feb 20 04:41:37 localhost ceph-mon[288002]: Cluster is now healthy Feb 20 04:41:37 localhost ceph-mon[288002]: overall HEALTH_OK Feb 20 04:41:37 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:37 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e. Feb 20 04:41:37 localhost systemd[1]: tmp-crun.TB1z9e.mount: Deactivated successfully. Feb 20 04:41:37 localhost podman[288326]: 2026-02-20 09:41:37.86960975 +0000 UTC m=+0.100176425 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 20 04:41:37 localhost podman[288326]: 2026-02-20 09:41:37.910740726 +0000 UTC m=+0.141307401 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 20 04:41:37 localhost 
systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully. Feb 20 04:41:39 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:39 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:39 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:39 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:39 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:39 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 04:41:39 localhost ceph-mon[288002]: Updating np0005625199.localdomain:/etc/ceph/ceph.conf Feb 20 04:41:39 localhost ceph-mon[288002]: Updating np0005625200.localdomain:/etc/ceph/ceph.conf Feb 20 04:41:39 localhost ceph-mon[288002]: Updating np0005625201.localdomain:/etc/ceph/ceph.conf Feb 20 04:41:39 localhost ceph-mon[288002]: Updating np0005625202.localdomain:/etc/ceph/ceph.conf Feb 20 04:41:39 localhost ceph-mon[288002]: Updating np0005625203.localdomain:/etc/ceph/ceph.conf Feb 20 04:41:39 localhost ceph-mon[288002]: Updating np0005625204.localdomain:/etc/ceph/ceph.conf Feb 20 04:41:40 localhost ceph-mon[288002]: Updating np0005625199.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:41:40 localhost ceph-mon[288002]: Updating np0005625200.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:41:40 localhost ceph-mon[288002]: Updating np0005625201.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:41:40 localhost ceph-mon[288002]: Updating 
np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:41:40 localhost ceph-mon[288002]: Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:41:40 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:40 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:40 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:40 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:40 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:40 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:40 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:40 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:40 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:40 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:41 localhost ceph-mon[288002]: Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:41:41 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:41 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:41 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:41 localhost ceph-mon[288002]: 
from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 20 04:41:42 localhost ceph-mon[288002]: Reconfiguring mon.np0005625199 (monmap changed)... Feb 20 04:41:42 localhost ceph-mon[288002]: Reconfiguring daemon mon.np0005625199 on np0005625199.localdomain Feb 20 04:41:42 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:43 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:43 localhost ceph-mon[288002]: Reconfiguring mgr.np0005625199.ileebh (monmap changed)... Feb 20 04:41:43 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625199.ileebh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 20 04:41:43 localhost ceph-mon[288002]: Reconfiguring daemon mgr.np0005625199.ileebh on np0005625199.localdomain Feb 20 04:41:43 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:43 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:44 localhost nova_compute[280804]: 2026-02-20 09:41:44.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:41:44 localhost nova_compute[280804]: 2026-02-20 09:41:44.510 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 20 04:41:44 localhost nova_compute[280804]: 2026-02-20 
09:41:44.511 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 20 04:41:44 localhost nova_compute[280804]: 2026-02-20 09:41:44.535 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 20 04:41:44 localhost nova_compute[280804]: 2026-02-20 09:41:44.536 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:41:44 localhost nova_compute[280804]: 2026-02-20 09:41:44.556 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:41:44 localhost nova_compute[280804]: 2026-02-20 09:41:44.557 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:41:44 localhost nova_compute[280804]: 2026-02-20 09:41:44.558 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:41:44 localhost nova_compute[280804]: 2026-02-20 09:41:44.558 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Auditing locally available compute resources for np0005625202.localdomain (node: np0005625202.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 20 04:41:44 localhost nova_compute[280804]: 2026-02-20 09:41:44.559 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:41:44 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:44 localhost ceph-mon[288002]: Reconfiguring crash.np0005625199 (monmap changed)... 
Feb 20 04:41:44 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625199", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 20 04:41:44 localhost ceph-mon[288002]: Reconfiguring daemon crash.np0005625199 on np0005625199.localdomain Feb 20 04:41:44 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:44 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:44 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625200", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 20 04:41:45 localhost nova_compute[280804]: 2026-02-20 09:41:45.016 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:41:45 localhost nova_compute[280804]: 2026-02-20 09:41:45.215 280808 WARNING nova.virt.libvirt.driver [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 20 04:41:45 localhost nova_compute[280804]: 2026-02-20 09:41:45.217 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Hypervisor/Node resource view: name=np0005625202.localdomain free_ram=11995MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", 
"product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 20 04:41:45 localhost nova_compute[280804]: 2026-02-20 09:41:45.217 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:41:45 localhost nova_compute[280804]: 2026-02-20 09:41:45.218 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:41:45 localhost nova_compute[280804]: 2026-02-20 09:41:45.300 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 20 04:41:45 localhost nova_compute[280804]: 2026-02-20 09:41:45.301 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Final resource view: name=np0005625202.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 20 04:41:45 localhost nova_compute[280804]: 2026-02-20 09:41:45.316 280808 DEBUG 
oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:41:45 localhost nova_compute[280804]: 2026-02-20 09:41:45.826 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:41:45 localhost nova_compute[280804]: 2026-02-20 09:41:45.833 280808 DEBUG nova.compute.provider_tree [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 20 04:41:45 localhost nova_compute[280804]: 2026-02-20 09:41:45.848 280808 DEBUG nova.scheduler.client.report [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Inventory has not changed for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 20 04:41:45 localhost nova_compute[280804]: 2026-02-20 09:41:45.850 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Compute_service record updated for np0005625202.localdomain:np0005625202.localdomain _update_available_resource 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 20 04:41:45 localhost nova_compute[280804]: 2026-02-20 09:41:45.851 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:41:45 localhost ceph-mon[288002]: Reconfiguring crash.np0005625200 (monmap changed)... Feb 20 04:41:45 localhost ceph-mon[288002]: Reconfiguring daemon crash.np0005625200 on np0005625200.localdomain Feb 20 04:41:45 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:45 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' Feb 20 04:41:45 localhost ceph-mon[288002]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 20 04:41:46 localhost ceph-mon[288002]: mon.np0005625202@5(peon).osd e84 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375 Feb 20 04:41:46 localhost ceph-mon[288002]: mon.np0005625202@5(peon).osd e84 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1 Feb 20 04:41:46 localhost ceph-mon[288002]: mon.np0005625202@5(peon).osd e85 e85: 6 total, 6 up, 6 in Feb 20 04:41:46 localhost systemd[1]: session-14.scope: Deactivated successfully. Feb 20 04:41:46 localhost systemd[1]: session-18.scope: Deactivated successfully. Feb 20 04:41:46 localhost systemd[1]: session-22.scope: Deactivated successfully. Feb 20 04:41:46 localhost systemd[1]: session-26.scope: Deactivated successfully. Feb 20 04:41:46 localhost systemd[1]: session-26.scope: Consumed 3min 21.051s CPU time. 
Feb 20 04:41:46 localhost systemd[1]: session-16.scope: Deactivated successfully. Feb 20 04:41:46 localhost systemd[1]: session-23.scope: Deactivated successfully. Feb 20 04:41:46 localhost systemd[1]: session-25.scope: Deactivated successfully. Feb 20 04:41:46 localhost systemd[1]: session-20.scope: Deactivated successfully. Feb 20 04:41:46 localhost systemd[1]: session-19.scope: Deactivated successfully. Feb 20 04:41:46 localhost systemd[1]: session-17.scope: Deactivated successfully. Feb 20 04:41:46 localhost systemd[1]: session-21.scope: Deactivated successfully. Feb 20 04:41:46 localhost systemd[1]: session-24.scope: Deactivated successfully. Feb 20 04:41:46 localhost systemd-logind[760]: Session 22 logged out. Waiting for processes to exit. Feb 20 04:41:46 localhost systemd-logind[760]: Session 14 logged out. Waiting for processes to exit. Feb 20 04:41:46 localhost systemd-logind[760]: Session 18 logged out. Waiting for processes to exit. Feb 20 04:41:46 localhost podman[241347]: time="2026-02-20T09:41:46Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 20 04:41:46 localhost systemd-logind[760]: Session 21 logged out. Waiting for processes to exit. Feb 20 04:41:46 localhost systemd-logind[760]: Session 19 logged out. Waiting for processes to exit. Feb 20 04:41:46 localhost systemd-logind[760]: Session 25 logged out. Waiting for processes to exit. Feb 20 04:41:46 localhost systemd-logind[760]: Session 20 logged out. Waiting for processes to exit. Feb 20 04:41:46 localhost systemd-logind[760]: Session 23 logged out. Waiting for processes to exit. Feb 20 04:41:46 localhost systemd-logind[760]: Session 16 logged out. Waiting for processes to exit. Feb 20 04:41:46 localhost systemd-logind[760]: Session 17 logged out. Waiting for processes to exit. Feb 20 04:41:46 localhost systemd-logind[760]: Session 26 logged out. Waiting for processes to exit. Feb 20 04:41:46 localhost systemd-logind[760]: Session 24 logged out. 
Waiting for processes to exit. Feb 20 04:41:46 localhost systemd-logind[760]: Removed session 14. Feb 20 04:41:46 localhost systemd-logind[760]: Removed session 18. Feb 20 04:41:46 localhost systemd-logind[760]: Removed session 22. Feb 20 04:41:46 localhost systemd-logind[760]: Removed session 26. Feb 20 04:41:46 localhost systemd-logind[760]: Removed session 16. Feb 20 04:41:46 localhost podman[241347]: @ - - [20/Feb/2026:09:41:46 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157716 "" "Go-http-client/1.1" Feb 20 04:41:46 localhost systemd-logind[760]: Removed session 23. Feb 20 04:41:46 localhost systemd-logind[760]: Removed session 25. Feb 20 04:41:46 localhost systemd-logind[760]: Removed session 20. Feb 20 04:41:46 localhost systemd-logind[760]: Removed session 19. Feb 20 04:41:46 localhost systemd-logind[760]: Removed session 17. Feb 20 04:41:46 localhost systemd-logind[760]: Removed session 21. Feb 20 04:41:46 localhost systemd-logind[760]: Removed session 24. Feb 20 04:41:46 localhost podman[241347]: @ - - [20/Feb/2026:09:41:46 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18743 "" "Go-http-client/1.1" Feb 20 04:41:46 localhost sshd[288713]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:41:46 localhost systemd-logind[760]: New session 64 of user ceph-admin. Feb 20 04:41:46 localhost systemd[1]: Started Session 64 of User ceph-admin. 
Feb 20 04:41:46 localhost nova_compute[280804]: 2026-02-20 09:41:46.826 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:41:46 localhost nova_compute[280804]: 2026-02-20 09:41:46.827 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:41:46 localhost nova_compute[280804]: 2026-02-20 09:41:46.828 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:41:46 localhost nova_compute[280804]: 2026-02-20 09:41:46.828 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 20 04:41:46 localhost ceph-mon[288002]: from='client.? 172.18.0.103:0/2662030267' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Feb 20 04:41:46 localhost ceph-mon[288002]: Activating manager daemon np0005625201.mtnyvu Feb 20 04:41:46 localhost ceph-mon[288002]: from='client.? 
172.18.0.103:0/2662030267' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Feb 20 04:41:46 localhost ceph-mon[288002]: Manager daemon np0005625201.mtnyvu is now available Feb 20 04:41:46 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625201.mtnyvu/mirror_snapshot_schedule"} : dispatch Feb 20 04:41:46 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625201.mtnyvu/mirror_snapshot_schedule"} : dispatch Feb 20 04:41:46 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625201.mtnyvu/trash_purge_schedule"} : dispatch Feb 20 04:41:46 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625201.mtnyvu/trash_purge_schedule"} : dispatch Feb 20 04:41:47 localhost nova_compute[280804]: 2026-02-20 09:41:47.506 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:41:47 localhost nova_compute[280804]: 2026-02-20 09:41:47.520 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:41:47 localhost nova_compute[280804]: 2026-02-20 09:41:47.520 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances 
run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:41:47 localhost podman[288824]: 2026-02-20 09:41:47.600684321 +0000 UTC m=+0.085377196 container exec 17bcd3d9a6436ea04160ed4c4e04d839e51c8678402dc4e38301f79a98b3dc30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2026-02-09T10:25:24Z, name=rhceph, CEPH_POINT_RELEASE=, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, build-date=2026-02-09T10:25:24Z, version=7, io.buildah.version=1.42.2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, ceph=True, vendor=Red Hat, Inc., release=1770267347, vcs-type=git, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph) Feb 20 04:41:47 localhost podman[288824]: 2026-02-20 09:41:47.702004625 +0000 UTC m=+0.186697510 container exec_died 17bcd3d9a6436ea04160ed4c4e04d839e51c8678402dc4e38301f79a98b3dc30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, 
ceph=True, RELEASE=main, vcs-type=git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, version=7, GIT_CLEAN=True, vendor=Red Hat, Inc.) Feb 20 04:41:48 localhost ceph-mon[288002]: mon.np0005625202@5(peon).osd e85 _set_new_cache_sizes cache_size:1019698582 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:41:48 localhost nova_compute[280804]: 2026-02-20 09:41:48.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:41:48 localhost sshd[288980]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:41:49 localhost ceph-mon[288002]: [20/Feb/2026:09:41:47] ENGINE Bus STARTING Feb 20 04:41:49 localhost ceph-mon[288002]: [20/Feb/2026:09:41:47] ENGINE Serving on http://172.18.0.105:8765 Feb 20 04:41:49 localhost ceph-mon[288002]: [20/Feb/2026:09:41:48] ENGINE Serving on https://172.18.0.105:7150 Feb 20 04:41:49 localhost ceph-mon[288002]: [20/Feb/2026:09:41:48] ENGINE Bus STARTED Feb 20 04:41:49 localhost ceph-mon[288002]: 
[20/Feb/2026:09:41:48] ENGINE Client ('172.18.0.105', 35862) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Feb 20 04:41:49 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' Feb 20 04:41:49 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' Feb 20 04:41:49 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' Feb 20 04:41:49 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' Feb 20 04:41:49 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' Feb 20 04:41:49 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' Feb 20 04:41:49 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' Feb 20 04:41:49 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' Feb 20 04:41:49 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' Feb 20 04:41:49 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' Feb 20 04:41:49 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' Feb 20 04:41:49 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' Feb 20 04:41:49 localhost nova_compute[280804]: 2026-02-20 09:41:49.511 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:41:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217. 
Feb 20 04:41:50 localhost podman[289087]: 2026-02-20 09:41:50.170720878 +0000 UTC m=+0.081125641 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, 
managed_by=edpm_ansible) Feb 20 04:41:50 localhost podman[289087]: 2026-02-20 09:41:50.210783205 +0000 UTC m=+0.121187978 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ceilometer_agent_compute, 
container_name=ceilometer_agent_compute) Feb 20 04:41:50 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully. Feb 20 04:41:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c. Feb 20 04:41:50 localhost podman[289213]: 2026-02-20 09:41:50.620047657 +0000 UTC m=+0.064921066 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, version=9.7, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-05T04:57:10Z, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, io.openshift.tags=minimal rhel9, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, maintainer=Red Hat, Inc., io.buildah.version=1.33.7) Feb 20 04:41:50 localhost podman[289213]: 2026-02-20 09:41:50.632678907 +0000 UTC m=+0.077552236 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, org.opencontainers.image.created=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, release=1770267347, 
architecture=x86_64, build-date=2026-02-05T04:57:10Z, version=9.7, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, config_id=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.buildah.version=1.33.7, name=ubi9/ubi-minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c) Feb 20 04:41:50 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. 
Feb 20 04:41:50 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' Feb 20 04:41:50 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' Feb 20 04:41:50 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config rm", "who": "osd/host:np0005625201", "name": "osd_memory_target"} : dispatch Feb 20 04:41:50 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config rm", "who": "osd/host:np0005625201", "name": "osd_memory_target"} : dispatch Feb 20 04:41:50 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' Feb 20 04:41:50 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' Feb 20 04:41:50 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config rm", "who": "osd/host:np0005625200", "name": "osd_memory_target"} : dispatch Feb 20 04:41:50 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config rm", "who": "osd/host:np0005625200", "name": "osd_memory_target"} : dispatch Feb 20 04:41:50 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' Feb 20 04:41:50 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' Feb 20 04:41:50 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Feb 20 04:41:50 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Feb 20 04:41:50 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' Feb 20 04:41:50 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' Feb 20 04:41:50 localhost 
ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' Feb 20 04:41:50 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' Feb 20 04:41:50 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Feb 20 04:41:50 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Feb 20 04:41:50 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Feb 20 04:41:50 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Feb 20 04:41:50 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' Feb 20 04:41:50 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Feb 20 04:41:50 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Feb 20 04:41:50 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' Feb 20 04:41:50 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config rm", "who": "osd/host:np0005625199", "name": "osd_memory_target"} : dispatch Feb 20 04:41:50 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config rm", "who": "osd/host:np0005625199", "name": "osd_memory_target"} : dispatch Feb 20 04:41:50 localhost ceph-mon[288002]: from='mgr.14196 
172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Feb 20 04:41:50 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Feb 20 04:41:50 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Feb 20 04:41:50 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Feb 20 04:41:50 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 04:41:51 localhost sshd[289520]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:41:51 localhost ceph-mon[288002]: Adjusting osd_memory_target on np0005625202.localdomain to 836.6M Feb 20 04:41:51 localhost ceph-mon[288002]: Unable to set osd_memory_target on np0005625202.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 20 04:41:51 localhost ceph-mon[288002]: Adjusting osd_memory_target on np0005625203.localdomain to 836.6M Feb 20 04:41:51 localhost ceph-mon[288002]: Adjusting osd_memory_target on np0005625204.localdomain to 836.6M Feb 20 04:41:51 localhost ceph-mon[288002]: Unable to set osd_memory_target on np0005625203.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 20 04:41:51 localhost ceph-mon[288002]: Unable to set osd_memory_target on np0005625204.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 20 04:41:51 localhost ceph-mon[288002]: Updating np0005625199.localdomain:/etc/ceph/ceph.conf Feb 20 04:41:51 localhost ceph-mon[288002]: 
Updating np0005625200.localdomain:/etc/ceph/ceph.conf Feb 20 04:41:51 localhost ceph-mon[288002]: Updating np0005625201.localdomain:/etc/ceph/ceph.conf Feb 20 04:41:51 localhost ceph-mon[288002]: Updating np0005625202.localdomain:/etc/ceph/ceph.conf Feb 20 04:41:51 localhost ceph-mon[288002]: Updating np0005625203.localdomain:/etc/ceph/ceph.conf Feb 20 04:41:51 localhost ceph-mon[288002]: Updating np0005625204.localdomain:/etc/ceph/ceph.conf Feb 20 04:41:51 localhost ceph-mon[288002]: Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:41:51 localhost ceph-mon[288002]: Updating np0005625201.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:41:51 localhost ceph-mon[288002]: Updating np0005625199.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:41:51 localhost ceph-mon[288002]: Updating np0005625200.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:41:51 localhost ceph-mon[288002]: Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:41:51 localhost ceph-mon[288002]: Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:41:52 localhost ceph-mon[288002]: Updating np0005625202.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 20 04:41:52 localhost ceph-mon[288002]: Updating np0005625199.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 20 04:41:52 localhost ceph-mon[288002]: Updating np0005625204.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 20 04:41:52 localhost ceph-mon[288002]: Updating np0005625200.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 20 04:41:52 localhost ceph-mon[288002]: Updating np0005625201.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 20 04:41:52 localhost ceph-mon[288002]: Updating 
np0005625203.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 20 04:41:52 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' Feb 20 04:41:52 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' Feb 20 04:41:52 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' Feb 20 04:41:53 localhost ceph-mon[288002]: mon.np0005625202@5(peon).osd e85 _set_new_cache_sizes cache_size:1020046875 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:41:53 localhost ceph-mon[288002]: Updating np0005625199.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring Feb 20 04:41:53 localhost ceph-mon[288002]: Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring Feb 20 04:41:53 localhost ceph-mon[288002]: Updating np0005625201.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring Feb 20 04:41:53 localhost ceph-mon[288002]: Updating np0005625200.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring Feb 20 04:41:53 localhost ceph-mon[288002]: Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring Feb 20 04:41:53 localhost ceph-mon[288002]: Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring Feb 20 04:41:53 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' Feb 20 04:41:53 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' Feb 20 04:41:53 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' Feb 20 04:41:53 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' Feb 20 04:41:53 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' Feb 20 04:41:53 
localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' Feb 20 04:41:53 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' Feb 20 04:41:53 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' Feb 20 04:41:53 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' Feb 20 04:41:53 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' Feb 20 04:41:53 localhost ceph-mon[288002]: Reconfiguring mon.np0005625200 (monmap changed)... Feb 20 04:41:53 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 20 04:41:53 localhost ceph-mon[288002]: Reconfiguring daemon mon.np0005625200 on np0005625200.localdomain Feb 20 04:41:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 04:41:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. 
Feb 20 04:41:54 localhost podman[289771]: 2026-02-20 09:41:54.448104664 +0000 UTC m=+0.080575697 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller) Feb 20 04:41:54 localhost podman[289772]: 2026-02-20 09:41:54.55100249 +0000 UTC m=+0.177123293 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, 
container_name=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Feb 20 04:41:54 localhost podman[289771]: 2026-02-20 09:41:54.566715572 +0000 UTC m=+0.199186545 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, 
config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:41:54 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. 
Feb 20 04:41:54 localhost podman[289772]: 2026-02-20 09:41:54.583779191 +0000 UTC m=+0.209899964 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent) Feb 20 04:41:54 localhost systemd[1]: 
eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. Feb 20 04:41:55 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' Feb 20 04:41:55 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' Feb 20 04:41:55 localhost ceph-mon[288002]: Reconfiguring mgr.np0005625200.ypbkax (monmap changed)... Feb 20 04:41:55 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625200.ypbkax", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 20 04:41:55 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625200.ypbkax", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 20 04:41:55 localhost ceph-mon[288002]: Reconfiguring daemon mgr.np0005625200.ypbkax on np0005625200.localdomain Feb 20 04:41:55 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' Feb 20 04:41:55 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' Feb 20 04:41:55 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 20 04:41:56 localhost ceph-mon[288002]: Reconfiguring mon.np0005625201 (monmap changed)... 
Feb 20 04:41:56 localhost ceph-mon[288002]: Reconfiguring daemon mon.np0005625201 on np0005625201.localdomain Feb 20 04:41:56 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' Feb 20 04:41:56 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' Feb 20 04:41:56 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625201.mtnyvu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 20 04:41:56 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625201.mtnyvu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 20 04:41:56 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' Feb 20 04:41:57 localhost ceph-mon[288002]: Reconfiguring mgr.np0005625201.mtnyvu (monmap changed)... 
Feb 20 04:41:57 localhost ceph-mon[288002]: Reconfiguring daemon mgr.np0005625201.mtnyvu on np0005625201.localdomain Feb 20 04:41:57 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' Feb 20 04:41:57 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' Feb 20 04:41:57 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625201", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 20 04:41:57 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625201", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 20 04:41:58 localhost openstack_network_exporter[243776]: ERROR 09:41:58 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 20 04:41:58 localhost openstack_network_exporter[243776]: Feb 20 04:41:58 localhost openstack_network_exporter[243776]: ERROR 09:41:58 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 20 04:41:58 localhost openstack_network_exporter[243776]: Feb 20 04:41:58 localhost ceph-mon[288002]: mon.np0005625202@5(peon).osd e85 _set_new_cache_sizes cache_size:1020054557 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:41:58 localhost ceph-mon[288002]: Reconfiguring crash.np0005625201 (monmap changed)... 
Feb 20 04:41:58 localhost ceph-mon[288002]: Reconfiguring daemon crash.np0005625201 on np0005625201.localdomain
Feb 20 04:41:58 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:41:58 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:41:58 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625202", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 04:41:58 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625202", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 04:41:58 localhost podman[289866]:
Feb 20 04:41:58 localhost podman[289866]: 2026-02-20 09:41:58.799436457 +0000 UTC m=+0.079655312 container create 7a17055b17fd854caca7f3bad5f37ed09b1ff93eb4f85acd0d7683e5365f8dd2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_vaughan, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , RELEASE=main, io.buildah.version=1.42.2, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, architecture=x86_64, vendor=Red Hat, Inc., version=7, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z)
Feb 20 04:41:58 localhost systemd[1]: Started libpod-conmon-7a17055b17fd854caca7f3bad5f37ed09b1ff93eb4f85acd0d7683e5365f8dd2.scope.
Feb 20 04:41:58 localhost systemd[1]: Started libcrun container.
Feb 20 04:41:58 localhost podman[289866]: 2026-02-20 09:41:58.861241719 +0000 UTC m=+0.141460574 container init 7a17055b17fd854caca7f3bad5f37ed09b1ff93eb4f85acd0d7683e5365f8dd2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_vaughan, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , RELEASE=main, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, io.buildah.version=1.42.2, version=7, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers)
Feb 20 04:41:58 localhost podman[289866]: 2026-02-20 09:41:58.766507342 +0000 UTC m=+0.046726177 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 04:41:58 localhost podman[289866]: 2026-02-20 09:41:58.874373922 +0000 UTC m=+0.154592787 container start 7a17055b17fd854caca7f3bad5f37ed09b1ff93eb4f85acd0d7683e5365f8dd2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_vaughan, io.k8s.description=Red Hat Ceph Storage 7, release=1770267347, maintainer=Guillaume Abrioux , GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-09T10:25:24Z, RELEASE=main, io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, GIT_BRANCH=main, architecture=x86_64, com.redhat.component=rhceph-container, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.openshift.expose-services=)
Feb 20 04:41:58 localhost podman[289866]: 2026-02-20 09:41:58.874641419 +0000 UTC m=+0.154860274 container attach 7a17055b17fd854caca7f3bad5f37ed09b1ff93eb4f85acd0d7683e5365f8dd2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_vaughan, vcs-type=git, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, version=7, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, distribution-scope=public, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, vendor=Red Hat, Inc., RELEASE=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, name=rhceph, release=1770267347, architecture=x86_64, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7)
Feb 20 04:41:58 localhost sleepy_vaughan[289881]: 167 167
Feb 20 04:41:58 localhost systemd[1]: libpod-7a17055b17fd854caca7f3bad5f37ed09b1ff93eb4f85acd0d7683e5365f8dd2.scope: Deactivated successfully.
Feb 20 04:41:58 localhost podman[289866]: 2026-02-20 09:41:58.879687625 +0000 UTC m=+0.159906500 container died 7a17055b17fd854caca7f3bad5f37ed09b1ff93eb4f85acd0d7683e5365f8dd2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_vaughan, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, vcs-type=git, release=1770267347, ceph=True, GIT_CLEAN=True, maintainer=Guillaume Abrioux , version=7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, architecture=x86_64)
Feb 20 04:41:58 localhost podman[289886]: 2026-02-20 09:41:58.980962227 +0000 UTC m=+0.091896871 container remove 7a17055b17fd854caca7f3bad5f37ed09b1ff93eb4f85acd0d7683e5365f8dd2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_vaughan, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, vcs-type=git, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z, ceph=True, distribution-scope=public, CEPH_POINT_RELEASE=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, release=1770267347, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , architecture=x86_64, name=rhceph, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, GIT_BRANCH=main)
Feb 20 04:41:58 localhost systemd[1]: libpod-conmon-7a17055b17fd854caca7f3bad5f37ed09b1ff93eb4f85acd0d7683e5365f8dd2.scope: Deactivated successfully.
Feb 20 04:41:58 localhost sshd[289900]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:41:59 localhost ceph-mon[288002]: Reconfiguring crash.np0005625202 (monmap changed)...
Feb 20 04:41:59 localhost ceph-mon[288002]: Reconfiguring daemon crash.np0005625202 on np0005625202.localdomain
Feb 20 04:41:59 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:41:59 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:41:59 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Feb 20 04:41:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.
Feb 20 04:41:59 localhost podman[289947]: 2026-02-20 09:41:59.718559895 +0000 UTC m=+0.095263582 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 20 04:41:59 localhost podman[289947]: 2026-02-20 09:41:59.732941691 +0000 UTC m=+0.109645328 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Feb 20 04:41:59 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully.
Feb 20 04:41:59 localhost podman[289968]:
Feb 20 04:41:59 localhost podman[289968]: 2026-02-20 09:41:59.790861439 +0000 UTC m=+0.082799468 container create 14e997c53e4f33e70184a761700618c805e1c413b13a260b7647792b033d87cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_dubinsky, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., io.openshift.expose-services=, ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, maintainer=Guillaume Abrioux , io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2026-02-09T10:25:24Z, version=7, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, com.redhat.component=rhceph-container, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, release=1770267347, RELEASE=main, GIT_CLEAN=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git)
Feb 20 04:41:59 localhost systemd[1]: var-lib-containers-storage-overlay-3bfba0d0ee7a7f7cbe7cb06189ddc42fc6d56a7b9eb76ef44ac01592ee93d7a2-merged.mount: Deactivated successfully.
Feb 20 04:41:59 localhost systemd[1]: Started libpod-conmon-14e997c53e4f33e70184a761700618c805e1c413b13a260b7647792b033d87cf.scope.
Feb 20 04:41:59 localhost systemd[1]: Started libcrun container.
Feb 20 04:41:59 localhost podman[289968]: 2026-02-20 09:41:59.758729075 +0000 UTC m=+0.050667134 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 04:41:59 localhost podman[289968]: 2026-02-20 09:41:59.865060794 +0000 UTC m=+0.156998823 container init 14e997c53e4f33e70184a761700618c805e1c413b13a260b7647792b033d87cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_dubinsky, io.openshift.tags=rhceph ceph, RELEASE=main, name=rhceph, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.created=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, architecture=x86_64, version=7, com.redhat.component=rhceph-container, vcs-type=git, CEPH_POINT_RELEASE=, release=1770267347, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 04:41:59 localhost podman[289968]: 2026-02-20 09:41:59.876414569 +0000 UTC m=+0.168352598 container start 14e997c53e4f33e70184a761700618c805e1c413b13a260b7647792b033d87cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_dubinsky, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, architecture=x86_64, distribution-scope=public, GIT_BRANCH=main, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, ceph=True, GIT_CLEAN=True, version=7, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=)
Feb 20 04:41:59 localhost podman[289968]: 2026-02-20 09:41:59.876685806 +0000 UTC m=+0.168623845 container attach 14e997c53e4f33e70184a761700618c805e1c413b13a260b7647792b033d87cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_dubinsky, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, com.redhat.component=rhceph-container, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, build-date=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, release=1770267347, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 04:41:59 localhost trusting_dubinsky[289996]: 167 167
Feb 20 04:41:59 localhost systemd[1]: libpod-14e997c53e4f33e70184a761700618c805e1c413b13a260b7647792b033d87cf.scope: Deactivated successfully.
Feb 20 04:41:59 localhost podman[289968]: 2026-02-20 09:41:59.88093559 +0000 UTC m=+0.172873619 container died 14e997c53e4f33e70184a761700618c805e1c413b13a260b7647792b033d87cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_dubinsky, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, RELEASE=main, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, build-date=2026-02-09T10:25:24Z, name=rhceph, GIT_BRANCH=main, distribution-scope=public, io.buildah.version=1.42.2, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, CEPH_POINT_RELEASE=)
Feb 20 04:41:59 localhost podman[290001]: 2026-02-20 09:41:59.984876444 +0000 UTC m=+0.090349060 container remove 14e997c53e4f33e70184a761700618c805e1c413b13a260b7647792b033d87cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_dubinsky, vendor=Red Hat, Inc., release=1770267347, description=Red Hat Ceph Storage 7, version=7, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_BRANCH=main, maintainer=Guillaume Abrioux , vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, RELEASE=main, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2026-02-09T10:25:24Z, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers)
Feb 20 04:41:59 localhost systemd[1]: libpod-conmon-14e997c53e4f33e70184a761700618c805e1c413b13a260b7647792b033d87cf.scope: Deactivated successfully.
Feb 20 04:42:00 localhost ceph-mon[288002]: Reconfiguring osd.2 (monmap changed)...
Feb 20 04:42:00 localhost ceph-mon[288002]: Reconfiguring daemon osd.2 on np0005625202.localdomain
Feb 20 04:42:00 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:00 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:00 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Feb 20 04:42:00 localhost podman[290077]:
Feb 20 04:42:00 localhost podman[290077]: 2026-02-20 09:42:00.801872346 +0000 UTC m=+0.072053258 container create 6612cdb550d58ea48b578162913535ce1a2d672a8dc7ff4b2b4a4ed40530d27d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_engelbart, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, io.buildah.version=1.42.2, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, distribution-scope=public, architecture=x86_64, build-date=2026-02-09T10:25:24Z, vcs-type=git, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, version=7, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, release=1770267347, vendor=Red Hat, Inc., name=rhceph, io.openshift.expose-services=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 04:42:00 localhost systemd[1]: var-lib-containers-storage-overlay-188ac8d5cd302d317b0735055f8e1ccaba19e9f417d6d2c8a4f3ef0e50ccaf52-merged.mount: Deactivated successfully.
Feb 20 04:42:00 localhost systemd[1]: Started libpod-conmon-6612cdb550d58ea48b578162913535ce1a2d672a8dc7ff4b2b4a4ed40530d27d.scope.
Feb 20 04:42:00 localhost systemd[1]: Started libcrun container.
Feb 20 04:42:00 localhost podman[290077]: 2026-02-20 09:42:00.868544349 +0000 UTC m=+0.138725271 container init 6612cdb550d58ea48b578162913535ce1a2d672a8dc7ff4b2b4a4ed40530d27d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_engelbart, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, maintainer=Guillaume Abrioux , GIT_CLEAN=True, io.openshift.tags=rhceph ceph, RELEASE=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, ceph=True, name=rhceph, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, description=Red Hat Ceph Storage 7, version=7, vcs-type=git, CEPH_POINT_RELEASE=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z)
Feb 20 04:42:00 localhost podman[290077]: 2026-02-20 09:42:00.773300169 +0000 UTC m=+0.043481101 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 04:42:00 localhost podman[290077]: 2026-02-20 09:42:00.879424181 +0000 UTC m=+0.149605103 container start 6612cdb550d58ea48b578162913535ce1a2d672a8dc7ff4b2b4a4ed40530d27d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_engelbart, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, version=7, release=1770267347, description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.42.2, architecture=x86_64, name=rhceph, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=)
Feb 20 04:42:00 localhost podman[290077]: 2026-02-20 09:42:00.879939465 +0000 UTC m=+0.150120387 container attach 6612cdb550d58ea48b578162913535ce1a2d672a8dc7ff4b2b4a4ed40530d27d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_engelbart, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, version=7, io.buildah.version=1.42.2, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, name=rhceph, maintainer=Guillaume Abrioux , vcs-type=git, io.openshift.tags=rhceph ceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, release=1770267347, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_CLEAN=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 04:42:00 localhost optimistic_engelbart[290093]: 167 167
Feb 20 04:42:00 localhost systemd[1]: libpod-6612cdb550d58ea48b578162913535ce1a2d672a8dc7ff4b2b4a4ed40530d27d.scope: Deactivated successfully.
Feb 20 04:42:00 localhost podman[290077]: 2026-02-20 09:42:00.882448833 +0000 UTC m=+0.152629765 container died 6612cdb550d58ea48b578162913535ce1a2d672a8dc7ff4b2b4a4ed40530d27d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_engelbart, version=7, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , release=1770267347, name=rhceph, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.42.2, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, distribution-scope=public, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, build-date=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.k8s.description=Red Hat Ceph Storage 7)
Feb 20 04:42:00 localhost podman[290098]: 2026-02-20 09:42:00.981920556 +0000 UTC m=+0.089419435 container remove 6612cdb550d58ea48b578162913535ce1a2d672a8dc7ff4b2b4a4ed40530d27d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_engelbart, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, GIT_BRANCH=main, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, name=rhceph, ceph=True, release=1770267347, build-date=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, RELEASE=main, GIT_CLEAN=True, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.openshift.expose-services=)
Feb 20 04:42:00 localhost systemd[1]: libpod-conmon-6612cdb550d58ea48b578162913535ce1a2d672a8dc7ff4b2b4a4ed40530d27d.scope: Deactivated successfully.
Feb 20 04:42:01 localhost ceph-mon[288002]: Reconfiguring osd.5 (monmap changed)...
Feb 20 04:42:01 localhost ceph-mon[288002]: Reconfiguring daemon osd.5 on np0005625202.localdomain
Feb 20 04:42:01 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:01 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:01 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 04:42:01 localhost ceph-mon[288002]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 04:42:01 localhost systemd[1]: var-lib-containers-storage-overlay-4ba4901122c8f7f5881d774e2078d77d609f99e1a329a65ea6b4ab0c2835590d-merged.mount: Deactivated successfully.
Feb 20 04:42:01 localhost podman[290174]:
Feb 20 04:42:01 localhost podman[290174]: 2026-02-20 09:42:01.821951848 +0000 UTC m=+0.079941549 container create 2d928f02266b503525b036c0869c386fc01b641701bc6eb5b1a4cb1417050f8c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_margulis, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, name=rhceph, vcs-type=git, distribution-scope=public, release=1770267347, io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, version=7)
Feb 20 04:42:01 localhost systemd[1]: Started libpod-conmon-2d928f02266b503525b036c0869c386fc01b641701bc6eb5b1a4cb1417050f8c.scope.
Feb 20 04:42:01 localhost systemd[1]: Started libcrun container.
Feb 20 04:42:01 localhost podman[290174]: 2026-02-20 09:42:01.787706578 +0000 UTC m=+0.045696319 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 04:42:01 localhost podman[290174]: 2026-02-20 09:42:01.893949724 +0000 UTC m=+0.151939425 container init 2d928f02266b503525b036c0869c386fc01b641701bc6eb5b1a4cb1417050f8c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_margulis, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.buildah.version=1.42.2, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, architecture=x86_64, name=rhceph, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-09T10:25:24Z, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, RELEASE=main, io.openshift.expose-services=, release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, vcs-type=git, GIT_CLEAN=True) Feb 20 04:42:01 localhost podman[290174]: 2026-02-20 09:42:01.903908821 +0000 UTC m=+0.161898522 container start 2d928f02266b503525b036c0869c386fc01b641701bc6eb5b1a4cb1417050f8c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_margulis, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, 
vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, io.openshift.expose-services=, release=1770267347, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, com.redhat.component=rhceph-container, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, RELEASE=main, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.42.2, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Feb 20 04:42:01 localhost podman[290174]: 2026-02-20 09:42:01.904184209 +0000 UTC m=+0.162173920 container attach 2d928f02266b503525b036c0869c386fc01b641701bc6eb5b1a4cb1417050f8c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_margulis, com.redhat.component=rhceph-container, release=1770267347, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, description=Red Hat Ceph Storage 7, name=rhceph, maintainer=Guillaume Abrioux , io.buildah.version=1.42.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, 
GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, io.openshift.expose-services=, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers) Feb 20 04:42:01 localhost inspiring_margulis[290189]: 167 167 Feb 20 04:42:01 localhost systemd[1]: libpod-2d928f02266b503525b036c0869c386fc01b641701bc6eb5b1a4cb1417050f8c.scope: Deactivated successfully. Feb 20 04:42:01 localhost podman[290174]: 2026-02-20 09:42:01.907021655 +0000 UTC m=+0.165011386 container died 2d928f02266b503525b036c0869c386fc01b641701bc6eb5b1a4cb1417050f8c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_margulis, GIT_BRANCH=main, ceph=True, architecture=x86_64, RELEASE=main, io.buildah.version=1.42.2, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, release=1770267347, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, maintainer=Guillaume Abrioux ) Feb 20 04:42:02 localhost 
podman[290194]: 2026-02-20 09:42:02.007526177 +0000 UTC m=+0.086749673 container remove 2d928f02266b503525b036c0869c386fc01b641701bc6eb5b1a4cb1417050f8c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_margulis, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, distribution-scope=public, io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Feb 20 04:42:02 localhost systemd[1]: libpod-conmon-2d928f02266b503525b036c0869c386fc01b641701bc6eb5b1a4cb1417050f8c.scope: Deactivated successfully. 
Feb 20 04:42:02 localhost ceph-mgr[286565]: ms_deliver_dispatch: unhandled message 0x5628569bf600 mon_map magic: 0 from mon.1 v2:172.18.0.105:3300/0
Feb 20 04:42:02 localhost ceph-mgr[286565]: client.0 ms_handle_reset on v2:172.18.0.105:3300/0
Feb 20 04:42:02 localhost ceph-mgr[286565]: client.0 ms_handle_reset on v2:172.18.0.105:3300/0
Feb 20 04:42:02 localhost ceph-mgr[286565]: client.0 ms_handle_reset on v2:172.18.0.104:3300/0
Feb 20 04:42:02 localhost ceph-mgr[286565]: client.0 ms_handle_reset on v2:172.18.0.104:3300/0
Feb 20 04:42:02 localhost ceph-mon[288002]: mon.np0005625202@5(peon) e7 my rank is now 4 (was 5)
Feb 20 04:42:02 localhost ceph-mon[288002]: log_channel(cluster) log [INF] : mon.np0005625202 calling monitor election
Feb 20 04:42:02 localhost ceph-mon[288002]: paxos.4).electionLogic(24) init, last seen epoch 24
Feb 20 04:42:02 localhost ceph-mgr[286565]: ms_deliver_dispatch: unhandled message 0x5628569bf080 mon_map magic: 0 from mon.0 v2:172.18.0.105:3300/0
Feb 20 04:42:02 localhost ceph-mon[288002]: mon.np0005625202@4(electing) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Feb 20 04:42:02 localhost ceph-mon[288002]: mon.np0005625202@4(electing) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Feb 20 04:42:02 localhost ceph-mon[288002]: mon.np0005625202@4(electing) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Feb 20 04:42:02 localhost ceph-mon[288002]: mon.np0005625202@4(peon) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Feb 20 04:42:02 localhost ceph-mon[288002]: Reconfiguring daemon mds.mds.np0005625202.akhmop on np0005625202.localdomain
Feb 20 04:42:02 localhost ceph-mon[288002]: Remove daemons mon.np0005625199
Feb 20 04:42:02 localhost ceph-mon[288002]: Safe to remove mon.np0005625199: new quorum should be ['np0005625201', 'np0005625200', 'np0005625204', 'np0005625203', 'np0005625202'] (from ['np0005625201', 'np0005625200', 'np0005625204', 'np0005625203', 'np0005625202'])
Feb 20 04:42:02 localhost ceph-mon[288002]: Removing monitor np0005625199 from monmap...
Feb 20 04:42:02 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "mon rm", "name": "np0005625199"} : dispatch
Feb 20 04:42:02 localhost ceph-mon[288002]: Removing daemon mon.np0005625199 from np0005625199.localdomain -- ports []
Feb 20 04:42:02 localhost ceph-mon[288002]: Reconfiguring mgr.np0005625202.arwxwo (monmap changed)...
Feb 20 04:42:02 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625202.arwxwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 04:42:02 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625202.arwxwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 04:42:02 localhost ceph-mon[288002]: mon.np0005625200 calling monitor election
Feb 20 04:42:02 localhost ceph-mon[288002]: mon.np0005625202 calling monitor election
Feb 20 04:42:02 localhost ceph-mon[288002]: mon.np0005625204 calling monitor election
Feb 20 04:42:02 localhost ceph-mon[288002]: mon.np0005625203 calling monitor election
Feb 20 04:42:02 localhost ceph-mon[288002]: mon.np0005625201 calling monitor election
Feb 20 04:42:02 localhost ceph-mon[288002]: mon.np0005625201 is new leader, mons np0005625201,np0005625200,np0005625204,np0005625203,np0005625202 in quorum (ranks 0,1,2,3,4)
Feb 20 04:42:02 localhost ceph-mon[288002]: overall HEALTH_OK
Feb 20 04:42:02 localhost systemd[1]: var-lib-containers-storage-overlay-2b0f28f1ec7d1b451ef5f91fb3d8c909faf685143baf8caee45a901ff04a1908-merged.mount: Deactivated successfully.
Feb 20 04:42:03 localhost podman[290263]:
Feb 20 04:42:03 localhost podman[290263]: 2026-02-20 09:42:03.038179733 +0000 UTC m=+0.086190438 container create 39b40c044f92d5414a116c5161557d7f73148174757666085c21f40a8bd971d9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_wright, io.buildah.version=1.42.2, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, RELEASE=main, vendor=Red Hat, Inc., version=7, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-09T10:25:24Z)
Feb 20 04:42:03 localhost systemd[1]: Started libpod-conmon-39b40c044f92d5414a116c5161557d7f73148174757666085c21f40a8bd971d9.scope.
Feb 20 04:42:03 localhost systemd[1]: Started libcrun container.
Feb 20 04:42:03 localhost podman[290263]: 2026-02-20 09:42:03.004462887 +0000 UTC m=+0.052473592 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 04:42:03 localhost podman[290263]: 2026-02-20 09:42:03.105645966 +0000 UTC m=+0.153656671 container init 39b40c044f92d5414a116c5161557d7f73148174757666085c21f40a8bd971d9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_wright, maintainer=Guillaume Abrioux , name=rhceph, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, build-date=2026-02-09T10:25:24Z, release=1770267347, RELEASE=main, version=7, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, ceph=True, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., io.openshift.expose-services=, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public)
Feb 20 04:42:03 localhost podman[290263]: 2026-02-20 09:42:03.115023909 +0000 UTC m=+0.163034604 container start 39b40c044f92d5414a116c5161557d7f73148174757666085c21f40a8bd971d9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_wright, io.openshift.expose-services=, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, vcs-type=git, vendor=Red Hat, Inc., GIT_BRANCH=main, distribution-scope=public, io.openshift.tags=rhceph ceph, ceph=True, build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , release=1770267347, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, version=7, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Feb 20 04:42:03 localhost podman[290263]: 2026-02-20 09:42:03.115303787 +0000 UTC m=+0.163314522 container attach 39b40c044f92d5414a116c5161557d7f73148174757666085c21f40a8bd971d9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_wright, io.openshift.expose-services=, GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_BRANCH=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, release=1770267347, name=rhceph, architecture=x86_64, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, io.buildah.version=1.42.2, version=7)
Feb 20 04:42:03 localhost strange_wright[290278]: 167 167
Feb 20 04:42:03 localhost systemd[1]: libpod-39b40c044f92d5414a116c5161557d7f73148174757666085c21f40a8bd971d9.scope: Deactivated successfully.
Feb 20 04:42:03 localhost podman[290263]: 2026-02-20 09:42:03.118808931 +0000 UTC m=+0.166819636 container died 39b40c044f92d5414a116c5161557d7f73148174757666085c21f40a8bd971d9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_wright, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, distribution-scope=public, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.buildah.version=1.42.2, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1770267347, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_BRANCH=main, build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 04:42:03 localhost podman[290283]: 2026-02-20 09:42:03.222152639 +0000 UTC m=+0.093397762 container remove 39b40c044f92d5414a116c5161557d7f73148174757666085c21f40a8bd971d9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_wright, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., io.openshift.expose-services=, ceph=True, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=7, name=rhceph, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , distribution-scope=public, com.redhat.component=rhceph-container, build-date=2026-02-09T10:25:24Z)
Feb 20 04:42:03 localhost systemd[1]: libpod-conmon-39b40c044f92d5414a116c5161557d7f73148174757666085c21f40a8bd971d9.scope: Deactivated successfully.
Feb 20 04:42:03 localhost ceph-mon[288002]: mon.np0005625202@4(peon).osd e85 _set_new_cache_sizes cache_size:1020054728 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 04:42:03 localhost ceph-mon[288002]: Reconfiguring daemon mgr.np0005625202.arwxwo on np0005625202.localdomain
Feb 20 04:42:03 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:03 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:03 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 20 04:42:03 localhost systemd[1]: var-lib-containers-storage-overlay-35b6e1484121ae3dbc1deaa7efdf0fdecb58ebd521d928f15101457f8b8e5a81-merged.mount: Deactivated successfully.
Feb 20 04:42:03 localhost podman[290353]:
Feb 20 04:42:03 localhost podman[290353]: 2026-02-20 09:42:03.989719803 +0000 UTC m=+0.075065849 container create 04d6bde1ba3e1cc6e8359800563f505bc8bfd6028201ca40fc3c021764f5bc99 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_euler, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.openshift.expose-services=, RELEASE=main, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.42.2, com.redhat.component=rhceph-container, ceph=True, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, version=7, architecture=x86_64, build-date=2026-02-09T10:25:24Z, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux )
Feb 20 04:42:04 localhost systemd[1]: Started libpod-conmon-04d6bde1ba3e1cc6e8359800563f505bc8bfd6028201ca40fc3c021764f5bc99.scope.
Feb 20 04:42:04 localhost systemd[1]: Started libcrun container.
Feb 20 04:42:04 localhost podman[290353]: 2026-02-20 09:42:03.960771865 +0000 UTC m=+0.046117911 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 04:42:04 localhost podman[290353]: 2026-02-20 09:42:04.06289004 +0000 UTC m=+0.148236076 container init 04d6bde1ba3e1cc6e8359800563f505bc8bfd6028201ca40fc3c021764f5bc99 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_euler, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, RELEASE=main, GIT_CLEAN=True, maintainer=Guillaume Abrioux , io.buildah.version=1.42.2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, release=1770267347, distribution-scope=public)
Feb 20 04:42:04 localhost podman[290353]: 2026-02-20 09:42:04.073148486 +0000 UTC m=+0.158494522 container start 04d6bde1ba3e1cc6e8359800563f505bc8bfd6028201ca40fc3c021764f5bc99 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_euler, version=7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, description=Red Hat Ceph Storage 7, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, maintainer=Guillaume Abrioux , RELEASE=main, release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True)
Feb 20 04:42:04 localhost podman[290353]: 2026-02-20 09:42:04.073417963 +0000 UTC m=+0.158764049 container attach 04d6bde1ba3e1cc6e8359800563f505bc8bfd6028201ca40fc3c021764f5bc99 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_euler, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , build-date=2026-02-09T10:25:24Z, RELEASE=main, release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, ceph=True, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, distribution-scope=public, vendor=Red Hat, Inc., GIT_CLEAN=True, vcs-type=git, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7)
Feb 20 04:42:04 localhost objective_euler[290368]: 167 167
Feb 20 04:42:04 localhost systemd[1]: libpod-04d6bde1ba3e1cc6e8359800563f505bc8bfd6028201ca40fc3c021764f5bc99.scope: Deactivated successfully.
Feb 20 04:42:04 localhost podman[290353]: 2026-02-20 09:42:04.076023353 +0000 UTC m=+0.161369399 container died 04d6bde1ba3e1cc6e8359800563f505bc8bfd6028201ca40fc3c021764f5bc99 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_euler, RELEASE=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, release=1770267347, GIT_CLEAN=True, architecture=x86_64, io.buildah.version=1.42.2, GIT_BRANCH=main, name=rhceph, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-09T10:25:24Z, ceph=True)
Feb 20 04:42:04 localhost podman[290373]: 2026-02-20 09:42:04.176979907 +0000 UTC m=+0.088080980 container remove 04d6bde1ba3e1cc6e8359800563f505bc8bfd6028201ca40fc3c021764f5bc99 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_euler, description=Red Hat Ceph Storage 7, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., ceph=True, GIT_CLEAN=True, vcs-type=git, build-date=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, release=1770267347, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 04:42:04 localhost systemd[1]: libpod-conmon-04d6bde1ba3e1cc6e8359800563f505bc8bfd6028201ca40fc3c021764f5bc99.scope: Deactivated successfully.
Feb 20 04:42:04 localhost ceph-mon[288002]: Reconfiguring mon.np0005625202 (monmap changed)...
Feb 20 04:42:04 localhost ceph-mon[288002]: Reconfiguring daemon mon.np0005625202 on np0005625202.localdomain
Feb 20 04:42:04 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:04 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:04 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625203", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 04:42:04 localhost systemd[1]: var-lib-containers-storage-overlay-7c8371fc9682267284102aa90f5baf2ef47db4f5a820055f8f92d9708d11e6d2-merged.mount: Deactivated successfully.
Feb 20 04:42:05 localhost ceph-mon[288002]: Reconfiguring crash.np0005625203 (monmap changed)...
Feb 20 04:42:05 localhost ceph-mon[288002]: Reconfiguring daemon crash.np0005625203 on np0005625203.localdomain
Feb 20 04:42:05 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:05 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:05 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Feb 20 04:42:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:42:05.908 161766 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 20 04:42:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:42:05.909 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 20 04:42:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:42:05.909 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 20 04:42:06 localhost ceph-mon[288002]: Reconfiguring osd.1 (monmap changed)...
Feb 20 04:42:06 localhost ceph-mon[288002]: Reconfiguring daemon osd.1 on np0005625203.localdomain
Feb 20 04:42:06 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:06 localhost ceph-mon[288002]: Removed label mon from host np0005625199.localdomain
Feb 20 04:42:06 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:06 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:06 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Feb 20 04:42:07 localhost ceph-mon[288002]: Reconfiguring osd.4 (monmap changed)...
Feb 20 04:42:07 localhost ceph-mon[288002]: Reconfiguring daemon osd.4 on np0005625203.localdomain
Feb 20 04:42:07 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:07 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:07 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:07 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625203.zsrwgk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 04:42:08 localhost ceph-mon[288002]: mon.np0005625202@4(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 04:42:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.
Feb 20 04:42:08 localhost podman[290391]: 2026-02-20 09:42:08.445914046 +0000 UTC m=+0.082378277 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 20 04:42:08 localhost podman[290391]: 2026-02-20 09:42:08.460796305 +0000 UTC m=+0.097260526 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', 
'--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 20 04:42:08 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully. Feb 20 04:42:08 localhost ceph-mon[288002]: Removed label mgr from host np0005625199.localdomain Feb 20 04:42:08 localhost ceph-mon[288002]: Reconfiguring mds.mds.np0005625203.zsrwgk (monmap changed)... 
Feb 20 04:42:08 localhost ceph-mon[288002]: Reconfiguring daemon mds.mds.np0005625203.zsrwgk on np0005625203.localdomain Feb 20 04:42:08 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:08 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:08 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625203.lonygy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 20 04:42:08 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:09 localhost ceph-mon[288002]: Reconfiguring mgr.np0005625203.lonygy (monmap changed)... Feb 20 04:42:09 localhost ceph-mon[288002]: Reconfiguring daemon mgr.np0005625203.lonygy on np0005625203.localdomain Feb 20 04:42:09 localhost ceph-mon[288002]: Removed label _admin from host np0005625199.localdomain Feb 20 04:42:09 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:09 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:09 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 20 04:42:10 localhost ceph-mon[288002]: Reconfiguring mon.np0005625203 (monmap changed)... 
Feb 20 04:42:10 localhost ceph-mon[288002]: Reconfiguring daemon mon.np0005625203 on np0005625203.localdomain Feb 20 04:42:10 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:10 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:10 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625204", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 20 04:42:11 localhost ceph-mon[288002]: Reconfiguring crash.np0005625204 (monmap changed)... Feb 20 04:42:11 localhost ceph-mon[288002]: Reconfiguring daemon crash.np0005625204 on np0005625204.localdomain Feb 20 04:42:11 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:11 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:11 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Feb 20 04:42:12 localhost ceph-mon[288002]: Reconfiguring osd.0 (monmap changed)... 
Feb 20 04:42:12 localhost ceph-mon[288002]: Reconfiguring daemon osd.0 on np0005625204.localdomain Feb 20 04:42:12 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:12 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:12 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Feb 20 04:42:13 localhost ceph-mon[288002]: mon.np0005625202@4(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:42:13 localhost ceph-mon[288002]: Reconfiguring osd.3 (monmap changed)... Feb 20 04:42:13 localhost ceph-mon[288002]: Reconfiguring daemon osd.3 on np0005625204.localdomain Feb 20 04:42:13 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:13 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:13 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625204.wnsphl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 20 04:42:14 localhost ceph-mon[288002]: Reconfiguring mds.mds.np0005625204.wnsphl (monmap changed)... 
Feb 20 04:42:14 localhost ceph-mon[288002]: Reconfiguring daemon mds.mds.np0005625204.wnsphl on np0005625204.localdomain Feb 20 04:42:14 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:14 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:14 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625204.exgrzx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 20 04:42:15 localhost ceph-mon[288002]: Reconfiguring mgr.np0005625204.exgrzx (monmap changed)... Feb 20 04:42:15 localhost ceph-mon[288002]: Reconfiguring daemon mgr.np0005625204.exgrzx on np0005625204.localdomain Feb 20 04:42:15 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:15 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:15 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 20 04:42:16 localhost podman[241347]: time="2026-02-20T09:42:16Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 20 04:42:16 localhost podman[241347]: @ - - [20/Feb/2026:09:42:16 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157716 "" "Go-http-client/1.1" Feb 20 04:42:16 localhost podman[241347]: @ - - [20/Feb/2026:09:42:16 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18742 "" "Go-http-client/1.1" Feb 20 04:42:16 localhost ceph-mon[288002]: Reconfiguring mon.np0005625204 (monmap changed)... 
Feb 20 04:42:16 localhost ceph-mon[288002]: Reconfiguring daemon mon.np0005625204 on np0005625204.localdomain Feb 20 04:42:16 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:16 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:16 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:16 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:16 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:16 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:18 localhost ceph-mon[288002]: mon.np0005625202@4(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:42:18 localhost sshd[290592]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:42:18 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:18 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:18 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 04:42:18 localhost ceph-mon[288002]: Removing np0005625199.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:42:18 localhost ceph-mon[288002]: Updating np0005625200.localdomain:/etc/ceph/ceph.conf Feb 20 04:42:18 localhost ceph-mon[288002]: Updating np0005625201.localdomain:/etc/ceph/ceph.conf Feb 20 04:42:18 localhost ceph-mon[288002]: Updating np0005625202.localdomain:/etc/ceph/ceph.conf Feb 20 04:42:18 localhost 
ceph-mon[288002]: Updating np0005625203.localdomain:/etc/ceph/ceph.conf Feb 20 04:42:18 localhost ceph-mon[288002]: Updating np0005625204.localdomain:/etc/ceph/ceph.conf Feb 20 04:42:18 localhost ceph-mon[288002]: Removing np0005625199.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 20 04:42:18 localhost ceph-mon[288002]: Removing np0005625199.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring Feb 20 04:42:18 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:18 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:18 localhost ceph-mon[288002]: Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:42:18 localhost ceph-mon[288002]: Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:42:18 localhost ceph-mon[288002]: Updating np0005625201.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:42:18 localhost ceph-mon[288002]: Updating np0005625200.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:42:18 localhost ceph-mon[288002]: Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:42:20 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:20 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:20 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:20 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:20 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' 
entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:20 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:20 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:20 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:20 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:20 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:20 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:20 localhost ceph-mon[288002]: Removing daemon mgr.np0005625199.ileebh from np0005625199.localdomain -- ports [9283, 8765] Feb 20 04:42:20 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:20 localhost ceph-mon[288002]: Added label _no_schedule to host np0005625199.localdomain Feb 20 04:42:20 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:20 localhost ceph-mon[288002]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005625199.localdomain Feb 20 04:42:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217. 
Feb 20 04:42:20 localhost podman[290736]: 2026-02-20 09:42:20.451805858 +0000 UTC m=+0.088261623 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, tcib_managed=true) Feb 20 04:42:20 localhost podman[290736]: 2026-02-20 09:42:20.465787264 +0000 UTC m=+0.102243079 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_compute, 
org.label-schema.license=GPLv2, tcib_managed=true) Feb 20 04:42:20 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully. Feb 20 04:42:21 localhost sshd[290755]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:42:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c. Feb 20 04:42:21 localhost systemd[1]: tmp-crun.dI5uGa.mount: Deactivated successfully. Feb 20 04:42:21 localhost podman[290756]: 2026-02-20 09:42:21.445636515 +0000 UTC m=+0.083008003 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2026-02-05T04:57:10Z, distribution-scope=public, io.openshift.tags=minimal rhel9, architecture=x86_64, 
com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.7, container_name=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, io.openshift.expose-services=, managed_by=edpm_ansible, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git) Feb 20 04:42:21 localhost podman[290756]: 2026-02-20 09:42:21.461849301 +0000 UTC m=+0.099220809 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9/ubi-minimal, container_name=openstack_network_exporter, release=1770267347, version=9.7, vendor=Red Hat, Inc., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, build-date=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.created=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers) Feb 20 04:42:21 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. 
Feb 20 04:42:22 localhost ceph-mon[288002]: Removing key for mgr.np0005625199.ileebh Feb 20 04:42:22 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth rm", "entity": "mgr.np0005625199.ileebh"} : dispatch Feb 20 04:42:22 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd='[{"prefix": "auth rm", "entity": "mgr.np0005625199.ileebh"}]': finished Feb 20 04:42:22 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:22 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:23 localhost ceph-mon[288002]: mon.np0005625202@4(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:42:23 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:23 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005625199.localdomain"} : dispatch Feb 20 04:42:23 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005625199.localdomain"}]': finished Feb 20 04:42:23 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 04:42:23 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:24 localhost sshd[290812]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:42:24 localhost ceph-mon[288002]: Removed host np0005625199.localdomain Feb 20 04:42:24 localhost ceph-mon[288002]: host np0005625199.localdomain `cephadm 
ls` failed: Cannot decode JSON: #012Traceback (most recent call last):#012 File "/usr/share/ceph/mgr/cephadm/serve.py", line 1540, in _run_cephadm_json#012 return json.loads(''.join(out))#012 File "/lib64/python3.9/json/__init__.py", line 346, in loads#012 return _default_decoder.decode(s)#012 File "/lib64/python3.9/json/decoder.py", line 337, in decode#012 obj, end = self.raw_decode(s, idx=_w(s, 0).end())#012 File "/lib64/python3.9/json/decoder.py", line 355, in raw_decode#012 raise JSONDecodeError("Expecting value", s, err.value) from None#012json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0) Feb 20 04:42:24 localhost ceph-mon[288002]: executing refresh((['np0005625199.localdomain', 'np0005625200.localdomain', 'np0005625201.localdomain', 'np0005625202.localdomain', 'np0005625203.localdomain', 'np0005625204.localdomain'],)) failed.#012Traceback (most recent call last):#012 File "/usr/share/ceph/mgr/cephadm/utils.py", line 94, in do_work#012 return f(*arg)#012 File "/usr/share/ceph/mgr/cephadm/serve.py", line 317, in refresh#012 and not self.mgr.inventory.has_label(host, SpecialHostLabels.NO_MEMORY_AUTOTUNE)#012 File "/usr/share/ceph/mgr/cephadm/inventory.py", line 253, in has_label#012 host = self._get_stored_name(host)#012 File "/usr/share/ceph/mgr/cephadm/inventory.py", line 181, in _get_stored_name#012 self.assert_host(host)#012 File "/usr/share/ceph/mgr/cephadm/inventory.py", line 209, in assert_host#012 raise OrchestratorError('host %s does not exist' % host)#012orchestrator._interface.OrchestratorError: host np0005625199.localdomain does not exist Feb 20 04:42:24 localhost ceph-mon[288002]: Reconfiguring crash.np0005625200 (monmap changed)... 
Feb 20 04:42:24 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625200", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 20 04:42:24 localhost ceph-mon[288002]: Reconfiguring daemon crash.np0005625200 on np0005625200.localdomain Feb 20 04:42:24 localhost sshd[290814]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:42:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 04:42:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. Feb 20 04:42:25 localhost systemd-logind[760]: New session 65 of user tripleo-admin. Feb 20 04:42:25 localhost systemd[1]: Created slice User Slice of UID 1003. Feb 20 04:42:25 localhost systemd[1]: Starting User Runtime Directory /run/user/1003... Feb 20 04:42:25 localhost systemd[1]: Finished User Runtime Directory /run/user/1003. Feb 20 04:42:25 localhost systemd[1]: Starting User Manager for UID 1003... 
Feb 20 04:42:25 localhost podman[290816]: 2026-02-20 09:42:25.067799787 +0000 UTC m=+0.094582814 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127) Feb 20 04:42:25 localhost podman[290816]: 2026-02-20 09:42:25.110033362 +0000 UTC m=+0.136816369 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Feb 20 04:42:25 localhost podman[290817]: 2026-02-20 09:42:25.119075145 +0000 UTC m=+0.141922896 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent) Feb 20 04:42:25 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. 
Feb 20 04:42:25 localhost podman[290817]: 2026-02-20 09:42:25.155827753 +0000 UTC m=+0.178675544 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent) Feb 20 04:42:25 localhost systemd[1]: 
eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. Feb 20 04:42:25 localhost systemd[290841]: Queued start job for default target Main User Target. Feb 20 04:42:25 localhost systemd[290841]: Created slice User Application Slice. Feb 20 04:42:25 localhost systemd[290841]: Started Mark boot as successful after the user session has run 2 minutes. Feb 20 04:42:25 localhost systemd[290841]: Started Daily Cleanup of User's Temporary Directories. Feb 20 04:42:25 localhost systemd[290841]: Reached target Paths. Feb 20 04:42:25 localhost systemd[290841]: Reached target Timers. Feb 20 04:42:25 localhost systemd[290841]: Starting D-Bus User Message Bus Socket... Feb 20 04:42:25 localhost systemd[290841]: Starting Create User's Volatile Files and Directories... Feb 20 04:42:25 localhost systemd[290841]: Finished Create User's Volatile Files and Directories. Feb 20 04:42:25 localhost systemd[290841]: Listening on D-Bus User Message Bus Socket. Feb 20 04:42:25 localhost systemd[290841]: Reached target Sockets. Feb 20 04:42:25 localhost systemd[290841]: Reached target Basic System. Feb 20 04:42:25 localhost systemd[290841]: Reached target Main User Target. Feb 20 04:42:25 localhost systemd[290841]: Startup finished in 149ms. Feb 20 04:42:25 localhost systemd[1]: Started User Manager for UID 1003. Feb 20 04:42:25 localhost systemd[1]: Started Session 65 of User tripleo-admin. 
Feb 20 04:42:25 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:25 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:25 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 20 04:42:25 localhost python3[290997]: ansible-ansible.builtin.lineinfile Invoked with dest=/etc/os-net-config/tripleo_config.yaml insertafter=172.18.0 line= - ip_netmask: 172.18.0.103/24 backup=True path=/etc/os-net-config/tripleo_config.yaml state=present backrefs=False create=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 20 04:42:26 localhost ceph-mon[288002]: Reconfiguring mon.np0005625200 (monmap changed)... Feb 20 04:42:26 localhost ceph-mon[288002]: Reconfiguring daemon mon.np0005625200 on np0005625200.localdomain Feb 20 04:42:26 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:26 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:26 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625200.ypbkax", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 20 04:42:26 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:26 localhost python3[291143]: ansible-ansible.legacy.command Invoked with _raw_params=ip a add 172.18.0.103/24 dev vlan21 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None 
removes=None stdin=None Feb 20 04:42:27 localhost python3[291288]: ansible-ansible.legacy.command Invoked with _raw_params=ping -W1 -c 3 172.18.0.103 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 04:42:27 localhost ceph-mon[288002]: Reconfiguring mgr.np0005625200.ypbkax (monmap changed)... Feb 20 04:42:27 localhost ceph-mon[288002]: Reconfiguring daemon mgr.np0005625200.ypbkax on np0005625200.localdomain Feb 20 04:42:27 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:27 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:27 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 20 04:42:28 localhost openstack_network_exporter[243776]: ERROR 09:42:28 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 20 04:42:28 localhost openstack_network_exporter[243776]: Feb 20 04:42:28 localhost openstack_network_exporter[243776]: ERROR 09:42:28 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 20 04:42:28 localhost openstack_network_exporter[243776]: Feb 20 04:42:28 localhost ceph-mon[288002]: mon.np0005625202@4(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:42:28 localhost ceph-mon[288002]: Reconfiguring mon.np0005625201 (monmap changed)... 
Feb 20 04:42:28 localhost ceph-mon[288002]: Reconfiguring daemon mon.np0005625201 on np0005625201.localdomain Feb 20 04:42:28 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:28 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:28 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625201.mtnyvu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 20 04:42:29 localhost ceph-mon[288002]: Reconfiguring mgr.np0005625201.mtnyvu (monmap changed)... Feb 20 04:42:29 localhost ceph-mon[288002]: Reconfiguring daemon mgr.np0005625201.mtnyvu on np0005625201.localdomain Feb 20 04:42:29 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:29 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:29 localhost ceph-mon[288002]: Reconfiguring crash.np0005625201 (monmap changed)... Feb 20 04:42:29 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625201", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 20 04:42:29 localhost ceph-mon[288002]: Reconfiguring daemon crash.np0005625201 on np0005625201.localdomain Feb 20 04:42:29 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:29 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:29 localhost ceph-mon[288002]: Reconfiguring crash.np0005625202 (monmap changed)... 
Feb 20 04:42:29 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625202", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 20 04:42:29 localhost ceph-mon[288002]: Reconfiguring daemon crash.np0005625202 on np0005625202.localdomain Feb 20 04:42:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1. Feb 20 04:42:29 localhost podman[291325]: 2026-02-20 09:42:29.978107676 +0000 UTC m=+0.086613040 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 20 04:42:30 localhost podman[291325]: 2026-02-20 09:42:30.014865064 +0000 UTC m=+0.123370438 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 
'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 20 04:42:30 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully. Feb 20 04:42:30 localhost podman[291380]: Feb 20 04:42:30 localhost podman[291380]: 2026-02-20 09:42:30.419725308 +0000 UTC m=+0.078440560 container create 0cbfc6e91d63c26d1ac1f8fc201767c303de196a1dda302f32dec6dca9998db5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_greider, vcs-type=git, ceph=True, GIT_CLEAN=True, distribution-scope=public, build-date=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, release=1770267347, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, io.buildah.version=1.42.2, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, 
name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, version=7) Feb 20 04:42:30 localhost systemd[1]: Started libpod-conmon-0cbfc6e91d63c26d1ac1f8fc201767c303de196a1dda302f32dec6dca9998db5.scope. Feb 20 04:42:30 localhost systemd[1]: Started libcrun container. Feb 20 04:42:30 localhost podman[291380]: 2026-02-20 09:42:30.388151008 +0000 UTC m=+0.046866260 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 04:42:30 localhost podman[291380]: 2026-02-20 09:42:30.494545478 +0000 UTC m=+0.153260690 container init 0cbfc6e91d63c26d1ac1f8fc201767c303de196a1dda302f32dec6dca9998db5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_greider, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, architecture=x86_64, vcs-type=git, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_CLEAN=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, CEPH_POINT_RELEASE=, name=rhceph, release=1770267347, io.openshift.tags=rhceph ceph, RELEASE=main, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., version=7, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, 
GIT_BRANCH=main) Feb 20 04:42:30 localhost podman[291380]: 2026-02-20 09:42:30.511009521 +0000 UTC m=+0.169724733 container start 0cbfc6e91d63c26d1ac1f8fc201767c303de196a1dda302f32dec6dca9998db5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_greider, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-09T10:25:24Z, architecture=x86_64, version=7, io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1770267347, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.expose-services=, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., GIT_CLEAN=True, distribution-scope=public, GIT_BRANCH=main, com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Feb 20 04:42:30 localhost podman[291380]: 2026-02-20 09:42:30.511319439 +0000 UTC m=+0.170034691 container attach 0cbfc6e91d63c26d1ac1f8fc201767c303de196a1dda302f32dec6dca9998db5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_greider, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, name=rhceph, 
com.redhat.component=rhceph-container, RELEASE=main, architecture=x86_64, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, version=7, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , ceph=True, release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, vcs-type=git) Feb 20 04:42:30 localhost vibrant_greider[291396]: 167 167 Feb 20 04:42:30 localhost systemd[1]: libpod-0cbfc6e91d63c26d1ac1f8fc201767c303de196a1dda302f32dec6dca9998db5.scope: Deactivated successfully. 
Feb 20 04:42:30 localhost podman[291380]: 2026-02-20 09:42:30.515782469 +0000 UTC m=+0.174497661 container died 0cbfc6e91d63c26d1ac1f8fc201767c303de196a1dda302f32dec6dca9998db5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_greider, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, architecture=x86_64, release=1770267347, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, version=7, GIT_CLEAN=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , io.openshift.expose-services=, name=rhceph, io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 04:42:30 localhost podman[291401]: 2026-02-20 09:42:30.610553107 +0000 UTC m=+0.084416351 container remove 0cbfc6e91d63c26d1ac1f8fc201767c303de196a1dda302f32dec6dca9998db5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_greider, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , GIT_BRANCH=main, 
org.opencontainers.image.created=2026-02-09T10:25:24Z, name=rhceph, ceph=True, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2026-02-09T10:25:24Z, vcs-type=git, CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.buildah.version=1.42.2, com.redhat.component=rhceph-container, architecture=x86_64, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1770267347, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc.) Feb 20 04:42:30 localhost systemd[1]: libpod-conmon-0cbfc6e91d63c26d1ac1f8fc201767c303de196a1dda302f32dec6dca9998db5.scope: Deactivated successfully. Feb 20 04:42:30 localhost systemd[1]: tmp-crun.SwsThS.mount: Deactivated successfully. Feb 20 04:42:30 localhost systemd[1]: var-lib-containers-storage-overlay-1c8b95842440081c2e8d9a1ad3bffb5ad707235d37ffce7f18348ebb8779f06d-merged.mount: Deactivated successfully. 
Feb 20 04:42:31 localhost podman[291470]: Feb 20 04:42:31 localhost podman[291470]: 2026-02-20 09:42:31.267186739 +0000 UTC m=+0.062105291 container create 80b71b65a16a792b84ed4ae66544ff2db6036af7feebae9c0746e598be14e8f4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_jones, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, release=1770267347, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, vcs-type=git, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, ceph=True, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, architecture=x86_64, distribution-scope=public, com.redhat.component=rhceph-container, name=rhceph, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Feb 20 04:42:31 localhost systemd[1]: Started libpod-conmon-80b71b65a16a792b84ed4ae66544ff2db6036af7feebae9c0746e598be14e8f4.scope. Feb 20 04:42:31 localhost systemd[1]: Started libcrun container. 
Feb 20 04:42:31 localhost podman[291470]: 2026-02-20 09:42:31.326617296 +0000 UTC m=+0.121535818 container init 80b71b65a16a792b84ed4ae66544ff2db6036af7feebae9c0746e598be14e8f4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_jones, architecture=x86_64, CEPH_POINT_RELEASE=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, maintainer=Guillaume Abrioux , release=1770267347, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, distribution-scope=public, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, vcs-type=git, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., RELEASE=main, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z) Feb 20 04:42:31 localhost podman[291470]: 2026-02-20 09:42:31.237039728 +0000 UTC m=+0.031958320 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 04:42:31 localhost podman[291470]: 2026-02-20 09:42:31.338038823 +0000 UTC m=+0.132957385 container start 80b71b65a16a792b84ed4ae66544ff2db6036af7feebae9c0746e598be14e8f4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_jones, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.buildah.version=1.42.2, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, build-date=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, vendor=Red Hat, Inc., GIT_CLEAN=True, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, GIT_BRANCH=main, name=rhceph, release=1770267347, ceph=True)
Feb 20 04:42:31 localhost podman[291470]: 2026-02-20 09:42:31.338286239 +0000 UTC m=+0.133204771 container attach 80b71b65a16a792b84ed4ae66544ff2db6036af7feebae9c0746e598be14e8f4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_jones, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1770267347, org.opencontainers.image.created=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_CLEAN=True, version=7, distribution-scope=public, GIT_BRANCH=main, vcs-type=git, architecture=x86_64, maintainer=Guillaume Abrioux , io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph)
Feb 20 04:42:31 localhost youthful_jones[291485]: 167 167
Feb 20 04:42:31 localhost systemd[1]: libpod-80b71b65a16a792b84ed4ae66544ff2db6036af7feebae9c0746e598be14e8f4.scope: Deactivated successfully.
Feb 20 04:42:31 localhost podman[291470]: 2026-02-20 09:42:31.341026443 +0000 UTC m=+0.135945005 container died 80b71b65a16a792b84ed4ae66544ff2db6036af7feebae9c0746e598be14e8f4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_jones, vcs-type=git, name=rhceph, GIT_CLEAN=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, com.redhat.component=rhceph-container, GIT_BRANCH=main, vendor=Red Hat, Inc., io.buildah.version=1.42.2, io.openshift.tags=rhceph ceph, architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, distribution-scope=public, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, version=7)
Feb 20 04:42:31 localhost podman[291490]: 2026-02-20 09:42:31.433772206 +0000 UTC m=+0.078254003 container remove 80b71b65a16a792b84ed4ae66544ff2db6036af7feebae9c0746e598be14e8f4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_jones, architecture=x86_64, GIT_CLEAN=True, release=1770267347, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., version=7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, name=rhceph)
Feb 20 04:42:31 localhost systemd[1]: libpod-conmon-80b71b65a16a792b84ed4ae66544ff2db6036af7feebae9c0746e598be14e8f4.scope: Deactivated successfully.
Feb 20 04:42:31 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:31 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:31 localhost ceph-mon[288002]: Reconfiguring osd.2 (monmap changed)...
Feb 20 04:42:31 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Feb 20 04:42:31 localhost ceph-mon[288002]: Reconfiguring daemon osd.2 on np0005625202.localdomain
Feb 20 04:42:31 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:31 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:31 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:31 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Feb 20 04:42:31 localhost systemd[1]: tmp-crun.RvRp8S.mount: Deactivated successfully.
Feb 20 04:42:31 localhost systemd[1]: var-lib-containers-storage-overlay-ab9bf6a44b8f345e4864c75becf68480ad13a0e6a3f4da68f591216cbcf47e10-merged.mount: Deactivated successfully.
Feb 20 04:42:32 localhost podman[291567]:
Feb 20 04:42:32 localhost podman[291567]: 2026-02-20 09:42:32.212539561 +0000 UTC m=+0.063893707 container create 326dd3bbf4f4720c31cb56e42eae5084830ac50116c08cb82a41589f722c35d3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_panini, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, ceph=True, vcs-type=git, build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, CEPH_POINT_RELEASE=, GIT_CLEAN=True, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.buildah.version=1.42.2, version=7, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, architecture=x86_64, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1770267347)
Feb 20 04:42:32 localhost systemd[1]: Started libpod-conmon-326dd3bbf4f4720c31cb56e42eae5084830ac50116c08cb82a41589f722c35d3.scope.
Feb 20 04:42:32 localhost systemd[1]: Started libcrun container.
Feb 20 04:42:32 localhost podman[291567]: 2026-02-20 09:42:32.281260879 +0000 UTC m=+0.132615015 container init 326dd3bbf4f4720c31cb56e42eae5084830ac50116c08cb82a41589f722c35d3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_panini, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, GIT_CLEAN=True, name=rhceph, release=1770267347, maintainer=Guillaume Abrioux , vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, version=7, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, com.redhat.component=rhceph-container)
Feb 20 04:42:32 localhost podman[291567]: 2026-02-20 09:42:32.182977377 +0000 UTC m=+0.034331523 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 04:42:32 localhost podman[291567]: 2026-02-20 09:42:32.294584648 +0000 UTC m=+0.145938784 container start 326dd3bbf4f4720c31cb56e42eae5084830ac50116c08cb82a41589f722c35d3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_panini, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, maintainer=Guillaume Abrioux , version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, GIT_BRANCH=main, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.42.2)
Feb 20 04:42:32 localhost podman[291567]: 2026-02-20 09:42:32.294873995 +0000 UTC m=+0.146228131 container attach 326dd3bbf4f4720c31cb56e42eae5084830ac50116c08cb82a41589f722c35d3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_panini, description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, io.openshift.expose-services=, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main)
Feb 20 04:42:32 localhost silly_panini[291582]: 167 167
Feb 20 04:42:32 localhost systemd[1]: libpod-326dd3bbf4f4720c31cb56e42eae5084830ac50116c08cb82a41589f722c35d3.scope: Deactivated successfully.
Feb 20 04:42:32 localhost podman[291567]: 2026-02-20 09:42:32.298007119 +0000 UTC m=+0.149361295 container died 326dd3bbf4f4720c31cb56e42eae5084830ac50116c08cb82a41589f722c35d3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_panini, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, release=1770267347, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, GIT_BRANCH=main, ceph=True, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, build-date=2026-02-09T10:25:24Z, io.openshift.expose-services=, GIT_CLEAN=True, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, architecture=x86_64, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public)
Feb 20 04:42:32 localhost podman[291587]: 2026-02-20 09:42:32.394810672 +0000 UTC m=+0.085098489 container remove 326dd3bbf4f4720c31cb56e42eae5084830ac50116c08cb82a41589f722c35d3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_panini, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , GIT_BRANCH=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, distribution-scope=public, CEPH_POINT_RELEASE=, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-09T10:25:24Z, vcs-type=git, ceph=True, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, vendor=Red Hat, Inc., GIT_CLEAN=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.42.2)
Feb 20 04:42:32 localhost systemd[1]: libpod-conmon-326dd3bbf4f4720c31cb56e42eae5084830ac50116c08cb82a41589f722c35d3.scope: Deactivated successfully.
Feb 20 04:42:32 localhost ceph-mon[288002]: Saving service mon spec with placement label:mon
Feb 20 04:42:32 localhost ceph-mon[288002]: Reconfiguring osd.5 (monmap changed)...
Feb 20 04:42:32 localhost ceph-mon[288002]: Reconfiguring daemon osd.5 on np0005625202.localdomain
Feb 20 04:42:32 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:32 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:32 localhost ceph-mon[288002]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 04:42:32 localhost systemd[1]: tmp-crun.eylF0p.mount: Deactivated successfully.
Feb 20 04:42:32 localhost systemd[1]: var-lib-containers-storage-overlay-7644613564d594794d60470e8d663c46d3c4bc368dbaa662a321703d024f3e4a-merged.mount: Deactivated successfully.
Feb 20 04:42:33 localhost podman[291665]:
Feb 20 04:42:33 localhost podman[291665]: 2026-02-20 09:42:33.216698566 +0000 UTC m=+0.076992161 container create bb1202d8651945372b4742c2acff8d30ca236c983146db8c7adb5579c4aa537f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_shockley, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, distribution-scope=public, RELEASE=main, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, io.buildah.version=1.42.2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., name=rhceph, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, ceph=True, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1770267347)
Feb 20 04:42:33 localhost systemd[1]: Started libpod-conmon-bb1202d8651945372b4742c2acff8d30ca236c983146db8c7adb5579c4aa537f.scope.
Feb 20 04:42:33 localhost ceph-mon[288002]: mon.np0005625202@4(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 04:42:33 localhost systemd[1]: Started libcrun container.
Feb 20 04:42:33 localhost podman[291665]: 2026-02-20 09:42:33.186626047 +0000 UTC m=+0.046919662 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 04:42:33 localhost podman[291665]: 2026-02-20 09:42:33.292582816 +0000 UTC m=+0.152876401 container init bb1202d8651945372b4742c2acff8d30ca236c983146db8c7adb5579c4aa537f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_shockley, description=Red Hat Ceph Storage 7, release=1770267347, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.openshift.expose-services=, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, distribution-scope=public, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, name=rhceph, vcs-type=git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, io.buildah.version=1.42.2)
Feb 20 04:42:33 localhost podman[291665]: 2026-02-20 09:42:33.303252353 +0000 UTC m=+0.163545968 container start bb1202d8651945372b4742c2acff8d30ca236c983146db8c7adb5579c4aa537f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_shockley, build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, vcs-type=git, RELEASE=main, io.openshift.expose-services=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., name=rhceph, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, description=Red Hat Ceph Storage 7, ceph=True, maintainer=Guillaume Abrioux , distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, release=1770267347)
Feb 20 04:42:33 localhost podman[291665]: 2026-02-20 09:42:33.303829558 +0000 UTC m=+0.164123153 container attach bb1202d8651945372b4742c2acff8d30ca236c983146db8c7adb5579c4aa537f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_shockley, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.42.2, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, GIT_CLEAN=True, vcs-type=git, io.openshift.tags=rhceph ceph, release=1770267347, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, CEPH_POINT_RELEASE=, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, RELEASE=main, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc.)
Feb 20 04:42:33 localhost cranky_shockley[291680]: 167 167
Feb 20 04:42:33 localhost systemd[1]: libpod-bb1202d8651945372b4742c2acff8d30ca236c983146db8c7adb5579c4aa537f.scope: Deactivated successfully.
Feb 20 04:42:33 localhost podman[291665]: 2026-02-20 09:42:33.317263079 +0000 UTC m=+0.177556674 container died bb1202d8651945372b4742c2acff8d30ca236c983146db8c7adb5579c4aa537f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_shockley, version=7, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, release=1770267347, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, distribution-scope=public, maintainer=Guillaume Abrioux , RELEASE=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.42.2, description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, vendor=Red Hat, Inc., name=rhceph, vcs-type=git, org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 04:42:33 localhost podman[291685]: 2026-02-20 09:42:33.41253209 +0000 UTC m=+0.085757946 container remove bb1202d8651945372b4742c2acff8d30ca236c983146db8c7adb5579c4aa537f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_shockley, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, vcs-type=git, io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, io.openshift.expose-services=, maintainer=Guillaume Abrioux , GIT_BRANCH=main, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., ceph=True, release=1770267347, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, build-date=2026-02-09T10:25:24Z, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7)
Feb 20 04:42:33 localhost systemd[1]: libpod-conmon-bb1202d8651945372b4742c2acff8d30ca236c983146db8c7adb5579c4aa537f.scope: Deactivated successfully.
Feb 20 04:42:33 localhost ceph-mgr[286565]: ms_deliver_dispatch: unhandled message 0x5628569bf080 mon_map magic: 0 from mon.0 v2:172.18.0.105:3300/0
Feb 20 04:42:33 localhost ceph-mon[288002]: mon.np0005625202@4(peon) e8 removed from monmap, suicide.
Feb 20 04:42:33 localhost podman[291715]: 2026-02-20 09:42:33.727096867 +0000 UTC m=+0.059130151 container died 29aac7b9c05c78e0de9f028449b60b7edbb2dd4f67b10bbf243ea89d24959273 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mon-np0005625202, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, version=7, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, release=1770267347, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, vcs-type=git, com.redhat.component=rhceph-container, ceph=True)
Feb 20 04:42:33 localhost podman[291715]: 2026-02-20 09:42:33.760407692 +0000 UTC m=+0.092440936 container remove 29aac7b9c05c78e0de9f028449b60b7edbb2dd4f67b10bbf243ea89d24959273 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mon-np0005625202, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, ceph=True, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.openshift.expose-services=, maintainer=Guillaume Abrioux , architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, name=rhceph, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, version=7, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Feb 20 04:42:33 localhost systemd[1]: var-lib-containers-storage-overlay-6ce5b6a3752104cd11f28b27bf99eac4c81401730ca5a991727a2cb8dd224ca7-merged.mount: Deactivated successfully.
Feb 20 04:42:33 localhost systemd[1]: var-lib-containers-storage-overlay-fafcb32677e5b477d44815a1775d6515f3325caf8ed1a33a7ee0ad3a9a456b2a-merged.mount: Deactivated successfully.
Feb 20 04:42:34 localhost systemd[1]: ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8@mon.np0005625202.service: Deactivated successfully.
Feb 20 04:42:34 localhost systemd[1]: Stopped Ceph mon.np0005625202 for a8557ee9-b55d-5519-942c-cf8f6172f1d8.
Feb 20 04:42:34 localhost systemd[1]: ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8@mon.np0005625202.service: Consumed 3.846s CPU time.
Feb 20 04:42:34 localhost systemd[1]: Reloading.
Feb 20 04:42:34 localhost systemd-sysv-generator[291896]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 04:42:34 localhost systemd-rc-local-generator[291890]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 04:42:34 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:42:34 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 04:42:34 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:42:34 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:42:34 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 04:42:34 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 04:42:34 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:42:34 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:42:34 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 04:42:38 localhost ceph-mds[283306]: mds.beacon.mds.np0005625202.akhmop missed beacon ack from the monitors
Feb 20 04:42:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.
Feb 20 04:42:39 localhost podman[291904]: 2026-02-20 09:42:39.444906333 +0000 UTC m=+0.081632956 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors )
Feb 20 04:42:39 localhost podman[291904]: 2026-02-20 09:42:39.486002628 +0000 UTC m=+0.122729241 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors )
Feb 20 04:42:39 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully.
Feb 20 04:42:40 localhost sshd[291927]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:42:43 localhost podman[291982]:
Feb 20 04:42:43 localhost podman[291982]: 2026-02-20 09:42:43.404934888 +0000 UTC m=+0.074644168 container create 2e04bd423f05f197592dd40e93c557e4c9f9d9b66ab65cf0096322901eb0db32 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_goodall, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, release=1770267347, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, name=rhceph, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, vendor=Red Hat, Inc., ceph=True)
Feb 20 04:42:43 localhost systemd[1]: Started libpod-conmon-2e04bd423f05f197592dd40e93c557e4c9f9d9b66ab65cf0096322901eb0db32.scope.
Feb 20 04:42:43 localhost systemd[1]: Started libcrun container.
Feb 20 04:42:43 localhost podman[291982]: 2026-02-20 09:42:43.373921224 +0000 UTC m=+0.043630544 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 04:42:43 localhost podman[291982]: 2026-02-20 09:42:43.477351685 +0000 UTC m=+0.147060965 container init 2e04bd423f05f197592dd40e93c557e4c9f9d9b66ab65cf0096322901eb0db32 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_goodall, io.openshift.expose-services=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, distribution-scope=public, version=7, name=rhceph, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1770267347, RELEASE=main, ceph=True, GIT_BRANCH=main, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers)
Feb 20 04:42:43 localhost podman[291982]: 2026-02-20 09:42:43.488176056 +0000 UTC m=+0.157885336 container start 2e04bd423f05f197592dd40e93c557e4c9f9d9b66ab65cf0096322901eb0db32 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_goodall, description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, distribution-scope=public, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, CEPH_POINT_RELEASE=, io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, name=rhceph, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, GIT_CLEAN=True, com.redhat.component=rhceph-container, release=1770267347, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z)
Feb 20 04:42:43 localhost podman[291982]: 2026-02-20 09:42:43.488543336 +0000 UTC m=+0.158252666 container attach 2e04bd423f05f197592dd40e93c557e4c9f9d9b66ab65cf0096322901eb0db32 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_goodall, ceph=True, maintainer=Guillaume Abrioux , vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, GIT_BRANCH=main, build-date=2026-02-09T10:25:24Z, version=7, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI,
GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, CEPH_POINT_RELEASE=, GIT_CLEAN=True, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main) Feb 20 04:42:43 localhost systemd[1]: libpod-2e04bd423f05f197592dd40e93c557e4c9f9d9b66ab65cf0096322901eb0db32.scope: Deactivated successfully. Feb 20 04:42:43 localhost dreamy_goodall[291996]: 167 167 Feb 20 04:42:43 localhost podman[291982]: 2026-02-20 09:42:43.491341661 +0000 UTC m=+0.161050951 container died 2e04bd423f05f197592dd40e93c557e4c9f9d9b66ab65cf0096322901eb0db32 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_goodall, io.buildah.version=1.42.2, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , architecture=x86_64, CEPH_POINT_RELEASE=, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, distribution-scope=public, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, vcs-type=git, io.openshift.tags=rhceph ceph, RELEASE=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z) Feb 20 04:42:43 localhost 
podman[292001]: 2026-02-20 09:42:43.57987024 +0000 UTC m=+0.079622341 container remove 2e04bd423f05f197592dd40e93c557e4c9f9d9b66ab65cf0096322901eb0db32 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_goodall, GIT_BRANCH=main, version=7, build-date=2026-02-09T10:25:24Z, architecture=x86_64, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, vcs-type=git, release=1770267347, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, ceph=True, maintainer=Guillaume Abrioux , io.buildah.version=1.42.2, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, RELEASE=main, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 20 04:42:43 localhost systemd[1]: libpod-conmon-2e04bd423f05f197592dd40e93c557e4c9f9d9b66ab65cf0096322901eb0db32.scope: Deactivated successfully. 
Feb 20 04:42:44 localhost podman[292068]: Feb 20 04:42:44 localhost podman[292068]: 2026-02-20 09:42:44.280807613 +0000 UTC m=+0.074001340 container create 2ef3b3c327869c73bd32118487a080637a743766cfa03a7be6eadcc2f7894f95 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_fermat, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_CLEAN=True, vendor=Red Hat, Inc., name=rhceph, RELEASE=main, maintainer=Guillaume Abrioux , ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, architecture=x86_64, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, version=7, release=1770267347) Feb 20 04:42:44 localhost systemd[1]: Started libpod-conmon-2ef3b3c327869c73bd32118487a080637a743766cfa03a7be6eadcc2f7894f95.scope. Feb 20 04:42:44 localhost systemd[1]: Started libcrun container. 
Feb 20 04:42:44 localhost podman[292068]: 2026-02-20 09:42:44.344431163 +0000 UTC m=+0.137624880 container init 2ef3b3c327869c73bd32118487a080637a743766cfa03a7be6eadcc2f7894f95 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_fermat, architecture=x86_64, org.opencontainers.image.created=2026-02-09T10:25:24Z, distribution-scope=public, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., GIT_BRANCH=main, version=7, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, io.buildah.version=1.42.2, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 04:42:44 localhost podman[292068]: 2026-02-20 09:42:44.251423593 +0000 UTC m=+0.044617380 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 04:42:44 localhost podman[292068]: 2026-02-20 09:42:44.353793216 +0000 UTC m=+0.146986923 container start 2ef3b3c327869c73bd32118487a080637a743766cfa03a7be6eadcc2f7894f95 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_fermat, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, 
build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, GIT_CLEAN=True, distribution-scope=public, name=rhceph, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.42.2, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, RELEASE=main, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.expose-services=, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.) Feb 20 04:42:44 localhost podman[292068]: 2026-02-20 09:42:44.354243538 +0000 UTC m=+0.147437295 container attach 2ef3b3c327869c73bd32118487a080637a743766cfa03a7be6eadcc2f7894f95 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_fermat, io.buildah.version=1.42.2, io.openshift.expose-services=, vendor=Red Hat, Inc., GIT_BRANCH=main, GIT_CLEAN=True, ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1770267347, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, 
url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, build-date=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , vcs-type=git, com.redhat.component=rhceph-container) Feb 20 04:42:44 localhost kind_fermat[292083]: 167 167 Feb 20 04:42:44 localhost systemd[1]: libpod-2ef3b3c327869c73bd32118487a080637a743766cfa03a7be6eadcc2f7894f95.scope: Deactivated successfully. Feb 20 04:42:44 localhost podman[292068]: 2026-02-20 09:42:44.35729819 +0000 UTC m=+0.150491967 container died 2ef3b3c327869c73bd32118487a080637a743766cfa03a7be6eadcc2f7894f95 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_fermat, name=rhceph, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, io.buildah.version=1.42.2, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_BRANCH=main, RELEASE=main, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, distribution-scope=public, maintainer=Guillaume Abrioux , version=7, release=1770267347, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Feb 20 04:42:44 localhost systemd[1]: var-lib-containers-storage-overlay-c57767731ce8b4338d8b8a4b96abdf905896cf4c6d8c5bae466b8dd62ad22f67-merged.mount: Deactivated successfully. Feb 20 04:42:44 localhost systemd[1]: var-lib-containers-storage-overlay-bd92a26d598586ea3250f460831b434b25ff26543fe05a970c57071b3fa23c50-merged.mount: Deactivated successfully. Feb 20 04:42:44 localhost podman[292088]: 2026-02-20 09:42:44.455623823 +0000 UTC m=+0.090674509 container remove 2ef3b3c327869c73bd32118487a080637a743766cfa03a7be6eadcc2f7894f95 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_fermat, GIT_CLEAN=True, vendor=Red Hat, Inc., ceph=True, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, name=rhceph, maintainer=Guillaume Abrioux , GIT_BRANCH=main, build-date=2026-02-09T10:25:24Z, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.42.2, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1770267347, io.k8s.description=Red Hat Ceph Storage 7) Feb 20 04:42:44 localhost systemd[1]: libpod-conmon-2ef3b3c327869c73bd32118487a080637a743766cfa03a7be6eadcc2f7894f95.scope: Deactivated successfully. 
Feb 20 04:42:44 localhost sshd[292129]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:42:45 localhost podman[292165]: Feb 20 04:42:45 localhost podman[292165]: 2026-02-20 09:42:45.269913613 +0000 UTC m=+0.068827822 container create 3656c69c775181f7d4d5f60c3f025113c1e302d74f2948cc03a88e2210d08c9f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_keldysh, release=1770267347, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2026-02-09T10:25:24Z, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, description=Red Hat Ceph Storage 7, RELEASE=main, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, ceph=True, GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, vcs-type=git, io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git) Feb 20 04:42:45 localhost systemd[1]: Started libpod-conmon-3656c69c775181f7d4d5f60c3f025113c1e302d74f2948cc03a88e2210d08c9f.scope. Feb 20 04:42:45 localhost systemd[1]: Started libcrun container. 
Feb 20 04:42:45 localhost podman[292165]: 2026-02-20 09:42:45.334687484 +0000 UTC m=+0.133601693 container init 3656c69c775181f7d4d5f60c3f025113c1e302d74f2948cc03a88e2210d08c9f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_keldysh, url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, vendor=Red Hat, Inc., GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, architecture=x86_64, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z, version=7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, distribution-scope=public, RELEASE=main, io.buildah.version=1.42.2, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7) Feb 20 04:42:45 localhost podman[292165]: 2026-02-20 09:42:45.238065356 +0000 UTC m=+0.036979615 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 04:42:45 localhost podman[292165]: 2026-02-20 09:42:45.343953233 +0000 UTC m=+0.142867442 container start 3656c69c775181f7d4d5f60c3f025113c1e302d74f2948cc03a88e2210d08c9f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_keldysh, vcs-type=git, RELEASE=main, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, version=7, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, build-date=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, distribution-scope=public, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, ceph=True, io.openshift.expose-services=, GIT_CLEAN=True) Feb 20 04:42:45 localhost podman[292165]: 2026-02-20 09:42:45.344138538 +0000 UTC m=+0.143052757 container attach 3656c69c775181f7d4d5f60c3f025113c1e302d74f2948cc03a88e2210d08c9f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_keldysh, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, build-date=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the 
latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, distribution-scope=public, GIT_CLEAN=True) Feb 20 04:42:45 localhost blissful_keldysh[292180]: 167 167 Feb 20 04:42:45 localhost systemd[1]: libpod-3656c69c775181f7d4d5f60c3f025113c1e302d74f2948cc03a88e2210d08c9f.scope: Deactivated successfully. Feb 20 04:42:45 localhost podman[292165]: 2026-02-20 09:42:45.346947823 +0000 UTC m=+0.145862052 container died 3656c69c775181f7d4d5f60c3f025113c1e302d74f2948cc03a88e2210d08c9f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_keldysh, io.buildah.version=1.42.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, RELEASE=main, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , distribution-scope=public, CEPH_POINT_RELEASE=, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, build-date=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1770267347, io.openshift.expose-services=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, 
org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph) Feb 20 04:42:45 localhost systemd[1]: var-lib-containers-storage-overlay-ef187da5e3b4e842d9d78aac0315634a59e603f9c19a025b4f91c0ac78fad337-merged.mount: Deactivated successfully. Feb 20 04:42:45 localhost podman[292185]: 2026-02-20 09:42:45.434396884 +0000 UTC m=+0.077841594 container remove 3656c69c775181f7d4d5f60c3f025113c1e302d74f2948cc03a88e2210d08c9f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_keldysh, release=1770267347, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, ceph=True, GIT_CLEAN=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, GIT_BRANCH=main, RELEASE=main, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux ) Feb 20 04:42:45 localhost systemd[1]: libpod-conmon-3656c69c775181f7d4d5f60c3f025113c1e302d74f2948cc03a88e2210d08c9f.scope: Deactivated successfully. 
Feb 20 04:42:45 localhost nova_compute[280804]: 2026-02-20 09:42:45.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:42:45 localhost nova_compute[280804]: 2026-02-20 09:42:45.535 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:42:45 localhost nova_compute[280804]: 2026-02-20 09:42:45.536 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:42:45 localhost nova_compute[280804]: 2026-02-20 09:42:45.536 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:42:45 localhost nova_compute[280804]: 2026-02-20 09:42:45.537 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Auditing locally available compute resources for np0005625202.localdomain (node: np0005625202.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 20 04:42:45 localhost nova_compute[280804]: 2026-02-20 09:42:45.537 280808 DEBUG oslo_concurrency.processutils [None 
req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:42:46 localhost nova_compute[280804]: 2026-02-20 09:42:46.010 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:42:46 localhost podman[241347]: time="2026-02-20T09:42:46Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 20 04:42:46 localhost podman[241347]: @ - - [20/Feb/2026:09:42:46 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155393 "" "Go-http-client/1.1" Feb 20 04:42:46 localhost podman[241347]: @ - - [20/Feb/2026:09:42:46 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18257 "" "Go-http-client/1.1" Feb 20 04:42:46 localhost nova_compute[280804]: 2026-02-20 09:42:46.172 280808 WARNING nova.virt.libvirt.driver [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 20 04:42:46 localhost nova_compute[280804]: 2026-02-20 09:42:46.173 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Hypervisor/Node resource view: name=np0005625202.localdomain free_ram=12043MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", 
"product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 20 04:42:46 localhost nova_compute[280804]: 2026-02-20 09:42:46.174 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:42:46 localhost nova_compute[280804]: 2026-02-20 09:42:46.175 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:42:46 localhost nova_compute[280804]: 2026-02-20 09:42:46.249 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 20 04:42:46 localhost nova_compute[280804]: 2026-02-20 09:42:46.250 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Final resource view: name=np0005625202.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 20 04:42:46 localhost nova_compute[280804]: 2026-02-20 09:42:46.296 280808 DEBUG 
oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:42:46 localhost podman[292285]: Feb 20 04:42:46 localhost podman[292285]: 2026-02-20 09:42:46.311876102 +0000 UTC m=+0.064286248 container create 235834a2d258472401d0e9fff1ce73d5508bacec9d95f6cb9171c574147cff01 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_jennings, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, maintainer=Guillaume Abrioux , io.buildah.version=1.42.2, io.openshift.tags=rhceph ceph, architecture=x86_64, RELEASE=main, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, distribution-scope=public, version=7, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2026-02-09T10:25:24Z) Feb 20 04:42:46 localhost systemd[1]: Started libpod-conmon-235834a2d258472401d0e9fff1ce73d5508bacec9d95f6cb9171c574147cff01.scope. Feb 20 04:42:46 localhost systemd[1]: Started libcrun container. 
Feb 20 04:42:46 localhost podman[292285]: 2026-02-20 09:42:46.378463652 +0000 UTC m=+0.130873798 container init 235834a2d258472401d0e9fff1ce73d5508bacec9d95f6cb9171c574147cff01 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_jennings, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., GIT_BRANCH=main, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, build-date=2026-02-09T10:25:24Z, architecture=x86_64, RELEASE=main, description=Red Hat Ceph Storage 7, version=7, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_CLEAN=True, io.openshift.expose-services=) Feb 20 04:42:46 localhost podman[292285]: 2026-02-20 09:42:46.288989207 +0000 UTC m=+0.041399363 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 04:42:46 localhost podman[292285]: 2026-02-20 09:42:46.392432198 +0000 UTC m=+0.144842374 container start 235834a2d258472401d0e9fff1ce73d5508bacec9d95f6cb9171c574147cff01 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_jennings, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, 
vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, distribution-scope=public, build-date=2026-02-09T10:25:24Z, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, ceph=True, version=7, maintainer=Guillaume Abrioux , architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, RELEASE=main, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, name=rhceph, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=) Feb 20 04:42:46 localhost podman[292285]: 2026-02-20 09:42:46.392731166 +0000 UTC m=+0.145141312 container attach 235834a2d258472401d0e9fff1ce73d5508bacec9d95f6cb9171c574147cff01 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_jennings, name=rhceph, version=7, com.redhat.component=rhceph-container, distribution-scope=public, RELEASE=main, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, CEPH_POINT_RELEASE=, io.buildah.version=1.42.2, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, architecture=x86_64, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, 
vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux ) Feb 20 04:42:46 localhost quirky_jennings[292301]: 167 167 Feb 20 04:42:46 localhost systemd[1]: libpod-235834a2d258472401d0e9fff1ce73d5508bacec9d95f6cb9171c574147cff01.scope: Deactivated successfully. Feb 20 04:42:46 localhost podman[292285]: 2026-02-20 09:42:46.395707156 +0000 UTC m=+0.148117302 container died 235834a2d258472401d0e9fff1ce73d5508bacec9d95f6cb9171c574147cff01 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_jennings, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_BRANCH=main, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, org.opencontainers.image.created=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, architecture=x86_64, RELEASE=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., distribution-scope=public, release=1770267347, version=7, name=rhceph, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, 
com.redhat.component=rhceph-container) Feb 20 04:42:46 localhost systemd[1]: var-lib-containers-storage-overlay-f59078b02e372160045fa0d99dc16f2802c708fbcab74c7f6ad397c0ec5cbef6-merged.mount: Deactivated successfully. Feb 20 04:42:46 localhost podman[292306]: 2026-02-20 09:42:46.498277003 +0000 UTC m=+0.089559838 container remove 235834a2d258472401d0e9fff1ce73d5508bacec9d95f6cb9171c574147cff01 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_jennings, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, release=1770267347, io.openshift.tags=rhceph ceph, ceph=True, description=Red Hat Ceph Storage 7, distribution-scope=public, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_BRANCH=main, GIT_CLEAN=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64) Feb 20 04:42:46 localhost systemd[1]: libpod-conmon-235834a2d258472401d0e9fff1ce73d5508bacec9d95f6cb9171c574147cff01.scope: Deactivated successfully. 
Feb 20 04:42:46 localhost nova_compute[280804]: 2026-02-20 09:42:46.769 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:42:46 localhost nova_compute[280804]: 2026-02-20 09:42:46.776 280808 DEBUG nova.compute.provider_tree [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 20 04:42:46 localhost nova_compute[280804]: 2026-02-20 09:42:46.798 280808 DEBUG nova.scheduler.client.report [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Inventory has not changed for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 20 04:42:46 localhost nova_compute[280804]: 2026-02-20 09:42:46.800 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Compute_service record updated for np0005625202.localdomain:np0005625202.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 20 04:42:46 localhost nova_compute[280804]: 2026-02-20 09:42:46.800 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" 
"released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:42:47 localhost podman[292395]: Feb 20 04:42:47 localhost podman[292395]: 2026-02-20 09:42:47.127693604 +0000 UTC m=+0.068344949 container create 9fcfc9d758d7842dabf922351140188e4330b90cd18c4f6fb4dbed5111ad0fd4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_cori, RELEASE=main, vendor=Red Hat, Inc., distribution-scope=public, maintainer=Guillaume Abrioux , build-date=2026-02-09T10:25:24Z, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, version=7, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, vcs-type=git, com.redhat.component=rhceph-container, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, release=1770267347) Feb 20 04:42:47 localhost systemd[1]: Started libpod-conmon-9fcfc9d758d7842dabf922351140188e4330b90cd18c4f6fb4dbed5111ad0fd4.scope. Feb 20 04:42:47 localhost systemd[1]: Started libcrun container. 
Feb 20 04:42:47 localhost podman[292395]: 2026-02-20 09:42:47.190160893 +0000 UTC m=+0.130812228 container init 9fcfc9d758d7842dabf922351140188e4330b90cd18c4f6fb4dbed5111ad0fd4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_cori, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, vcs-type=git, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1770267347, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.42.2, org.opencontainers.image.created=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., architecture=x86_64, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_BRANCH=main, description=Red Hat Ceph Storage 7) Feb 20 04:42:47 localhost podman[292395]: 2026-02-20 09:42:47.094029509 +0000 UTC m=+0.034680904 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 04:42:47 localhost podman[292395]: 2026-02-20 09:42:47.199043051 +0000 UTC m=+0.139694386 container start 9fcfc9d758d7842dabf922351140188e4330b90cd18c4f6fb4dbed5111ad0fd4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_cori, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_CLEAN=True, name=rhceph, GIT_BRANCH=main, 
maintainer=Guillaume Abrioux , vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, ceph=True, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2026-02-09T10:25:24Z, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1770267347, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=7) Feb 20 04:42:47 localhost podman[292395]: 2026-02-20 09:42:47.199317058 +0000 UTC m=+0.139968393 container attach 9fcfc9d758d7842dabf922351140188e4330b90cd18c4f6fb4dbed5111ad0fd4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_cori, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, org.opencontainers.image.created=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., version=7, GIT_CLEAN=True, architecture=x86_64, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, name=rhceph, ceph=True, RELEASE=main, com.redhat.component=rhceph-container, 
CEPH_POINT_RELEASE=, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public) Feb 20 04:42:47 localhost hardcore_cori[292411]: 167 167 Feb 20 04:42:47 localhost systemd[1]: libpod-9fcfc9d758d7842dabf922351140188e4330b90cd18c4f6fb4dbed5111ad0fd4.scope: Deactivated successfully. Feb 20 04:42:47 localhost podman[292395]: 2026-02-20 09:42:47.20236116 +0000 UTC m=+0.143012525 container died 9fcfc9d758d7842dabf922351140188e4330b90cd18c4f6fb4dbed5111ad0fd4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_cori, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, GIT_CLEAN=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-09T10:25:24Z, architecture=x86_64, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , release=1770267347, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, description=Red Hat Ceph Storage 7, distribution-scope=public, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14) Feb 20 04:42:47 localhost podman[292416]: 2026-02-20 
09:42:47.300424456 +0000 UTC m=+0.088990472 container remove 9fcfc9d758d7842dabf922351140188e4330b90cd18c4f6fb4dbed5111ad0fd4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_cori, architecture=x86_64, build-date=2026-02-09T10:25:24Z, io.openshift.expose-services=, GIT_CLEAN=True, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, RELEASE=main, io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, ceph=True, release=1770267347, CEPH_POINT_RELEASE=, GIT_BRANCH=main) Feb 20 04:42:47 localhost systemd[1]: libpod-conmon-9fcfc9d758d7842dabf922351140188e4330b90cd18c4f6fb4dbed5111ad0fd4.scope: Deactivated successfully. Feb 20 04:42:47 localhost systemd[1]: var-lib-containers-storage-overlay-666ef45f6c385386d965b40154dc9ca5fec71426461bfddbb8c3876fc6fb2e17-merged.mount: Deactivated successfully. 
Feb 20 04:42:47 localhost nova_compute[280804]: 2026-02-20 09:42:47.802 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:42:47 localhost nova_compute[280804]: 2026-02-20 09:42:47.803 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 20 04:42:47 localhost nova_compute[280804]: 2026-02-20 09:42:47.803 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 20 04:42:47 localhost nova_compute[280804]: 2026-02-20 09:42:47.822 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 20 04:42:47 localhost nova_compute[280804]: 2026-02-20 09:42:47.822 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:42:48 localhost nova_compute[280804]: 2026-02-20 09:42:48.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:42:48 localhost nova_compute[280804]: 2026-02-20 09:42:48.511 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:42:48 localhost nova_compute[280804]: 2026-02-20 09:42:48.512 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 20 04:42:49 localhost nova_compute[280804]: 2026-02-20 09:42:49.511 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:42:49 localhost nova_compute[280804]: 2026-02-20 09:42:49.511 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:42:50 localhost podman[292509]: Feb 20 04:42:50 localhost podman[292509]: 2026-02-20 09:42:50.340809529 +0000 UTC m=+0.075264515 container create 66991a1d6607f2502584a20671fe5efdb2e1d5cee1acbe3f8158f0aac2081e43 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_heisenberg, build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, vcs-type=git, architecture=x86_64, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, version=7, CEPH_POINT_RELEASE=, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, 
release=1770267347, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14) Feb 20 04:42:50 localhost systemd[1]: Started libpod-conmon-66991a1d6607f2502584a20671fe5efdb2e1d5cee1acbe3f8158f0aac2081e43.scope. Feb 20 04:42:50 localhost systemd[1]: Started libcrun container. Feb 20 04:42:50 localhost podman[292509]: 2026-02-20 09:42:50.4052162 +0000 UTC m=+0.139671196 container init 66991a1d6607f2502584a20671fe5efdb2e1d5cee1acbe3f8158f0aac2081e43 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_heisenberg, io.openshift.tags=rhceph ceph, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.created=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, version=7, architecture=x86_64, io.openshift.expose-services=, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, GIT_BRANCH=main, ceph=True, RELEASE=main) Feb 20 04:42:50 localhost podman[292509]: 2026-02-20 09:42:50.310162495 +0000 UTC m=+0.044617491 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 04:42:50 localhost podman[292509]: 
2026-02-20 09:42:50.415411653 +0000 UTC m=+0.149866629 container start 66991a1d6607f2502584a20671fe5efdb2e1d5cee1acbe3f8158f0aac2081e43 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_heisenberg, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, distribution-scope=public, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1770267347, maintainer=Guillaume Abrioux , GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, name=rhceph, ceph=True) Feb 20 04:42:50 localhost podman[292509]: 2026-02-20 09:42:50.415761443 +0000 UTC m=+0.150216489 container attach 66991a1d6607f2502584a20671fe5efdb2e1d5cee1acbe3f8158f0aac2081e43 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_heisenberg, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, name=rhceph, vendor=Red Hat, Inc., release=1770267347, vcs-type=git, architecture=x86_64, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, distribution-scope=public, build-date=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14) Feb 20 04:42:50 localhost awesome_heisenberg[292524]: 167 167 Feb 20 04:42:50 localhost systemd[1]: libpod-66991a1d6607f2502584a20671fe5efdb2e1d5cee1acbe3f8158f0aac2081e43.scope: Deactivated successfully. Feb 20 04:42:50 localhost podman[292509]: 2026-02-20 09:42:50.419567456 +0000 UTC m=+0.154022452 container died 66991a1d6607f2502584a20671fe5efdb2e1d5cee1acbe3f8158f0aac2081e43 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_heisenberg, CEPH_POINT_RELEASE=, GIT_CLEAN=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, ceph=True, RELEASE=main, build-date=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, name=rhceph, io.buildah.version=1.42.2, 
org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public) Feb 20 04:42:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217. Feb 20 04:42:50 localhost nova_compute[280804]: 2026-02-20 09:42:50.511 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:42:50 localhost nova_compute[280804]: 2026-02-20 09:42:50.511 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:42:50 localhost podman[292529]: 2026-02-20 09:42:50.515186636 +0000 UTC m=+0.083062104 container remove 66991a1d6607f2502584a20671fe5efdb2e1d5cee1acbe3f8158f0aac2081e43 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_heisenberg, io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, vcs-type=git, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.created=2026-02-09T10:25:24Z, ceph=True, maintainer=Guillaume Abrioux , architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported 
base image., url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, name=rhceph, GIT_CLEAN=True, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, release=1770267347, RELEASE=main) Feb 20 04:42:50 localhost systemd[1]: libpod-conmon-66991a1d6607f2502584a20671fe5efdb2e1d5cee1acbe3f8158f0aac2081e43.scope: Deactivated successfully. Feb 20 04:42:50 localhost podman[292541]: 2026-02-20 09:42:50.605471863 +0000 UTC m=+0.096354201 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 
'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible) Feb 20 04:42:50 localhost podman[292541]: 2026-02-20 09:42:50.615685268 +0000 UTC m=+0.106567586 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute) Feb 20 04:42:50 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully. 
Feb 20 04:42:50 localhost podman[292553]: Feb 20 04:42:50 localhost podman[292553]: 2026-02-20 09:42:50.687420645 +0000 UTC m=+0.133283493 container create 98f31b392953786dc7c1f83d271d27c9edbb62a0fcc610d15c7897f171d0aaae (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_blackwell, GIT_CLEAN=True, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2026-02-09T10:25:24Z, name=rhceph, build-date=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, RELEASE=main, io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, maintainer=Guillaume Abrioux , vcs-type=git, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.openshift.expose-services=) Feb 20 04:42:50 localhost podman[292553]: 2026-02-20 09:42:50.607011125 +0000 UTC m=+0.052873963 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 04:42:50 localhost systemd[1]: Started libpod-conmon-98f31b392953786dc7c1f83d271d27c9edbb62a0fcc610d15c7897f171d0aaae.scope. Feb 20 04:42:50 localhost systemd[1]: Started libcrun container. 
Feb 20 04:42:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f6789ef2ffdd08af2c103e360e7487af66ed7ca62d3ebc2237b599f7c9a6ba5/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff) Feb 20 04:42:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f6789ef2ffdd08af2c103e360e7487af66ed7ca62d3ebc2237b599f7c9a6ba5/merged/tmp/config supports timestamps until 2038 (0x7fffffff) Feb 20 04:42:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f6789ef2ffdd08af2c103e360e7487af66ed7ca62d3ebc2237b599f7c9a6ba5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Feb 20 04:42:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f6789ef2ffdd08af2c103e360e7487af66ed7ca62d3ebc2237b599f7c9a6ba5/merged/var/lib/ceph/mon/ceph-np0005625202 supports timestamps until 2038 (0x7fffffff) Feb 20 04:42:50 localhost podman[292553]: 2026-02-20 09:42:50.751530729 +0000 UTC m=+0.197393567 container init 98f31b392953786dc7c1f83d271d27c9edbb62a0fcc610d15c7897f171d0aaae (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_blackwell, release=1770267347, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, version=7, distribution-scope=public, name=rhceph, io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , build-date=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, architecture=x86_64, ceph=True, GIT_BRANCH=main, 
io.openshift.expose-services=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True) Feb 20 04:42:50 localhost podman[292553]: 2026-02-20 09:42:50.766678357 +0000 UTC m=+0.212541195 container start 98f31b392953786dc7c1f83d271d27c9edbb62a0fcc610d15c7897f171d0aaae (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_blackwell, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, maintainer=Guillaume Abrioux , name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, vcs-type=git, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_CLEAN=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.42.2, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z, architecture=x86_64) Feb 20 04:42:50 localhost podman[292553]: 2026-02-20 09:42:50.767019946 +0000 UTC m=+0.212882824 container attach 98f31b392953786dc7c1f83d271d27c9edbb62a0fcc610d15c7897f171d0aaae 
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_blackwell, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, GIT_BRANCH=main, release=1770267347, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.openshift.expose-services=, ceph=True, maintainer=Guillaume Abrioux , build-date=2026-02-09T10:25:24Z, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., GIT_CLEAN=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z) Feb 20 04:42:50 localhost systemd[1]: libpod-98f31b392953786dc7c1f83d271d27c9edbb62a0fcc610d15c7897f171d0aaae.scope: Deactivated successfully. 
Feb 20 04:42:50 localhost podman[292553]: 2026-02-20 09:42:50.853810239 +0000 UTC m=+0.299673087 container died 98f31b392953786dc7c1f83d271d27c9edbb62a0fcc610d15c7897f171d0aaae (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_blackwell, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, release=1770267347, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, RELEASE=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, version=7, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, name=rhceph, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True) Feb 20 04:42:50 localhost podman[292603]: 2026-02-20 09:42:50.94461397 +0000 UTC m=+0.078462911 container remove 98f31b392953786dc7c1f83d271d27c9edbb62a0fcc610d15c7897f171d0aaae (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_blackwell, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, vcs-type=git, build-date=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, distribution-scope=public, io.k8s.display-name=Red 
Hat Ceph Storage 7 on RHEL 9, release=1770267347, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_BRANCH=main, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., version=7) Feb 20 04:42:50 localhost systemd[1]: libpod-conmon-98f31b392953786dc7c1f83d271d27c9edbb62a0fcc610d15c7897f171d0aaae.scope: Deactivated successfully. Feb 20 04:42:50 localhost systemd[1]: Reloading. Feb 20 04:42:51 localhost systemd-rc-local-generator[292640]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 04:42:51 localhost systemd-sysv-generator[292644]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Feb 20 04:42:51 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:42:51 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 20 04:42:51 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:42:51 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:42:51 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 04:42:51 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 20 04:42:51 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:42:51 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:42:51 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:42:51 localhost systemd[1]: var-lib-containers-storage-overlay-8e63b1c7a00626d109279c0976fd2640b11e465b5aaa821a782aa40a07b0c46a-merged.mount: Deactivated successfully. Feb 20 04:42:51 localhost systemd[1]: Reloading. Feb 20 04:42:51 localhost systemd-rc-local-generator[292686]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 20 04:42:51 localhost systemd-sysv-generator[292690]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Feb 20 04:42:51 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:42:51 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Feb 20 04:42:51 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:42:51 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:42:51 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 20 04:42:51 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Feb 20 04:42:51 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:42:51 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:42:51 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Feb 20 04:42:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c. Feb 20 04:42:51 localhost systemd[1]: Starting Ceph mon.np0005625202 for a8557ee9-b55d-5519-942c-cf8f6172f1d8... 
Feb 20 04:42:51 localhost podman[292696]: 2026-02-20 09:42:51.78235847 +0000 UTC m=+0.092247781 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., release=1770267347, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.expose-services=, container_name=openstack_network_exporter, config_id=openstack_network_exporter, distribution-scope=public, build-date=2026-02-05T04:57:10Z, maintainer=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', 
'/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=9.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9/ubi-minimal, io.openshift.tags=minimal rhel9, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Feb 20 04:42:51 localhost podman[292696]: 2026-02-20 09:42:51.799935533 +0000 UTC m=+0.109824874 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, io.buildah.version=1.33.7, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-05T04:57:10Z, version=9.7, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, container_name=openstack_network_exporter, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1770267347, name=ubi9/ubi-minimal, vendor=Red Hat, Inc.) 
Feb 20 04:42:51 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. Feb 20 04:42:52 localhost podman[292768]: Feb 20 04:42:52 localhost podman[292768]: 2026-02-20 09:42:52.095273852 +0000 UTC m=+0.071969936 container create bd908c902c424b65e39e4c1a4c1e68900c929bcb7398b310fa9f666109a952eb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mon-np0005625202, version=7, RELEASE=main, build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, release=1770267347, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, GIT_CLEAN=True, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 20 04:42:52 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a76fd8b79b31ead894028535ff7d009d6e6d53321e748b2bad70ec3e4279de7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Feb 20 04:42:52 localhost kernel: xfs filesystem being remounted at 
/var/lib/containers/storage/overlay/1a76fd8b79b31ead894028535ff7d009d6e6d53321e748b2bad70ec3e4279de7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Feb 20 04:42:52 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a76fd8b79b31ead894028535ff7d009d6e6d53321e748b2bad70ec3e4279de7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Feb 20 04:42:52 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a76fd8b79b31ead894028535ff7d009d6e6d53321e748b2bad70ec3e4279de7/merged/var/lib/ceph/mon/ceph-np0005625202 supports timestamps until 2038 (0x7fffffff) Feb 20 04:42:52 localhost podman[292768]: 2026-02-20 09:42:52.143047676 +0000 UTC m=+0.119743770 container init bd908c902c424b65e39e4c1a4c1e68900c929bcb7398b310fa9f666109a952eb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mon-np0005625202, release=1770267347, GIT_BRANCH=main, io.buildah.version=1.42.2, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, name=rhceph, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, maintainer=Guillaume Abrioux , version=7, distribution-scope=public, ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, 
com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers) Feb 20 04:42:52 localhost podman[292768]: 2026-02-20 09:42:52.151044501 +0000 UTC m=+0.127740595 container start bd908c902c424b65e39e4c1a4c1e68900c929bcb7398b310fa9f666109a952eb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mon-np0005625202, RELEASE=main, version=7, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, release=1770267347, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, ceph=True, GIT_BRANCH=main, maintainer=Guillaume Abrioux , build-date=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, architecture=x86_64, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14) Feb 20 04:42:52 localhost bash[292768]: bd908c902c424b65e39e4c1a4c1e68900c929bcb7398b310fa9f666109a952eb Feb 20 04:42:52 localhost podman[292768]: 2026-02-20 09:42:52.066987972 +0000 UTC m=+0.043684136 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 04:42:52 localhost systemd[1]: Started Ceph mon.np0005625202 for a8557ee9-b55d-5519-942c-cf8f6172f1d8. 
Feb 20 04:42:52 localhost ceph-mon[292786]: set uid:gid to 167:167 (ceph:ceph)
Feb 20 04:42:52 localhost ceph-mon[292786]: ceph version 18.2.1-381.el9cp (984f410e2a30899deb131725765b62212b1621db) reef (stable), process ceph-mon, pid 2
Feb 20 04:42:52 localhost ceph-mon[292786]: pidfile_write: ignore empty --pid-file
Feb 20 04:42:52 localhost ceph-mon[292786]: load: jerasure load: lrc
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: RocksDB version: 7.9.2
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Git sha 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Compile date 2026-02-06 00:00:00
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: DB SUMMARY
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: DB Session ID: 54EDA52XUT1SDV7DF7Y7
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: CURRENT file: CURRENT
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: IDENTITY file: IDENTITY
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: MANIFEST file: MANIFEST-000005 size: 59 Bytes
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: SST files in /var/lib/ceph/mon/ceph-np0005625202/store.db dir, Total Num: 0, files:
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-np0005625202/store.db: 000004.log size: 886 ;
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.error_if_exists: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.create_if_missing: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.paranoid_checks: 1
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.flush_verify_memtable_count: 1
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.track_and_verify_wals_in_manifest: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.env: 0x55a9b66dba20
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.fs: PosixFileSystem
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.info_log: 0x55a9b723ed20
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.max_file_opening_threads: 16
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.statistics: (nil)
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.use_fsync: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.max_log_file_size: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.max_manifest_file_size: 1073741824
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.log_file_time_to_roll: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.keep_log_file_num: 1000
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.recycle_log_file_num: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.allow_fallocate: 1
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.allow_mmap_reads: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.allow_mmap_writes: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.use_direct_reads: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.create_missing_column_families: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.db_log_dir:
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.wal_dir:
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.table_cache_numshardbits: 6
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.WAL_ttl_seconds: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.WAL_size_limit_MB: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.manifest_preallocation_size: 4194304
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.is_fd_close_on_exec: 1
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.advise_random_on_open: 1
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.db_write_buffer_size: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.write_buffer_manager: 0x55a9b724f540
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.access_hint_on_compaction_start: 1
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.random_access_max_buffer_size: 1048576
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.use_adaptive_mutex: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.rate_limiter: (nil)
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.wal_recovery_mode: 2
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.enable_thread_tracking: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.enable_pipelined_write: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.unordered_write: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.allow_concurrent_memtable_write: 1
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.enable_write_thread_adaptive_yield: 1
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.write_thread_max_yield_usec: 100
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.write_thread_slow_yield_usec: 3
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.row_cache: None
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.wal_filter: None
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.avoid_flush_during_recovery: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.allow_ingest_behind: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.two_write_queues: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.manual_wal_flush: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.wal_compression: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.atomic_flush: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.avoid_unnecessary_blocking_io: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.persist_stats_to_disk: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.write_dbid_to_manifest: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.log_readahead_size: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.file_checksum_gen_factory: Unknown
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.best_efforts_recovery: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.max_bgerror_resume_count: 2147483647
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.bgerror_resume_retry_interval: 1000000
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.allow_data_in_errors: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.db_host_id: __hostname__
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.enforce_single_del_contracts: true
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.max_background_jobs: 2
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.max_background_compactions: -1
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.max_subcompactions: 1
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.avoid_flush_during_shutdown: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.writable_file_max_buffer_size: 1048576
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.delayed_write_rate : 16777216
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.max_total_wal_size: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.stats_dump_period_sec: 600
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.stats_persist_period_sec: 600
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.stats_history_buffer_size: 1048576
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.max_open_files: -1
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.bytes_per_sync: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.wal_bytes_per_sync: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.strict_bytes_per_sync: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.compaction_readahead_size: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.max_background_flushes: -1
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Compression algorithms supported:
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: #011kZSTD supported: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: #011kXpressCompression supported: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: #011kBZip2Compression supported: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: #011kLZ4Compression supported: 1
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: #011kZlibCompression supported: 1
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: #011kLZ4HCCompression supported: 1
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: #011kSnappyCompression supported: 1
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Fast CRC32 supported: Supported on x86
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: DMutex implementation: pthread_mutex_t
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-np0005625202/store.db/MANIFEST-000005
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.merge_operator:
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.compaction_filter: None
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.compaction_filter_factory: None
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.sst_partitioner_factory: None
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.memtable_factory: SkipListFactory
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.table_factory: BlockBasedTable
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a9b723e980)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55a9b723b350#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.write_buffer_size: 33554432
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.max_write_buffer_number: 2
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.compression: NoCompression
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.bottommost_compression: Disabled
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.prefix_extractor: nullptr
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.num_levels: 7
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.min_write_buffer_number_to_merge: 1
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.bottommost_compression_opts.level: 32767
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.bottommost_compression_opts.enabled: false
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.compression_opts.window_bits: -14
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.compression_opts.level: 32767
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.compression_opts.strategy: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.compression_opts.parallel_threads: 1
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.compression_opts.enabled: false
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.level0_file_num_compaction_trigger: 4
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.level0_stop_writes_trigger: 36
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.target_file_size_base: 67108864
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.target_file_size_multiplier: 1
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.max_bytes_for_level_base: 268435456
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.max_compaction_bytes: 1677721600
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.arena_block_size: 1048576
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.disable_auto_compactions: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.table_properties_collectors:
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.inplace_update_support: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.inplace_update_num_locks: 10000
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.memtable_whole_key_filtering: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.memtable_huge_page_size: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.bloom_locality: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.max_successive_merges: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.optimize_filters_for_hits: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.paranoid_file_checks: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.force_consistency_checks: 1
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.report_bg_io_stats: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.ttl: 2592000
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.periodic_compaction_seconds: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.preclude_last_level_data_seconds: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.preserve_internal_time_seconds: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.enable_blob_files: false
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.min_blob_size: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.blob_file_size: 268435456
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.blob_compression_type: NoCompression
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.enable_blob_garbage_collection: false
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.blob_compaction_readahead_size: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.blob_file_starting_level: 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-np0005625202/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: a5b0c71e-1a28-4ac7-8b68-08edb74002f2
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580572199809, "job": 1, "event": "recovery_started", "wal_files": [4]}
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580572202435, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 2012, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 898, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 776, "raw_average_value_size": 155, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580572, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a5b0c71e-1a28-4ac7-8b68-08edb74002f2", "db_session_id": "54EDA52XUT1SDV7DF7Y7", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580572202542, "job": 1, "event": "recovery_finished"}
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55a9b7262e00
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: DB pointer 0x55a9b7358000
Feb 20 04:42:52 localhost ceph-mon[292786]: mon.np0005625202 does not exist in monmap, will attempt to join an existing cluster
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 20 04:42:52 localhost ceph-mon[292786]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 1/0 1.96 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.7 0.00 0.00 1 0.003 0 0 0.0 0.0#012 Sum 1/0 1.96 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.7 0.00 0.00 1 0.003 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.7 0.00 0.00 1 0.003 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.7 0.00 0.00 1 0.003 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.14 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.14 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a9b723b350#2 capacity: 512.00 MB usage: 1.30 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(2,1.08 KB,0.000205636%)#012#012** File Read Latency Histogram By Level [default] **
Feb 20 04:42:52 localhost ceph-mon[292786]: using public_addr v2:172.18.0.103:0/0 -> [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0]
Feb 20 04:42:52 localhost ceph-mon[292786]: starting mon.np0005625202 rank -1 at public addrs [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] at bind addrs [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon_data /var/lib/ceph/mon/ceph-np0005625202 fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 04:42:52 localhost ceph-mon[292786]: mon.np0005625202@-1(???) e0 preinit fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 04:42:52 localhost ceph-mon[292786]: mon.np0005625202@-1(synchronizing) e8 sync_obtain_latest_monmap
Feb 20 04:42:52 localhost ceph-mon[292786]: mon.np0005625202@-1(synchronizing) e8 sync_obtain_latest_monmap obtained monmap e8
Feb 20 04:42:52 localhost systemd[1]: tmp-crun.6dUwzq.mount: Deactivated successfully.
Feb 20 04:42:52 localhost ceph-mon[292786]: mon.np0005625202@-1(synchronizing).mds e17 new map
Feb 20 04:42:52 localhost ceph-mon[292786]: mon.np0005625202@-1(synchronizing).mds e17 print_map#012e17#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#01116#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-02-20T07:58:28.398421+0000#012modified#0112026-02-20T09:40:14.722031+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#01183#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=26854}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[6]#012metadata_pool#0117#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 26854 members: 26854#012[mds.mds.np0005625203.zsrwgk{0:26854} state up:active seq 13 addr [v2:172.18.0.107:6808/3334119751,v1:172.18.0.107:6809/3334119751] compat {c=[1],r=[1],i=[17ff]}]#012 #012 #012Standby daemons:#012 #012[mds.mds.np0005625202.akhmop{-1:17124} state up:standby seq 1 addr [v2:172.18.0.106:6808/3865978972,v1:172.18.0.106:6809/3865978972] compat {c=[1],r=[1],i=[17ff]}]#012[mds.mds.np0005625204.wnsphl{-1:26848} state up:standby seq 1 addr [v2:172.18.0.108:6808/2508223371,v1:172.18.0.108:6809/2508223371] compat {c=[1],r=[1],i=[17ff]}]
Feb 20 04:42:52 localhost ceph-mon[292786]: mon.np0005625202@-1(synchronizing).osd e85 crush map has features 3314933000852226048, adjusting msgr requires
Feb 20 04:42:52 localhost ceph-mon[292786]: mon.np0005625202@-1(synchronizing).osd e85 crush map has features 288514051259236352, adjusting msgr requires
Feb 20 04:42:52 localhost ceph-mon[292786]: mon.np0005625202@-1(synchronizing).osd e85 crush map has features 288514051259236352, adjusting msgr requires
Feb 20 04:42:52 localhost ceph-mon[292786]: mon.np0005625202@-1(synchronizing).osd e85 crush map has features 288514051259236352, adjusting msgr requires
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring daemon mgr.np0005625202.arwxwo on np0005625202.localdomain
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring mon.np0005625202 (monmap changed)...
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring daemon mon.np0005625202 on np0005625202.localdomain
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625203", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring crash.np0005625203 (monmap changed)...
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring daemon crash.np0005625203 on np0005625203.localdomain
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring osd.1 (monmap changed)...
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring daemon osd.1 on np0005625203.localdomain
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: Removed label mon from host np0005625199.localdomain
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring osd.4 (monmap changed)...
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring daemon osd.4 on np0005625203.localdomain
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625203.zsrwgk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 04:42:52 localhost ceph-mon[292786]: Removed label mgr from host np0005625199.localdomain
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring mds.mds.np0005625203.zsrwgk (monmap changed)...
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring daemon mds.mds.np0005625203.zsrwgk on np0005625203.localdomain
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625203.lonygy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring mgr.np0005625203.lonygy (monmap changed)...
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring daemon mgr.np0005625203.lonygy on np0005625203.localdomain
Feb 20 04:42:52 localhost ceph-mon[292786]: Removed label _admin from host np0005625199.localdomain
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring mon.np0005625203 (monmap changed)...
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring daemon mon.np0005625203 on np0005625203.localdomain
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625204", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring crash.np0005625204 (monmap changed)...
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring daemon crash.np0005625204 on np0005625204.localdomain
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring osd.0 (monmap changed)...
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring daemon osd.0 on np0005625204.localdomain
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring osd.3 (monmap changed)...
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring daemon osd.3 on np0005625204.localdomain
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625204.wnsphl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring mds.mds.np0005625204.wnsphl (monmap changed)...
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring daemon mds.mds.np0005625204.wnsphl on np0005625204.localdomain
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625204.exgrzx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring mgr.np0005625204.exgrzx (monmap changed)...
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring daemon mgr.np0005625204.exgrzx on np0005625204.localdomain
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring mon.np0005625204 (monmap changed)...
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring daemon mon.np0005625204 on np0005625204.localdomain
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 04:42:52 localhost ceph-mon[292786]: Removing np0005625199.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 04:42:52 localhost ceph-mon[292786]: Updating np0005625200.localdomain:/etc/ceph/ceph.conf
Feb 20 04:42:52 localhost ceph-mon[292786]: Updating np0005625201.localdomain:/etc/ceph/ceph.conf
Feb 20 04:42:52 localhost ceph-mon[292786]: Updating np0005625202.localdomain:/etc/ceph/ceph.conf
Feb 20 04:42:52 localhost ceph-mon[292786]: Updating np0005625203.localdomain:/etc/ceph/ceph.conf
Feb 20 04:42:52 localhost ceph-mon[292786]: Updating np0005625204.localdomain:/etc/ceph/ceph.conf
Feb 20 04:42:52 localhost ceph-mon[292786]: Removing np0005625199.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 04:42:52 localhost ceph-mon[292786]: Removing np0005625199.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 04:42:52 localhost ceph-mon[292786]: Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 04:42:52 localhost ceph-mon[292786]: Updating np0005625201.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 04:42:52 localhost ceph-mon[292786]: Updating np0005625200.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 04:42:52 localhost ceph-mon[292786]: Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: Removing daemon mgr.np0005625199.ileebh from np0005625199.localdomain -- ports [9283, 8765]
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: Added label _no_schedule to host np0005625199.localdomain
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005625199.localdomain
Feb 20 04:42:52 localhost ceph-mon[292786]: Removing key for mgr.np0005625199.ileebh
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth rm", "entity": "mgr.np0005625199.ileebh"} : dispatch
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd='[{"prefix": "auth rm", "entity": "mgr.np0005625199.ileebh"}]': finished
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005625199.localdomain"} : dispatch
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005625199.localdomain"}]': finished
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: Removed host np0005625199.localdomain
Feb 20 04:42:52 localhost ceph-mon[292786]: host np0005625199.localdomain `cephadm ls` failed: Cannot decode JSON: #012Traceback (most recent call last):#012 File "/usr/share/ceph/mgr/cephadm/serve.py", line 1540, in _run_cephadm_json#012 return json.loads(''.join(out))#012 File "/lib64/python3.9/json/__init__.py", line 346, in loads#012 return _default_decoder.decode(s)#012 File "/lib64/python3.9/json/decoder.py", line 337, in decode#012 obj, end = self.raw_decode(s, idx=_w(s, 0).end())#012 File "/lib64/python3.9/json/decoder.py", line 355, in raw_decode#012 raise JSONDecodeError("Expecting value", s, err.value) from None#012json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
Feb 20 04:42:52 localhost ceph-mon[292786]: executing refresh((['np0005625199.localdomain', 'np0005625200.localdomain', 'np0005625201.localdomain', 'np0005625202.localdomain', 'np0005625203.localdomain', 'np0005625204.localdomain'],)) failed.#012Traceback (most recent call last):#012 File "/usr/share/ceph/mgr/cephadm/utils.py", line 94, in do_work#012 return f(*arg)#012 File "/usr/share/ceph/mgr/cephadm/serve.py", line 317, in refresh#012 and not self.mgr.inventory.has_label(host, SpecialHostLabels.NO_MEMORY_AUTOTUNE)#012 File "/usr/share/ceph/mgr/cephadm/inventory.py", line 253, in has_label#012 host = self._get_stored_name(host)#012 File "/usr/share/ceph/mgr/cephadm/inventory.py", line 181, in _get_stored_name#012 self.assert_host(host)#012 File "/usr/share/ceph/mgr/cephadm/inventory.py", line 209, in assert_host#012 raise OrchestratorError('host %s does not exist' % host)#012orchestrator._interface.OrchestratorError: host np0005625199.localdomain does not exist
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring crash.np0005625200 (monmap changed)...
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625200", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring daemon crash.np0005625200 on np0005625200.localdomain
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring mon.np0005625200 (monmap changed)...
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring daemon mon.np0005625200 on np0005625200.localdomain
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625200.ypbkax", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring mgr.np0005625200.ypbkax (monmap changed)...
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring daemon mgr.np0005625200.ypbkax on np0005625200.localdomain
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring mon.np0005625201 (monmap changed)...
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring daemon mon.np0005625201 on np0005625201.localdomain
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625201.mtnyvu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring mgr.np0005625201.mtnyvu (monmap changed)...
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring daemon mgr.np0005625201.mtnyvu on np0005625201.localdomain
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring crash.np0005625201 (monmap changed)...
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625201", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring daemon crash.np0005625201 on np0005625201.localdomain
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring crash.np0005625202 (monmap changed)...
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625202", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring daemon crash.np0005625202 on np0005625202.localdomain
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring osd.2 (monmap changed)...
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring daemon osd.2 on np0005625202.localdomain
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Feb 20 04:42:52 localhost ceph-mon[292786]: Saving service mon spec with placement label:mon
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring osd.5 (monmap changed)...
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring daemon osd.5 on np0005625202.localdomain
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 04:42:52 localhost ceph-mon[292786]: Remove daemons mon.np0005625202
Feb 20 04:42:52 localhost ceph-mon[292786]: Safe to remove mon.np0005625202: new quorum should be ['np0005625201', 'np0005625200', 'np0005625204', 'np0005625203'] (from ['np0005625201', 'np0005625200', 'np0005625204', 'np0005625203'])
Feb 20 04:42:52 localhost ceph-mon[292786]: Removing monitor np0005625202 from monmap...
Feb 20 04:42:52 localhost ceph-mon[292786]: Removing daemon mon.np0005625202 from np0005625202.localdomain -- ports []
Feb 20 04:42:52 localhost ceph-mon[292786]: mon.np0005625200 calling monitor election
Feb 20 04:42:52 localhost ceph-mon[292786]: mon.np0005625201 calling monitor election
Feb 20 04:42:52 localhost ceph-mon[292786]: mon.np0005625203 calling monitor election
Feb 20 04:42:52 localhost ceph-mon[292786]: mon.np0005625204 calling monitor election
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring crash.np0005625200 (monmap changed)...
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625200", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 04:42:52 localhost ceph-mon[292786]: mon.np0005625201 is new leader, mons np0005625201,np0005625200,np0005625203 in quorum (ranks 0,1,3)
Feb 20 04:42:52 localhost ceph-mon[292786]: overall HEALTH_OK
Feb 20 04:42:52 localhost ceph-mon[292786]: mon.np0005625201 calling monitor election
Feb 20 04:42:52 localhost ceph-mon[292786]: mon.np0005625200 calling monitor election
Feb 20 04:42:52 localhost ceph-mon[292786]: mon.np0005625201 is new leader, mons np0005625201,np0005625200,np0005625204,np0005625203 in quorum (ranks 0,1,2,3)
Feb 20 04:42:52 localhost ceph-mon[292786]: overall HEALTH_OK
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring daemon crash.np0005625200 on np0005625200.localdomain
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625200.ypbkax", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring mgr.np0005625200.ypbkax (monmap changed)...
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring daemon mgr.np0005625200.ypbkax on np0005625200.localdomain
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring mgr.np0005625201.mtnyvu (monmap changed)...
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625201.mtnyvu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring daemon mgr.np0005625201.mtnyvu on np0005625201.localdomain
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring crash.np0005625201 (monmap changed)...
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625201", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring daemon crash.np0005625201 on np0005625201.localdomain
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring crash.np0005625202 (monmap changed)...
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625202", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring daemon crash.np0005625202 on np0005625202.localdomain
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring osd.2 (monmap changed)...
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring daemon osd.2 on np0005625202.localdomain
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring osd.5 (monmap changed)...
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring daemon osd.5 on np0005625202.localdomain
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring mds.mds.np0005625202.akhmop (monmap changed)...
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring daemon mds.mds.np0005625202.akhmop on np0005625202.localdomain
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625202.arwxwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring mgr.np0005625202.arwxwo (monmap changed)...
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring daemon mgr.np0005625202.arwxwo on np0005625202.localdomain
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625203", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring crash.np0005625203 (monmap changed)...
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring daemon crash.np0005625203 on np0005625203.localdomain
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring osd.1 (monmap changed)...
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring daemon osd.1 on np0005625203.localdomain
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring osd.4 (monmap changed)...
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring daemon osd.4 on np0005625203.localdomain
Feb 20 04:42:52 localhost ceph-mon[292786]: Deploying daemon mon.np0005625202 on np0005625202.localdomain
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625203.zsrwgk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring mds.mds.np0005625203.zsrwgk (monmap changed)...
Feb 20 04:42:52 localhost ceph-mon[292786]: Reconfiguring daemon mds.mds.np0005625203.zsrwgk on np0005625203.localdomain
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu'
Feb 20 04:42:52 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625203.lonygy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 04:42:52 localhost ceph-mon[292786]: mon.np0005625202@-1(synchronizing).paxosservice(auth 1..36) refresh upgraded, format 0 -> 3
Feb 20 04:42:52 localhost ceph-mgr[286565]: ms_deliver_dispatch: unhandled message 0x5628604ec000 mon_map magic: 0 from mon.0 v2:172.18.0.105:3300/0
Feb 20 04:42:54 localhost ceph-mon[292786]: mon.np0005625202@-1(probing) e9 my rank is now 4 (was -1)
Feb 20 04:42:54 localhost ceph-mon[292786]: log_channel(cluster) log [INF] : mon.np0005625202 calling monitor election
Feb 20 04:42:54 localhost ceph-mon[292786]: paxos.4).electionLogic(0) init, first boot, initializing epoch at 1
Feb 20 04:42:54 localhost ceph-mon[292786]: mon.np0005625202@4(electing) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Feb 20 04:42:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.
Feb 20 04:42:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.
Feb 20 04:42:55 localhost podman[292826]: 2026-02-20 09:42:55.450527968 +0000 UTC m=+0.085420558 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Feb 20 04:42:55 localhost podman[292825]: 2026-02-20 09:42:55.519127481 +0000 UTC m=+0.155392958 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 04:42:55 localhost podman[292826]: 2026-02-20 09:42:55.54323302 +0000 UTC m=+0.178125650 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS,
maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Feb 20 04:42:55 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. Feb 20 04:42:55 localhost podman[292825]: 2026-02-20 09:42:55.566842894 +0000 UTC m=+0.203108361 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:42:55 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. 
Feb 20 04:42:58 localhost openstack_network_exporter[243776]: ERROR 09:42:58 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 20 04:42:58 localhost openstack_network_exporter[243776]: Feb 20 04:42:58 localhost openstack_network_exporter[243776]: ERROR 09:42:58 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 20 04:42:58 localhost openstack_network_exporter[243776]: Feb 20 04:42:58 localhost ceph-mon[292786]: mon.np0005625202@4(electing) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 20 04:42:58 localhost ceph-mon[292786]: mon.np0005625202@4(electing) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 20 04:42:58 localhost ceph-mon[292786]: mon.np0005625202@4(electing) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 20 04:42:58 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code} Feb 20 04:42:58 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout} Feb 20 04:42:58 localhost ceph-mon[292786]: Reconfiguring crash.np0005625204 (monmap changed)... 
Feb 20 04:42:58 localhost ceph-mon[292786]: Reconfiguring daemon crash.np0005625204 on np0005625204.localdomain Feb 20 04:42:58 localhost ceph-mon[292786]: mon.np0005625201 calling monitor election Feb 20 04:42:58 localhost ceph-mon[292786]: mon.np0005625200 calling monitor election Feb 20 04:42:58 localhost ceph-mon[292786]: mon.np0005625204 calling monitor election Feb 20 04:42:58 localhost ceph-mon[292786]: mon.np0005625201 is new leader, mons np0005625201,np0005625200,np0005625204 in quorum (ranks 0,1,2) Feb 20 04:42:58 localhost ceph-mon[292786]: Health check failed: 2/5 mons down, quorum np0005625201,np0005625200,np0005625204 (MON_DOWN) Feb 20 04:42:58 localhost ceph-mon[292786]: Health detail: HEALTH_WARN 2/5 mons down, quorum np0005625201,np0005625200,np0005625204 Feb 20 04:42:58 localhost ceph-mon[292786]: [WRN] MON_DOWN: 2/5 mons down, quorum np0005625201,np0005625200,np0005625204 Feb 20 04:42:58 localhost ceph-mon[292786]: mon.np0005625203 (rank 3) addr [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] is down (out of quorum) Feb 20 04:42:58 localhost ceph-mon[292786]: mon.np0005625202 (rank 4) addr [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] is down (out of quorum) Feb 20 04:42:58 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:58 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:58 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Feb 20 04:42:58 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 20 04:42:58 localhost ceph-mon[292786]: mgrc update_daemon_metadata mon.np0005625202 metadata {addrs=[v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 
18.2.1-381.el9cp (984f410e2a30899deb131725765b62212b1621db) reef (stable),ceph_version_short=18.2.1-381.el9cp,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=np0005625202.localdomain,container_image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=rhel,distro_description=Red Hat Enterprise Linux 9.7 (Plow),distro_version=9.7,hostname=np0005625202.localdomain,kernel_description=#1 SMP PREEMPT_DYNAMIC Wed Apr 12 10:45:03 EDT 2023,kernel_version=5.14.0-284.11.1.el9_2.x86_64,mem_swap_kb=1048572,mem_total_kb=16116612,os=Linux} Feb 20 04:42:58 localhost ceph-mon[292786]: mon.np0005625203 calling monitor election Feb 20 04:42:58 localhost ceph-mon[292786]: mon.np0005625202 calling monitor election Feb 20 04:42:58 localhost ceph-mon[292786]: mon.np0005625201 calling monitor election Feb 20 04:42:58 localhost ceph-mon[292786]: mon.np0005625204 calling monitor election Feb 20 04:42:58 localhost ceph-mon[292786]: mon.np0005625200 calling monitor election Feb 20 04:42:58 localhost ceph-mon[292786]: mon.np0005625201 is new leader, mons np0005625201,np0005625200,np0005625204,np0005625203,np0005625202 in quorum (ranks 0,1,2,3,4) Feb 20 04:42:58 localhost ceph-mon[292786]: Health check cleared: MON_DOWN (was: 2/5 mons down, quorum np0005625201,np0005625200,np0005625204) Feb 20 04:42:58 localhost ceph-mon[292786]: Cluster is now healthy Feb 20 04:42:58 localhost ceph-mon[292786]: overall HEALTH_OK Feb 20 04:42:58 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:59 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:59 localhost ceph-mon[292786]: Reconfiguring osd.3 (monmap changed)... 
Feb 20 04:42:59 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Feb 20 04:42:59 localhost ceph-mon[292786]: Reconfiguring daemon osd.3 on np0005625204.localdomain Feb 20 04:42:59 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:59 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:42:59 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625204.wnsphl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 20 04:43:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1. Feb 20 04:43:00 localhost podman[292868]: 2026-02-20 09:43:00.437358713 +0000 UTC m=+0.076638881 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid 
Yaghoobi , managed_by=edpm_ansible) Feb 20 04:43:00 localhost podman[292868]: 2026-02-20 09:43:00.445024832 +0000 UTC m=+0.084304990 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 20 04:43:00 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully. Feb 20 04:43:00 localhost ceph-mon[292786]: Reconfiguring mds.mds.np0005625204.wnsphl (monmap changed)... 
Feb 20 04:43:00 localhost ceph-mon[292786]: Reconfiguring daemon mds.mds.np0005625204.wnsphl on np0005625204.localdomain Feb 20 04:43:00 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:43:00 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:43:00 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625204.exgrzx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 20 04:43:01 localhost ceph-mon[292786]: Reconfiguring mgr.np0005625204.exgrzx (monmap changed)... Feb 20 04:43:01 localhost ceph-mon[292786]: Reconfiguring daemon mgr.np0005625204.exgrzx on np0005625204.localdomain Feb 20 04:43:01 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:43:01 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:43:02 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_auth_request failed to assign global_id Feb 20 04:43:02 localhost podman[292998]: 2026-02-20 09:43:02.428626732 +0000 UTC m=+0.091952028 container exec 17bcd3d9a6436ea04160ed4c4e04d839e51c8678402dc4e38301f79a98b3dc30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202, build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vcs-type=git, release=1770267347, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux 
, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, GIT_BRANCH=main, architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7) Feb 20 04:43:02 localhost podman[292998]: 2026-02-20 09:43:02.533115284 +0000 UTC m=+0.196440570 container exec_died 17bcd3d9a6436ea04160ed4c4e04d839e51c8678402dc4e38301f79a98b3dc30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, description=Red Hat Ceph Storage 7, version=7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, 
release=1770267347, vcs-type=git, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers) Feb 20 04:43:03 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "status", "format": "json"} v 0) Feb 20 04:43:03 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.200:0/2943791233' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch Feb 20 04:43:04 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:43:04 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:43:04 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 04:43:05 localhost ceph-mon[292786]: Updating np0005625200.localdomain:/etc/ceph/ceph.conf Feb 20 04:43:05 localhost ceph-mon[292786]: Updating np0005625201.localdomain:/etc/ceph/ceph.conf Feb 20 04:43:05 localhost ceph-mon[292786]: Updating np0005625202.localdomain:/etc/ceph/ceph.conf Feb 20 04:43:05 localhost ceph-mon[292786]: Updating np0005625203.localdomain:/etc/ceph/ceph.conf Feb 20 04:43:05 localhost ceph-mon[292786]: Updating np0005625204.localdomain:/etc/ceph/ceph.conf Feb 20 04:43:05 localhost ceph-mon[292786]: Updating np0005625201.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:43:05 localhost ceph-mon[292786]: Updating np0005625200.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:43:05 localhost ceph-mon[292786]: Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:43:05 localhost ceph-mon[292786]: Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:43:05 localhost ceph-mon[292786]: Updating 
np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:43:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:43:05.909 161766 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:43:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:43:05.910 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:43:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:43:05.911 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:43:06 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:43:06 localhost ceph-mon[292786]: Reconfig service osd.default_drive_group Feb 20 04:43:06 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:43:06 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:43:06 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:43:06 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:43:06 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:43:06 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' 
entity='mgr.np0005625201.mtnyvu' Feb 20 04:43:06 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:43:06 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:43:06 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:43:06 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:43:06 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:43:06 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:43:06 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:43:06 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:43:06 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:43:06 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:43:06 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:43:06 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:43:06 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:43:06 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:43:06 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:43:06 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:43:06 localhost ceph-mon[292786]: 
from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625200", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 20 04:43:06 localhost sshd[293523]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:43:07 localhost ceph-mon[292786]: mon.np0005625202@4(peon).osd e85 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375 Feb 20 04:43:07 localhost ceph-mon[292786]: mon.np0005625202@4(peon).osd e85 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1 Feb 20 04:43:07 localhost ceph-mon[292786]: mon.np0005625202@4(peon).osd e86 e86: 6 total, 6 up, 6 in Feb 20 04:43:07 localhost systemd[1]: session-64.scope: Deactivated successfully. Feb 20 04:43:07 localhost systemd[1]: session-64.scope: Consumed 25.180s CPU time. Feb 20 04:43:07 localhost systemd-logind[760]: Session 64 logged out. Waiting for processes to exit. Feb 20 04:43:07 localhost systemd-logind[760]: Removed session 64. Feb 20 04:43:07 localhost ceph-mon[292786]: mon.np0005625202@4(peon).osd e86 _set_new_cache_sizes cache_size:1019599982 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:43:07 localhost ceph-mon[292786]: Reconfiguring crash.np0005625200 (monmap changed)... 
Feb 20 04:43:07 localhost ceph-mon[292786]: Reconfiguring daemon crash.np0005625200 on np0005625200.localdomain Feb 20 04:43:07 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:43:07 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' Feb 20 04:43:07 localhost ceph-mon[292786]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625200.ypbkax", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 20 04:43:07 localhost ceph-mon[292786]: from='client.? 172.18.0.200:0/863103056' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Feb 20 04:43:07 localhost ceph-mon[292786]: from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Feb 20 04:43:07 localhost ceph-mon[292786]: Activating manager daemon np0005625199.ileebh Feb 20 04:43:07 localhost ceph-mon[292786]: from='client.? ' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Feb 20 04:43:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e. 
Feb 20 04:43:10 localhost podman[293525]: 2026-02-20 09:43:10.44999304 +0000 UTC m=+0.084585556 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 20 04:43:10 localhost podman[293525]: 2026-02-20 09:43:10.46192133 +0000 UTC m=+0.096513896 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', 
'--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 20 04:43:10 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully. 
Feb 20 04:43:12 localhost ceph-mon[292786]: mon.np0005625202@4(peon).osd e86 _set_new_cache_sizes cache_size:1020044835 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:43:13 localhost sshd[293549]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:43:16 localhost podman[241347]: time="2026-02-20T09:43:16Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 20 04:43:16 localhost podman[241347]: @ - - [20/Feb/2026:09:43:16 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157716 "" "Go-http-client/1.1" Feb 20 04:43:16 localhost podman[241347]: @ - - [20/Feb/2026:09:43:16 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18738 "" "Go-http-client/1.1" Feb 20 04:43:17 localhost ceph-mon[292786]: mon.np0005625202@4(peon).osd e86 _set_new_cache_sizes cache_size:1020054514 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:43:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:43:17.260 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:43:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:43:17.261 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:43:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:43:17.261 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:43:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:43:17.262 12 DEBUG ceilometer.polling.manager [-] Skip pollster 
network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:43:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:43:17.262 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:43:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:43:17.262 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:43:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:43:17.262 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:43:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:43:17.263 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:43:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:43:17.263 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:43:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:43:17.263 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:43:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:43:17.263 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:43:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:43:17.263 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:43:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:43:17.264 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:43:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:43:17.264 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:43:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:43:17.264 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:43:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:43:17.264 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:43:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:43:17.265 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:43:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:43:17.265 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:43:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:43:17.265 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:43:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:43:17.265 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:43:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:43:17.266 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:43:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:43:17.266 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:43:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:43:17.266 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:43:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:43:17.266 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:43:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:43:17.266 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 
Feb 20 04:43:17 localhost systemd[1]: Stopping User Manager for UID 1002... Feb 20 04:43:17 localhost systemd[26547]: Activating special unit Exit the Session... Feb 20 04:43:17 localhost systemd[26547]: Removed slice User Background Tasks Slice. Feb 20 04:43:17 localhost systemd[26547]: Stopped target Main User Target. Feb 20 04:43:17 localhost systemd[26547]: Stopped target Basic System. Feb 20 04:43:17 localhost systemd[26547]: Stopped target Paths. Feb 20 04:43:17 localhost systemd[26547]: Stopped target Sockets. Feb 20 04:43:17 localhost systemd[26547]: Stopped target Timers. Feb 20 04:43:17 localhost systemd[26547]: Stopped Mark boot as successful after the user session has run 2 minutes. Feb 20 04:43:17 localhost systemd[26547]: Stopped Daily Cleanup of User's Temporary Directories. Feb 20 04:43:17 localhost systemd[26547]: Closed D-Bus User Message Bus Socket. Feb 20 04:43:17 localhost systemd[26547]: Stopped Create User's Volatile Files and Directories. Feb 20 04:43:17 localhost systemd[26547]: Removed slice User Application Slice. Feb 20 04:43:17 localhost systemd[26547]: Reached target Shutdown. Feb 20 04:43:17 localhost systemd[26547]: Finished Exit the Session. Feb 20 04:43:17 localhost systemd[26547]: Reached target Exit the Session. Feb 20 04:43:17 localhost systemd[1]: user@1002.service: Deactivated successfully. Feb 20 04:43:17 localhost systemd[1]: Stopped User Manager for UID 1002. Feb 20 04:43:17 localhost systemd[1]: user@1002.service: Consumed 11.129s CPU time, read 0B from disk, written 7.0K to disk. Feb 20 04:43:17 localhost systemd[1]: Stopping User Runtime Directory /run/user/1002... Feb 20 04:43:17 localhost systemd[1]: run-user-1002.mount: Deactivated successfully. Feb 20 04:43:17 localhost systemd[1]: user-runtime-dir@1002.service: Deactivated successfully. Feb 20 04:43:17 localhost systemd[1]: Stopped User Runtime Directory /run/user/1002. Feb 20 04:43:17 localhost systemd[1]: Removed slice User Slice of UID 1002. 
Feb 20 04:43:17 localhost systemd[1]: user-1002.slice: Consumed 4min 989ms CPU time. Feb 20 04:43:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217. Feb 20 04:43:21 localhost systemd[1]: tmp-crun.qMCum0.mount: Deactivated successfully. Feb 20 04:43:21 localhost podman[293552]: 2026-02-20 09:43:21.459606146 +0000 UTC m=+0.095625233 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true) Feb 20 04:43:21 localhost podman[293552]: 2026-02-20 09:43:21.468452526 +0000 UTC m=+0.104471603 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_compute) Feb 20 04:43:21 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully. Feb 20 04:43:22 localhost ceph-mon[292786]: mon.np0005625202@4(peon).osd e86 _set_new_cache_sizes cache_size:1020054727 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:43:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c. Feb 20 04:43:22 localhost podman[293571]: 2026-02-20 09:43:22.426872044 +0000 UTC m=+0.073201411 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.buildah.version=1.33.7, org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, release=1770267347, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=9.7, container_name=openstack_network_exporter, build-date=2026-02-05T04:57:10Z, io.openshift.expose-services=, managed_by=edpm_ansible, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, distribution-scope=public, config_id=openstack_network_exporter) Feb 20 04:43:22 localhost podman[293571]: 2026-02-20 09:43:22.471287977 +0000 UTC m=+0.117617354 container exec_died 
0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, distribution-scope=public, maintainer=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.buildah.version=1.33.7, build-date=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, version=9.7, architecture=x86_64, name=ubi9/ubi-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Feb 20 04:43:22 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. Feb 20 04:43:24 localhost ceph-mon[292786]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0. 
Feb 20 04:43:24 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:43:24.966664) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Feb 20 04:43:24 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13 Feb 20 04:43:24 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580604966740, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 10779, "num_deletes": 255, "total_data_size": 17449053, "memory_usage": 18076640, "flush_reason": "Manual Compaction"} Feb 20 04:43:24 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started Feb 20 04:43:25 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580605051057, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 15067291, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6, "largest_seqno": 10784, "table_properties": {"data_size": 15006204, "index_size": 34741, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25285, "raw_key_size": 267092, "raw_average_key_size": 26, "raw_value_size": 14830112, "raw_average_value_size": 1467, "num_data_blocks": 1343, "num_entries": 10108, "num_filter_entries": 10108, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580572, "oldest_key_time": 1771580572, "file_creation_time": 1771580604, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a5b0c71e-1a28-4ac7-8b68-08edb74002f2", "db_session_id": "54EDA52XUT1SDV7DF7Y7", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}} Feb 20 04:43:25 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 84465 microseconds, and 31407 cpu microseconds. Feb 20 04:43:25 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:43:25.051124) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 15067291 bytes OK Feb 20 04:43:25 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:43:25.051153) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started Feb 20 04:43:25 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:43:25.052938) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done Feb 20 04:43:25 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:43:25.052962) EVENT_LOG_v1 {"time_micros": 1771580605052954, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0} Feb 20 04:43:25 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:43:25.052985) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50 Feb 20 04:43:25 localhost ceph-mon[292786]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 17375241, prev total WAL file size 17375241, number of live WAL files 2. 
Feb 20 04:43:25 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 20 04:43:25 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:43:25.055791) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130303430' seq:72057594037927935, type:22 .. '7061786F73003130323932' seq:0, type:0; will stop at (end) Feb 20 04:43:25 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00 Feb 20 04:43:25 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(14MB) 8(2012B)] Feb 20 04:43:25 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580605055958, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 15069303, "oldest_snapshot_seqno": -1} Feb 20 04:43:25 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 9858 keys, 15064028 bytes, temperature: kUnknown Feb 20 04:43:25 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580605155319, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 15064028, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15003567, "index_size": 34696, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24709, "raw_key_size": 262323, "raw_average_key_size": 26, "raw_value_size": 14830767, "raw_average_value_size": 1504, "num_data_blocks": 1342, 
"num_entries": 9858, "num_filter_entries": 9858, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580572, "oldest_key_time": 0, "file_creation_time": 1771580605, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a5b0c71e-1a28-4ac7-8b68-08edb74002f2", "db_session_id": "54EDA52XUT1SDV7DF7Y7", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}} Feb 20 04:43:25 localhost ceph-mon[292786]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Feb 20 04:43:25 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:43:25.155720) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 15064028 bytes Feb 20 04:43:25 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:43:25.157542) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 151.4 rd, 151.3 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(14.4, 0.0 +0.0 blob) out(14.4 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 10113, records dropped: 255 output_compression: NoCompression Feb 20 04:43:25 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:43:25.157570) EVENT_LOG_v1 {"time_micros": 1771580605157558, "job": 4, "event": "compaction_finished", "compaction_time_micros": 99539, "compaction_time_cpu_micros": 42447, "output_level": 6, "num_output_files": 1, "total_output_size": 15064028, "num_input_records": 10113, "num_output_records": 9858, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Feb 20 04:43:25 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 20 04:43:25 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580605160011, "job": 4, "event": "table_file_deletion", "file_number": 14} Feb 20 04:43:25 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 20 04:43:25 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580605160078, "job": 4, 
"event": "table_file_deletion", "file_number": 8} Feb 20 04:43:25 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:43:25.055665) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 04:43:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 04:43:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. Feb 20 04:43:26 localhost podman[293594]: 2026-02-20 09:43:26.443841786 +0000 UTC m=+0.080066459 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Feb 20 04:43:26 localhost podman[293594]: 2026-02-20 09:43:26.476828873 +0000 UTC m=+0.113053546 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Feb 20 04:43:26 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. Feb 20 04:43:26 localhost podman[293593]: 2026-02-20 09:43:26.492955792 +0000 UTC m=+0.130926880 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:43:26 localhost podman[293593]: 2026-02-20 09:43:26.529791918 +0000 UTC m=+0.167763056 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:43:26 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. 
Feb 20 04:43:27 localhost ceph-mon[292786]: mon.np0005625202@4(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:43:28 localhost openstack_network_exporter[243776]: ERROR 09:43:28 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 20 04:43:28 localhost openstack_network_exporter[243776]: Feb 20 04:43:28 localhost openstack_network_exporter[243776]: ERROR 09:43:28 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 20 04:43:28 localhost openstack_network_exporter[243776]: Feb 20 04:43:29 localhost systemd[1]: session-65.scope: Deactivated successfully. Feb 20 04:43:29 localhost systemd[1]: session-65.scope: Consumed 1.708s CPU time. Feb 20 04:43:29 localhost systemd-logind[760]: Session 65 logged out. Waiting for processes to exit. Feb 20 04:43:29 localhost systemd-logind[760]: Removed session 65. Feb 20 04:43:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1. 
Feb 20 04:43:31 localhost podman[293635]: 2026-02-20 09:43:31.441267748 +0000 UTC m=+0.078434707 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 20 04:43:31 localhost podman[293635]: 2026-02-20 09:43:31.455140108 +0000 UTC m=+0.092307077 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 
'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 20 04:43:31 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully. Feb 20 04:43:32 localhost ceph-mon[292786]: mon.np0005625202@4(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:43:37 localhost ceph-mon[292786]: mon.np0005625202@4(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:43:38 localhost sshd[293656]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:43:39 localhost systemd[1]: Stopping User Manager for UID 1003... Feb 20 04:43:39 localhost systemd[290841]: Activating special unit Exit the Session... Feb 20 04:43:39 localhost systemd[290841]: Stopped target Main User Target. Feb 20 04:43:39 localhost systemd[290841]: Stopped target Basic System. Feb 20 04:43:39 localhost systemd[290841]: Stopped target Paths. Feb 20 04:43:39 localhost systemd[290841]: Stopped target Sockets. Feb 20 04:43:39 localhost systemd[290841]: Stopped target Timers. Feb 20 04:43:39 localhost systemd[290841]: Stopped Mark boot as successful after the user session has run 2 minutes. Feb 20 04:43:39 localhost systemd[290841]: Stopped Daily Cleanup of User's Temporary Directories. Feb 20 04:43:39 localhost systemd[290841]: Closed D-Bus User Message Bus Socket. Feb 20 04:43:39 localhost systemd[290841]: Stopped Create User's Volatile Files and Directories. Feb 20 04:43:39 localhost systemd[290841]: Removed slice User Application Slice. Feb 20 04:43:39 localhost systemd[290841]: Reached target Shutdown. Feb 20 04:43:39 localhost systemd[290841]: Finished Exit the Session. Feb 20 04:43:39 localhost systemd[290841]: Reached target Exit the Session. 
Feb 20 04:43:39 localhost systemd[1]: user@1003.service: Deactivated successfully. Feb 20 04:43:39 localhost systemd[1]: Stopped User Manager for UID 1003. Feb 20 04:43:39 localhost systemd[1]: Stopping User Runtime Directory /run/user/1003... Feb 20 04:43:39 localhost systemd[1]: run-user-1003.mount: Deactivated successfully. Feb 20 04:43:39 localhost systemd[1]: user-runtime-dir@1003.service: Deactivated successfully. Feb 20 04:43:39 localhost systemd[1]: Stopped User Runtime Directory /run/user/1003. Feb 20 04:43:39 localhost systemd[1]: Removed slice User Slice of UID 1003. Feb 20 04:43:39 localhost systemd[1]: user-1003.slice: Consumed 2.193s CPU time. Feb 20 04:43:39 localhost ceph-mon[292786]: mon.np0005625202@4(peon).osd e87 e87: 6 total, 6 up, 6 in Feb 20 04:43:39 localhost ceph-osd[31981]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0. Feb 20 04:43:40 localhost ceph-mon[292786]: Activating manager daemon np0005625200.ypbkax Feb 20 04:43:40 localhost ceph-mon[292786]: Manager daemon np0005625199.ileebh is unresponsive, replacing it with standby daemon np0005625200.ypbkax Feb 20 04:43:40 localhost sshd[293659]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:43:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e. Feb 20 04:43:40 localhost systemd[1]: Created slice User Slice of UID 1002. Feb 20 04:43:40 localhost systemd[1]: Starting User Runtime Directory /run/user/1002... Feb 20 04:43:40 localhost systemd-logind[760]: New session 67 of user ceph-admin. Feb 20 04:43:40 localhost systemd[1]: Finished User Runtime Directory /run/user/1002. Feb 20 04:43:40 localhost systemd[1]: Starting User Manager for UID 1002... 
Feb 20 04:43:40 localhost podman[293661]: 2026-02-20 09:43:40.610546124 +0000 UTC m=+0.088116368 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 20 04:43:40 localhost podman[293661]: 2026-02-20 09:43:40.648742666 +0000 UTC m=+0.126312870 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': 
['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 20 04:43:40 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully. Feb 20 04:43:40 localhost systemd[293679]: Queued start job for default target Main User Target. Feb 20 04:43:40 localhost systemd[293679]: Created slice User Application Slice. Feb 20 04:43:40 localhost systemd[293679]: Started Mark boot as successful after the user session has run 2 minutes. Feb 20 04:43:40 localhost systemd[293679]: Started Daily Cleanup of User's Temporary Directories. Feb 20 04:43:40 localhost systemd[293679]: Reached target Paths. Feb 20 04:43:40 localhost systemd[293679]: Reached target Timers. Feb 20 04:43:40 localhost systemd[293679]: Starting D-Bus User Message Bus Socket... Feb 20 04:43:40 localhost systemd[293679]: Starting Create User's Volatile Files and Directories... 
Feb 20 04:43:40 localhost systemd[293679]: Finished Create User's Volatile Files and Directories. Feb 20 04:43:40 localhost systemd[293679]: Listening on D-Bus User Message Bus Socket. Feb 20 04:43:40 localhost systemd[293679]: Reached target Sockets. Feb 20 04:43:40 localhost systemd[293679]: Reached target Basic System. Feb 20 04:43:40 localhost systemd[293679]: Reached target Main User Target. Feb 20 04:43:40 localhost systemd[293679]: Startup finished in 165ms. Feb 20 04:43:40 localhost systemd[1]: Started User Manager for UID 1002. Feb 20 04:43:40 localhost systemd[1]: Started Session 67 of User ceph-admin. Feb 20 04:43:41 localhost ceph-mon[292786]: Manager daemon np0005625200.ypbkax is now available Feb 20 04:43:41 localhost ceph-mon[292786]: removing stray HostCache host record np0005625199.localdomain.devices.0 Feb 20 04:43:41 localhost ceph-mon[292786]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005625199.localdomain.devices.0"} : dispatch Feb 20 04:43:41 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005625199.localdomain.devices.0"} : dispatch Feb 20 04:43:41 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005625199.localdomain.devices.0"}]': finished Feb 20 04:43:41 localhost ceph-mon[292786]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005625199.localdomain.devices.0"} : dispatch Feb 20 04:43:41 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005625199.localdomain.devices.0"} : dispatch Feb 20 04:43:41 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' cmd='[{"prefix":"config-key 
del","key":"mgr/cephadm/host.np0005625199.localdomain.devices.0"}]': finished Feb 20 04:43:41 localhost ceph-mon[292786]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625200.ypbkax/mirror_snapshot_schedule"} : dispatch Feb 20 04:43:41 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625200.ypbkax/mirror_snapshot_schedule"} : dispatch Feb 20 04:43:41 localhost ceph-mon[292786]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625200.ypbkax/trash_purge_schedule"} : dispatch Feb 20 04:43:41 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625200.ypbkax/trash_purge_schedule"} : dispatch Feb 20 04:43:42 localhost ceph-mon[292786]: mon.np0005625202@4(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:43:42 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' Feb 20 04:43:42 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' Feb 20 04:43:42 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' Feb 20 04:43:42 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' Feb 20 04:43:42 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' Feb 20 04:43:42 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' Feb 20 04:43:42 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' Feb 20 04:43:42 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' Feb 20 04:43:42 localhost ceph-mon[292786]: [20/Feb/2026:09:43:41] ENGINE 
Bus STARTING Feb 20 04:43:42 localhost podman[293864]: 2026-02-20 09:43:42.547029932 +0000 UTC m=+0.086453115 container exec 17bcd3d9a6436ea04160ed4c4e04d839e51c8678402dc4e38301f79a98b3dc30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , architecture=x86_64, io.openshift.expose-services=, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, distribution-scope=public, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, build-date=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, CEPH_POINT_RELEASE=) Feb 20 04:43:42 localhost podman[293864]: 2026-02-20 09:43:42.676829181 +0000 UTC m=+0.216252414 container exec_died 17bcd3d9a6436ea04160ed4c4e04d839e51c8678402dc4e38301f79a98b3dc30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, 
GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, GIT_CLEAN=True, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , build-date=2026-02-09T10:25:24Z, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, io.openshift.tags=rhceph ceph, ceph=True, description=Red Hat Ceph Storage 7, distribution-scope=public, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container) Feb 20 04:43:43 localhost ceph-mon[292786]: [20/Feb/2026:09:43:41] ENGINE Serving on https://172.18.0.104:7150 Feb 20 04:43:43 localhost ceph-mon[292786]: [20/Feb/2026:09:43:41] ENGINE Client ('172.18.0.104', 40458) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Feb 20 04:43:43 localhost ceph-mon[292786]: [20/Feb/2026:09:43:41] ENGINE Serving on http://172.18.0.104:8765 Feb 20 04:43:43 localhost ceph-mon[292786]: [20/Feb/2026:09:43:41] ENGINE Bus STARTED Feb 20 04:43:43 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' Feb 20 04:43:43 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' Feb 20 04:43:43 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' Feb 20 04:43:43 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' Feb 20 04:43:43 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' Feb 20 04:43:43 localhost ceph-mon[292786]: 
from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' Feb 20 04:43:43 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' Feb 20 04:43:43 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' Feb 20 04:43:44 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' Feb 20 04:43:44 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' Feb 20 04:43:44 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' Feb 20 04:43:44 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' Feb 20 04:43:44 localhost ceph-mon[292786]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "config rm", "who": "osd/host:np0005625200", "name": "osd_memory_target"} : dispatch Feb 20 04:43:44 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "config rm", "who": "osd/host:np0005625200", "name": "osd_memory_target"} : dispatch Feb 20 04:43:44 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' Feb 20 04:43:44 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' Feb 20 04:43:45 localhost ceph-mon[292786]: Saving service mon spec with placement label:mon Feb 20 04:43:45 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' Feb 20 04:43:45 localhost ceph-mon[292786]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Feb 20 04:43:45 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Feb 20 04:43:45 localhost ceph-mon[292786]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : 
dispatch Feb 20 04:43:45 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Feb 20 04:43:45 localhost ceph-mon[292786]: Adjusting osd_memory_target on np0005625203.localdomain to 836.6M Feb 20 04:43:45 localhost ceph-mon[292786]: Unable to set osd_memory_target on np0005625203.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 20 04:43:45 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' Feb 20 04:43:45 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' Feb 20 04:43:45 localhost ceph-mon[292786]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Feb 20 04:43:45 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' Feb 20 04:43:45 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Feb 20 04:43:45 localhost ceph-mon[292786]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Feb 20 04:43:45 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Feb 20 04:43:45 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' Feb 20 04:43:45 localhost ceph-mon[292786]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "config rm", "who": "osd/host:np0005625201", "name": "osd_memory_target"} : dispatch Feb 20 04:43:45 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "config rm", "who": 
"osd/host:np0005625201", "name": "osd_memory_target"} : dispatch Feb 20 04:43:45 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' Feb 20 04:43:45 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' Feb 20 04:43:45 localhost ceph-mon[292786]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Feb 20 04:43:45 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Feb 20 04:43:45 localhost ceph-mon[292786]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Feb 20 04:43:45 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Feb 20 04:43:45 localhost ceph-mon[292786]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 04:43:46 localhost podman[241347]: time="2026-02-20T09:43:46Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 20 04:43:46 localhost podman[241347]: @ - - [20/Feb/2026:09:43:46 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157716 "" "Go-http-client/1.1" Feb 20 04:43:46 localhost podman[241347]: @ - - [20/Feb/2026:09:43:46 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18748 "" "Go-http-client/1.1" Feb 20 04:43:46 localhost ceph-mon[292786]: Adjusting osd_memory_target on np0005625202.localdomain to 836.6M Feb 20 04:43:46 localhost ceph-mon[292786]: Unable to set osd_memory_target on np0005625202.localdomain to 
877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 20 04:43:46 localhost ceph-mon[292786]: Adjusting osd_memory_target on np0005625204.localdomain to 836.6M Feb 20 04:43:46 localhost ceph-mon[292786]: Unable to set osd_memory_target on np0005625204.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 20 04:43:46 localhost ceph-mon[292786]: Updating np0005625200.localdomain:/etc/ceph/ceph.conf Feb 20 04:43:46 localhost ceph-mon[292786]: Updating np0005625201.localdomain:/etc/ceph/ceph.conf Feb 20 04:43:46 localhost ceph-mon[292786]: Updating np0005625202.localdomain:/etc/ceph/ceph.conf Feb 20 04:43:46 localhost ceph-mon[292786]: Updating np0005625203.localdomain:/etc/ceph/ceph.conf Feb 20 04:43:46 localhost ceph-mon[292786]: Updating np0005625204.localdomain:/etc/ceph/ceph.conf Feb 20 04:43:46 localhost ceph-mon[292786]: Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:43:46 localhost ceph-mon[292786]: Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:43:47 localhost ceph-mon[292786]: mon.np0005625202@4(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:43:47 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "status", "format": "json"} v 0) Feb 20 04:43:47 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.200:0/3259045040' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch Feb 20 04:43:47 localhost nova_compute[280804]: 2026-02-20 09:43:47.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:43:47 localhost nova_compute[280804]: 2026-02-20 09:43:47.511 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 20 04:43:47 localhost nova_compute[280804]: 2026-02-20 09:43:47.512 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 20 04:43:47 localhost nova_compute[280804]: 2026-02-20 09:43:47.532 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 20 04:43:47 localhost nova_compute[280804]: 2026-02-20 09:43:47.533 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:43:47 localhost nova_compute[280804]: 2026-02-20 09:43:47.551 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:43:47 localhost nova_compute[280804]: 2026-02-20 09:43:47.552 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:43:47 localhost nova_compute[280804]: 2026-02-20 09:43:47.552 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:43:47 localhost nova_compute[280804]: 2026-02-20 09:43:47.553 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Auditing locally available compute resources for np0005625202.localdomain (node: np0005625202.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 20 04:43:47 localhost nova_compute[280804]: 2026-02-20 
09:43:47.554 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:43:47 localhost ceph-mon[292786]: Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:43:47 localhost ceph-mon[292786]: Updating np0005625201.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:43:47 localhost ceph-mon[292786]: Updating np0005625200.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:43:47 localhost ceph-mon[292786]: Updating np0005625203.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 20 04:43:47 localhost ceph-mon[292786]: Updating np0005625202.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 20 04:43:47 localhost ceph-mon[292786]: Updating np0005625200.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 20 04:43:47 localhost ceph-mon[292786]: Updating np0005625201.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 20 04:43:47 localhost ceph-mon[292786]: Updating np0005625204.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 20 04:43:48 localhost nova_compute[280804]: 2026-02-20 09:43:48.026 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:43:48 localhost nova_compute[280804]: 2026-02-20 09:43:48.208 280808 WARNING nova.virt.libvirt.driver [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 20 04:43:48 localhost nova_compute[280804]: 2026-02-20 09:43:48.210 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Hypervisor/Node resource view: name=np0005625202.localdomain free_ram=12044MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", 
"product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 20 04:43:48 localhost nova_compute[280804]: 2026-02-20 09:43:48.210 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:43:48 localhost nova_compute[280804]: 2026-02-20 09:43:48.211 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:43:48 localhost nova_compute[280804]: 2026-02-20 09:43:48.283 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 20 04:43:48 localhost nova_compute[280804]: 2026-02-20 09:43:48.284 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Final resource view: name=np0005625202.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 20 04:43:48 localhost nova_compute[280804]: 2026-02-20 09:43:48.302 280808 DEBUG 
oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:43:48 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 20 04:43:48 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/4196256663' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 20 04:43:48 localhost nova_compute[280804]: 2026-02-20 09:43:48.734 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:43:48 localhost nova_compute[280804]: 2026-02-20 09:43:48.740 280808 DEBUG nova.compute.provider_tree [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 20 04:43:48 localhost ceph-mon[292786]: Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring Feb 20 04:43:48 localhost ceph-mon[292786]: Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring Feb 20 04:43:48 localhost ceph-mon[292786]: Updating np0005625200.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring Feb 20 04:43:48 localhost ceph-mon[292786]: Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring Feb 20 04:43:48 localhost ceph-mon[292786]: 
Updating np0005625201.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring Feb 20 04:43:48 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' Feb 20 04:43:48 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' Feb 20 04:43:48 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' Feb 20 04:43:48 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' Feb 20 04:43:48 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' Feb 20 04:43:48 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' Feb 20 04:43:48 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' Feb 20 04:43:48 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' Feb 20 04:43:48 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' Feb 20 04:43:48 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' Feb 20 04:43:48 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' Feb 20 04:43:48 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' Feb 20 04:43:48 localhost ceph-mon[292786]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 20 04:43:48 localhost nova_compute[280804]: 2026-02-20 09:43:48.751 280808 DEBUG nova.scheduler.client.report [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Inventory has not changed for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 
'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 20 04:43:48 localhost nova_compute[280804]: 2026-02-20 09:43:48.752 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Compute_service record updated for np0005625202.localdomain:np0005625202.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 20 04:43:48 localhost nova_compute[280804]: 2026-02-20 09:43:48.752 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.541s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:43:49 localhost nova_compute[280804]: 2026-02-20 09:43:49.730 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:43:49 localhost nova_compute[280804]: 2026-02-20 09:43:49.730 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:43:49 localhost nova_compute[280804]: 2026-02-20 09:43:49.745 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:43:49 localhost nova_compute[280804]: 2026-02-20 09:43:49.746 280808 DEBUG 
oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:43:49 localhost ceph-mon[292786]: Reconfiguring mon.np0005625200 (monmap changed)... Feb 20 04:43:49 localhost ceph-mon[292786]: Reconfiguring daemon mon.np0005625200 on np0005625200.localdomain Feb 20 04:43:49 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' Feb 20 04:43:49 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' Feb 20 04:43:49 localhost ceph-mon[292786]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625200.ypbkax", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 20 04:43:49 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625200.ypbkax", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 20 04:43:50 localhost nova_compute[280804]: 2026-02-20 09:43:50.511 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:43:50 localhost nova_compute[280804]: 2026-02-20 09:43:50.511 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 20 04:43:50 localhost ceph-mon[292786]: Reconfiguring mgr.np0005625200.ypbkax (monmap changed)... 
Feb 20 04:43:50 localhost ceph-mon[292786]: Reconfiguring daemon mgr.np0005625200.ypbkax on np0005625200.localdomain Feb 20 04:43:50 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' Feb 20 04:43:50 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' Feb 20 04:43:50 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' Feb 20 04:43:50 localhost ceph-mon[292786]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 20 04:43:51 localhost nova_compute[280804]: 2026-02-20 09:43:51.511 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:43:51 localhost nova_compute[280804]: 2026-02-20 09:43:51.512 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:43:51 localhost ceph-mon[292786]: Reconfiguring mon.np0005625201 (monmap changed)... 
Feb 20 04:43:51 localhost ceph-mon[292786]: Reconfiguring daemon mon.np0005625201 on np0005625201.localdomain Feb 20 04:43:51 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' Feb 20 04:43:51 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' Feb 20 04:43:51 localhost ceph-mon[292786]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625201.mtnyvu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 20 04:43:51 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625201.mtnyvu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 20 04:43:52 localhost ceph-mon[292786]: mon.np0005625202@4(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:43:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217. Feb 20 04:43:52 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) Feb 20 04:43:52 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.200:0/3972118785' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json"} : dispatch Feb 20 04:43:52 localhost podman[294806]: 2026-02-20 09:43:52.452354184 +0000 UTC m=+0.089406992 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS 
Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, config_id=ceilometer_agent_compute) Feb 20 04:43:52 localhost podman[294806]: 2026-02-20 09:43:52.487693772 +0000 UTC m=+0.124746560 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, 
org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:43:52 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully. Feb 20 04:43:52 localhost nova_compute[280804]: 2026-02-20 09:43:52.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:43:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c. Feb 20 04:43:52 localhost podman[294826]: 2026-02-20 09:43:52.615239083 +0000 UTC m=+0.085217684 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, architecture=x86_64, name=ubi9/ubi-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1770267347, build-date=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, version=9.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, container_name=openstack_network_exporter) Feb 20 04:43:52 localhost podman[294826]: 2026-02-20 09:43:52.628531627 +0000 UTC m=+0.098510278 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.7, maintainer=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z, build-date=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, name=ubi9/ubi-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_id=openstack_network_exporter, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.buildah.version=1.33.7, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c) Feb 20 04:43:52 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. Feb 20 04:43:52 localhost ceph-mon[292786]: Reconfiguring mgr.np0005625201.mtnyvu (monmap changed)... Feb 20 04:43:52 localhost ceph-mon[292786]: Reconfiguring daemon mgr.np0005625201.mtnyvu on np0005625201.localdomain Feb 20 04:43:52 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' Feb 20 04:43:52 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' Feb 20 04:43:52 localhost ceph-mon[292786]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625201", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 20 04:43:52 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625201", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 20 04:43:53 localhost ceph-mon[292786]: Reconfiguring crash.np0005625201 (monmap changed)... Feb 20 04:43:53 localhost ceph-mon[292786]: Reconfiguring daemon crash.np0005625201 on np0005625201.localdomain Feb 20 04:43:53 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' Feb 20 04:43:53 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' Feb 20 04:43:53 localhost ceph-mon[292786]: Reconfiguring crash.np0005625202 (monmap changed)... 
Feb 20 04:43:53 localhost ceph-mon[292786]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625202", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 20 04:43:53 localhost ceph-mon[292786]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625202", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 20 04:43:53 localhost ceph-mon[292786]: Reconfiguring daemon crash.np0005625202 on np0005625202.localdomain Feb 20 04:43:53 localhost podman[294898]: Feb 20 04:43:53 localhost podman[294898]: 2026-02-20 09:43:53.935711139 +0000 UTC m=+0.073863428 container create bc9c6614bf75884f105ca65dd4a453bf8562808c3939dddf15bdad11c67807a0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_babbage, vcs-type=git, release=1770267347, CEPH_POINT_RELEASE=, architecture=x86_64, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.42.2, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , version=7, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., io.openshift.expose-services=, RELEASE=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, ceph=True, io.k8s.description=Red Hat Ceph 
Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git) Feb 20 04:43:53 localhost systemd[1]: Started libpod-conmon-bc9c6614bf75884f105ca65dd4a453bf8562808c3939dddf15bdad11c67807a0.scope. Feb 20 04:43:53 localhost systemd[1]: Started libcrun container. Feb 20 04:43:54 localhost podman[294898]: 2026-02-20 09:43:53.905098344 +0000 UTC m=+0.043250683 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 04:43:54 localhost podman[294898]: 2026-02-20 09:43:54.006311131 +0000 UTC m=+0.144463420 container init bc9c6614bf75884f105ca65dd4a453bf8562808c3939dddf15bdad11c67807a0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_babbage, vcs-type=git, GIT_BRANCH=main, io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, name=rhceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, release=1770267347, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, vendor=Red Hat, Inc., version=7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container) Feb 20 04:43:54 localhost infallible_babbage[294913]: 167 167 Feb 20 04:43:54 localhost podman[294898]: 2026-02-20 09:43:54.020552341 +0000 UTC m=+0.158704630 container start 
bc9c6614bf75884f105ca65dd4a453bf8562808c3939dddf15bdad11c67807a0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_babbage, vendor=Red Hat, Inc., RELEASE=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, distribution-scope=public, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1770267347, GIT_BRANCH=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, io.openshift.expose-services=, version=7, name=rhceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14) Feb 20 04:43:54 localhost systemd[1]: libpod-bc9c6614bf75884f105ca65dd4a453bf8562808c3939dddf15bdad11c67807a0.scope: Deactivated successfully. 
Feb 20 04:43:54 localhost podman[294898]: 2026-02-20 09:43:54.020778507 +0000 UTC m=+0.158930796 container attach bc9c6614bf75884f105ca65dd4a453bf8562808c3939dddf15bdad11c67807a0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_babbage, GIT_BRANCH=main, release=1770267347, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_CLEAN=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, build-date=2026-02-09T10:25:24Z, distribution-scope=public, version=7, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 20 04:43:54 localhost podman[294898]: 2026-02-20 09:43:54.022936453 +0000 UTC m=+0.161088752 container died bc9c6614bf75884f105ca65dd4a453bf8562808c3939dddf15bdad11c67807a0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_babbage, vcs-type=git, name=rhceph, build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.buildah.version=1.42.2, org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph 
Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.openshift.expose-services=, com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, release=1770267347, GIT_BRANCH=main, version=7, maintainer=Guillaume Abrioux , GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Feb 20 04:43:54 localhost ceph-mon[292786]: mon.np0005625202@4(peon).osd e88 e88: 6 total, 6 up, 6 in Feb 20 04:43:54 localhost ceph-mgr[286565]: mgr handle_mgr_map Activating! Feb 20 04:43:54 localhost ceph-mgr[286565]: mgr handle_mgr_map I am now activating Feb 20 04:43:54 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625200"} v 0) Feb 20 04:43:54 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625200"} : dispatch Feb 20 04:43:54 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625201"} v 0) Feb 20 04:43:54 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625201"} : dispatch Feb 20 04:43:54 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625202"} v 0) Feb 20 04:43:54 localhost ceph-mon[292786]: 
log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625202"} : dispatch Feb 20 04:43:54 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625203"} v 0) Feb 20 04:43:54 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch Feb 20 04:43:54 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625204"} v 0) Feb 20 04:43:54 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625204"} : dispatch Feb 20 04:43:54 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005625202.akhmop"} v 0) Feb 20 04:43:54 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mds metadata", "who": "mds.np0005625202.akhmop"} : dispatch Feb 20 04:43:54 localhost ceph-mon[292786]: mon.np0005625202@4(peon).mds e17 all = 0 Feb 20 04:43:54 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005625204.wnsphl"} v 0) Feb 20 04:43:54 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mds metadata", "who": "mds.np0005625204.wnsphl"} : dispatch Feb 20 04:43:54 localhost ceph-mon[292786]: mon.np0005625202@4(peon).mds e17 all = 0 Feb 20 04:43:54 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "mds metadata", "who": 
"mds.np0005625203.zsrwgk"} v 0) Feb 20 04:43:54 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mds metadata", "who": "mds.np0005625203.zsrwgk"} : dispatch Feb 20 04:43:54 localhost ceph-mon[292786]: mon.np0005625202@4(peon).mds e17 all = 0 Feb 20 04:43:54 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005625202.arwxwo", "id": "np0005625202.arwxwo"} v 0) Feb 20 04:43:54 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mgr metadata", "who": "np0005625202.arwxwo", "id": "np0005625202.arwxwo"} : dispatch Feb 20 04:43:54 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005625203.lonygy", "id": "np0005625203.lonygy"} v 0) Feb 20 04:43:54 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mgr metadata", "who": "np0005625203.lonygy", "id": "np0005625203.lonygy"} : dispatch Feb 20 04:43:54 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005625204.exgrzx", "id": "np0005625204.exgrzx"} v 0) Feb 20 04:43:54 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mgr metadata", "who": "np0005625204.exgrzx", "id": "np0005625204.exgrzx"} : dispatch Feb 20 04:43:54 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005625201.mtnyvu", "id": "np0005625201.mtnyvu"} v 0) Feb 20 04:43:54 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' 
entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mgr metadata", "who": "np0005625201.mtnyvu", "id": "np0005625201.mtnyvu"} : dispatch Feb 20 04:43:54 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) Feb 20 04:43:54 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd metadata", "id": 0} : dispatch Feb 20 04:43:54 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) Feb 20 04:43:54 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd metadata", "id": 1} : dispatch Feb 20 04:43:54 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) Feb 20 04:43:54 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd metadata", "id": 2} : dispatch Feb 20 04:43:54 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "osd metadata", "id": 3} v 0) Feb 20 04:43:54 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd metadata", "id": 3} : dispatch Feb 20 04:43:54 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "osd metadata", "id": 4} v 0) Feb 20 04:43:54 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd metadata", "id": 4} : dispatch Feb 20 04:43:54 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "osd metadata", "id": 5} v 0) Feb 20 04:43:54 
localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd metadata", "id": 5} : dispatch Feb 20 04:43:54 localhost podman[294918]: 2026-02-20 09:43:54.134495739 +0000 UTC m=+0.100302795 container remove bc9c6614bf75884f105ca65dd4a453bf8562808c3939dddf15bdad11c67807a0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_babbage, vcs-type=git, com.redhat.component=rhceph-container, GIT_BRANCH=main, version=7, GIT_CLEAN=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1770267347, maintainer=Guillaume Abrioux , io.openshift.expose-services=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.42.2, name=rhceph, RELEASE=main, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, ceph=True, build-date=2026-02-09T10:25:24Z, distribution-scope=public, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers) Feb 20 04:43:54 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "mds metadata"} v 0) Feb 20 04:43:54 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mds metadata"} : dispatch Feb 20 04:43:54 localhost ceph-mon[292786]: mon.np0005625202@4(peon).mds e17 all = 1 Feb 20 
04:43:54 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "osd metadata"} v 0) Feb 20 04:43:54 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd metadata"} : dispatch Feb 20 04:43:54 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "mon metadata"} v 0) Feb 20 04:43:54 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata"} : dispatch Feb 20 04:43:54 localhost systemd[1]: libpod-conmon-bc9c6614bf75884f105ca65dd4a453bf8562808c3939dddf15bdad11c67807a0.scope: Deactivated successfully. Feb 20 04:43:54 localhost ceph-mgr[286565]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 20 04:43:54 localhost ceph-mgr[286565]: mgr load Constructed class from module: balancer Feb 20 04:43:54 localhost ceph-mgr[286565]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 20 04:43:54 localhost ceph-mgr[286565]: [balancer INFO root] Starting Feb 20 04:43:54 localhost ceph-mgr[286565]: [balancer INFO root] Optimize plan auto_2026-02-20_09:43:54 Feb 20 04:43:54 localhost ceph-mgr[286565]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Feb 20 04:43:54 localhost ceph-mgr[286565]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later Feb 20 04:43:54 localhost systemd-logind[760]: Session 67 logged out. Waiting for processes to exit. 
Feb 20 04:43:54 localhost ceph-mgr[286565]: mgr load Constructed class from module: cephadm Feb 20 04:43:54 localhost ceph-mgr[286565]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 20 04:43:54 localhost ceph-mgr[286565]: mgr load Constructed class from module: crash Feb 20 04:43:54 localhost ceph-mgr[286565]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 20 04:43:54 localhost ceph-mgr[286565]: mgr load Constructed class from module: devicehealth Feb 20 04:43:54 localhost ceph-mgr[286565]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 20 04:43:54 localhost ceph-mgr[286565]: mgr load Constructed class from module: iostat Feb 20 04:43:54 localhost ceph-mgr[286565]: [devicehealth INFO root] Starting Feb 20 04:43:54 localhost ceph-mgr[286565]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 20 04:43:54 localhost systemd[1]: session-67.scope: Deactivated successfully. Feb 20 04:43:54 localhost ceph-mgr[286565]: mgr load Constructed class from module: nfs Feb 20 04:43:54 localhost ceph-mgr[286565]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 20 04:43:54 localhost ceph-mgr[286565]: mgr load Constructed class from module: orchestrator Feb 20 04:43:54 localhost systemd[1]: session-67.scope: Consumed 7.318s CPU time. Feb 20 04:43:54 localhost systemd-logind[760]: Removed session 67. Feb 20 04:43:54 localhost ceph-mgr[286565]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 20 04:43:54 localhost ceph-mgr[286565]: mgr load Constructed class from module: pg_autoscaler Feb 20 04:43:54 localhost ceph-mgr[286565]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 20 04:43:54 localhost ceph-mgr[286565]: mgr load Constructed class from module: progress Feb 20 04:43:54 localhost ceph-mgr[286565]: [progress INFO root] Loading... 
Feb 20 04:43:54 localhost ceph-mgr[286565]: [progress INFO root] Loaded [, , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ] historic events Feb 20 04:43:54 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] _maybe_adjust Feb 20 04:43:54 localhost ceph-mgr[286565]: [progress INFO root] Loaded OSDMap, ready. Feb 20 04:43:54 localhost ceph-mgr[286565]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 20 04:43:54 localhost ceph-mgr[286565]: [rbd_support INFO root] recovery thread starting Feb 20 04:43:54 localhost ceph-mgr[286565]: [rbd_support INFO root] starting setup Feb 20 04:43:54 localhost ceph-mgr[286565]: mgr load Constructed class from module: rbd_support Feb 20 04:43:54 localhost ceph-mgr[286565]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 20 04:43:54 localhost ceph-mgr[286565]: mgr load Constructed class from module: restful Feb 20 04:43:54 localhost ceph-mgr[286565]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 20 04:43:54 localhost ceph-mgr[286565]: mgr load Constructed class from module: status Feb 20 04:43:54 localhost ceph-mgr[286565]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 20 04:43:54 localhost ceph-mgr[286565]: mgr load Constructed class from module: telemetry Feb 20 04:43:54 localhost ceph-mgr[286565]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 20 04:43:54 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625202.arwxwo/mirror_snapshot_schedule"} v 0) Feb 20 04:43:54 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625202.arwxwo/mirror_snapshot_schedule"} : dispatch Feb 20 04:43:54 localhost 
ceph-mgr[286565]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Feb 20 04:43:54 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 20 04:43:54 localhost ceph-mgr[286565]: [restful INFO root] server_addr: :: server_port: 8003 Feb 20 04:43:54 localhost ceph-mgr[286565]: [restful WARNING root] server not running: no certificate configured Feb 20 04:43:54 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 20 04:43:54 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 04:43:54 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 04:43:54 localhost ceph-mgr[286565]: mgr load Constructed class from module: volumes Feb 20 04:43:54 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: images, start_after= Feb 20 04:43:54 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 20 04:43:54 localhost ceph-mgr[286565]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting Feb 20 04:43:54 localhost ceph-mgr[286565]: [rbd_support INFO root] PerfHandler: starting Feb 20 04:43:54 localhost ceph-mgr[286565]: [rbd_support INFO root] load_task_task: vms, start_after= Feb 20 04:43:54 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists Feb 20 04:43:54 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists Feb 20 04:43:54 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists Feb 20 04:43:54 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists Feb 20 04:43:54 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists Feb 20 04:43:54 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 
2026-02-20T09:43:54.296+0000 7f0cd71db640 -1 client.0 error registering admin socket command: (17) File exists Feb 20 04:43:54 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:43:54.296+0000 7f0cd71db640 -1 client.0 error registering admin socket command: (17) File exists Feb 20 04:43:54 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:43:54.296+0000 7f0cd71db640 -1 client.0 error registering admin socket command: (17) File exists Feb 20 04:43:54 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:43:54.296+0000 7f0cd71db640 -1 client.0 error registering admin socket command: (17) File exists Feb 20 04:43:54 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:43:54.296+0000 7f0cd71db640 -1 client.0 error registering admin socket command: (17) File exists Feb 20 04:43:54 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists Feb 20 04:43:54 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:43:54.304+0000 7f0cdb1e3640 -1 client.0 error registering admin socket command: (17) File exists Feb 20 04:43:54 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists Feb 20 04:43:54 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:43:54.304+0000 7f0cdb1e3640 -1 client.0 error registering admin socket command: (17) File exists Feb 20 04:43:54 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists Feb 20 04:43:54 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:43:54.304+0000 7f0cdb1e3640 -1 client.0 error registering admin socket command: (17) File exists Feb 20 04:43:54 localhost ceph-mgr[286565]: client.0 error registering 
admin socket command: (17) File exists Feb 20 04:43:54 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:43:54.304+0000 7f0cdb1e3640 -1 client.0 error registering admin socket command: (17) File exists Feb 20 04:43:54 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists Feb 20 04:43:54 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:43:54.304+0000 7f0cdb1e3640 -1 client.0 error registering admin socket command: (17) File exists Feb 20 04:43:54 localhost ceph-mgr[286565]: [rbd_support INFO root] load_task_task: volumes, start_after= Feb 20 04:43:54 localhost ceph-mgr[286565]: [rbd_support INFO root] load_task_task: images, start_after= Feb 20 04:43:54 localhost ceph-mgr[286565]: [rbd_support INFO root] load_task_task: backups, start_after= Feb 20 04:43:54 localhost ceph-mgr[286565]: [rbd_support INFO root] TaskHandler: starting Feb 20 04:43:54 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625202.arwxwo/trash_purge_schedule"} v 0) Feb 20 04:43:54 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625202.arwxwo/trash_purge_schedule"} : dispatch Feb 20 04:43:54 localhost ceph-mgr[286565]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Feb 20 04:43:54 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 20 04:43:54 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 20 04:43:54 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: images, start_after= Feb 20 04:43:54 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 20 04:43:54 
localhost ceph-mgr[286565]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting Feb 20 04:43:54 localhost ceph-mgr[286565]: [rbd_support INFO root] setup complete Feb 20 04:43:54 localhost sshd[295076]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:43:54 localhost systemd-logind[760]: New session 69 of user ceph-admin. Feb 20 04:43:54 localhost systemd[1]: Started Session 69 of User ceph-admin. Feb 20 04:43:54 localhost ceph-mon[292786]: from='client.? 172.18.0.200:0/3880794004' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Feb 20 04:43:54 localhost ceph-mon[292786]: Activating manager daemon np0005625202.arwxwo Feb 20 04:43:54 localhost ceph-mon[292786]: from='client.? 172.18.0.200:0/3880794004' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Feb 20 04:43:54 localhost ceph-mon[292786]: Manager daemon np0005625202.arwxwo is now available Feb 20 04:43:54 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625202.arwxwo/mirror_snapshot_schedule"} : dispatch Feb 20 04:43:54 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625202.arwxwo/mirror_snapshot_schedule"} : dispatch Feb 20 04:43:54 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625202.arwxwo/trash_purge_schedule"} : dispatch Feb 20 04:43:54 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625202.arwxwo/trash_purge_schedule"} : dispatch Feb 20 04:43:54 localhost systemd[1]: tmp-crun.OWpIfU.mount: Deactivated successfully. 
Feb 20 04:43:54 localhost systemd[1]: var-lib-containers-storage-overlay-4e6ec21f86d67b45d73eba58f970124eeea7b6824bd07f017af016a6a0a214b8-merged.mount: Deactivated successfully. Feb 20 04:43:55 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v3: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Feb 20 04:43:55 localhost ceph-mgr[286565]: [cephadm INFO cherrypy.error] [20/Feb/2026:09:43:55] ENGINE Bus STARTING Feb 20 04:43:55 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : [20/Feb/2026:09:43:55] ENGINE Bus STARTING Feb 20 04:43:55 localhost ceph-mgr[286565]: [cephadm INFO cherrypy.error] [20/Feb/2026:09:43:55] ENGINE Serving on http://172.18.0.106:8765 Feb 20 04:43:55 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : [20/Feb/2026:09:43:55] ENGINE Serving on http://172.18.0.106:8765 Feb 20 04:43:55 localhost ceph-mgr[286565]: [cephadm INFO cherrypy.error] [20/Feb/2026:09:43:55] ENGINE Serving on https://172.18.0.106:7150 Feb 20 04:43:55 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : [20/Feb/2026:09:43:55] ENGINE Serving on https://172.18.0.106:7150 Feb 20 04:43:55 localhost ceph-mgr[286565]: [cephadm INFO cherrypy.error] [20/Feb/2026:09:43:55] ENGINE Bus STARTED Feb 20 04:43:55 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : [20/Feb/2026:09:43:55] ENGINE Bus STARTED Feb 20 04:43:55 localhost ceph-mgr[286565]: [cephadm INFO cherrypy.error] [20/Feb/2026:09:43:55] ENGINE Client ('172.18.0.106', 42946) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Feb 20 04:43:55 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : [20/Feb/2026:09:43:55] ENGINE Client ('172.18.0.106', 42946) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Feb 20 04:43:55 localhost podman[295206]: 2026-02-20 09:43:55.625912833 +0000 UTC 
m=+0.095094960 container exec 17bcd3d9a6436ea04160ed4c4e04d839e51c8678402dc4e38301f79a98b3dc30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, RELEASE=main, distribution-scope=public, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , release=1770267347, GIT_CLEAN=True, ceph=True, name=rhceph, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Feb 20 04:43:55 localhost podman[295206]: 2026-02-20 09:43:55.759734007 +0000 UTC m=+0.228916174 container exec_died 17bcd3d9a6436ea04160ed4c4e04d839e51c8678402dc4e38301f79a98b3dc30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, version=7, GIT_BRANCH=main, maintainer=Guillaume Abrioux , 
io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, architecture=x86_64, GIT_CLEAN=True, ceph=True) Feb 20 04:43:56 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v4: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Feb 20 04:43:56 localhost ceph-mon[292786]: [20/Feb/2026:09:43:55] ENGINE Bus STARTING Feb 20 04:43:56 localhost ceph-mon[292786]: [20/Feb/2026:09:43:55] ENGINE Serving on http://172.18.0.106:8765 Feb 20 04:43:56 localhost ceph-mon[292786]: [20/Feb/2026:09:43:55] ENGINE Serving on https://172.18.0.106:7150 Feb 20 04:43:56 localhost ceph-mon[292786]: [20/Feb/2026:09:43:55] ENGINE Bus STARTED Feb 20 04:43:56 localhost ceph-mon[292786]: [20/Feb/2026:09:43:55] ENGINE Client ('172.18.0.106', 42946) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Feb 20 04:43:56 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625201.localdomain.devices.0}] v 0) Feb 20 04:43:56 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625201.localdomain}] v 0) Feb 20 04:43:56 localhost 
ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625200.localdomain.devices.0}] v 0) Feb 20 04:43:56 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625200.localdomain}] v 0) Feb 20 04:43:56 localhost ceph-mgr[286565]: [devicehealth INFO root] Check health Feb 20 04:43:56 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0) Feb 20 04:43:56 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0) Feb 20 04:43:56 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0) Feb 20 04:43:56 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0) Feb 20 04:43:56 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0) Feb 20 04:43:56 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0) Feb 20 04:43:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 04:43:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. Feb 20 04:43:56 localhost systemd[1]: tmp-crun.QNRiNQ.mount: Deactivated successfully. 
Feb 20 04:43:56 localhost podman[295354]: 2026-02-20 09:43:56.689131882 +0000 UTC m=+0.099343520 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 04:43:56 localhost podman[295354]: 2026-02-20 09:43:56.725955188 +0000 UTC m=+0.136166836 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Feb 20 04:43:56 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully.
Feb 20 04:43:56 localhost systemd[1]: tmp-crun.sNIwQ1.mount: Deactivated successfully.
Feb 20 04:43:56 localhost podman[295353]: 2026-02-20 09:43:56.800583055 +0000 UTC m=+0.211046030 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 20 04:43:56 localhost podman[295353]: 2026-02-20 09:43:56.865996493 +0000 UTC m=+0.276459468 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20260127, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Feb 20 04:43:56 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully.
Feb 20 04:43:57 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:43:57 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:43:57 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:43:57 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:43:57 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:43:57 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:43:57 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:43:57 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:43:57 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:43:57 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:43:57 localhost ceph-mon[292786]: mon.np0005625202@4(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 04:43:57 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625201.localdomain.devices.0}] v 0)
Feb 20 04:43:57 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625201.localdomain}] v 0)
Feb 20 04:43:57 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "config rm", "who": "osd/host:np0005625201", "name": "osd_memory_target"} v 0)
Feb 20 04:43:57 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd/host:np0005625201", "name": "osd_memory_target"} : dispatch
Feb 20 04:43:57 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625200.localdomain.devices.0}] v 0)
Feb 20 04:43:57 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625200.localdomain}] v 0)
Feb 20 04:43:57 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "config rm", "who": "osd/host:np0005625200", "name": "osd_memory_target"} v 0)
Feb 20 04:43:57 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd/host:np0005625200", "name": "osd_memory_target"} : dispatch
Feb 20 04:43:57 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0)
Feb 20 04:43:57 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0)
Feb 20 04:43:57 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0)
Feb 20 04:43:57 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0)
Feb 20 04:43:57 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0)
Feb 20 04:43:57 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0)
Feb 20 04:43:57 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Feb 20 04:43:57 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Feb 20 04:43:58 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Feb 20 04:43:58 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Feb 20 04:43:58 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Feb 20 04:43:58 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Feb 20 04:43:58 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} v 0)
Feb 20 04:43:58 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Feb 20 04:43:58 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} v 0)
Feb 20 04:43:58 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Feb 20 04:43:58 localhost ceph-mgr[286565]: [cephadm INFO root] Adjusting osd_memory_target on np0005625204.localdomain to 836.6M
Feb 20 04:43:58 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005625204.localdomain to 836.6M
Feb 20 04:43:58 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} v 0)
Feb 20 04:43:58 localhost ceph-mgr[286565]: [cephadm INFO root] Adjusting osd_memory_target on np0005625203.localdomain to 836.6M
Feb 20 04:43:58 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Feb 20 04:43:58 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005625203.localdomain to 836.6M
Feb 20 04:43:58 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Feb 20 04:43:58 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Feb 20 04:43:58 localhost ceph-mgr[286565]: [cephadm INFO root] Adjusting osd_memory_target on np0005625202.localdomain to 836.6M
Feb 20 04:43:58 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005625202.localdomain to 836.6M
Feb 20 04:43:58 localhost ceph-mgr[286565]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005625204.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Feb 20 04:43:58 localhost ceph-mgr[286565]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005625203.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Feb 20 04:43:58 localhost ceph-mgr[286565]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005625204.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Feb 20 04:43:58 localhost ceph-mgr[286565]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005625203.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Feb 20 04:43:58 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Feb 20 04:43:58 localhost ceph-mgr[286565]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005625202.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Feb 20 04:43:58 localhost ceph-mgr[286565]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005625202.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Feb 20 04:43:58 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 04:43:58 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 04:43:58 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 20 04:43:58 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 04:43:58 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Updating np0005625200.localdomain:/etc/ceph/ceph.conf
Feb 20 04:43:58 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Updating np0005625200.localdomain:/etc/ceph/ceph.conf
Feb 20 04:43:58 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Updating np0005625201.localdomain:/etc/ceph/ceph.conf
Feb 20 04:43:58 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Updating np0005625202.localdomain:/etc/ceph/ceph.conf
Feb 20 04:43:58 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Updating np0005625201.localdomain:/etc/ceph/ceph.conf
Feb 20 04:43:58 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Updating np0005625203.localdomain:/etc/ceph/ceph.conf
Feb 20 04:43:58 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Updating np0005625204.localdomain:/etc/ceph/ceph.conf
Feb 20 04:43:58 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Updating np0005625202.localdomain:/etc/ceph/ceph.conf
Feb 20 04:43:58 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Updating np0005625203.localdomain:/etc/ceph/ceph.conf
Feb 20 04:43:58 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Updating np0005625204.localdomain:/etc/ceph/ceph.conf
Feb 20 04:43:58 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v5: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 04:43:58 localhost openstack_network_exporter[243776]: ERROR 09:43:58 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 04:43:58 localhost openstack_network_exporter[243776]:
Feb 20 04:43:58 localhost openstack_network_exporter[243776]: ERROR 09:43:58 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 04:43:58 localhost openstack_network_exporter[243776]:
Feb 20 04:43:58 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 04:43:58 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 04:43:58 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:43:58 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:43:58 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd/host:np0005625201", "name": "osd_memory_target"} : dispatch
Feb 20 04:43:58 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd/host:np0005625201", "name": "osd_memory_target"} : dispatch
Feb 20 04:43:58 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:43:58 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:43:58 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd/host:np0005625200", "name": "osd_memory_target"} : dispatch
Feb 20 04:43:58 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd/host:np0005625200", "name": "osd_memory_target"} : dispatch
Feb 20 04:43:58 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:43:58 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:43:58 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:43:58 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:43:58 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:43:58 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Feb 20 04:43:58 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Feb 20 04:43:58 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Feb 20 04:43:58 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:43:58 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Feb 20 04:43:58 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Feb 20 04:43:58 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Feb 20 04:43:58 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Feb 20 04:43:58 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Feb 20 04:43:58 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Feb 20 04:43:58 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Feb 20 04:43:58 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Feb 20 04:43:58 localhost ceph-mon[292786]: Adjusting osd_memory_target on np0005625204.localdomain to 836.6M
Feb 20 04:43:58 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Feb 20 04:43:58 localhost ceph-mon[292786]: Adjusting osd_memory_target on np0005625203.localdomain to 836.6M
Feb 20 04:43:58 localhost ceph-mon[292786]: Adjusting osd_memory_target on np0005625202.localdomain to 836.6M
Feb 20 04:43:58 localhost ceph-mon[292786]: Unable to set osd_memory_target on np0005625204.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Feb 20 04:43:58 localhost ceph-mon[292786]: Unable to set osd_memory_target on np0005625203.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Feb 20 04:43:58 localhost ceph-mon[292786]: Unable to set osd_memory_target on np0005625202.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Feb 20 04:43:58 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 04:43:58 localhost ceph-mon[292786]: Updating np0005625200.localdomain:/etc/ceph/ceph.conf
Feb 20 04:43:58 localhost ceph-mon[292786]: Updating np0005625201.localdomain:/etc/ceph/ceph.conf
Feb 20 04:43:58 localhost ceph-mon[292786]: Updating np0005625202.localdomain:/etc/ceph/ceph.conf
Feb 20 04:43:58 localhost ceph-mon[292786]: Updating np0005625203.localdomain:/etc/ceph/ceph.conf
Feb 20 04:43:58 localhost ceph-mon[292786]: Updating np0005625204.localdomain:/etc/ceph/ceph.conf
Feb 20 04:43:58 localhost ceph-mon[292786]: Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 04:43:58 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Updating np0005625201.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 04:43:58 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Updating np0005625201.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 04:43:58 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Updating np0005625200.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 04:43:58 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Updating np0005625200.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 04:43:58 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 04:43:58 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 04:43:58 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 04:43:58 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 04:43:59 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Updating np0005625202.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 04:43:59 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Updating np0005625202.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 04:43:59 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Updating np0005625201.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 04:43:59 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Updating np0005625201.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 04:43:59 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Updating np0005625200.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 04:43:59 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Updating np0005625200.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 04:43:59 localhost ceph-mgr[286565]: mgr.server handle_open ignoring open from mgr.np0005625200.ypbkax 172.18.0.104:0/3785676728; not ready for session (expect reconnect)
Feb 20 04:43:59 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Updating np0005625203.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 04:43:59 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Updating np0005625203.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 04:43:59 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Updating np0005625204.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 04:43:59 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Updating np0005625204.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 04:43:59 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 04:43:59 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 04:43:59 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Updating np0005625201.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 04:43:59 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Updating np0005625201.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 04:44:00 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Updating np0005625200.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 04:44:00 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Updating np0005625200.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 04:44:00 localhost ceph-mon[292786]: Updating np0005625201.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 04:44:00 localhost ceph-mon[292786]: Updating np0005625200.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 04:44:00 localhost ceph-mon[292786]: Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 04:44:00 localhost ceph-mon[292786]: Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 04:44:00 localhost ceph-mon[292786]: Updating np0005625202.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 04:44:00 localhost ceph-mon[292786]: Updating np0005625201.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 04:44:00 localhost ceph-mon[292786]: Updating np0005625200.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 04:44:00 localhost ceph-mon[292786]: Updating np0005625203.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 04:44:00 localhost ceph-mon[292786]: Updating np0005625204.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 04:44:00 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 04:44:00 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 04:44:00 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005625200.ypbkax", "id": "np0005625200.ypbkax"} v 0)
Feb 20 04:44:00 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mgr metadata", "who": "np0005625200.ypbkax", "id": "np0005625200.ypbkax"} : dispatch
Feb 20 04:44:00 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v6: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 04:44:00 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 04:44:00 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 04:44:00 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625201.localdomain.devices.0}] v 0)
Feb 20 04:44:00 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625201.localdomain}] v 0)
Feb 20 04:44:00 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625200.localdomain.devices.0}] v 0)
Feb 20 04:44:00 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0)
Feb 20 04:44:00 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625200.localdomain}] v 0)
Feb 20 04:44:00 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0)
Feb 20 04:44:00 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0)
Feb 20 04:44:00 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0)
Feb 20 04:44:00 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0)
Feb 20 04:44:00 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0)
Feb 20 04:44:00 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 20 04:44:00 localhost ceph-mgr[286565]: [progress INFO root] update: starting ev 85a932d3-581c-4c20-92ab-13cc09660402 (Updating node-proxy deployment (+5 -> 5))
Feb 20 04:44:00 localhost ceph-mgr[286565]: [progress INFO root] complete: finished ev 85a932d3-581c-4c20-92ab-13cc09660402 (Updating node-proxy deployment (+5 -> 5))
Feb 20 04:44:00 localhost ceph-mgr[286565]: [progress INFO root] Completed event 85a932d3-581c-4c20-92ab-13cc09660402 (Updating node-proxy deployment (+5 -> 5)) in 0 seconds
Feb 20 04:44:00 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 20 04:44:00 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 04:44:01 localhost ceph-mon[292786]: Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 04:44:01 localhost ceph-mon[292786]: Updating np0005625201.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 04:44:01 localhost ceph-mon[292786]: Updating np0005625200.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 04:44:01 localhost ceph-mon[292786]: Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 04:44:01 localhost ceph-mon[292786]: Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 04:44:01 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:01 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:01 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:01 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:01 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:01 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:01 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:01 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:01 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:01 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:01 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:01 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005625202 (monmap changed)...
Feb 20 04:44:01 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005625202 (monmap changed)...
Feb 20 04:44:01 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005625202", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Feb 20 04:44:01 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625202", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 04:44:01 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 04:44:01 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 04:44:01 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005625202 on np0005625202.localdomain
Feb 20 04:44:01 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005625202 on np0005625202.localdomain
Feb 20 04:44:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.
Feb 20 04:44:01 localhost podman[296192]: 2026-02-20 09:44:01.60856065 +0000 UTC m=+0.084679428 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 20 04:44:01 localhost podman[296192]: 2026-02-20 09:44:01.646785893 +0000 UTC m=+0.122904651 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 20 04:44:01 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully.
Feb 20 04:44:01 localhost podman[296232]:
Feb 20 04:44:01 localhost podman[296232]: 2026-02-20 09:44:01.987894857 +0000 UTC m=+0.084173716 container create 07c70d3d6cb017016f0e65ad9220b400ca2b52f8e912a194f58722f789b7c12a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_sutherland, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., name=rhceph, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, release=1770267347, com.redhat.component=rhceph-container, GIT_BRANCH=main, GIT_CLEAN=True, ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7)
Feb 20 04:44:02 localhost systemd[1]: Started libpod-conmon-07c70d3d6cb017016f0e65ad9220b400ca2b52f8e912a194f58722f789b7c12a.scope.
Feb 20 04:44:02 localhost systemd[1]: Started libcrun container.
Feb 20 04:44:02 localhost podman[296232]: 2026-02-20 09:44:01.954871159 +0000 UTC m=+0.051150058 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 04:44:02 localhost podman[296232]: 2026-02-20 09:44:02.069611908 +0000 UTC m=+0.165890767 container init 07c70d3d6cb017016f0e65ad9220b400ca2b52f8e912a194f58722f789b7c12a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_sutherland, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, vcs-type=git, architecture=x86_64, name=rhceph, CEPH_POINT_RELEASE=, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, RELEASE=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.42.2, io.openshift.tags=rhceph ceph, ceph=True, build-date=2026-02-09T10:25:24Z, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., version=7, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 04:44:02 localhost podman[296232]: 2026-02-20 09:44:02.086022654 +0000 UTC m=+0.182301503 container start 07c70d3d6cb017016f0e65ad9220b400ca2b52f8e912a194f58722f789b7c12a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_sutherland, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, version=7, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , release=1770267347, io.buildah.version=1.42.2, com.redhat.component=rhceph-container, vcs-type=git, build-date=2026-02-09T10:25:24Z, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, ceph=True, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main)
Feb 20 04:44:02 localhost podman[296232]: 2026-02-20 09:44:02.086323732 +0000 UTC m=+0.182602621 container attach 07c70d3d6cb017016f0e65ad9220b400ca2b52f8e912a194f58722f789b7c12a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_sutherland, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, GIT_BRANCH=main, ceph=True, CEPH_POINT_RELEASE=, release=1770267347, version=7, io.buildah.version=1.42.2, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., vcs-type=git, build-date=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Feb 20 04:44:02 localhost systemd[1]: libpod-07c70d3d6cb017016f0e65ad9220b400ca2b52f8e912a194f58722f789b7c12a.scope: Deactivated successfully.
Feb 20 04:44:02 localhost sleepy_sutherland[296247]: 167 167
Feb 20 04:44:02 localhost podman[296232]: 2026-02-20 09:44:02.0943421 +0000 UTC m=+0.190620959 container died 07c70d3d6cb017016f0e65ad9220b400ca2b52f8e912a194f58722f789b7c12a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_sutherland, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.42.2, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, maintainer=Guillaume Abrioux , GIT_BRANCH=main, release=1770267347, vcs-type=git, io.openshift.tags=rhceph ceph, name=rhceph, RELEASE=main, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7)
Feb 20 04:44:02 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625202", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 04:44:02 localhost ceph-mon[292786]: Reconfiguring crash.np0005625202 (monmap changed)...
Feb 20 04:44:02 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625202", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 04:44:02 localhost ceph-mon[292786]: Reconfiguring daemon crash.np0005625202 on np0005625202.localdomain
Feb 20 04:44:02 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v7: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail; 30 KiB/s rd, 0 B/s wr, 16 op/s
Feb 20 04:44:02 localhost podman[296253]: 2026-02-20 09:44:02.210485455 +0000 UTC m=+0.100160551 container remove 07c70d3d6cb017016f0e65ad9220b400ca2b52f8e912a194f58722f789b7c12a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_sutherland, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, vcs-type=git, version=7, org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, name=rhceph, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, release=1770267347, io.buildah.version=1.42.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, ceph=True, maintainer=Guillaume Abrioux , build-date=2026-02-09T10:25:24Z)
Feb 20 04:44:02 localhost systemd[1]: libpod-conmon-07c70d3d6cb017016f0e65ad9220b400ca2b52f8e912a194f58722f789b7c12a.scope: Deactivated successfully.
Feb 20 04:44:02 localhost ceph-mon[292786]: mon.np0005625202@4(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 04:44:02 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0)
Feb 20 04:44:02 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0)
Feb 20 04:44:02 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring osd.2 (monmap changed)...
Feb 20 04:44:02 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring osd.2 (monmap changed)...
Feb 20 04:44:02 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Feb 20 04:44:02 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Feb 20 04:44:02 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 04:44:02 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 04:44:02 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.2 on np0005625202.localdomain
Feb 20 04:44:02 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.2 on np0005625202.localdomain
Feb 20 04:44:02 localhost systemd[1]: tmp-crun.gVYUDt.mount: Deactivated successfully.
Feb 20 04:44:02 localhost systemd[1]: var-lib-containers-storage-overlay-cec787af81dab02f10862cc14c056f1044e076ac7bf328452c5846ab245138fc-merged.mount: Deactivated successfully.
Feb 20 04:44:03 localhost podman[296322]:
Feb 20 04:44:03 localhost podman[296322]: 2026-02-20 09:44:03.026261351 +0000 UTC m=+0.077474482 container create 100be33ea46488e94fee070345c9474fa94891981266be3d7261f245fa43acc2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_swirles, build-date=2026-02-09T10:25:24Z, io.openshift.expose-services=, maintainer=Guillaume Abrioux , architecture=x86_64, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, distribution-scope=public, org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, name=rhceph, GIT_BRANCH=main, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers)
Feb 20 04:44:03 localhost systemd[1]: Started libpod-conmon-100be33ea46488e94fee070345c9474fa94891981266be3d7261f245fa43acc2.scope.
Feb 20 04:44:03 localhost systemd[1]: Started libcrun container.
Feb 20 04:44:03 localhost podman[296322]: 2026-02-20 09:44:03.0921315 +0000 UTC m=+0.143344631 container init 100be33ea46488e94fee070345c9474fa94891981266be3d7261f245fa43acc2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_swirles, ceph=True, architecture=x86_64, GIT_BRANCH=main, RELEASE=main, version=7, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, GIT_CLEAN=True, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, name=rhceph, vcs-type=git, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., org.opencontainers.image.created=2026-02-09T10:25:24Z)
Feb 20 04:44:03 localhost podman[296322]: 2026-02-20 09:44:02.998516211 +0000 UTC m=+0.049729342 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 04:44:03 localhost podman[296322]: 2026-02-20 09:44:03.103782714 +0000 UTC m=+0.154995855 container start 100be33ea46488e94fee070345c9474fa94891981266be3d7261f245fa43acc2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_swirles, vendor=Red Hat, Inc., io.openshift.expose-services=, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, vcs-type=git, distribution-scope=public, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.42.2, name=rhceph, release=1770267347, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Feb 20 04:44:03 localhost podman[296322]: 2026-02-20 09:44:03.104070041 +0000 UTC m=+0.155283172 container attach 100be33ea46488e94fee070345c9474fa94891981266be3d7261f245fa43acc2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_swirles, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, vcs-type=git, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, version=7, io.openshift.expose-services=, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., distribution-scope=public)
Feb 20 04:44:03 localhost romantic_swirles[296338]: 167 167
Feb 20 04:44:03 localhost systemd[1]: libpod-100be33ea46488e94fee070345c9474fa94891981266be3d7261f245fa43acc2.scope: Deactivated successfully.
Feb 20 04:44:03 localhost podman[296322]: 2026-02-20 09:44:03.108970207 +0000 UTC m=+0.160183338 container died 100be33ea46488e94fee070345c9474fa94891981266be3d7261f245fa43acc2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_swirles, architecture=x86_64, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, RELEASE=main, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, io.buildah.version=1.42.2, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.openshift.expose-services=, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, distribution-scope=public)
Feb 20 04:44:03 localhost podman[296343]: 2026-02-20 09:44:03.207077524 +0000 UTC m=+0.087158273 container remove 100be33ea46488e94fee070345c9474fa94891981266be3d7261f245fa43acc2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_swirles, ceph=True, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.42.2, name=rhceph, GIT_CLEAN=True, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, build-date=2026-02-09T10:25:24Z, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Feb 20 04:44:03 localhost systemd[1]: libpod-conmon-100be33ea46488e94fee070345c9474fa94891981266be3d7261f245fa43acc2.scope: Deactivated successfully.
Feb 20 04:44:03 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:03 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:03 localhost ceph-mon[292786]: Reconfiguring osd.2 (monmap changed)...
Feb 20 04:44:03 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Feb 20 04:44:03 localhost ceph-mon[292786]: Reconfiguring daemon osd.2 on np0005625202.localdomain
Feb 20 04:44:03 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0)
Feb 20 04:44:03 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0)
Feb 20 04:44:03 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0)
Feb 20 04:44:03 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0)
Feb 20 04:44:03 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring osd.5 (monmap changed)...
Feb 20 04:44:03 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring osd.5 (monmap changed)...
Feb 20 04:44:03 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "auth get", "entity": "osd.5"} v 0)
Feb 20 04:44:03 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Feb 20 04:44:03 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 04:44:03 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 04:44:03 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.5 on np0005625202.localdomain
Feb 20 04:44:03 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.5 on np0005625202.localdomain
Feb 20 04:44:03 localhost systemd[1]: var-lib-containers-storage-overlay-065355bfc8b0c3e5d2ad04fb9da23f1ac33988fb046a61ca12732028212f6297-merged.mount: Deactivated successfully.
Feb 20 04:44:04 localhost podman[296418]:
Feb 20 04:44:04 localhost podman[296418]: 2026-02-20 09:44:04.072985022 +0000 UTC m=+0.080335697 container create 0336b507f81606abf8e5d476080f7efbb5011689ce9a810a099e7b2c02d808b0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_bartik, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, name=rhceph, io.openshift.expose-services=, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1770267347, GIT_BRANCH=main, io.buildah.version=1.42.2, vcs-type=git, RELEASE=main, build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , version=7)
Feb 20 04:44:04 localhost systemd[1]: Started libpod-conmon-0336b507f81606abf8e5d476080f7efbb5011689ce9a810a099e7b2c02d808b0.scope.
Feb 20 04:44:04 localhost systemd[1]: Started libcrun container.
Feb 20 04:44:04 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v8: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail; 23 KiB/s rd, 0 B/s wr, 12 op/s
Feb 20 04:44:04 localhost podman[296418]: 2026-02-20 09:44:04.039418921 +0000 UTC m=+0.046769606 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 04:44:04 localhost podman[296418]: 2026-02-20 09:44:04.140784032 +0000 UTC m=+0.148134687 container init 0336b507f81606abf8e5d476080f7efbb5011689ce9a810a099e7b2c02d808b0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_bartik, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.buildah.version=1.42.2, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , version=7, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, RELEASE=main, architecture=x86_64, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, io.openshift.tags=rhceph ceph, build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Feb 20 04:44:04 localhost podman[296418]: 2026-02-20 09:44:04.148861202 +0000 UTC m=+0.156211857 container start 0336b507f81606abf8e5d476080f7efbb5011689ce9a810a099e7b2c02d808b0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_bartik, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, release=1770267347, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, io.openshift.tags=rhceph ceph, version=7, distribution-scope=public, vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, vcs-type=git)
Feb 20 04:44:04 localhost podman[296418]: 2026-02-20 09:44:04.149079017 +0000 UTC m=+0.156429672 container attach 0336b507f81606abf8e5d476080f7efbb5011689ce9a810a099e7b2c02d808b0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_bartik, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, org.opencontainers.image.created=2026-02-09T10:25:24Z, ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, GIT_CLEAN=True, architecture=x86_64, io.buildah.version=1.42.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vcs-type=git, description=Red Hat Ceph Storage 7, RELEASE=main, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Feb 20 04:44:04 localhost jolly_bartik[296433]: 167 167
Feb 20 04:44:04 localhost systemd[1]: libpod-0336b507f81606abf8e5d476080f7efbb5011689ce9a810a099e7b2c02d808b0.scope: Deactivated successfully.
Feb 20 04:44:04 localhost podman[296418]: 2026-02-20 09:44:04.154583 +0000 UTC m=+0.161933685 container died 0336b507f81606abf8e5d476080f7efbb5011689ce9a810a099e7b2c02d808b0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_bartik, maintainer=Guillaume Abrioux , architecture=x86_64, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhceph, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, vcs-type=git, org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Feb 20 04:44:04 localhost ceph-mgr[286565]: [progress INFO root] Writing back 50 completed events
Feb 20 04:44:04 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb 20 04:44:04 localhost podman[296438]: 2026-02-20 09:44:04.262635644 +0000 UTC m=+0.093585929 container remove 0336b507f81606abf8e5d476080f7efbb5011689ce9a810a099e7b2c02d808b0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_bartik, GIT_CLEAN=True, distribution-scope=public, io.openshift.tags=rhceph ceph, build-date=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, architecture=x86_64, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, ceph=True, io.buildah.version=1.42.2, org.opencontainers.image.created=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1770267347, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, io.openshift.expose-services=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main)
Feb 20 04:44:04 localhost systemd[1]: libpod-conmon-0336b507f81606abf8e5d476080f7efbb5011689ce9a810a099e7b2c02d808b0.scope: Deactivated successfully.
Feb 20 04:44:04 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:04 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:04 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:04 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:04 localhost ceph-mon[292786]: Reconfiguring osd.5 (monmap changed)...
Feb 20 04:44:04 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Feb 20 04:44:04 localhost ceph-mon[292786]: Reconfiguring daemon osd.5 on np0005625202.localdomain
Feb 20 04:44:04 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:04 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0)
Feb 20 04:44:04 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0)
Feb 20 04:44:04 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0)
Feb 20 04:44:04 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0)
Feb 20 04:44:04 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005625202.akhmop (monmap changed)...
Feb 20 04:44:04 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005625202.akhmop (monmap changed)...
Feb 20 04:44:04 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Feb 20 04:44:04 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 04:44:04 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 04:44:04 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 04:44:04 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005625202.akhmop on np0005625202.localdomain
Feb 20 04:44:04 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005625202.akhmop on np0005625202.localdomain
Feb 20 04:44:04 localhost systemd[1]: var-lib-containers-storage-overlay-67946c9258244b8b255f06ab4bb709a2c39ebc2e29f0a63f38c7eedfb8e1d446-merged.mount: Deactivated successfully.
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0.
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:44:05.002328) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580645002369, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1802, "num_deletes": 254, "total_data_size": 9513569, "memory_usage": 10070536, "flush_reason": "Manual Compaction"}
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580645033139, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 5847284, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10790, "largest_seqno": 12586, "table_properties": {"data_size": 5839303, "index_size": 4614, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 20862, "raw_average_key_size": 22, "raw_value_size": 5822041, "raw_average_value_size": 6383, "num_data_blocks": 197, "num_entries": 912, "num_filter_entries": 912, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580607, "oldest_key_time": 1771580607, "file_creation_time": 1771580645, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a5b0c71e-1a28-4ac7-8b68-08edb74002f2", "db_session_id": "54EDA52XUT1SDV7DF7Y7", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}}
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 30846 microseconds, and 6053 cpu microseconds.
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:44:05.033178) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 5847284 bytes OK
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:44:05.033195) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:44:05.038251) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:44:05.038263) EVENT_LOG_v1 {"time_micros": 1771580645038260, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:44:05.038278) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 9504421, prev total WAL file size 9512525, number of live WAL files 2.
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:44:05.040161) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033353036' seq:72057594037927935, type:22 .. '6D6772737461740033373537' seq:0, type:0; will stop at (end)
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(5710KB)], [15(14MB)]
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580645040236, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 20911312, "oldest_snapshot_seqno": -1}
Feb 20 04:44:05 localhost podman[296513]:
Feb 20 04:44:05 localhost podman[296513]: 2026-02-20 09:44:05.184300399 +0000 UTC m=+0.086021913 container create cac0dd0fe76ee02be579d19116ac5a24ab2f16d97be75114d5f664411d505683 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_murdock, CEPH_POINT_RELEASE=, distribution-scope=public, vendor=Red Hat, Inc., GIT_BRANCH=main, version=7, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.42.2, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vcs-type=git, build-date=2026-02-09T10:25:24Z, release=1770267347, com.redhat.component=rhceph-container, io.openshift.expose-services=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux )
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 10237 keys, 18650733 bytes, temperature: kUnknown
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580645185956, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 18650733, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18590829, "index_size": 33243, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25605, "raw_key_size": 271662, "raw_average_key_size": 26, "raw_value_size": 18414486, "raw_average_value_size": 1798, "num_data_blocks": 1289, "num_entries": 10237, "num_filter_entries": 10237, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580572, "oldest_key_time": 0, "file_creation_time": 1771580645, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a5b0c71e-1a28-4ac7-8b68-08edb74002f2", "db_session_id": "54EDA52XUT1SDV7DF7Y7", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}}
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:44:05.186258) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 18650733 bytes
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:44:05.190553) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 143.4 rd, 127.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(5.6, 14.4 +0.0 blob) out(17.8 +0.0 blob), read-write-amplify(6.8) write-amplify(3.2) OK, records in: 10770, records dropped: 533 output_compression: NoCompression
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:44:05.190584) EVENT_LOG_v1 {"time_micros": 1771580645190570, "job": 6, "event": "compaction_finished", "compaction_time_micros": 145805, "compaction_time_cpu_micros": 56898, "output_level": 6, "num_output_files": 1, "total_output_size": 18650733, "num_input_records": 10770, "num_output_records": 10237, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580645191465, "job": 6, "event": "table_file_deletion", "file_number": 17}
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580645193671, "job": 6, "event": "table_file_deletion", "file_number": 15}
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:44:05.039834) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:44:05.193716) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:44:05.193721) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:44:05.193725) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:44:05.193728) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:44:05.193731) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0.
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:44:05.194077) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 1
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580645194112, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 275, "num_deletes": 264, "total_data_size": 20879, "memory_usage": 28064, "flush_reason": "Manual Compaction"}
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580645198522, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 13496, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12588, "largest_seqno": 12861, "table_properties": {"data_size": 11691, "index_size": 49, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4192, "raw_average_key_size": 15, "raw_value_size": 8092, "raw_average_value_size": 29, "num_data_blocks": 2, "num_entries": 274, "num_filter_entries": 274, "num_deletions": 264, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580645, "oldest_key_time": 1771580645, "file_creation_time": 1771580645, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a5b0c71e-1a28-4ac7-8b68-08edb74002f2", "db_session_id": "54EDA52XUT1SDV7DF7Y7", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 4488 microseconds, and 860 cpu microseconds.
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:44:05.198562) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 13496 bytes OK
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:44:05.198582) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:44:05.200093) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:44:05.200113) EVENT_LOG_v1 {"time_micros": 1771580645200107, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:44:05.200127) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 18714, prev total WAL file size 18714, number of live WAL files 2.
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:44:05.200711) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760031303039' seq:72057594037927935, type:22 .. '6B760031323734' seq:0, type:0; will stop at (end)
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(13KB)], [18(17MB)]
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580645200781, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 18664229, "oldest_snapshot_seqno": -1}
Feb 20 04:44:05 localhost systemd[1]: Started libpod-conmon-cac0dd0fe76ee02be579d19116ac5a24ab2f16d97be75114d5f664411d505683.scope.
Feb 20 04:44:05 localhost systemd[1]: Started libcrun container.
Feb 20 04:44:05 localhost podman[296513]: 2026-02-20 09:44:05.146493378 +0000 UTC m=+0.048214942 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 04:44:05 localhost podman[296513]: 2026-02-20 09:44:05.321022918 +0000 UTC m=+0.222744402 container init cac0dd0fe76ee02be579d19116ac5a24ab2f16d97be75114d5f664411d505683 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_murdock, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, distribution-scope=public, com.redhat.component=rhceph-container, GIT_BRANCH=main, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.created=2026-02-09T10:25:24Z, architecture=x86_64, GIT_CLEAN=True, version=7, io.openshift.tags=rhceph ceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., io.openshift.expose-services=, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, release=1770267347)
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 9974 keys, 17686581 bytes, temperature: kUnknown
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580645323432, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 17686581, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17629777, "index_size": 30783, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24965, "raw_key_size": 267721, "raw_average_key_size": 26, "raw_value_size": 17459210, "raw_average_value_size": 1750, "num_data_blocks": 1165, "num_entries": 9974, "num_filter_entries": 9974, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580572, "oldest_key_time": 0, "file_creation_time": 1771580645, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a5b0c71e-1a28-4ac7-8b68-08edb74002f2", "db_session_id": "54EDA52XUT1SDV7DF7Y7", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}}
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:44:05.323672) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 17686581 bytes Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:44:05.325183) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 152.1 rd, 144.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 17.8 +0.0 blob) out(16.9 +0.0 blob), read-write-amplify(2693.5) write-amplify(1310.5) OK, records in: 10511, records dropped: 537 output_compression: NoCompression Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:44:05.325203) EVENT_LOG_v1 {"time_micros": 1771580645325195, "job": 8, "event": "compaction_finished", "compaction_time_micros": 122730, "compaction_time_cpu_micros": 42624, "output_level": 6, "num_output_files": 1, "total_output_size": 17686581, "num_input_records": 10511, "num_output_records": 9974, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580645325397, "job": 8, "event": "table_file_deletion", "file_number": 20} Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580645327070, 
"job": 8, "event": "table_file_deletion", "file_number": 18} Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:44:05.200641) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:44:05.327163) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:44:05.327170) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:44:05.327171) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:44:05.327173) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 04:44:05 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:44:05.327180) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 04:44:05 localhost podman[296513]: 2026-02-20 09:44:05.340586516 +0000 UTC m=+0.242308000 container start cac0dd0fe76ee02be579d19116ac5a24ab2f16d97be75114d5f664411d505683 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_murdock, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.buildah.version=1.42.2, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.k8s.description=Red 
Hat Ceph Storage 7, maintainer=Guillaume Abrioux , RELEASE=main, CEPH_POINT_RELEASE=, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., version=7) Feb 20 04:44:05 localhost podman[296513]: 2026-02-20 09:44:05.340917664 +0000 UTC m=+0.242639148 container attach cac0dd0fe76ee02be579d19116ac5a24ab2f16d97be75114d5f664411d505683 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_murdock, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, RELEASE=main, io.openshift.expose-services=, CEPH_POINT_RELEASE=, GIT_CLEAN=True, com.redhat.component=rhceph-container, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.42.2, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, build-date=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., ceph=True, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 04:44:05 localhost keen_murdock[296528]: 167 167 
Feb 20 04:44:05 localhost systemd[1]: libpod-cac0dd0fe76ee02be579d19116ac5a24ab2f16d97be75114d5f664411d505683.scope: Deactivated successfully. Feb 20 04:44:05 localhost podman[296513]: 2026-02-20 09:44:05.343712407 +0000 UTC m=+0.245433941 container died cac0dd0fe76ee02be579d19116ac5a24ab2f16d97be75114d5f664411d505683 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_murdock, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., release=1770267347, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, description=Red Hat Ceph Storage 7, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2026-02-09T10:25:24Z, io.openshift.expose-services=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, CEPH_POINT_RELEASE=, io.buildah.version=1.42.2, architecture=x86_64, GIT_CLEAN=True, maintainer=Guillaume Abrioux , distribution-scope=public) Feb 20 04:44:05 localhost podman[296533]: 2026-02-20 09:44:05.424943306 +0000 UTC m=+0.072320789 container remove cac0dd0fe76ee02be579d19116ac5a24ab2f16d97be75114d5f664411d505683 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_murdock, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, GIT_CLEAN=True, release=1770267347, 
CEPH_POINT_RELEASE=, vcs-type=git, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, build-date=2026-02-09T10:25:24Z, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.buildah.version=1.42.2, name=rhceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14) Feb 20 04:44:05 localhost systemd[1]: libpod-conmon-cac0dd0fe76ee02be579d19116ac5a24ab2f16d97be75114d5f664411d505683.scope: Deactivated successfully. Feb 20 04:44:05 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:05 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:05 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:05 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:05 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 20 04:44:05 localhost ceph-mon[292786]: Reconfiguring mds.mds.np0005625202.akhmop (monmap changed)... 
Feb 20 04:44:05 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 20 04:44:05 localhost ceph-mon[292786]: Reconfiguring daemon mds.mds.np0005625202.akhmop on np0005625202.localdomain Feb 20 04:44:05 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0) Feb 20 04:44:05 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0) Feb 20 04:44:05 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005625202.arwxwo (monmap changed)... Feb 20 04:44:05 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005625202.arwxwo (monmap changed)... 
Feb 20 04:44:05 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005625202.arwxwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Feb 20 04:44:05 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625202.arwxwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 20 04:44:05 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "mgr services"} v 0) Feb 20 04:44:05 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mgr services"} : dispatch Feb 20 04:44:05 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:44:05 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 04:44:05 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005625202.arwxwo on np0005625202.localdomain Feb 20 04:44:05 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005625202.arwxwo on np0005625202.localdomain Feb 20 04:44:05 localhost systemd[1]: tmp-crun.tUkghb.mount: Deactivated successfully. Feb 20 04:44:05 localhost systemd[1]: var-lib-containers-storage-overlay-133c0e7091e84097dc0e104d7ce7f33f88c99c13feee352b3f00fefa25d94c64-merged.mount: Deactivated successfully. 
Feb 20 04:44:05 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.34378 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch Feb 20 04:44:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:44:05.911 161766 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:44:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:44:05.912 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:44:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:44:05.912 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:44:06 localhost podman[296605]: Feb 20 04:44:06 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v9: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s Feb 20 04:44:06 localhost podman[296605]: 2026-02-20 09:44:06.149983616 +0000 UTC m=+0.079239997 container create c522aa98b7b15b34812509cf9ecb1ca69c0629480e6e6e9dcf8074a8766c9037 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_cori, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, RELEASE=main, vcs-type=git, ceph=True, build-date=2026-02-09T10:25:24Z, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, release=1770267347, GIT_CLEAN=True, distribution-scope=public, io.buildah.version=1.42.2, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Feb 20 04:44:06 localhost systemd[1]: Started libpod-conmon-c522aa98b7b15b34812509cf9ecb1ca69c0629480e6e6e9dcf8074a8766c9037.scope. Feb 20 04:44:06 localhost systemd[1]: Started libcrun container. 
Feb 20 04:44:06 localhost podman[296605]: 2026-02-20 09:44:06.11774439 +0000 UTC m=+0.047000811 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 04:44:06 localhost podman[296605]: 2026-02-20 09:44:06.228208187 +0000 UTC m=+0.157464578 container init c522aa98b7b15b34812509cf9ecb1ca69c0629480e6e6e9dcf8074a8766c9037 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_cori, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z, release=1770267347, distribution-scope=public, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, RELEASE=main, vcs-type=git, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , architecture=x86_64) Feb 20 04:44:06 localhost podman[296605]: 2026-02-20 09:44:06.236991795 +0000 UTC m=+0.166248186 container start c522aa98b7b15b34812509cf9ecb1ca69c0629480e6e6e9dcf8074a8766c9037 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_cori, io.buildah.version=1.42.2, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
GIT_BRANCH=main, release=1770267347, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, ceph=True, build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, GIT_CLEAN=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, version=7, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Feb 20 04:44:06 localhost podman[296605]: 2026-02-20 09:44:06.237225201 +0000 UTC m=+0.166481582 container attach c522aa98b7b15b34812509cf9ecb1ca69c0629480e6e6e9dcf8074a8766c9037 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_cori, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, name=rhceph, RELEASE=main, architecture=x86_64, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., io.buildah.version=1.42.2, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , 
build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1770267347) Feb 20 04:44:06 localhost angry_cori[296620]: 167 167 Feb 20 04:44:06 localhost systemd[1]: libpod-c522aa98b7b15b34812509cf9ecb1ca69c0629480e6e6e9dcf8074a8766c9037.scope: Deactivated successfully. Feb 20 04:44:06 localhost podman[296605]: 2026-02-20 09:44:06.241154283 +0000 UTC m=+0.170410694 container died c522aa98b7b15b34812509cf9ecb1ca69c0629480e6e6e9dcf8074a8766c9037 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_cori, io.buildah.version=1.42.2, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, release=1770267347, distribution-scope=public, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64) Feb 20 04:44:06 localhost podman[296625]: 2026-02-20 09:44:06.346430306 
+0000 UTC m=+0.091493737 container remove c522aa98b7b15b34812509cf9ecb1ca69c0629480e6e6e9dcf8074a8766c9037 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_cori, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, version=7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, architecture=x86_64, description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, ceph=True, vendor=Red Hat, Inc., release=1770267347, distribution-scope=public) Feb 20 04:44:06 localhost systemd[1]: libpod-conmon-c522aa98b7b15b34812509cf9ecb1ca69c0629480e6e6e9dcf8074a8766c9037.scope: Deactivated successfully. Feb 20 04:44:06 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0) Feb 20 04:44:06 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0) Feb 20 04:44:06 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005625202 (monmap changed)... 
Feb 20 04:44:06 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005625202 (monmap changed)... Feb 20 04:44:06 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) Feb 20 04:44:06 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 20 04:44:06 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) Feb 20 04:44:06 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch Feb 20 04:44:06 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:44:06 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 04:44:06 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005625202 on np0005625202.localdomain Feb 20 04:44:06 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005625202 on np0005625202.localdomain Feb 20 04:44:06 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:06 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:06 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625202.arwxwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 20 04:44:06 localhost ceph-mon[292786]: 
Reconfiguring mgr.np0005625202.arwxwo (monmap changed)... Feb 20 04:44:06 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625202.arwxwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 20 04:44:06 localhost ceph-mon[292786]: Reconfiguring daemon mgr.np0005625202.arwxwo on np0005625202.localdomain Feb 20 04:44:06 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:06 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:06 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 20 04:44:06 localhost systemd[1]: var-lib-containers-storage-overlay-fac3778e3d1b7ae9d98d9351be0a26ebb563505ca703be4e5539fa5ddc8de1bf-merged.mount: Deactivated successfully. Feb 20 04:44:07 localhost sshd[296695]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:44:07 localhost podman[296699]: Feb 20 04:44:07 localhost podman[296699]: 2026-02-20 09:44:07.119352479 +0000 UTC m=+0.080302946 container create 82f23b311b80024d0b03870669ea5a2ace99ed33bbff4ff644770dd4f1390f14 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_murdock, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.42.2, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1770267347, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, 
org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, architecture=x86_64, ceph=True, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, build-date=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, CEPH_POINT_RELEASE=, io.openshift.expose-services=) Feb 20 04:44:07 localhost systemd[1]: Started libpod-conmon-82f23b311b80024d0b03870669ea5a2ace99ed33bbff4ff644770dd4f1390f14.scope. Feb 20 04:44:07 localhost systemd[1]: Started libcrun container. Feb 20 04:44:07 localhost podman[296699]: 2026-02-20 09:44:07.087360049 +0000 UTC m=+0.048310546 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 04:44:07 localhost podman[296699]: 2026-02-20 09:44:07.192717203 +0000 UTC m=+0.153667670 container init 82f23b311b80024d0b03870669ea5a2ace99ed33bbff4ff644770dd4f1390f14 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_murdock, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1770267347, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, build-date=2026-02-09T10:25:24Z, ceph=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, name=rhceph, io.buildah.version=1.42.2, GIT_CLEAN=True) Feb 20 04:44:07 localhost podman[296699]: 2026-02-20 09:44:07.203238317 +0000 UTC m=+0.164188774 container start 82f23b311b80024d0b03870669ea5a2ace99ed33bbff4ff644770dd4f1390f14 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_murdock, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, com.redhat.component=rhceph-container, version=7, ceph=True, vendor=Red Hat, Inc., release=1770267347, io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, name=rhceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, build-date=2026-02-09T10:25:24Z, GIT_CLEAN=True, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 04:44:07 localhost podman[296699]: 2026-02-20 09:44:07.203581406 +0000 UTC m=+0.164531903 container attach 82f23b311b80024d0b03870669ea5a2ace99ed33bbff4ff644770dd4f1390f14 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_murdock, 
org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, io.buildah.version=1.42.2, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, version=7, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1770267347, com.redhat.component=rhceph-container, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, name=rhceph, distribution-scope=public, build-date=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14) Feb 20 04:44:07 localhost kind_murdock[296714]: 167 167 Feb 20 04:44:07 localhost systemd[1]: libpod-82f23b311b80024d0b03870669ea5a2ace99ed33bbff4ff644770dd4f1390f14.scope: Deactivated successfully. 
Feb 20 04:44:07 localhost podman[296699]: 2026-02-20 09:44:07.20646419 +0000 UTC m=+0.167414707 container died 82f23b311b80024d0b03870669ea5a2ace99ed33bbff4ff644770dd4f1390f14 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_murdock, ceph=True, CEPH_POINT_RELEASE=, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.42.2, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, GIT_CLEAN=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., release=1770267347, io.openshift.expose-services=, com.redhat.component=rhceph-container, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, maintainer=Guillaume Abrioux ) Feb 20 04:44:07 localhost ceph-mon[292786]: mon.np0005625202@4(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:44:07 localhost podman[296719]: 2026-02-20 09:44:07.301276922 +0000 UTC m=+0.086423735 container remove 82f23b311b80024d0b03870669ea5a2ace99ed33bbff4ff644770dd4f1390f14 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_murdock, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, com.redhat.component=rhceph-container, io.buildah.version=1.42.2, 
architecture=x86_64, description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_CLEAN=True, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., release=1770267347, ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14) Feb 20 04:44:07 localhost systemd[1]: libpod-conmon-82f23b311b80024d0b03870669ea5a2ace99ed33bbff4ff644770dd4f1390f14.scope: Deactivated successfully. Feb 20 04:44:07 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0) Feb 20 04:44:07 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0) Feb 20 04:44:07 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005625203 (monmap changed)... Feb 20 04:44:07 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005625203 (monmap changed)... 
Feb 20 04:44:07 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005625203", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Feb 20 04:44:07 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625203", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 20 04:44:07 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:44:07 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 04:44:07 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005625203 on np0005625203.localdomain Feb 20 04:44:07 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005625203 on np0005625203.localdomain Feb 20 04:44:07 localhost ceph-mon[292786]: Reconfiguring mon.np0005625202 (monmap changed)... 
Feb 20 04:44:07 localhost ceph-mon[292786]: Reconfiguring daemon mon.np0005625202 on np0005625202.localdomain Feb 20 04:44:07 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:07 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:07 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625203", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 20 04:44:07 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625203", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 20 04:44:07 localhost systemd[1]: var-lib-containers-storage-overlay-43eafeec625dec2bffba1b85a1b218a42ed706869473e9851e9e1657322991c2-merged.mount: Deactivated successfully. Feb 20 04:44:08 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v10: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s Feb 20 04:44:08 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0) Feb 20 04:44:08 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0) Feb 20 04:44:08 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)... Feb 20 04:44:08 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)... 
Feb 20 04:44:08 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) Feb 20 04:44:08 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Feb 20 04:44:08 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:44:08 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 04:44:08 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on np0005625203.localdomain Feb 20 04:44:08 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on np0005625203.localdomain Feb 20 04:44:08 localhost ceph-mon[292786]: Reconfiguring crash.np0005625203 (monmap changed)... 
Feb 20 04:44:08 localhost ceph-mon[292786]: Reconfiguring daemon crash.np0005625203 on np0005625203.localdomain Feb 20 04:44:08 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:08 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:08 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Feb 20 04:44:08 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.27331 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005625200", "target": ["mon-mgr", ""], "format": "json"}]: dispatch Feb 20 04:44:09 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0) Feb 20 04:44:09 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0) Feb 20 04:44:09 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0) Feb 20 04:44:09 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0) Feb 20 04:44:09 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring osd.4 (monmap changed)... Feb 20 04:44:09 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring osd.4 (monmap changed)... 
Feb 20 04:44:09 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "auth get", "entity": "osd.4"} v 0) Feb 20 04:44:09 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Feb 20 04:44:09 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:44:09 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 04:44:09 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.4 on np0005625203.localdomain Feb 20 04:44:09 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.4 on np0005625203.localdomain Feb 20 04:44:09 localhost ceph-mon[292786]: Reconfiguring osd.1 (monmap changed)... 
Feb 20 04:44:09 localhost ceph-mon[292786]: Reconfiguring daemon osd.1 on np0005625203.localdomain Feb 20 04:44:09 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:09 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:09 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:09 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:09 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Feb 20 04:44:10 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v11: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s Feb 20 04:44:10 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.27341 -' entity='client.admin' cmd=[{"prefix": "orch daemon rm", "names": ["mon.np0005625200"], "force": true, "target": ["mon-mgr", ""]}]: dispatch Feb 20 04:44:10 localhost ceph-mgr[286565]: [cephadm INFO root] Remove daemons mon.np0005625200 Feb 20 04:44:10 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Remove daemons mon.np0005625200 Feb 20 04:44:10 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "quorum_status"} v 0) Feb 20 04:44:10 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "quorum_status"} : dispatch Feb 20 04:44:10 localhost ceph-mgr[286565]: [cephadm INFO cephadm.services.cephadmservice] Safe to remove mon.np0005625200: new quorum should be ['np0005625201', 'np0005625204', 'np0005625203', 'np0005625202'] (from ['np0005625201', 'np0005625204', 'np0005625203', 'np0005625202']) Feb 20 04:44:10 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Safe to remove 
mon.np0005625200: new quorum should be ['np0005625201', 'np0005625204', 'np0005625203', 'np0005625202'] (from ['np0005625201', 'np0005625204', 'np0005625203', 'np0005625202']) Feb 20 04:44:10 localhost ceph-mgr[286565]: [cephadm INFO cephadm.services.cephadmservice] Removing monitor np0005625200 from monmap... Feb 20 04:44:10 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Removing monitor np0005625200 from monmap... Feb 20 04:44:10 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e9 handle_command mon_command({"prefix": "mon rm", "name": "np0005625200"} v 0) Feb 20 04:44:10 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon rm", "name": "np0005625200"} : dispatch Feb 20 04:44:10 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Removing daemon mon.np0005625200 from np0005625200.localdomain -- ports [] Feb 20 04:44:10 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Removing daemon mon.np0005625200 from np0005625200.localdomain -- ports [] Feb 20 04:44:10 localhost ceph-mon[292786]: mon.np0005625202@4(peon) e10 my rank is now 3 (was 4) Feb 20 04:44:10 localhost ceph-mgr[286565]: client.0 ms_handle_reset on v2:172.18.0.103:3300/0 Feb 20 04:44:10 localhost ceph-mgr[286565]: client.0 ms_handle_reset on v2:172.18.0.103:3300/0 Feb 20 04:44:10 localhost ceph-mon[292786]: log_channel(cluster) log [INF] : mon.np0005625202 calling monitor election Feb 20 04:44:10 localhost ceph-mon[292786]: paxos.3).electionLogic(38) init, last seen epoch 38 Feb 20 04:44:10 localhost ceph-mon[292786]: mon.np0005625202@3(electing) e10 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 20 04:44:10 localhost ceph-mon[292786]: mon.np0005625202@3(electing) e10 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 20 04:44:10 localhost ceph-mon[292786]: 
mon.np0005625202@3(electing) e10 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625201"} v 0) Feb 20 04:44:10 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625201"} : dispatch Feb 20 04:44:10 localhost ceph-mon[292786]: mon.np0005625202@3(electing) e10 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625202"} v 0) Feb 20 04:44:10 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625202"} : dispatch Feb 20 04:44:10 localhost ceph-mon[292786]: mon.np0005625202@3(electing) e10 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625203"} v 0) Feb 20 04:44:10 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch Feb 20 04:44:10 localhost ceph-mon[292786]: mon.np0005625202@3(electing) e10 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625204"} v 0) Feb 20 04:44:10 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625204"} : dispatch Feb 20 04:44:10 localhost ceph-mon[292786]: mon.np0005625202@3(electing) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0) Feb 20 04:44:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e. 
Feb 20 04:44:11 localhost podman[296736]: 2026-02-20 09:44:11.451736769 +0000 UTC m=+0.087761600 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 20 04:44:11 localhost podman[296736]: 2026-02-20 09:44:11.46871088 +0000 UTC m=+0.104735721 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', 
'--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 20 04:44:11 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully. 
Feb 20 04:44:12 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v12: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s Feb 20 04:44:12 localhost ceph-mon[292786]: mon.np0005625202@3(electing) e10 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 20 04:44:12 localhost ceph-mon[292786]: mon.np0005625202@3(electing) e10 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 20 04:44:12 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 20 04:44:12 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0) Feb 20 04:44:12 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0) Feb 20 04:44:12 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0) Feb 20 04:44:12 localhost ceph-mon[292786]: Reconfiguring daemon osd.4 on np0005625203.localdomain Feb 20 04:44:12 localhost ceph-mon[292786]: Remove daemons mon.np0005625200 Feb 20 04:44:12 localhost ceph-mon[292786]: Safe to remove mon.np0005625200: new quorum should be ['np0005625201', 'np0005625204', 'np0005625203', 'np0005625202'] (from ['np0005625201', 'np0005625204', 'np0005625203', 'np0005625202']) Feb 20 04:44:12 localhost ceph-mon[292786]: Removing monitor np0005625200 from monmap... 
Feb 20 04:44:12 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon rm", "name": "np0005625200"} : dispatch Feb 20 04:44:12 localhost ceph-mon[292786]: Removing daemon mon.np0005625200 from np0005625200.localdomain -- ports [] Feb 20 04:44:12 localhost ceph-mon[292786]: mon.np0005625203 calling monitor election Feb 20 04:44:12 localhost ceph-mon[292786]: mon.np0005625202 calling monitor election Feb 20 04:44:12 localhost ceph-mon[292786]: mon.np0005625204 calling monitor election Feb 20 04:44:12 localhost ceph-mon[292786]: mon.np0005625201 calling monitor election Feb 20 04:44:12 localhost ceph-mon[292786]: mon.np0005625201 is new leader, mons np0005625201,np0005625204,np0005625203,np0005625202 in quorum (ranks 0,1,2,3) Feb 20 04:44:12 localhost ceph-mon[292786]: overall HEALTH_OK Feb 20 04:44:12 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:12 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:12 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005625203.zsrwgk (monmap changed)... Feb 20 04:44:12 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005625203.zsrwgk (monmap changed)... 
Feb 20 04:44:12 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005625203.zsrwgk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) Feb 20 04:44:12 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625203.zsrwgk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 20 04:44:12 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:44:12 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 04:44:12 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005625203.zsrwgk on np0005625203.localdomain Feb 20 04:44:12 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005625203.zsrwgk on np0005625203.localdomain Feb 20 04:44:12 localhost sshd[296759]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:44:13 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0) Feb 20 04:44:13 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0) Feb 20 04:44:13 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005625203.lonygy (monmap changed)... Feb 20 04:44:13 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005625203.lonygy (monmap changed)... 
Feb 20 04:44:13 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005625203.lonygy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Feb 20 04:44:13 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625203.lonygy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 20 04:44:13 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "mgr services"} v 0) Feb 20 04:44:13 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mgr services"} : dispatch Feb 20 04:44:13 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:44:13 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 04:44:13 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005625203.lonygy on np0005625203.localdomain Feb 20 04:44:13 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005625203.lonygy on np0005625203.localdomain Feb 20 04:44:13 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:13 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:13 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625203.zsrwgk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 20 04:44:13 
localhost ceph-mon[292786]: Reconfiguring mds.mds.np0005625203.zsrwgk (monmap changed)... Feb 20 04:44:13 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625203.zsrwgk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 20 04:44:13 localhost ceph-mon[292786]: Reconfiguring daemon mds.mds.np0005625203.zsrwgk on np0005625203.localdomain Feb 20 04:44:13 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:13 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:13 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625203.lonygy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 20 04:44:13 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625203.lonygy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 20 04:44:14 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v13: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Feb 20 04:44:14 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0) Feb 20 04:44:14 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0) Feb 20 04:44:14 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005625203 (monmap changed)... Feb 20 04:44:14 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005625203 (monmap changed)... 
Feb 20 04:44:14 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) Feb 20 04:44:14 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 20 04:44:14 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) Feb 20 04:44:14 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch Feb 20 04:44:14 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:44:14 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 04:44:14 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005625203 on np0005625203.localdomain Feb 20 04:44:14 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005625203 on np0005625203.localdomain Feb 20 04:44:14 localhost ceph-mds[283306]: mds.beacon.mds.np0005625202.akhmop missed beacon ack from the monitors Feb 20 04:44:14 localhost ceph-mon[292786]: Reconfiguring mgr.np0005625203.lonygy (monmap changed)... 
Feb 20 04:44:14 localhost ceph-mon[292786]: Reconfiguring daemon mgr.np0005625203.lonygy on np0005625203.localdomain
Feb 20 04:44:14 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:14 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:14 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 20 04:44:15 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.27452 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005625200.localdomain", "label": "mon", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 04:44:15 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb 20 04:44:15 localhost ceph-mgr[286565]: [cephadm INFO root] Removed label mon from host np0005625200.localdomain
Feb 20 04:44:15 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Removed label mon from host np0005625200.localdomain
Feb 20 04:44:15 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0)
Feb 20 04:44:15 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0)
Feb 20 04:44:15 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005625204 (monmap changed)...
Feb 20 04:44:15 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005625204 (monmap changed)...
Feb 20 04:44:15 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005625204", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Feb 20 04:44:15 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625204", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 04:44:15 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 04:44:15 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 04:44:15 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005625204 on np0005625204.localdomain
Feb 20 04:44:15 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005625204 on np0005625204.localdomain
Feb 20 04:44:15 localhost ceph-mon[292786]: Reconfiguring mon.np0005625203 (monmap changed)...
Feb 20 04:44:15 localhost ceph-mon[292786]: Reconfiguring daemon mon.np0005625203 on np0005625203.localdomain
Feb 20 04:44:15 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:15 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:15 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:15 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625204", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 04:44:15 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625204", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 04:44:16 localhost podman[241347]: time="2026-02-20T09:44:16Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 04:44:16 localhost podman[241347]: @ - - [20/Feb/2026:09:44:16 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157716 "" "Go-http-client/1.1"
Feb 20 04:44:16 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v14: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 04:44:16 localhost podman[241347]: @ - - [20/Feb/2026:09:44:16 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18754 "" "Go-http-client/1.1"
Feb 20 04:44:16 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0)
Feb 20 04:44:16 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0)
Feb 20 04:44:16 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)...
Feb 20 04:44:16 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)...
Feb 20 04:44:16 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Feb 20 04:44:16 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Feb 20 04:44:16 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 04:44:16 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 04:44:16 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on np0005625204.localdomain
Feb 20 04:44:16 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on np0005625204.localdomain
Feb 20 04:44:16 localhost ceph-mon[292786]: Removed label mon from host np0005625200.localdomain
Feb 20 04:44:16 localhost ceph-mon[292786]: Reconfiguring crash.np0005625204 (monmap changed)...
Feb 20 04:44:16 localhost ceph-mon[292786]: Reconfiguring daemon crash.np0005625204 on np0005625204.localdomain
Feb 20 04:44:16 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:16 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:16 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Feb 20 04:44:16 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.44283 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005625200.localdomain", "label": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 04:44:16 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb 20 04:44:16 localhost ceph-mgr[286565]: [cephadm INFO root] Removed label mgr from host np0005625200.localdomain
Feb 20 04:44:16 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Removed label mgr from host np0005625200.localdomain
Feb 20 04:44:17 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0)
Feb 20 04:44:17 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0)
Feb 20 04:44:17 localhost ceph-mon[292786]: mon.np0005625202@3(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 04:44:17 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0)
Feb 20 04:44:17 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0)
Feb 20 04:44:17 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring osd.3 (monmap changed)...
Feb 20 04:44:17 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring osd.3 (monmap changed)...
Feb 20 04:44:17 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "osd.3"} v 0)
Feb 20 04:44:17 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Feb 20 04:44:17 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 04:44:17 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 04:44:17 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.3 on np0005625204.localdomain
Feb 20 04:44:17 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.3 on np0005625204.localdomain
Feb 20 04:44:17 localhost ceph-mon[292786]: Reconfiguring osd.0 (monmap changed)...
Feb 20 04:44:17 localhost ceph-mon[292786]: Reconfiguring daemon osd.0 on np0005625204.localdomain
Feb 20 04:44:17 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:17 localhost ceph-mon[292786]: Removed label mgr from host np0005625200.localdomain
Feb 20 04:44:17 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:17 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:17 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:17 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:17 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Feb 20 04:44:17 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.27464 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005625200.localdomain", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 04:44:17 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb 20 04:44:17 localhost ceph-mgr[286565]: [cephadm INFO root] Removed label _admin from host np0005625200.localdomain
Feb 20 04:44:17 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Removed label _admin from host np0005625200.localdomain
Feb 20 04:44:18 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v15: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 04:44:18 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0)
Feb 20 04:44:18 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0)
Feb 20 04:44:18 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0)
Feb 20 04:44:18 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0)
Feb 20 04:44:18 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005625204.wnsphl (monmap changed)...
Feb 20 04:44:18 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005625204.wnsphl (monmap changed)...
Feb 20 04:44:18 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005625204.wnsphl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Feb 20 04:44:18 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625204.wnsphl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 04:44:18 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 04:44:18 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 04:44:18 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005625204.wnsphl on np0005625204.localdomain
Feb 20 04:44:18 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005625204.wnsphl on np0005625204.localdomain
Feb 20 04:44:18 localhost ceph-mon[292786]: Reconfiguring osd.3 (monmap changed)...
Feb 20 04:44:18 localhost ceph-mon[292786]: Reconfiguring daemon osd.3 on np0005625204.localdomain
Feb 20 04:44:18 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:18 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:18 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:18 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:18 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:18 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625204.wnsphl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 04:44:18 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625204.wnsphl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 04:44:19 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0)
Feb 20 04:44:19 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0)
Feb 20 04:44:19 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005625204.exgrzx (monmap changed)...
Feb 20 04:44:19 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005625204.exgrzx (monmap changed)...
Feb 20 04:44:19 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005625204.exgrzx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Feb 20 04:44:19 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625204.exgrzx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 04:44:19 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb 20 04:44:19 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mgr services"} : dispatch
Feb 20 04:44:19 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 04:44:19 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 04:44:19 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005625204.exgrzx on np0005625204.localdomain
Feb 20 04:44:19 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005625204.exgrzx on np0005625204.localdomain
Feb 20 04:44:19 localhost ceph-mon[292786]: Removed label _admin from host np0005625200.localdomain
Feb 20 04:44:19 localhost ceph-mon[292786]: Reconfiguring mds.mds.np0005625204.wnsphl (monmap changed)...
Feb 20 04:44:19 localhost ceph-mon[292786]: Reconfiguring daemon mds.mds.np0005625204.wnsphl on np0005625204.localdomain
Feb 20 04:44:19 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:19 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:19 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625204.exgrzx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 04:44:19 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625204.exgrzx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 04:44:20 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0)
Feb 20 04:44:20 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0)
Feb 20 04:44:20 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005625204 (monmap changed)...
Feb 20 04:44:20 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005625204 (monmap changed)...
Feb 20 04:44:20 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Feb 20 04:44:20 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 20 04:44:20 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Feb 20 04:44:20 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Feb 20 04:44:20 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 04:44:20 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 04:44:20 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005625204 on np0005625204.localdomain
Feb 20 04:44:20 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005625204 on np0005625204.localdomain
Feb 20 04:44:20 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v16: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 04:44:20 localhost ceph-mon[292786]: Reconfiguring mgr.np0005625204.exgrzx (monmap changed)...
Feb 20 04:44:20 localhost ceph-mon[292786]: Reconfiguring daemon mgr.np0005625204.exgrzx on np0005625204.localdomain
Feb 20 04:44:20 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:20 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:20 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 20 04:44:20 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0)
Feb 20 04:44:20 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0)
Feb 20 04:44:21 localhost ceph-mon[292786]: Reconfiguring mon.np0005625204 (monmap changed)...
Feb 20 04:44:21 localhost ceph-mon[292786]: Reconfiguring daemon mon.np0005625204 on np0005625204.localdomain
Feb 20 04:44:21 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:21 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:22 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v17: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 04:44:22 localhost ceph-mon[292786]: mon.np0005625202@3(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 04:44:22 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625200.localdomain.devices.0}] v 0)
Feb 20 04:44:22 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625200.localdomain}] v 0)
Feb 20 04:44:22 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 04:44:22 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 04:44:22 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 20 04:44:22 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 04:44:22 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Removing np0005625200.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 04:44:22 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Removing np0005625200.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 04:44:22 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Updating np0005625201.localdomain:/etc/ceph/ceph.conf
Feb 20 04:44:22 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Updating np0005625201.localdomain:/etc/ceph/ceph.conf
Feb 20 04:44:22 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Updating np0005625202.localdomain:/etc/ceph/ceph.conf
Feb 20 04:44:22 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Updating np0005625203.localdomain:/etc/ceph/ceph.conf
Feb 20 04:44:22 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Updating np0005625204.localdomain:/etc/ceph/ceph.conf
Feb 20 04:44:22 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Updating np0005625202.localdomain:/etc/ceph/ceph.conf
Feb 20 04:44:22 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Updating np0005625203.localdomain:/etc/ceph/ceph.conf
Feb 20 04:44:22 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Updating np0005625204.localdomain:/etc/ceph/ceph.conf
Feb 20 04:44:22 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Removing np0005625200.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 04:44:22 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Removing np0005625200.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 04:44:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.
Feb 20 04:44:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.
Feb 20 04:44:22 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Removing np0005625200.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 04:44:22 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Removing np0005625200.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 04:44:22 localhost podman[296780]: 2026-02-20 09:44:22.778124778 +0000 UTC m=+0.089668488 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Feb 20 04:44:22 localhost podman[296780]: 2026-02-20 09:44:22.815763195 +0000 UTC m=+0.127306855 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 04:44:22 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625200.localdomain.devices.0}] v 0)
Feb 20 04:44:22 localhost podman[296779]: 2026-02-20 09:44:22.825127178 +0000 UTC m=+0.138196238 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2026-02-05T04:57:10Z, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9/ubi-minimal, version=9.7, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, io.buildah.version=1.33.7, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, vendor=Red Hat, Inc., container_name=openstack_network_exporter, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c)
Feb 20 04:44:22 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully.
Feb 20 04:44:22 localhost podman[296779]: 2026-02-20 09:44:22.838756641 +0000 UTC m=+0.151825701 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, release=1770267347, io.buildah.version=1.33.7, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, managed_by=edpm_ansible, maintainer=Red Hat, Inc., build-date=2026-02-05T04:57:10Z, org.opencontainers.image.created=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, io.openshift.expose-services=, distribution-scope=public, name=ubi9/ubi-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.7)
Feb 20 04:44:22 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625200.localdomain}] v 0)
Feb 20 04:44:22 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully.
Feb 20 04:44:23 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:44:23 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:44:23 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:44:23 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:44:23 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Updating np0005625201.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:44:23 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Updating np0005625201.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:44:23 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:44:23 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:44:23 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:23 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:23 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 04:44:23 localhost ceph-mon[292786]: Removing np0005625200.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:44:23 localhost ceph-mon[292786]: Updating 
np0005625201.localdomain:/etc/ceph/ceph.conf Feb 20 04:44:23 localhost ceph-mon[292786]: Updating np0005625202.localdomain:/etc/ceph/ceph.conf Feb 20 04:44:23 localhost ceph-mon[292786]: Updating np0005625203.localdomain:/etc/ceph/ceph.conf Feb 20 04:44:23 localhost ceph-mon[292786]: Updating np0005625204.localdomain:/etc/ceph/ceph.conf Feb 20 04:44:23 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:23 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:23 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0) Feb 20 04:44:23 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0) Feb 20 04:44:23 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625201.localdomain.devices.0}] v 0) Feb 20 04:44:23 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625201.localdomain}] v 0) Feb 20 04:44:23 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0) Feb 20 04:44:23 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0) Feb 20 04:44:24 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0) Feb 20 04:44:24 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v18: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Feb 20 04:44:24 
localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0) Feb 20 04:44:24 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 20 04:44:24 localhost ceph-mgr[286565]: [progress INFO root] update: starting ev e7c1119a-8275-4732-b07a-a15ed02d37d1 (Updating mgr deployment (-1 -> 4)) Feb 20 04:44:24 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Removing daemon mgr.np0005625200.ypbkax from np0005625200.localdomain -- ports [8765] Feb 20 04:44:24 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Removing daemon mgr.np0005625200.ypbkax from np0005625200.localdomain -- ports [8765] Feb 20 04:44:24 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 04:44:24 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:44:24 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 04:44:24 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:44:24 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 20 04:44:24 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:44:24 localhost ceph-mon[292786]: Removing np0005625200.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 20 04:44:24 localhost ceph-mon[292786]: Removing np0005625200.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring Feb 20 04:44:24 localhost ceph-mon[292786]: Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:44:24 localhost ceph-mon[292786]: Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:44:24 localhost ceph-mon[292786]: Updating np0005625201.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:44:24 localhost ceph-mon[292786]: Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:44:24 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:24 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:24 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:24 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:24 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:24 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:24 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:24 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:24 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:25 localhost ceph-mon[292786]: Removing daemon mgr.np0005625200.ypbkax from np0005625200.localdomain -- ports [8765] Feb 20 04:44:26 localhost ceph-mgr[286565]: 
log_channel(cluster) log [DBG] : pgmap v19: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Feb 20 04:44:26 localhost ceph-mgr[286565]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.np0005625200.ypbkax Feb 20 04:44:26 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Removing key for mgr.np0005625200.ypbkax Feb 20 04:44:26 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.np0005625200.ypbkax"} v 0) Feb 20 04:44:26 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "mgr.np0005625200.ypbkax"} : dispatch Feb 20 04:44:26 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) Feb 20 04:44:26 localhost ceph-mgr[286565]: [progress INFO root] complete: finished ev e7c1119a-8275-4732-b07a-a15ed02d37d1 (Updating mgr deployment (-1 -> 4)) Feb 20 04:44:26 localhost ceph-mgr[286565]: [progress INFO root] Completed event e7c1119a-8275-4732-b07a-a15ed02d37d1 (Updating mgr deployment (-1 -> 4)) in 3 seconds Feb 20 04:44:26 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) Feb 20 04:44:26 localhost ceph-mgr[286565]: [progress INFO root] update: starting ev 76fcee16-b977-44f1-bcbc-76728d20d457 (Updating node-proxy deployment (+5 -> 5)) Feb 20 04:44:26 localhost ceph-mgr[286565]: [progress INFO root] complete: finished ev 76fcee16-b977-44f1-bcbc-76728d20d457 (Updating node-proxy deployment (+5 -> 5)) Feb 20 04:44:26 localhost ceph-mgr[286565]: [progress INFO root] Completed event 76fcee16-b977-44f1-bcbc-76728d20d457 (Updating node-proxy deployment (+5 -> 5)) in 0 seconds Feb 20 04:44:26 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command 
mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 20 04:44:26 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 20 04:44:27 localhost sshd[297115]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:44:27 localhost ceph-mon[292786]: mon.np0005625202@3(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:44:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 04:44:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. Feb 20 04:44:27 localhost podman[297134]: 2026-02-20 09:44:27.371219705 +0000 UTC m=+0.045524003 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true) Feb 20 04:44:27 localhost podman[297135]: 2026-02-20 09:44:27.398885643 +0000 UTC m=+0.066145467 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible) Feb 20 04:44:27 localhost podman[297135]: 2026-02-20 09:44:27.405346091 +0000 UTC m=+0.072605905 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Feb 20 04:44:27 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. Feb 20 04:44:27 localhost podman[297134]: 2026-02-20 09:44:27.43577123 +0000 UTC m=+0.110075518 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0) Feb 20 04:44:27 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. Feb 20 04:44:27 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "mgr.np0005625200.ypbkax"} : dispatch Feb 20 04:44:27 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "mgr.np0005625200.ypbkax"} : dispatch Feb 20 04:44:27 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "mgr.np0005625200.ypbkax"}]': finished Feb 20 04:44:27 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:27 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:28 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v20: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Feb 20 04:44:28 localhost openstack_network_exporter[243776]: ERROR 09:44:28 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 20 04:44:28 localhost openstack_network_exporter[243776]: Feb 20 04:44:28 localhost openstack_network_exporter[243776]: ERROR 09:44:28 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 20 04:44:28 localhost openstack_network_exporter[243776]: Feb 20 04:44:28 localhost ceph-mon[292786]: Removing key for mgr.np0005625200.ypbkax Feb 20 04:44:28 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625200.localdomain.devices.0}] v 0) Feb 20 04:44:28 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command 
mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625200.localdomain}] v 0) Feb 20 04:44:28 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:44:28 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 04:44:28 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 20 04:44:28 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 04:44:28 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 20 04:44:28 localhost ceph-mgr[286565]: [progress INFO root] update: starting ev 43e374cb-0fa3-4be8-aa42-35c2e7421a42 (Updating node-proxy deployment (+5 -> 5)) Feb 20 04:44:28 localhost ceph-mgr[286565]: [progress INFO root] complete: finished ev 43e374cb-0fa3-4be8-aa42-35c2e7421a42 (Updating node-proxy deployment (+5 -> 5)) Feb 20 04:44:28 localhost ceph-mgr[286565]: [progress INFO root] Completed event 43e374cb-0fa3-4be8-aa42-35c2e7421a42 (Updating node-proxy deployment (+5 -> 5)) in 0 seconds Feb 20 04:44:28 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 20 04:44:28 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 20 04:44:29 localhost sshd[297194]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:44:29 localhost 
ceph-mgr[286565]: [progress INFO root] Writing back 50 completed events Feb 20 04:44:29 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 20 04:44:29 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005625200 (monmap changed)... Feb 20 04:44:29 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005625200 (monmap changed)... Feb 20 04:44:29 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005625200", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Feb 20 04:44:29 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625200", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 20 04:44:29 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:44:29 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 04:44:29 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005625200 on np0005625200.localdomain Feb 20 04:44:29 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005625200 on np0005625200.localdomain Feb 20 04:44:29 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.27472 -' entity='client.admin' cmd=[{"prefix": "orch host drain", "hostname": "np0005625200.localdomain", "target": ["mon-mgr", ""]}]: dispatch Feb 20 04:44:29 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command 
mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) Feb 20 04:44:29 localhost ceph-mgr[286565]: [cephadm INFO root] Added label _no_schedule to host np0005625200.localdomain Feb 20 04:44:29 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Added label _no_schedule to host np0005625200.localdomain Feb 20 04:44:29 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) Feb 20 04:44:29 localhost ceph-mgr[286565]: [cephadm INFO root] Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005625200.localdomain Feb 20 04:44:29 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005625200.localdomain Feb 20 04:44:29 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:29 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:29 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 04:44:29 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:29 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:29 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625200", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 20 04:44:29 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625200", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 20 04:44:29 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:29 localhost ceph-mon[292786]: 
from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:30 localhost ceph-mon[292786]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0. Feb 20 04:44:30 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:44:30.049732) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Feb 20 04:44:30 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22 Feb 20 04:44:30 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580670049793, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 1288, "num_deletes": 252, "total_data_size": 2102864, "memory_usage": 2147632, "flush_reason": "Manual Compaction"} Feb 20 04:44:30 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started Feb 20 04:44:30 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580670059485, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 1203183, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12866, "largest_seqno": 14149, "table_properties": {"data_size": 1197431, "index_size": 2903, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 15673, "raw_average_key_size": 22, "raw_value_size": 1184691, "raw_average_value_size": 1690, "num_data_blocks": 125, "num_entries": 701, "num_filter_entries": 701, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": 
"leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580645, "oldest_key_time": 1771580645, "file_creation_time": 1771580670, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a5b0c71e-1a28-4ac7-8b68-08edb74002f2", "db_session_id": "54EDA52XUT1SDV7DF7Y7", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}} Feb 20 04:44:30 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 9810 microseconds, and 3260 cpu microseconds. Feb 20 04:44:30 localhost ceph-mon[292786]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Feb 20 04:44:30 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:44:30.059535) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 1203183 bytes OK Feb 20 04:44:30 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:44:30.059562) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started Feb 20 04:44:30 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:44:30.061969) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done Feb 20 04:44:30 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:44:30.061992) EVENT_LOG_v1 {"time_micros": 1771580670061986, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Feb 20 04:44:30 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:44:30.062018) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max 
bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Feb 20 04:44:30 localhost ceph-mon[292786]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 2096164, prev total WAL file size 2096164, number of live WAL files 2. Feb 20 04:44:30 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 20 04:44:30 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:44:30.063327) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130323931' seq:72057594037927935, type:22 .. '7061786F73003130353433' seq:0, type:0; will stop at (end) Feb 20 04:44:30 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00 Feb 20 04:44:30 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(1174KB)], [21(16MB)] Feb 20 04:44:30 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580670063401, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 18889764, "oldest_snapshot_seqno": -1} Feb 20 04:44:30 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v21: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Feb 20 04:44:30 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 10135 keys, 15189210 bytes, temperature: kUnknown Feb 20 04:44:30 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580670168303, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 15189210, 
"file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15132657, "index_size": 30148, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25349, "raw_key_size": 272377, "raw_average_key_size": 26, "raw_value_size": 14960563, "raw_average_value_size": 1476, "num_data_blocks": 1137, "num_entries": 10135, "num_filter_entries": 10135, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580572, "oldest_key_time": 0, "file_creation_time": 1771580670, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a5b0c71e-1a28-4ac7-8b68-08edb74002f2", "db_session_id": "54EDA52XUT1SDV7DF7Y7", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}} Feb 20 04:44:30 localhost ceph-mon[292786]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Feb 20 04:44:30 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:44:30.168780) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 15189210 bytes
Feb 20 04:44:30 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:44:30.174495) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 179.8 rd, 144.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 16.9 +0.0 blob) out(14.5 +0.0 blob), read-write-amplify(28.3) write-amplify(12.6) OK, records in: 10675, records dropped: 540 output_compression: NoCompression
Feb 20 04:44:30 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:44:30.174527) EVENT_LOG_v1 {"time_micros": 1771580670174514, "job": 10, "event": "compaction_finished", "compaction_time_micros": 105031, "compaction_time_cpu_micros": 41192, "output_level": 6, "num_output_files": 1, "total_output_size": 15189210, "num_input_records": 10675, "num_output_records": 10135, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 20 04:44:30 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 04:44:30 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580670174814, "job": 10, "event": "table_file_deletion", "file_number": 23}
Feb 20 04:44:30 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 04:44:30 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580670177405, "job": 10, "event": "table_file_deletion", "file_number": 21}
Feb 20 04:44:30 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:44:30.063219) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 04:44:30 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:44:30.177637) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 04:44:30 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:44:30.177646) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 04:44:30 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:44:30.177649) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 04:44:30 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:44:30.177652) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 04:44:30 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:44:30.177655) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 04:44:30 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625200.localdomain.devices.0}] v 0)
Feb 20 04:44:30 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625200.localdomain}] v 0)
Feb 20 04:44:30 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005625201 (monmap changed)...
Feb 20 04:44:30 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005625201 (monmap changed)...
Feb 20 04:44:30 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Feb 20 04:44:30 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 20 04:44:30 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Feb 20 04:44:30 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Feb 20 04:44:30 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 04:44:30 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 04:44:30 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005625201 on np0005625201.localdomain
Feb 20 04:44:30 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005625201 on np0005625201.localdomain
Feb 20 04:44:30 localhost ceph-mon[292786]: Reconfiguring crash.np0005625200 (monmap changed)...
Feb 20 04:44:30 localhost ceph-mon[292786]: Reconfiguring daemon crash.np0005625200 on np0005625200.localdomain
Feb 20 04:44:30 localhost ceph-mon[292786]: Added label _no_schedule to host np0005625200.localdomain
Feb 20 04:44:30 localhost ceph-mon[292786]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005625200.localdomain
Feb 20 04:44:30 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:30 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:30 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 20 04:44:31 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.27484 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "host_pattern": "np0005625200.localdomain", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 20 04:44:31 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625201.localdomain.devices.0}] v 0)
Feb 20 04:44:31 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625201.localdomain}] v 0)
Feb 20 04:44:31 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005625201.mtnyvu (monmap changed)...
Feb 20 04:44:31 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005625201.mtnyvu (monmap changed)...
Feb 20 04:44:31 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005625201.mtnyvu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Feb 20 04:44:31 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625201.mtnyvu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 04:44:31 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb 20 04:44:31 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mgr services"} : dispatch
Feb 20 04:44:31 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 04:44:31 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 04:44:31 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005625201.mtnyvu on np0005625201.localdomain
Feb 20 04:44:31 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005625201.mtnyvu on np0005625201.localdomain
Feb 20 04:44:31 localhost ceph-mon[292786]: Reconfiguring mon.np0005625201 (monmap changed)...
Feb 20 04:44:31 localhost ceph-mon[292786]: Reconfiguring daemon mon.np0005625201 on np0005625201.localdomain
Feb 20 04:44:31 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:31 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:31 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625201.mtnyvu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 04:44:31 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625201.mtnyvu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 04:44:32 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v22: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 04:44:32 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625201.localdomain.devices.0}] v 0)
Feb 20 04:44:32 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625201.localdomain}] v 0)
Feb 20 04:44:32 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005625201 (monmap changed)...
Feb 20 04:44:32 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005625201 (monmap changed)...
Feb 20 04:44:32 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005625201", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Feb 20 04:44:32 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625201", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 04:44:32 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 04:44:32 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 04:44:32 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005625201 on np0005625201.localdomain
Feb 20 04:44:32 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005625201 on np0005625201.localdomain
Feb 20 04:44:32 localhost ceph-mon[292786]: mon.np0005625202@3(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 04:44:32 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.34398 -' entity='client.admin' cmd=[{"prefix": "orch host rm", "hostname": "np0005625200.localdomain", "force": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 20 04:44:32 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb 20 04:44:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.
Feb 20 04:44:32 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix":"config-key del","key":"mgr/cephadm/host.np0005625200.localdomain"} v 0)
Feb 20 04:44:32 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005625200.localdomain"} : dispatch
Feb 20 04:44:32 localhost ceph-mgr[286565]: [cephadm INFO root] Removed host np0005625200.localdomain
Feb 20 04:44:32 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Removed host np0005625200.localdomain
Feb 20 04:44:32 localhost podman[297196]: 2026-02-20 09:44:32.445440843 +0000 UTC m=+0.082856133 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 20 04:44:32 localhost podman[297196]: 2026-02-20 09:44:32.482964576 +0000 UTC m=+0.120379806 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Feb 20 04:44:32 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully.
Feb 20 04:44:32 localhost ceph-mon[292786]: Reconfiguring mgr.np0005625201.mtnyvu (monmap changed)...
Feb 20 04:44:32 localhost ceph-mon[292786]: Reconfiguring daemon mgr.np0005625201.mtnyvu on np0005625201.localdomain
Feb 20 04:44:32 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:32 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:32 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625201", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 04:44:32 localhost ceph-mon[292786]: Reconfiguring crash.np0005625201 (monmap changed)...
Feb 20 04:44:32 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625201", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 04:44:32 localhost ceph-mon[292786]: Reconfiguring daemon crash.np0005625201 on np0005625201.localdomain
Feb 20 04:44:32 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:32 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005625200.localdomain"} : dispatch
Feb 20 04:44:32 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005625200.localdomain"} : dispatch
Feb 20 04:44:32 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005625200.localdomain"}]': finished
Feb 20 04:44:32 localhost ceph-mon[292786]: Removed host np0005625200.localdomain
Feb 20 04:44:33 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625201.localdomain.devices.0}] v 0)
Feb 20 04:44:33 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625201.localdomain}] v 0)
Feb 20 04:44:33 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005625202 (monmap changed)...
Feb 20 04:44:33 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005625202 (monmap changed)...
Feb 20 04:44:33 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005625202", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Feb 20 04:44:33 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625202", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 04:44:33 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 04:44:33 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 04:44:33 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005625202 on np0005625202.localdomain
Feb 20 04:44:33 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005625202 on np0005625202.localdomain
Feb 20 04:44:33 localhost podman[297273]:
Feb 20 04:44:33 localhost podman[297273]: 2026-02-20 09:44:33.680220754 +0000 UTC m=+0.066553218 container create f241e6db5a173871605435a2c71a10b15e3948108f78c8441bd8a48037aa22d0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_goldberg, GIT_CLEAN=True, io.buildah.version=1.42.2, io.openshift.expose-services=, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, ceph=True, RELEASE=main, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7)
Feb 20 04:44:33 localhost systemd[1]: Started libpod-conmon-f241e6db5a173871605435a2c71a10b15e3948108f78c8441bd8a48037aa22d0.scope.
Feb 20 04:44:33 localhost systemd[1]: Started libcrun container.
Feb 20 04:44:33 localhost podman[297273]: 2026-02-20 09:44:33.750417816 +0000 UTC m=+0.136750290 container init f241e6db5a173871605435a2c71a10b15e3948108f78c8441bd8a48037aa22d0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_goldberg, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, build-date=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, CEPH_POINT_RELEASE=, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, architecture=x86_64, maintainer=Guillaume Abrioux , vcs-type=git, RELEASE=main, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.buildah.version=1.42.2, description=Red Hat Ceph Storage 7)
Feb 20 04:44:33 localhost podman[297273]: 2026-02-20 09:44:33.655920054 +0000 UTC m=+0.042252538 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 04:44:33 localhost podman[297273]: 2026-02-20 09:44:33.761296069 +0000 UTC m=+0.147628523 container start f241e6db5a173871605435a2c71a10b15e3948108f78c8441bd8a48037aa22d0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_goldberg, version=7, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.buildah.version=1.42.2, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1770267347, build-date=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, architecture=x86_64, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph)
Feb 20 04:44:33 localhost podman[297273]: 2026-02-20 09:44:33.761812953 +0000 UTC m=+0.148145407 container attach f241e6db5a173871605435a2c71a10b15e3948108f78c8441bd8a48037aa22d0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_goldberg, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, name=rhceph, version=7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, com.redhat.component=rhceph-container, release=1770267347, vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, architecture=x86_64, vcs-type=git, maintainer=Guillaume Abrioux , RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, ceph=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git)
Feb 20 04:44:33 localhost suspicious_goldberg[297288]: 167 167
Feb 20 04:44:33 localhost systemd[1]: libpod-f241e6db5a173871605435a2c71a10b15e3948108f78c8441bd8a48037aa22d0.scope: Deactivated successfully.
Feb 20 04:44:33 localhost podman[297273]: 2026-02-20 09:44:33.766301059 +0000 UTC m=+0.152633543 container died f241e6db5a173871605435a2c71a10b15e3948108f78c8441bd8a48037aa22d0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_goldberg, url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, name=rhceph, version=7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, GIT_CLEAN=True)
Feb 20 04:44:33 localhost systemd[1]: var-lib-containers-storage-overlay-d8e4fe59b6e579e9a2aeb5455be483586bdf63e1ec6806341e62e159b4e29183-merged.mount: Deactivated successfully.
Feb 20 04:44:33 localhost podman[297293]: 2026-02-20 09:44:33.872354652 +0000 UTC m=+0.093503588 container remove f241e6db5a173871605435a2c71a10b15e3948108f78c8441bd8a48037aa22d0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_goldberg, release=1770267347, GIT_CLEAN=True, architecture=x86_64, io.buildah.version=1.42.2, RELEASE=main, ceph=True, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container)
Feb 20 04:44:33 localhost systemd[1]: libpod-conmon-f241e6db5a173871605435a2c71a10b15e3948108f78c8441bd8a48037aa22d0.scope: Deactivated successfully.
Feb 20 04:44:33 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0)
Feb 20 04:44:33 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0)
Feb 20 04:44:33 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring osd.2 (monmap changed)...
Feb 20 04:44:33 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring osd.2 (monmap changed)...
Feb 20 04:44:33 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Feb 20 04:44:33 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Feb 20 04:44:33 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 04:44:33 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 04:44:33 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.2 on np0005625202.localdomain
Feb 20 04:44:33 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.2 on np0005625202.localdomain
Feb 20 04:44:34 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:34 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:34 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625202", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 04:44:34 localhost ceph-mon[292786]: Reconfiguring crash.np0005625202 (monmap changed)...
Feb 20 04:44:34 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625202", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 04:44:34 localhost ceph-mon[292786]: Reconfiguring daemon crash.np0005625202 on np0005625202.localdomain
Feb 20 04:44:34 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:34 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:34 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Feb 20 04:44:34 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v23: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 04:44:34 localhost podman[297364]:
Feb 20 04:44:34 localhost podman[297364]: 2026-02-20 09:44:34.579288012 +0000 UTC m=+0.081201299 container create 91536287c3106bdd2411f7b12d59a9bd65fa8764153b2e849fc768b27cf1165e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_gauss, vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, version=7, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, release=1770267347, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.openshift.expose-services=, distribution-scope=public, description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, io.buildah.version=1.42.2, vcs-type=git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Feb 20 04:44:34 localhost systemd[1]: Started libpod-conmon-91536287c3106bdd2411f7b12d59a9bd65fa8764153b2e849fc768b27cf1165e.scope.
Feb 20 04:44:34 localhost systemd[1]: Started libcrun container.
Feb 20 04:44:34 localhost podman[297364]: 2026-02-20 09:44:34.642340489 +0000 UTC m=+0.144253796 container init 91536287c3106bdd2411f7b12d59a9bd65fa8764153b2e849fc768b27cf1165e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_gauss, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, com.redhat.component=rhceph-container, architecture=x86_64, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.buildah.version=1.42.2, maintainer=Guillaume Abrioux , release=1770267347, vcs-type=git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, RELEASE=main, version=7, distribution-scope=public, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, GIT_CLEAN=True, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 04:44:34 localhost podman[297364]: 2026-02-20 09:44:34.547843486 +0000 UTC m=+0.049756773 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 04:44:34 localhost podman[297364]: 2026-02-20 09:44:34.651516297 +0000 UTC m=+0.153429584 container start 91536287c3106bdd2411f7b12d59a9bd65fa8764153b2e849fc768b27cf1165e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_gauss, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.buildah.version=1.42.2, release=1770267347, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, distribution-scope=public, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_BRANCH=main, build-date=2026-02-09T10:25:24Z, ceph=True, name=rhceph, GIT_CLEAN=True, vcs-type=git, version=7, RELEASE=main, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git)
Feb 20 04:44:34 localhost podman[297364]: 2026-02-20 09:44:34.651800144 +0000 UTC m=+0.153713471 container attach 91536287c3106bdd2411f7b12d59a9bd65fa8764153b2e849fc768b27cf1165e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_gauss, io.openshift.expose-services=, name=rhceph, build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2,
RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=) Feb 20 04:44:34 localhost kind_gauss[297379]: 167 167 Feb 20 04:44:34 localhost systemd[1]: libpod-91536287c3106bdd2411f7b12d59a9bd65fa8764153b2e849fc768b27cf1165e.scope: Deactivated successfully. 
Feb 20 04:44:34 localhost podman[297364]: 2026-02-20 09:44:34.657590695 +0000 UTC m=+0.159503992 container died 91536287c3106bdd2411f7b12d59a9bd65fa8764153b2e849fc768b27cf1165e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_gauss, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , GIT_BRANCH=main, io.openshift.expose-services=, RELEASE=main, build-date=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.42.2, GIT_CLEAN=True, version=7, vendor=Red Hat, Inc., release=1770267347, vcs-type=git, name=rhceph, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 20 04:44:34 localhost systemd[1]: tmp-crun.ldWJXr.mount: Deactivated successfully. Feb 20 04:44:34 localhost systemd[1]: var-lib-containers-storage-overlay-ca7f96ca525890e6ebf0489cb2fa8ba3589dc1bfb4a2b75bffc20d005cc27147-merged.mount: Deactivated successfully. 
Feb 20 04:44:34 localhost podman[297385]: 2026-02-20 09:44:34.755553867 +0000 UTC m=+0.087449020 container remove 91536287c3106bdd2411f7b12d59a9bd65fa8764153b2e849fc768b27cf1165e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_gauss, build-date=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, GIT_BRANCH=main, architecture=x86_64, distribution-scope=public, release=1770267347, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, vcs-type=git, ceph=True, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers) Feb 20 04:44:34 localhost systemd[1]: libpod-conmon-91536287c3106bdd2411f7b12d59a9bd65fa8764153b2e849fc768b27cf1165e.scope: Deactivated successfully. 
Feb 20 04:44:34 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0) Feb 20 04:44:34 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0) Feb 20 04:44:34 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring osd.5 (monmap changed)... Feb 20 04:44:34 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring osd.5 (monmap changed)... Feb 20 04:44:34 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "osd.5"} v 0) Feb 20 04:44:34 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Feb 20 04:44:34 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:44:34 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 04:44:34 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.5 on np0005625202.localdomain Feb 20 04:44:34 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.5 on np0005625202.localdomain Feb 20 04:44:35 localhost ceph-mon[292786]: Reconfiguring osd.2 (monmap changed)... 
Feb 20 04:44:35 localhost ceph-mon[292786]: Reconfiguring daemon osd.2 on np0005625202.localdomain Feb 20 04:44:35 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:35 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:35 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Feb 20 04:44:35 localhost podman[297463]: Feb 20 04:44:35 localhost podman[297463]: 2026-02-20 09:44:35.566858088 +0000 UTC m=+0.078811177 container create 178044ba3b1048d08c1c7e2f4aa303538c5a0516f9c046e9d46b4da216cf163f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_lichterman, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, ceph=True, io.buildah.version=1.42.2, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, CEPH_POINT_RELEASE=, release=1770267347, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_BRANCH=main, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, version=7, maintainer=Guillaume Abrioux , name=rhceph, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git) Feb 20 04:44:35 localhost systemd[1]: Started 
libpod-conmon-178044ba3b1048d08c1c7e2f4aa303538c5a0516f9c046e9d46b4da216cf163f.scope. Feb 20 04:44:35 localhost systemd[1]: Started libcrun container. Feb 20 04:44:35 localhost podman[297463]: 2026-02-20 09:44:35.62937813 +0000 UTC m=+0.141331219 container init 178044ba3b1048d08c1c7e2f4aa303538c5a0516f9c046e9d46b4da216cf163f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_lichterman, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, description=Red Hat Ceph Storage 7, version=7, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, release=1770267347, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, GIT_BRANCH=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Feb 20 04:44:35 localhost podman[297463]: 2026-02-20 09:44:35.535100464 +0000 UTC m=+0.047053593 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 04:44:35 localhost zealous_lichterman[297478]: 167 167 Feb 20 04:44:35 localhost podman[297463]: 2026-02-20 09:44:35.638462016 +0000 UTC m=+0.150415105 container start 178044ba3b1048d08c1c7e2f4aa303538c5a0516f9c046e9d46b4da216cf163f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, 
name=zealous_lichterman, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , name=rhceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, ceph=True, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, io.buildah.version=1.42.2, vcs-type=git, RELEASE=main, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1770267347, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers) Feb 20 04:44:35 localhost systemd[1]: libpod-178044ba3b1048d08c1c7e2f4aa303538c5a0516f9c046e9d46b4da216cf163f.scope: Deactivated successfully. 
Feb 20 04:44:35 localhost podman[297463]: 2026-02-20 09:44:35.641632469 +0000 UTC m=+0.153585558 container attach 178044ba3b1048d08c1c7e2f4aa303538c5a0516f9c046e9d46b4da216cf163f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_lichterman, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, io.buildah.version=1.42.2, RELEASE=main, GIT_CLEAN=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=) Feb 20 04:44:35 localhost podman[297463]: 2026-02-20 09:44:35.645117219 +0000 UTC m=+0.157070308 container died 178044ba3b1048d08c1c7e2f4aa303538c5a0516f9c046e9d46b4da216cf163f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_lichterman, vendor=Red Hat, Inc., RELEASE=main, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, 
org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1770267347, maintainer=Guillaume Abrioux , io.openshift.expose-services=, distribution-scope=public, com.redhat.component=rhceph-container, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, version=7, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7) Feb 20 04:44:35 localhost systemd[1]: var-lib-containers-storage-overlay-b38273c0d2d40fbf986a4599cfac4508f74bac6cc46bcdec2dd0d31dcf92c217-merged.mount: Deactivated successfully. Feb 20 04:44:35 localhost podman[297484]: 2026-02-20 09:44:35.74529298 +0000 UTC m=+0.094414022 container remove 178044ba3b1048d08c1c7e2f4aa303538c5a0516f9c046e9d46b4da216cf163f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_lichterman, GIT_CLEAN=True, CEPH_POINT_RELEASE=, ceph=True, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, name=rhceph, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_BRANCH=main, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, architecture=x86_64, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, 
maintainer=Guillaume Abrioux , version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 20 04:44:35 localhost systemd[1]: libpod-conmon-178044ba3b1048d08c1c7e2f4aa303538c5a0516f9c046e9d46b4da216cf163f.scope: Deactivated successfully. Feb 20 04:44:35 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0) Feb 20 04:44:35 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0) Feb 20 04:44:35 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005625202.akhmop (monmap changed)... Feb 20 04:44:35 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005625202.akhmop (monmap changed)... 
Feb 20 04:44:35 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) Feb 20 04:44:35 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 20 04:44:35 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:44:35 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 04:44:35 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005625202.akhmop on np0005625202.localdomain Feb 20 04:44:35 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005625202.akhmop on np0005625202.localdomain Feb 20 04:44:36 localhost ceph-mon[292786]: Reconfiguring osd.5 (monmap changed)... 
Feb 20 04:44:36 localhost ceph-mon[292786]: Reconfiguring daemon osd.5 on np0005625202.localdomain Feb 20 04:44:36 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:36 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:36 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 20 04:44:36 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 20 04:44:36 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v24: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Feb 20 04:44:36 localhost podman[297559]: Feb 20 04:44:36 localhost podman[297559]: 2026-02-20 09:44:36.644675696 +0000 UTC m=+0.070342888 container create a37d87036c6825f33340c91be37fab325b867736172749a1bbb602a73e99276f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_snyder, ceph=True, RELEASE=main, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, version=7, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, io.buildah.version=1.42.2, distribution-scope=public, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , name=rhceph, build-date=2026-02-09T10:25:24Z, 
url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1770267347, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=) Feb 20 04:44:36 localhost systemd[1]: Started libpod-conmon-a37d87036c6825f33340c91be37fab325b867736172749a1bbb602a73e99276f.scope. Feb 20 04:44:36 localhost systemd[1]: Started libcrun container. Feb 20 04:44:36 localhost podman[297559]: 2026-02-20 09:44:36.707745343 +0000 UTC m=+0.133412555 container init a37d87036c6825f33340c91be37fab325b867736172749a1bbb602a73e99276f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_snyder, ceph=True, build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, io.openshift.tags=rhceph ceph, name=rhceph, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_CLEAN=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, architecture=x86_64, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main) Feb 20 04:44:36 localhost romantic_snyder[297575]: 167 167 Feb 20 04:44:36 localhost systemd[1]: libpod-a37d87036c6825f33340c91be37fab325b867736172749a1bbb602a73e99276f.scope: Deactivated successfully. Feb 20 04:44:36 localhost podman[297559]: 2026-02-20 09:44:36.718977424 +0000 UTC m=+0.144644646 container start a37d87036c6825f33340c91be37fab325b867736172749a1bbb602a73e99276f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_snyder, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, io.openshift.expose-services=, maintainer=Guillaume Abrioux , build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., name=rhceph, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, GIT_BRANCH=main, release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, distribution-scope=public, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, version=7, CEPH_POINT_RELEASE=) Feb 20 04:44:36 localhost podman[297559]: 2026-02-20 09:44:36.719262921 +0000 UTC m=+0.144930183 container attach a37d87036c6825f33340c91be37fab325b867736172749a1bbb602a73e99276f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_snyder, io.k8s.description=Red Hat Ceph 
Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.42.2, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., ceph=True, CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2026-02-09T10:25:24Z, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, distribution-scope=public, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, maintainer=Guillaume Abrioux , RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, release=1770267347, version=7, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 20 04:44:36 localhost podman[297559]: 2026-02-20 09:44:36.62137642 +0000 UTC m=+0.047043632 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 04:44:36 localhost podman[297559]: 2026-02-20 09:44:36.722002493 +0000 UTC m=+0.147669735 container died a37d87036c6825f33340c91be37fab325b867736172749a1bbb602a73e99276f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_snyder, release=1770267347, name=rhceph, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, 
org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, GIT_BRANCH=main, io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.openshift.tags=rhceph ceph) Feb 20 04:44:36 localhost systemd[1]: var-lib-containers-storage-overlay-06c8dc09e9d4888a740dfcc8ed14931c8e9a62216eeb88985319f02df201ef38-merged.mount: Deactivated successfully. Feb 20 04:44:36 localhost podman[297580]: 2026-02-20 09:44:36.815372697 +0000 UTC m=+0.082600726 container remove a37d87036c6825f33340c91be37fab325b867736172749a1bbb602a73e99276f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_snyder, ceph=True, distribution-scope=public, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, RELEASE=main, name=rhceph, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, 
CEPH_POINT_RELEASE=, GIT_CLEAN=True, maintainer=Guillaume Abrioux , io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14) Feb 20 04:44:36 localhost systemd[1]: libpod-conmon-a37d87036c6825f33340c91be37fab325b867736172749a1bbb602a73e99276f.scope: Deactivated successfully. Feb 20 04:44:36 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0) Feb 20 04:44:36 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0) Feb 20 04:44:36 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005625202.arwxwo (monmap changed)... Feb 20 04:44:36 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005625202.arwxwo (monmap changed)... 
Feb 20 04:44:36 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005625202.arwxwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Feb 20 04:44:36 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625202.arwxwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 20 04:44:36 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "mgr services"} v 0) Feb 20 04:44:36 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mgr services"} : dispatch Feb 20 04:44:36 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:44:36 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 04:44:36 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005625202.arwxwo on np0005625202.localdomain Feb 20 04:44:36 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005625202.arwxwo on np0005625202.localdomain Feb 20 04:44:37 localhost ceph-mon[292786]: Reconfiguring mds.mds.np0005625202.akhmop (monmap changed)... 
Feb 20 04:44:37 localhost ceph-mon[292786]: Reconfiguring daemon mds.mds.np0005625202.akhmop on np0005625202.localdomain
Feb 20 04:44:37 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:37 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:37 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625202.arwxwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 04:44:37 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625202.arwxwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 04:44:37 localhost ceph-mon[292786]: mon.np0005625202@3(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 04:44:37 localhost podman[297651]:
Feb 20 04:44:37 localhost podman[297651]: 2026-02-20 09:44:37.52203805 +0000 UTC m=+0.077479552 container create 1ba91ec180b26d6c832f3d318a856b9a07d5c4bdeb1d0ecfb673b17606bbc460 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_blackburn, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., com.redhat.component=rhceph-container, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, CEPH_POINT_RELEASE=, RELEASE=main, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, name=rhceph, ceph=True)
Feb 20 04:44:37 localhost systemd[1]: Started libpod-conmon-1ba91ec180b26d6c832f3d318a856b9a07d5c4bdeb1d0ecfb673b17606bbc460.scope.
Feb 20 04:44:37 localhost systemd[1]: Started libcrun container.
Feb 20 04:44:37 localhost podman[297651]: 2026-02-20 09:44:37.590198989 +0000 UTC m=+0.145640491 container init 1ba91ec180b26d6c832f3d318a856b9a07d5c4bdeb1d0ecfb673b17606bbc460 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_blackburn, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.42.2, GIT_CLEAN=True, CEPH_POINT_RELEASE=, ceph=True, com.redhat.component=rhceph-container, RELEASE=main, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2026-02-09T10:25:24Z, distribution-scope=public, maintainer=Guillaume Abrioux , release=1770267347, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, org.opencontainers.image.created=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main)
Feb 20 04:44:37 localhost podman[297651]: 2026-02-20 09:44:37.49080367 +0000 UTC m=+0.046245192 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 04:44:37 localhost distracted_blackburn[297666]: 167 167
Feb 20 04:44:37 localhost systemd[1]: libpod-1ba91ec180b26d6c832f3d318a856b9a07d5c4bdeb1d0ecfb673b17606bbc460.scope: Deactivated successfully.
Feb 20 04:44:37 localhost podman[297651]: 2026-02-20 09:44:37.599629334 +0000 UTC m=+0.155070836 container start 1ba91ec180b26d6c832f3d318a856b9a07d5c4bdeb1d0ecfb673b17606bbc460 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_blackburn, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, version=7, io.buildah.version=1.42.2, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, io.openshift.expose-services=, GIT_CLEAN=True, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, release=1770267347, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Feb 20 04:44:37 localhost podman[297651]: 2026-02-20 09:44:37.600034205 +0000 UTC m=+0.155475757 container attach 1ba91ec180b26d6c832f3d318a856b9a07d5c4bdeb1d0ecfb673b17606bbc460 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_blackburn, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, io.buildah.version=1.42.2, name=rhceph, release=1770267347, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, version=7, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main)
Feb 20 04:44:37 localhost podman[297651]: 2026-02-20 09:44:37.602957931 +0000 UTC m=+0.158399463 container died 1ba91ec180b26d6c832f3d318a856b9a07d5c4bdeb1d0ecfb673b17606bbc460 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_blackburn, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, version=7, org.opencontainers.image.created=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, name=rhceph, io.openshift.expose-services=, distribution-scope=public, ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, release=1770267347, io.buildah.version=1.42.2)
Feb 20 04:44:37 localhost systemd[1]: var-lib-containers-storage-overlay-9e24d947659ae8fe3e1df56d900f20c67a5b7daee7c55ef94c5a9004200863f5-merged.mount: Deactivated successfully.
Feb 20 04:44:37 localhost podman[297671]: 2026-02-20 09:44:37.697970897 +0000 UTC m=+0.089395361 container remove 1ba91ec180b26d6c832f3d318a856b9a07d5c4bdeb1d0ecfb673b17606bbc460 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_blackburn, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, RELEASE=main, GIT_CLEAN=True, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, distribution-scope=public, architecture=x86_64, ceph=True, io.buildah.version=1.42.2, description=Red Hat Ceph Storage 7, release=1770267347)
Feb 20 04:44:37 localhost systemd[1]: libpod-conmon-1ba91ec180b26d6c832f3d318a856b9a07d5c4bdeb1d0ecfb673b17606bbc460.scope: Deactivated successfully.
Feb 20 04:44:37 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0)
Feb 20 04:44:37 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0)
Feb 20 04:44:37 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005625202 (monmap changed)...
Feb 20 04:44:37 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005625202 (monmap changed)...
Feb 20 04:44:37 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Feb 20 04:44:37 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 20 04:44:37 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Feb 20 04:44:37 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Feb 20 04:44:37 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 04:44:37 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 04:44:37 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005625202 on np0005625202.localdomain
Feb 20 04:44:37 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005625202 on np0005625202.localdomain
Feb 20 04:44:38 localhost ceph-mon[292786]: Reconfiguring mgr.np0005625202.arwxwo (monmap changed)...
Feb 20 04:44:38 localhost ceph-mon[292786]: Reconfiguring daemon mgr.np0005625202.arwxwo on np0005625202.localdomain
Feb 20 04:44:38 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:38 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:38 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 20 04:44:38 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v25: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 04:44:38 localhost podman[297740]:
Feb 20 04:44:38 localhost podman[297740]: 2026-02-20 09:44:38.378560574 +0000 UTC m=+0.077490643 container create 07aebd4154bf44f2f13ae5dddf557ba9a3039bb34d45a01d558a35843b6474a6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_turing, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, architecture=x86_64, build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , ceph=True, vcs-type=git, RELEASE=main, io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., GIT_BRANCH=main, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, version=7, description=Red Hat Ceph Storage 7, release=1770267347, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph)
Feb 20 04:44:38 localhost sshd[297754]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:44:38 localhost systemd[1]: Started libpod-conmon-07aebd4154bf44f2f13ae5dddf557ba9a3039bb34d45a01d558a35843b6474a6.scope.
Feb 20 04:44:38 localhost systemd[1]: Started libcrun container.
Feb 20 04:44:38 localhost podman[297740]: 2026-02-20 09:44:38.442115713 +0000 UTC m=+0.141045792 container init 07aebd4154bf44f2f13ae5dddf557ba9a3039bb34d45a01d558a35843b6474a6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_turing, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, name=rhceph, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, version=7, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, release=1770267347, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, ceph=True, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, build-date=2026-02-09T10:25:24Z, io.openshift.expose-services=, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container)
Feb 20 04:44:38 localhost podman[297740]: 2026-02-20 09:44:38.347018135 +0000 UTC m=+0.045948234 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 04:44:38 localhost podman[297740]: 2026-02-20 09:44:38.451490497 +0000 UTC m=+0.150420576 container start 07aebd4154bf44f2f13ae5dddf557ba9a3039bb34d45a01d558a35843b6474a6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_turing, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, name=rhceph, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.openshift.tags=rhceph ceph, vcs-type=git, release=1770267347, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, GIT_BRANCH=main, io.buildah.version=1.42.2, org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, version=7, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Feb 20 04:44:38 localhost clever_turing[297757]: 167 167
Feb 20 04:44:38 localhost podman[297740]: 2026-02-20 09:44:38.453452517 +0000 UTC m=+0.152382636 container attach 07aebd4154bf44f2f13ae5dddf557ba9a3039bb34d45a01d558a35843b6474a6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_turing, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.42.2, architecture=x86_64, CEPH_POINT_RELEASE=, release=1770267347, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-09T10:25:24Z, name=rhceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, GIT_BRANCH=main, com.redhat.component=rhceph-container)
Feb 20 04:44:38 localhost systemd[1]: libpod-07aebd4154bf44f2f13ae5dddf557ba9a3039bb34d45a01d558a35843b6474a6.scope: Deactivated successfully.
Feb 20 04:44:38 localhost podman[297740]: 2026-02-20 09:44:38.456505006 +0000 UTC m=+0.155435125 container died 07aebd4154bf44f2f13ae5dddf557ba9a3039bb34d45a01d558a35843b6474a6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_turing, io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, vcs-type=git, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, com.redhat.component=rhceph-container, architecture=x86_64, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, version=7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, GIT_CLEAN=True)
Feb 20 04:44:38 localhost podman[297762]: 2026-02-20 09:44:38.551855842 +0000 UTC m=+0.084869604 container remove 07aebd4154bf44f2f13ae5dddf557ba9a3039bb34d45a01d558a35843b6474a6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_turing, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, ceph=True, io.buildah.version=1.42.2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, vendor=Red Hat, Inc., release=1770267347, vcs-type=git, CEPH_POINT_RELEASE=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, io.openshift.tags=rhceph ceph, build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public)
Feb 20 04:44:38 localhost systemd[1]: libpod-conmon-07aebd4154bf44f2f13ae5dddf557ba9a3039bb34d45a01d558a35843b6474a6.scope: Deactivated successfully.
Feb 20 04:44:38 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0)
Feb 20 04:44:38 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0)
Feb 20 04:44:38 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005625203 (monmap changed)...
Feb 20 04:44:38 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005625203 (monmap changed)...
Feb 20 04:44:38 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005625203", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Feb 20 04:44:38 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625203", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 04:44:38 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 04:44:38 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 04:44:38 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005625203 on np0005625203.localdomain
Feb 20 04:44:38 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005625203 on np0005625203.localdomain
Feb 20 04:44:38 localhost systemd[1]: var-lib-containers-storage-overlay-ca05601b4f3fbb6b26c4bc5614f14a9b54274c9a32d2577266efe7ebe453a292-merged.mount: Deactivated successfully.
Feb 20 04:44:39 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0)
Feb 20 04:44:39 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0)
Feb 20 04:44:39 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Feb 20 04:44:39 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Feb 20 04:44:39 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Feb 20 04:44:39 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Feb 20 04:44:39 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 04:44:39 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 04:44:39 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on np0005625203.localdomain
Feb 20 04:44:39 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on np0005625203.localdomain
Feb 20 04:44:39 localhost ceph-mon[292786]: Reconfiguring mon.np0005625202 (monmap changed)...
Feb 20 04:44:39 localhost ceph-mon[292786]: Reconfiguring daemon mon.np0005625202 on np0005625202.localdomain
Feb 20 04:44:39 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:39 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:39 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625203", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 04:44:39 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625203", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 04:44:39 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:39 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:39 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Feb 20 04:44:40 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v26: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 04:44:40 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0)
Feb 20 04:44:40 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0)
Feb 20 04:44:40 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring osd.4 (monmap changed)...
Feb 20 04:44:40 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring osd.4 (monmap changed)...
Feb 20 04:44:40 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "osd.4"} v 0)
Feb 20 04:44:40 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Feb 20 04:44:40 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 04:44:40 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 04:44:40 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.4 on np0005625203.localdomain
Feb 20 04:44:40 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.4 on np0005625203.localdomain
Feb 20 04:44:40 localhost ceph-mon[292786]: Reconfiguring crash.np0005625203 (monmap changed)...
Feb 20 04:44:40 localhost ceph-mon[292786]: Reconfiguring daemon crash.np0005625203 on np0005625203.localdomain
Feb 20 04:44:40 localhost ceph-mon[292786]: Reconfiguring osd.1 (monmap changed)...
Feb 20 04:44:40 localhost ceph-mon[292786]: Reconfiguring daemon osd.1 on np0005625203.localdomain
Feb 20 04:44:40 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:40 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:40 localhost ceph-mon[292786]: Reconfiguring osd.4 (monmap changed)...
Feb 20 04:44:40 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Feb 20 04:44:40 localhost ceph-mon[292786]: Reconfiguring daemon osd.4 on np0005625203.localdomain
Feb 20 04:44:40 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.27496 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 04:44:40 localhost ceph-mgr[286565]: [cephadm INFO root] Saving service mon spec with placement label:mon
Feb 20 04:44:40 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Saving service mon spec with placement label:mon
Feb 20 04:44:40 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Feb 20 04:44:41 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0)
Feb 20 04:44:41 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0)
Feb 20 04:44:41 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 04:44:41 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 04:44:41 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 20 04:44:41 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 04:44:41 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 20 04:44:41 localhost ceph-mgr[286565]: [progress INFO root] update: starting ev 7e28afb5-016c-46a8-b06e-3da389b43663 (Updating node-proxy deployment (+4 -> 4))
Feb 20 04:44:41 localhost ceph-mgr[286565]: [progress INFO root] complete: finished ev 7e28afb5-016c-46a8-b06e-3da389b43663 (Updating node-proxy deployment (+4 -> 4))
Feb 20 04:44:41 localhost ceph-mgr[286565]: [progress INFO root] Completed event 7e28afb5-016c-46a8-b06e-3da389b43663 (Updating node-proxy deployment (+4 -> 4)) in 0 seconds
Feb 20 04:44:41 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 20 04:44:41 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 04:44:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.
Feb 20 04:44:41 localhost ceph-mon[292786]: Saving service mon spec with placement label:mon
Feb 20 04:44:41 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:41 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:41 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:41 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 04:44:41 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:41 localhost podman[297796]: 2026-02-20 09:44:41.904456888 +0000 UTC m=+0.077294757 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 20 04:44:41 localhost podman[297796]: 2026-02-20 09:44:41.94386403 +0000 UTC m=+0.116701869 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 20 04:44:41 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully.
Feb 20 04:44:42 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.27504 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005625203", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 20 04:44:42 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v27: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 04:44:42 localhost ceph-mon[292786]: mon.np0005625202@3(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 04:44:43 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.27508 -' entity='client.admin' cmd=[{"prefix": "orch daemon rm", "names": ["mon.np0005625203"], "force": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 20 04:44:43 localhost ceph-mgr[286565]: [cephadm INFO root] Remove daemons mon.np0005625203
Feb 20 04:44:43 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Remove daemons mon.np0005625203
Feb 20 04:44:43 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "quorum_status"} v 0)
Feb 20 04:44:43 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "quorum_status"} : dispatch
Feb 20 04:44:43 localhost ceph-mgr[286565]: [cephadm INFO cephadm.services.cephadmservice] Safe to remove mon.np0005625203: new quorum should be ['np0005625201', 'np0005625204', 'np0005625202'] (from ['np0005625201', 'np0005625204', 'np0005625202'])
Feb 20 04:44:43 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Safe to remove mon.np0005625203: new quorum should be ['np0005625201', 'np0005625204', 'np0005625202'] (from ['np0005625201', 'np0005625204', 'np0005625202'])
Feb 20 04:44:43 localhost ceph-mgr[286565]: [cephadm INFO cephadm.services.cephadmservice] Removing monitor np0005625203 from monmap...
Feb 20 04:44:43 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Removing monitor np0005625203 from monmap...
Feb 20 04:44:43 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e10 handle_command mon_command({"prefix": "mon rm", "name": "np0005625203"} v 0)
Feb 20 04:44:43 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon rm", "name": "np0005625203"} : dispatch
Feb 20 04:44:43 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Removing daemon mon.np0005625203 from np0005625203.localdomain -- ports []
Feb 20 04:44:43 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Removing daemon mon.np0005625203 from np0005625203.localdomain -- ports []
Feb 20 04:44:43 localhost ceph-mon[292786]: mon.np0005625202@3(peon) e11 my rank is now 2 (was 3)
Feb 20 04:44:43 localhost ceph-mgr[286565]: client.0 ms_handle_reset on v2:172.18.0.103:3300/0
Feb 20 04:44:43 localhost ceph-mgr[286565]: client.0 ms_handle_reset on v2:172.18.0.103:3300/0
Feb 20 04:44:43 localhost ceph-mon[292786]: log_channel(cluster) log [INF] : mon.np0005625202 calling monitor election
Feb 20 04:44:43 localhost ceph-mon[292786]: paxos.2).electionLogic(40) init, last seen epoch 40
Feb 20 04:44:43 localhost ceph-mon[292786]: mon.np0005625202@2(electing) e11 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Feb 20 04:44:43 localhost ceph-mon[292786]: mon.np0005625202@2(electing) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625201"} v 0)
Feb 20 04:44:43 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625201"} : dispatch
Feb 20 04:44:43 localhost ceph-mon[292786]: mon.np0005625202@2(electing) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625202"} v 0)
Feb 20 04:44:43 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625202"} : dispatch
Feb 20 04:44:43 localhost ceph-mon[292786]: mon.np0005625202@2(electing) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625204"} v 0)
Feb 20 04:44:43 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625204"} : dispatch
Feb 20 04:44:43 localhost ceph-mon[292786]: mon.np0005625202@2(electing) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 04:44:43 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 04:44:44 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v28: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 04:44:44 localhost ceph-mgr[286565]: [progress INFO root] Writing back 50 completed events
Feb 20 04:44:44 localhost ceph-mon[292786]: mon.np0005625202@2(electing) e11 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb 20 04:44:45 localhost ceph-mon[292786]: mon.np0005625202@2(electing) e11 handle_auth_request failed to assign global_id
Feb 20 04:44:46 localhost ceph-mon[292786]: mon.np0005625202@2(electing) e11 handle_auth_request failed to assign global_id
Feb 20 04:44:46 localhost podman[241347]: time="2026-02-20T09:44:46Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 04:44:46 localhost podman[241347]: @ - - [20/Feb/2026:09:44:46 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157716 "" "Go-http-client/1.1"
Feb 20 04:44:46 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v29: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 04:44:46 localhost podman[241347]: @ - - [20/Feb/2026:09:44:46 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18756 "" "Go-http-client/1.1"
Feb 20 04:44:46 localhost ceph-mon[292786]: mon.np0005625202@2(electing) e11 handle_auth_request failed to assign global_id
Feb 20 04:44:47 localhost ceph-mon[292786]: mon.np0005625202@2(electing) e11 handle_auth_request failed to assign global_id
Feb 20 04:44:47 localhost ceph-mon[292786]: mon.np0005625202@2(electing) e11 handle_auth_request failed to assign global_id
Feb 20 04:44:47 localhost ceph-mon[292786]: mon.np0005625202@2(electing) e11 handle_auth_request failed to assign global_id
Feb 20 04:44:48 localhost ceph-mon[292786]: mon.np0005625202@2(electing) e11 handle_auth_request failed to assign global_id
Feb 20 04:44:48 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v30: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 04:44:48 localhost ceph-mon[292786]: paxos.2).electionLogic(41) init, last seen epoch 41, mid-election, bumping
Feb 20 04:44:48 localhost ceph-mon[292786]: mon.np0005625202@2(electing) e11 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Feb 20 04:44:48 localhost ceph-mon[292786]: mon.np0005625202@2(electing) e11 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Feb 20 04:44:48 localhost ceph-mon[292786]: mon.np0005625202@2(electing) e11 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Feb 20 04:44:48 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Feb 20 04:44:48 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 20 04:44:48 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 04:44:48 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Updating np0005625201.localdomain:/etc/ceph/ceph.conf
Feb 20 04:44:48 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Updating np0005625201.localdomain:/etc/ceph/ceph.conf
Feb 20 04:44:48 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Updating np0005625202.localdomain:/etc/ceph/ceph.conf
Feb 20 04:44:48 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Updating np0005625203.localdomain:/etc/ceph/ceph.conf
Feb 20 04:44:48 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Updating np0005625202.localdomain:/etc/ceph/ceph.conf
Feb 20 04:44:48 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Updating np0005625204.localdomain:/etc/ceph/ceph.conf
Feb 20 04:44:48 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Updating np0005625203.localdomain:/etc/ceph/ceph.conf
Feb 20 04:44:48 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Updating np0005625204.localdomain:/etc/ceph/ceph.conf
Feb 20 04:44:48 localhost nova_compute[280804]: 2026-02-20 09:44:48.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 20 04:44:48 localhost nova_compute[280804]: 2026-02-20 09:44:48.537 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 20 04:44:48 localhost nova_compute[280804]: 2026-02-20 09:44:48.538 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 20 04:44:48 localhost nova_compute[280804]: 2026-02-20 09:44:48.538 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 20 04:44:48 localhost nova_compute[280804]: 2026-02-20 09:44:48.538 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Auditing locally available compute resources for np0005625202.localdomain (node: np0005625202.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 20 04:44:48 localhost nova_compute[280804]: 2026-02-20 09:44:48.539 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 20 04:44:48 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 04:44:48 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/1442551253' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 04:44:49 localhost nova_compute[280804]: 2026-02-20 09:44:49.018 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 20 04:44:49 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 04:44:49 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 04:44:49 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Updating np0005625201.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 04:44:49 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Updating np0005625201.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 04:44:49 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 04:44:49 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 04:44:49 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 04:44:49 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 04:44:49 localhost nova_compute[280804]: 2026-02-20 09:44:49.184 280808 WARNING nova.virt.libvirt.driver [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 20 04:44:49 localhost nova_compute[280804]: 2026-02-20 09:44:49.185 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Hypervisor/Node resource view: name=np0005625202.localdomain free_ram=11990MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 20 04:44:49 localhost nova_compute[280804]: 2026-02-20 09:44:49.186 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 20 04:44:49 localhost nova_compute[280804]: 2026-02-20 09:44:49.186 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 20 04:44:49 localhost nova_compute[280804]: 2026-02-20 09:44:49.299 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 20 04:44:49 localhost nova_compute[280804]: 2026-02-20 09:44:49.300 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Final resource view: name=np0005625202.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 20 04:44:49 localhost nova_compute[280804]: 2026-02-20 09:44:49.352 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 20 04:44:49 localhost ceph-mon[292786]: Remove daemons mon.np0005625203
Feb 20 04:44:49 localhost ceph-mon[292786]: Safe to remove mon.np0005625203: new quorum should be ['np0005625201', 'np0005625204', 'np0005625202'] (from ['np0005625201', 'np0005625204', 'np0005625202'])
Feb 20 04:44:49 localhost ceph-mon[292786]: Removing monitor np0005625203 from monmap...
Feb 20 04:44:49 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon rm", "name": "np0005625203"} : dispatch
Feb 20 04:44:49 localhost ceph-mon[292786]: Removing daemon mon.np0005625203 from np0005625203.localdomain -- ports []
Feb 20 04:44:49 localhost ceph-mon[292786]: mon.np0005625204 calling monitor election
Feb 20 04:44:49 localhost ceph-mon[292786]: mon.np0005625202 calling monitor election
Feb 20 04:44:49 localhost ceph-mon[292786]: mon.np0005625204 calling monitor election
Feb 20 04:44:49 localhost ceph-mon[292786]: Health check failed: 1/3 mons down, quorum np0005625201,np0005625204 (MON_DOWN)
Feb 20 04:44:49 localhost ceph-mon[292786]: overall HEALTH_OK
Feb 20 04:44:49 localhost ceph-mon[292786]: mon.np0005625201 calling monitor election
Feb 20 04:44:49 localhost ceph-mon[292786]: mon.np0005625201 is new leader, mons np0005625201,np0005625204,np0005625202 in quorum (ranks 0,1,2)
Feb 20 04:44:49 localhost ceph-mon[292786]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum np0005625201,np0005625204)
Feb 20 04:44:49 localhost ceph-mon[292786]: Cluster is now healthy
Feb 20 04:44:49 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 04:44:49 localhost ceph-mon[292786]: Updating np0005625201.localdomain:/etc/ceph/ceph.conf
Feb 20 04:44:49 localhost ceph-mon[292786]: Updating np0005625202.localdomain:/etc/ceph/ceph.conf
Feb 20 04:44:49 localhost ceph-mon[292786]: Updating np0005625203.localdomain:/etc/ceph/ceph.conf
Feb 20 04:44:49 localhost ceph-mon[292786]: Updating np0005625204.localdomain:/etc/ceph/ceph.conf
Feb 20 04:44:49 localhost ceph-mon[292786]: overall HEALTH_OK
Feb 20 04:44:49 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:49 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0)
Feb 20 04:44:49 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0)
Feb 20 04:44:49 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625201.localdomain.devices.0}] v 0)
Feb 20 04:44:49 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 04:44:49 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/4195120872' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 04:44:49 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0)
Feb 20 04:44:49 localhost nova_compute[280804]: 2026-02-20 09:44:49.808 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 20 04:44:49 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0)
Feb 20 04:44:49 localhost nova_compute[280804]: 2026-02-20 09:44:49.814 280808 DEBUG nova.compute.provider_tree [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 20 04:44:49 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625201.localdomain}] v 0)
Feb 20 04:44:49 localhost nova_compute[280804]: 2026-02-20 09:44:49.833 280808 DEBUG nova.scheduler.client.report [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Inventory has not changed for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 20 04:44:49 localhost nova_compute[280804]: 2026-02-20 09:44:49.836 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Compute_service record updated for np0005625202.localdomain:np0005625202.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 20 04:44:49 localhost nova_compute[280804]: 2026-02-20 09:44:49.836 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 20 04:44:49 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0)
Feb 20 04:44:49 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0)
Feb 20 04:44:49 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 20 04:44:49 localhost ceph-mgr[286565]: [progress INFO root] update: starting ev 41171d5b-b4d9-4f97-be7f-706d4172e5cb (Updating node-proxy deployment (+4 -> 4))
Feb 20 04:44:49 localhost ceph-mgr[286565]: [progress INFO root] complete: finished ev 41171d5b-b4d9-4f97-be7f-706d4172e5cb (Updating node-proxy deployment (+4 -> 4))
Feb 20 04:44:49 localhost ceph-mgr[286565]: [progress INFO root] Completed event 41171d5b-b4d9-4f97-be7f-706d4172e5cb (Updating node-proxy deployment (+4 -> 4)) in 0 seconds
Feb 20 04:44:49 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 20 04:44:49 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 04:44:50 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v31: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 04:44:50 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005625201.mtnyvu (monmap changed)...
Feb 20 04:44:50 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005625201.mtnyvu (monmap changed)...
Feb 20 04:44:50 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005625201.mtnyvu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Feb 20 04:44:50 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625201.mtnyvu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 04:44:50 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb 20 04:44:50 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mgr services"} : dispatch
Feb 20 04:44:50 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 04:44:50 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 04:44:50 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005625201.mtnyvu on np0005625201.localdomain
Feb 20 04:44:50 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005625201.mtnyvu on np0005625201.localdomain
Feb 20 04:44:50 localhost ceph-mon[292786]: Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 04:44:50 localhost ceph-mon[292786]: Updating np0005625201.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 04:44:50 localhost ceph-mon[292786]: Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 04:44:50 localhost ceph-mon[292786]: Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 04:44:50 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:50 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:50 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:50 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:50 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:50 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:50 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:50 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:50 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:50 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625201.mtnyvu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 04:44:50 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625201.mtnyvu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 04:44:50 localhost nova_compute[280804]: 2026-02-20 09:44:50.840 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 20 04:44:50 localhost nova_compute[280804]: 2026-02-20 09:44:50.842 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 20 04:44:50 localhost nova_compute[280804]: 2026-02-20 09:44:50.842 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 20 04:44:50 localhost nova_compute[280804]: 2026-02-20 09:44:50.862 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 20 04:44:50 localhost nova_compute[280804]: 2026-02-20 09:44:50.862 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 20 04:44:50 localhost nova_compute[280804]: 2026-02-20 09:44:50.863 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 20 04:44:50 localhost nova_compute[280804]: 2026-02-20 09:44:50.863 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 20 04:44:50 localhost nova_compute[280804]: 2026-02-20 09:44:50.864 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 20 04:44:51 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625201.localdomain.devices.0}] v 0)
Feb 20 04:44:51 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625201.localdomain}] v 0)
Feb 20 04:44:51 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005625201 (monmap changed)...
Feb 20 04:44:51 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005625201 (monmap changed)... Feb 20 04:44:51 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005625201", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Feb 20 04:44:51 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625201", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 20 04:44:51 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:44:51 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 04:44:51 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005625201 on np0005625201.localdomain Feb 20 04:44:51 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005625201 on np0005625201.localdomain Feb 20 04:44:51 localhost ceph-mon[292786]: Reconfiguring mgr.np0005625201.mtnyvu (monmap changed)... 
Feb 20 04:44:51 localhost ceph-mon[292786]: Reconfiguring daemon mgr.np0005625201.mtnyvu on np0005625201.localdomain Feb 20 04:44:51 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:51 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:51 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625201", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 20 04:44:51 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625201", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 20 04:44:51 localhost nova_compute[280804]: 2026-02-20 09:44:51.511 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:44:51 localhost nova_compute[280804]: 2026-02-20 09:44:51.512 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:44:52 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625201.localdomain.devices.0}] v 0) Feb 20 04:44:52 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625201.localdomain}] v 0) Feb 20 04:44:52 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005625202 (monmap changed)... 
Feb 20 04:44:52 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005625202 (monmap changed)... Feb 20 04:44:52 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005625202", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Feb 20 04:44:52 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625202", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 20 04:44:52 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:44:52 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 04:44:52 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005625202 on np0005625202.localdomain Feb 20 04:44:52 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005625202 on np0005625202.localdomain Feb 20 04:44:52 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v32: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Feb 20 04:44:52 localhost ceph-mon[292786]: mon.np0005625202@2(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:44:52 localhost ceph-mon[292786]: Reconfiguring crash.np0005625201 (monmap changed)... 
Feb 20 04:44:52 localhost ceph-mon[292786]: Reconfiguring daemon crash.np0005625201 on np0005625201.localdomain Feb 20 04:44:52 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:52 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:52 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625202", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 20 04:44:52 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625202", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 20 04:44:52 localhost nova_compute[280804]: 2026-02-20 09:44:52.512 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:44:52 localhost podman[298254]: Feb 20 04:44:52 localhost podman[298254]: 2026-02-20 09:44:52.743422204 +0000 UTC m=+0.070907711 container create 097523b1dd1bf3bf6ba643c0a7e7d4cfc650f38b747b48d1b1558d75eff40b44 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_banzai, ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, 
io.openshift.expose-services=, release=1770267347, GIT_BRANCH=main, distribution-scope=public, vcs-type=git, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, version=7, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, RELEASE=main) Feb 20 04:44:52 localhost systemd[1]: Started libpod-conmon-097523b1dd1bf3bf6ba643c0a7e7d4cfc650f38b747b48d1b1558d75eff40b44.scope. Feb 20 04:44:52 localhost systemd[1]: Started libcrun container. Feb 20 04:44:52 localhost podman[298254]: 2026-02-20 09:44:52.799754676 +0000 UTC m=+0.127240193 container init 097523b1dd1bf3bf6ba643c0a7e7d4cfc650f38b747b48d1b1558d75eff40b44 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_banzai, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, vcs-type=git, com.redhat.component=rhceph-container, release=1770267347, io.buildah.version=1.42.2, description=Red Hat Ceph Storage 7, 
org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, ceph=True, name=rhceph, architecture=x86_64, distribution-scope=public, CEPH_POINT_RELEASE=, version=7, GIT_BRANCH=main) Feb 20 04:44:52 localhost systemd[1]: tmp-crun.fBaZ4v.mount: Deactivated successfully. Feb 20 04:44:52 localhost podman[298254]: 2026-02-20 09:44:52.810800343 +0000 UTC m=+0.138285820 container start 097523b1dd1bf3bf6ba643c0a7e7d4cfc650f38b747b48d1b1558d75eff40b44 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_banzai, RELEASE=main, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, name=rhceph, io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347) Feb 20 04:44:52 localhost podman[298254]: 2026-02-20 09:44:52.811013298 +0000 UTC m=+0.138498845 container attach 097523b1dd1bf3bf6ba643c0a7e7d4cfc650f38b747b48d1b1558d75eff40b44 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_banzai, description=Red Hat Ceph Storage 7, distribution-scope=public, 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, io.buildah.version=1.42.2, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., ceph=True, RELEASE=main, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , release=1770267347, name=rhceph) Feb 20 04:44:52 localhost podman[298254]: 2026-02-20 09:44:52.712084181 +0000 UTC m=+0.039569698 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 04:44:52 localhost flamboyant_banzai[298269]: 167 167 Feb 20 04:44:52 localhost systemd[1]: libpod-097523b1dd1bf3bf6ba643c0a7e7d4cfc650f38b747b48d1b1558d75eff40b44.scope: Deactivated successfully. 
Feb 20 04:44:52 localhost podman[298254]: 2026-02-20 09:44:52.813330979 +0000 UTC m=+0.140816496 container died 097523b1dd1bf3bf6ba643c0a7e7d4cfc650f38b747b48d1b1558d75eff40b44 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_banzai, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, name=rhceph, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., ceph=True, release=1770267347, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, vcs-type=git, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.42.2, GIT_CLEAN=True, version=7, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z) Feb 20 04:44:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c. Feb 20 04:44:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217. 
Feb 20 04:44:52 localhost podman[298274]: 2026-02-20 09:44:52.908566501 +0000 UTC m=+0.090139701 container remove 097523b1dd1bf3bf6ba643c0a7e7d4cfc650f38b747b48d1b1558d75eff40b44 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_banzai, CEPH_POINT_RELEASE=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, com.redhat.component=rhceph-container, distribution-scope=public, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, ceph=True, GIT_BRANCH=main, release=1770267347, architecture=x86_64, io.openshift.tags=rhceph ceph, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc.) Feb 20 04:44:52 localhost systemd[1]: libpod-conmon-097523b1dd1bf3bf6ba643c0a7e7d4cfc650f38b747b48d1b1558d75eff40b44.scope: Deactivated successfully. 
Feb 20 04:44:52 localhost sshd[298311]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:44:52 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0) Feb 20 04:44:52 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0) Feb 20 04:44:53 localhost podman[298287]: 2026-02-20 09:44:53.004351487 +0000 UTC m=+0.111523265 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_compute) Feb 20 04:44:53 localhost podman[298286]: 2026-02-20 09:44:52.98326618 +0000 UTC m=+0.090325905 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, container_name=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9/ubi-minimal, version=9.7, com.redhat.component=ubi9-minimal-container, build-date=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 
'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., vcs-type=git, release=1770267347, io.buildah.version=1.33.7) Feb 20 04:44:53 localhost podman[298286]: 2026-02-20 09:44:53.062664711 +0000 UTC m=+0.169724376 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, release=1770267347, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-05T04:57:10Z, architecture=x86_64, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.7, name=ubi9/ubi-minimal, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, distribution-scope=public, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 
'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.33.7, managed_by=edpm_ansible, vcs-type=git, org.opencontainers.image.created=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Feb 20 04:44:53 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. 
Feb 20 04:44:53 localhost podman[298287]: 2026-02-20 09:44:53.086278554 +0000 UTC m=+0.193450372 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Feb 
20 04:44:53 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully. Feb 20 04:44:53 localhost ceph-mgr[286565]: [progress INFO root] Writing back 50 completed events Feb 20 04:44:53 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 20 04:44:53 localhost nova_compute[280804]: 2026-02-20 09:44:53.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:44:53 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring osd.2 (monmap changed)... Feb 20 04:44:53 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring osd.2 (monmap changed)... Feb 20 04:44:53 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) Feb 20 04:44:53 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Feb 20 04:44:53 localhost ceph-mon[292786]: Reconfiguring crash.np0005625202 (monmap changed)... Feb 20 04:44:53 localhost ceph-mon[292786]: Reconfiguring daemon crash.np0005625202 on np0005625202.localdomain Feb 20 04:44:53 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:53 localhost systemd[1]: var-lib-containers-storage-overlay-fea2b1a93d472770f1fd349ef819792264c2550355b27bf097a26a6dd38b7f02-merged.mount: Deactivated successfully. 
Feb 20 04:44:53 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:44:53 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 04:44:53 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.2 on np0005625202.localdomain Feb 20 04:44:53 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.2 on np0005625202.localdomain Feb 20 04:44:54 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v33: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Feb 20 04:44:54 localhost ceph-mgr[286565]: [balancer INFO root] Optimize plan auto_2026-02-20_09:44:54 Feb 20 04:44:54 localhost ceph-mgr[286565]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Feb 20 04:44:54 localhost ceph-mgr[286565]: [balancer INFO root] do_upmap Feb 20 04:44:54 localhost ceph-mgr[286565]: [balancer INFO root] pools ['vms', 'volumes', 'backups', 'manila_data', 'images', 'manila_metadata', '.mgr'] Feb 20 04:44:54 localhost ceph-mgr[286565]: [balancer INFO root] prepared 0/10 changes Feb 20 04:44:54 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] _maybe_adjust Feb 20 04:44:54 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:44:54 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Feb 20 04:44:54 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:44:54 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003325274375348967 of space, bias 1.0, pg target 0.6650548750697934 quantized to 32 
(current 32) Feb 20 04:44:54 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:44:54 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 20 04:44:54 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:44:54 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014449417225013959 of space, bias 1.0, pg target 0.2885066972594454 quantized to 32 (current 32) Feb 20 04:44:54 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:44:54 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 20 04:44:54 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:44:54 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 20 04:44:54 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:44:54 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.1810441094360693e-06 of space, bias 4.0, pg target 0.001741927228736274 quantized to 16 (current 16) Feb 20 04:44:54 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 04:44:54 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:44:54 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 20 04:44:54 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:44:54 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 04:44:54 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:44:54 localhost ceph-mgr[286565]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Feb 20 04:44:54 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 20 04:44:54 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 20 04:44:54 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: images, start_after= Feb 20 04:44:54 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 20 04:44:54 localhost ceph-mgr[286565]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Feb 20 04:44:54 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 20 04:44:54 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 20 04:44:54 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: images, start_after= Feb 20 04:44:54 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 20 04:44:54 localhost podman[298384]: Feb 20 04:44:54 localhost podman[298384]: 2026-02-20 09:44:54.414989955 +0000 UTC m=+0.080898671 container create 23d89396690df17beb8e325711949b6a5660ef9edca265f41ad89a818117e72b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_heisenberg, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.42.2, io.openshift.tags=rhceph ceph, RELEASE=main, build-date=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, 
io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, name=rhceph, architecture=x86_64, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_CLEAN=True, release=1770267347, GIT_BRANCH=main, distribution-scope=public, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7) Feb 20 04:44:54 localhost systemd[1]: Started libpod-conmon-23d89396690df17beb8e325711949b6a5660ef9edca265f41ad89a818117e72b.scope. Feb 20 04:44:54 localhost systemd[1]: Started libcrun container. 
Feb 20 04:44:54 localhost podman[298384]: 2026-02-20 09:44:54.383014465 +0000 UTC m=+0.048923201 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 04:44:54 localhost podman[298384]: 2026-02-20 09:44:54.48341263 +0000 UTC m=+0.149321346 container init 23d89396690df17beb8e325711949b6a5660ef9edca265f41ad89a818117e72b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_heisenberg, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, vcs-type=git, build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.42.2, vendor=Red Hat, Inc., release=1770267347, com.redhat.component=rhceph-container, org.opencontainers.image.created=2026-02-09T10:25:24Z, name=rhceph, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, GIT_BRANCH=main) Feb 20 04:44:54 localhost podman[298384]: 2026-02-20 09:44:54.493986185 +0000 UTC m=+0.159894911 container start 23d89396690df17beb8e325711949b6a5660ef9edca265f41ad89a818117e72b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_heisenberg, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-09T10:25:24Z, architecture=x86_64, 
io.openshift.expose-services=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.openshift.tags=rhceph ceph, build-date=2026-02-09T10:25:24Z, distribution-scope=public, com.redhat.component=rhceph-container, name=rhceph, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , io.buildah.version=1.42.2, GIT_BRANCH=main, ceph=True, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1770267347, description=Red Hat Ceph Storage 7) Feb 20 04:44:54 localhost podman[298384]: 2026-02-20 09:44:54.494417886 +0000 UTC m=+0.160326602 container attach 23d89396690df17beb8e325711949b6a5660ef9edca265f41ad89a818117e72b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_heisenberg, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, RELEASE=main, name=rhceph, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1770267347, 
org.opencontainers.image.created=2026-02-09T10:25:24Z, architecture=x86_64, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, GIT_CLEAN=True, vcs-type=git) Feb 20 04:44:54 localhost elated_heisenberg[298399]: 167 167 Feb 20 04:44:54 localhost systemd[1]: libpod-23d89396690df17beb8e325711949b6a5660ef9edca265f41ad89a818117e72b.scope: Deactivated successfully. Feb 20 04:44:54 localhost podman[298384]: 2026-02-20 09:44:54.498450321 +0000 UTC m=+0.164359077 container died 23d89396690df17beb8e325711949b6a5660ef9edca265f41ad89a818117e72b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_heisenberg, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, distribution-scope=public, vcs-type=git, ceph=True, io.openshift.expose-services=, GIT_CLEAN=True, io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, version=7, release=1770267347) Feb 20 
04:44:54 localhost podman[298404]: 2026-02-20 09:44:54.598144399 +0000 UTC m=+0.087263986 container remove 23d89396690df17beb8e325711949b6a5660ef9edca265f41ad89a818117e72b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_heisenberg, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, GIT_CLEAN=True, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1770267347, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.buildah.version=1.42.2, ceph=True, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, GIT_BRANCH=main, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public) Feb 20 04:44:54 localhost systemd[1]: libpod-conmon-23d89396690df17beb8e325711949b6a5660ef9edca265f41ad89a818117e72b.scope: Deactivated successfully. Feb 20 04:44:54 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:54 localhost ceph-mon[292786]: Reconfiguring osd.2 (monmap changed)... 
Feb 20 04:44:54 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Feb 20 04:44:54 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:54 localhost ceph-mon[292786]: Reconfiguring daemon osd.2 on np0005625202.localdomain Feb 20 04:44:54 localhost systemd[1]: var-lib-containers-storage-overlay-1da66e73c71556745b7a5af2e02c990ce8a5ddc8ede63ca5f626cc9486ecb188-merged.mount: Deactivated successfully. Feb 20 04:44:54 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0) Feb 20 04:44:54 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0) Feb 20 04:44:54 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring osd.5 (monmap changed)... Feb 20 04:44:54 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring osd.5 (monmap changed)... 
Feb 20 04:44:54 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "osd.5"} v 0) Feb 20 04:44:54 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Feb 20 04:44:54 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:44:54 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 04:44:54 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.5 on np0005625202.localdomain Feb 20 04:44:54 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.5 on np0005625202.localdomain Feb 20 04:44:55 localhost podman[298480]: Feb 20 04:44:55 localhost podman[298480]: 2026-02-20 09:44:55.418564325 +0000 UTC m=+0.074466613 container create 4b1122c90d8704515e4d894f9beebf74cdd47b8ef6f9ee01a2358c6c04eebf89 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=epic_mahavira, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, vcs-type=git, release=1770267347, name=rhceph, vendor=Red Hat, Inc., ceph=True, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, GIT_CLEAN=True, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, 
org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.42.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=rhceph-container) Feb 20 04:44:55 localhost systemd[1]: Started libpod-conmon-4b1122c90d8704515e4d894f9beebf74cdd47b8ef6f9ee01a2358c6c04eebf89.scope. Feb 20 04:44:55 localhost systemd[1]: Started libcrun container. Feb 20 04:44:55 localhost podman[298480]: 2026-02-20 09:44:55.481630233 +0000 UTC m=+0.137532521 container init 4b1122c90d8704515e4d894f9beebf74cdd47b8ef6f9ee01a2358c6c04eebf89 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=epic_mahavira, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, com.redhat.component=rhceph-container, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, distribution-scope=public, RELEASE=main, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, architecture=x86_64, io.buildah.version=1.42.2, description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , ceph=True, vcs-type=git, 
org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14) Feb 20 04:44:55 localhost podman[298480]: 2026-02-20 09:44:55.387598942 +0000 UTC m=+0.043501270 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 04:44:55 localhost podman[298480]: 2026-02-20 09:44:55.494423114 +0000 UTC m=+0.150325412 container start 4b1122c90d8704515e4d894f9beebf74cdd47b8ef6f9ee01a2358c6c04eebf89 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=epic_mahavira, io.buildah.version=1.42.2, release=1770267347, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, com.redhat.component=rhceph-container, ceph=True, name=rhceph, vcs-type=git, RELEASE=main, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, build-date=2026-02-09T10:25:24Z, distribution-scope=public, architecture=x86_64, version=7) Feb 20 04:44:55 localhost podman[298480]: 2026-02-20 09:44:55.494669131 +0000 UTC m=+0.150571459 container attach 4b1122c90d8704515e4d894f9beebf74cdd47b8ef6f9ee01a2358c6c04eebf89 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=epic_mahavira, release=1770267347, ceph=True, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, 
io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, build-date=2026-02-09T10:25:24Z, distribution-scope=public, version=7, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64) Feb 20 04:44:55 localhost systemd[1]: tmp-crun.jQanVC.mount: Deactivated successfully. Feb 20 04:44:55 localhost epic_mahavira[298494]: 167 167 Feb 20 04:44:55 localhost systemd[1]: libpod-4b1122c90d8704515e4d894f9beebf74cdd47b8ef6f9ee01a2358c6c04eebf89.scope: Deactivated successfully. 
Feb 20 04:44:55 localhost podman[298480]: 2026-02-20 09:44:55.497746351 +0000 UTC m=+0.153648669 container died 4b1122c90d8704515e4d894f9beebf74cdd47b8ef6f9ee01a2358c6c04eebf89 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=epic_mahavira, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-09T10:25:24Z, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, io.openshift.expose-services=, GIT_CLEAN=True, distribution-scope=public, io.buildah.version=1.42.2, vcs-type=git, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , RELEASE=main, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14) Feb 20 04:44:55 localhost podman[298499]: 2026-02-20 09:44:55.603531887 +0000 UTC m=+0.098491408 container remove 4b1122c90d8704515e4d894f9beebf74cdd47b8ef6f9ee01a2358c6c04eebf89 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=epic_mahavira, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, vcs-type=git, build-date=2026-02-09T10:25:24Z, name=rhceph, maintainer=Guillaume Abrioux , ceph=True) Feb 20 04:44:55 localhost systemd[1]: libpod-conmon-4b1122c90d8704515e4d894f9beebf74cdd47b8ef6f9ee01a2358c6c04eebf89.scope: Deactivated successfully. Feb 20 04:44:55 localhost systemd[1]: var-lib-containers-storage-overlay-9d72d7a1bb13e86202f60fd7380b267a6a6defce49327c42e8e36713859baa26-merged.mount: Deactivated successfully. Feb 20 04:44:55 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0) Feb 20 04:44:55 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:55 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:55 localhost ceph-mon[292786]: Reconfiguring osd.5 (monmap changed)... 
Feb 20 04:44:55 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Feb 20 04:44:55 localhost ceph-mon[292786]: Reconfiguring daemon osd.5 on np0005625202.localdomain Feb 20 04:44:55 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0) Feb 20 04:44:55 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005625202.akhmop (monmap changed)... Feb 20 04:44:55 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005625202.akhmop (monmap changed)... Feb 20 04:44:55 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) Feb 20 04:44:55 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 20 04:44:55 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:44:55 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 04:44:55 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005625202.akhmop on np0005625202.localdomain Feb 20 04:44:55 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005625202.akhmop on np0005625202.localdomain Feb 20 04:44:56 localhost 
ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v34: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Feb 20 04:44:56 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.44339 -' entity='client.admin' cmd=[{"prefix": "orch daemon add", "daemon_type": "mon", "placement": "np0005625203.localdomain:172.18.0.104", "target": ["mon-mgr", ""]}]: dispatch Feb 20 04:44:56 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) Feb 20 04:44:56 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) Feb 20 04:44:56 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 20 04:44:56 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:44:56 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 04:44:56 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Deploying daemon mon.np0005625203 on np0005625203.localdomain Feb 20 04:44:56 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Deploying daemon mon.np0005625203 on np0005625203.localdomain Feb 20 04:44:56 localhost podman[298578]: Feb 20 04:44:56 localhost podman[298578]: 2026-02-20 09:44:56.437632958 +0000 UTC m=+0.086780313 container create c34f29242f8d65a3144d93910accdf2863cd0c9b6ed0992eec89067e26350e5f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_vaughan, io.buildah.version=1.42.2, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
maintainer=Guillaume Abrioux , ceph=True, RELEASE=main, vcs-type=git, io.openshift.expose-services=, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, vendor=Red Hat, Inc., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git) Feb 20 04:44:56 localhost systemd[1]: Started libpod-conmon-c34f29242f8d65a3144d93910accdf2863cd0c9b6ed0992eec89067e26350e5f.scope. Feb 20 04:44:56 localhost systemd[1]: Started libcrun container. 
Feb 20 04:44:56 localhost podman[298578]: 2026-02-20 09:44:56.499435443 +0000 UTC m=+0.148582808 container init c34f29242f8d65a3144d93910accdf2863cd0c9b6ed0992eec89067e26350e5f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_vaughan, version=7, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, name=rhceph, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, io.openshift.expose-services=, RELEASE=main, GIT_CLEAN=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, distribution-scope=public, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, ceph=True) Feb 20 04:44:56 localhost podman[298578]: 2026-02-20 09:44:56.404529789 +0000 UTC m=+0.053677174 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 04:44:56 localhost podman[298578]: 2026-02-20 09:44:56.515039378 +0000 UTC m=+0.164186753 container start c34f29242f8d65a3144d93910accdf2863cd0c9b6ed0992eec89067e26350e5f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_vaughan, GIT_BRANCH=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
release=1770267347, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, io.openshift.expose-services=, RELEASE=main, io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, ceph=True, build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Feb 20 04:44:56 localhost podman[298578]: 2026-02-20 09:44:56.515319495 +0000 UTC m=+0.164466890 container attach c34f29242f8d65a3144d93910accdf2863cd0c9b6ed0992eec89067e26350e5f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_vaughan, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, CEPH_POINT_RELEASE=, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, architecture=x86_64, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, GIT_CLEAN=True, maintainer=Guillaume Abrioux , ceph=True, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14) Feb 20 04:44:56 localhost happy_vaughan[298593]: 167 167 Feb 20 04:44:56 localhost systemd[1]: libpod-c34f29242f8d65a3144d93910accdf2863cd0c9b6ed0992eec89067e26350e5f.scope: Deactivated successfully. Feb 20 04:44:56 localhost podman[298578]: 2026-02-20 09:44:56.518933779 +0000 UTC m=+0.168081144 container died c34f29242f8d65a3144d93910accdf2863cd0c9b6ed0992eec89067e26350e5f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_vaughan, io.openshift.expose-services=, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, architecture=x86_64, ceph=True, release=1770267347, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, vcs-type=git, io.buildah.version=1.42.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers) Feb 20 04:44:56 
localhost podman[298598]: 2026-02-20 09:44:56.621363028 +0000 UTC m=+0.087538334 container remove c34f29242f8d65a3144d93910accdf2863cd0c9b6ed0992eec89067e26350e5f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_vaughan, io.buildah.version=1.42.2, RELEASE=main, ceph=True, version=7, release=1770267347, GIT_BRANCH=main, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, build-date=2026-02-09T10:25:24Z, distribution-scope=public, io.openshift.tags=rhceph ceph) Feb 20 04:44:56 localhost systemd[1]: libpod-conmon-c34f29242f8d65a3144d93910accdf2863cd0c9b6ed0992eec89067e26350e5f.scope: Deactivated successfully. 
Feb 20 04:44:56 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0) Feb 20 04:44:56 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0) Feb 20 04:44:56 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005625202.arwxwo (monmap changed)... Feb 20 04:44:56 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005625202.arwxwo (monmap changed)... Feb 20 04:44:56 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005625202.arwxwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Feb 20 04:44:56 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625202.arwxwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 20 04:44:56 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "mgr services"} v 0) Feb 20 04:44:56 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mgr services"} : dispatch Feb 20 04:44:56 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:44:56 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 04:44:56 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005625202.arwxwo on 
np0005625202.localdomain Feb 20 04:44:56 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005625202.arwxwo on np0005625202.localdomain Feb 20 04:44:56 localhost systemd[1]: tmp-crun.mLMbKO.mount: Deactivated successfully. Feb 20 04:44:56 localhost systemd[1]: var-lib-containers-storage-overlay-5bd9b465f395a567e0614520e99cbd2750d26beaf254ab82cd94f2ed1092ca41-merged.mount: Deactivated successfully. Feb 20 04:44:56 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:56 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:56 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 20 04:44:56 localhost ceph-mon[292786]: Reconfiguring mds.mds.np0005625202.akhmop (monmap changed)... 
Feb 20 04:44:56 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 20 04:44:56 localhost ceph-mon[292786]: Reconfiguring daemon mds.mds.np0005625202.akhmop on np0005625202.localdomain Feb 20 04:44:56 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:56 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 20 04:44:56 localhost ceph-mon[292786]: Deploying daemon mon.np0005625203 on np0005625203.localdomain Feb 20 04:44:56 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:56 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:56 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625202.arwxwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 20 04:44:56 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625202.arwxwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 20 04:44:57 localhost ceph-mon[292786]: mon.np0005625202@2(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:44:57 localhost podman[298668]: Feb 20 04:44:57 localhost podman[298668]: 2026-02-20 09:44:57.385355559 +0000 UTC m=+0.077076991 container create 431efaa5a52551bb1be1415dc72f56cdde3e2fcc1fadb16a747ea73a6b93a06e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_bhaskara, 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, ceph=True, GIT_BRANCH=main, com.redhat.component=rhceph-container, release=1770267347, version=7, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, name=rhceph, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, architecture=x86_64) Feb 20 04:44:57 localhost systemd[1]: Started libpod-conmon-431efaa5a52551bb1be1415dc72f56cdde3e2fcc1fadb16a747ea73a6b93a06e.scope. Feb 20 04:44:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. Feb 20 04:44:57 localhost systemd[1]: Started libcrun container. 
Feb 20 04:44:57 localhost sshd[298686]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:44:57 localhost podman[298668]: 2026-02-20 09:44:57.35455759 +0000 UTC m=+0.046279032 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 04:44:57 localhost podman[298668]: 2026-02-20 09:44:57.462936193 +0000 UTC m=+0.154657625 container init 431efaa5a52551bb1be1415dc72f56cdde3e2fcc1fadb16a747ea73a6b93a06e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_bhaskara, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, vcs-type=git, release=1770267347, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, CEPH_POINT_RELEASE=, version=7, build-date=2026-02-09T10:25:24Z, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Feb 20 04:44:57 localhost podman[298668]: 2026-02-20 09:44:57.477601774 +0000 UTC m=+0.169323216 container start 431efaa5a52551bb1be1415dc72f56cdde3e2fcc1fadb16a747ea73a6b93a06e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_bhaskara, CEPH_POINT_RELEASE=, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, vcs-type=git, version=7, distribution-scope=public, org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True) Feb 20 04:44:57 localhost podman[298668]: 2026-02-20 09:44:57.478269721 +0000 UTC m=+0.169991353 container attach 431efaa5a52551bb1be1415dc72f56cdde3e2fcc1fadb16a747ea73a6b93a06e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_bhaskara, io.openshift.expose-services=, vcs-type=git, RELEASE=main, distribution-scope=public, io.buildah.version=1.42.2, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, name=rhceph, architecture=x86_64, maintainer=Guillaume Abrioux ) Feb 20 04:44:57 localhost compassionate_bhaskara[298683]: 167 167 Feb 20 04:44:57 localhost podman[298668]: 2026-02-20 09:44:57.482238444 +0000 UTC m=+0.173959896 container died 431efaa5a52551bb1be1415dc72f56cdde3e2fcc1fadb16a747ea73a6b93a06e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_bhaskara, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, CEPH_POINT_RELEASE=, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , distribution-scope=public, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, RELEASE=main, io.buildah.version=1.42.2, name=rhceph, 
architecture=x86_64, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git) Feb 20 04:44:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 04:44:57 localhost systemd[1]: libpod-431efaa5a52551bb1be1415dc72f56cdde3e2fcc1fadb16a747ea73a6b93a06e.scope: Deactivated successfully. Feb 20 04:44:57 localhost podman[298702]: 2026-02-20 09:44:57.588510313 +0000 UTC m=+0.095955072 container remove 431efaa5a52551bb1be1415dc72f56cdde3e2fcc1fadb16a747ea73a6b93a06e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_bhaskara, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, build-date=2026-02-09T10:25:24Z, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, architecture=x86_64, vcs-type=git, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, com.redhat.component=rhceph-container, version=7, description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, distribution-scope=public, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Feb 20 
04:44:57 localhost systemd[1]: libpod-conmon-431efaa5a52551bb1be1415dc72f56cdde3e2fcc1fadb16a747ea73a6b93a06e.scope: Deactivated successfully. Feb 20 04:44:57 localhost podman[298701]: 2026-02-20 09:44:57.629526147 +0000 UTC m=+0.132361096 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller) Feb 20 04:44:57 localhost podman[298685]: 2026-02-20 09:44:57.548917415 +0000 UTC m=+0.104933955 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, 
name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3) Feb 20 04:44:57 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0) Feb 20 04:44:57 localhost podman[298701]: 2026-02-20 09:44:57.674158796 +0000 UTC m=+0.176993725 container exec_died 
76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Feb 20 04:44:57 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0) Feb 20 04:44:57 localhost podman[298685]: 2026-02-20 09:44:57.682801981 +0000 UTC m=+0.238818471 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Feb 20 04:44:57 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. Feb 20 04:44:57 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005625203 (monmap changed)... Feb 20 04:44:57 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005625203 (monmap changed)... 
Feb 20 04:44:57 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. Feb 20 04:44:57 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005625203", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Feb 20 04:44:57 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625203", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 20 04:44:57 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:44:57 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 04:44:57 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005625203 on np0005625203.localdomain Feb 20 04:44:57 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005625203 on np0005625203.localdomain Feb 20 04:44:57 localhost systemd[1]: var-lib-containers-storage-overlay-a5b700c81e38e6d05d283799ff77b9900119251e177ad57bfa3d4b4600d1c8a2-merged.mount: Deactivated successfully. Feb 20 04:44:57 localhost ceph-mon[292786]: Reconfiguring mgr.np0005625202.arwxwo (monmap changed)... 
Feb 20 04:44:57 localhost ceph-mon[292786]: Reconfiguring daemon mgr.np0005625202.arwxwo on np0005625202.localdomain Feb 20 04:44:57 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:57 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:57 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625203", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 20 04:44:57 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625203", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 20 04:44:58 localhost openstack_network_exporter[243776]: ERROR 09:44:58 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 20 04:44:58 localhost openstack_network_exporter[243776]: Feb 20 04:44:58 localhost openstack_network_exporter[243776]: ERROR 09:44:58 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 20 04:44:58 localhost openstack_network_exporter[243776]: Feb 20 04:44:58 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v35: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Feb 20 04:44:58 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0) Feb 20 04:44:58 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0) Feb 20 04:44:58 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Feb 20 04:44:58 localhost ceph-mon[292786]: 
mon.np0005625202@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Feb 20 04:44:58 localhost ceph-mon[292786]: Reconfiguring crash.np0005625203 (monmap changed)... Feb 20 04:44:58 localhost ceph-mon[292786]: Reconfiguring daemon crash.np0005625203 on np0005625203.localdomain Feb 20 04:44:58 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:44:59 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0) Feb 20 04:44:59 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0) Feb 20 04:44:59 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)... Feb 20 04:44:59 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)... 
Feb 20 04:44:59 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Feb 20 04:44:59 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Feb 20 04:44:59 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 04:44:59 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 04:44:59 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on np0005625203.localdomain
Feb 20 04:44:59 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on np0005625203.localdomain
Feb 20 04:44:59 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Feb 20 04:44:59 localhost ceph-mgr[286565]: mgr.server handle_open ignoring open from mon.np0005625203 172.18.0.107:0/2465459824; not ready for session (expect reconnect)
Feb 20 04:44:59 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625203"} v 0)
Feb 20 04:44:59 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 04:44:59 localhost ceph-mgr[286565]: mgr finish mon failed to return metadata for mon.np0005625203: (2) No such file or directory
Feb 20 04:44:59 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:59 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:59 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:44:59 localhost ceph-mon[292786]: Reconfiguring osd.1 (monmap changed)...
Feb 20 04:44:59 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Feb 20 04:44:59 localhost ceph-mon[292786]: Reconfiguring daemon osd.1 on np0005625203.localdomain
Feb 20 04:45:00 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v36: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 04:45:00 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0)
Feb 20 04:45:00 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0)
Feb 20 04:45:00 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring osd.4 (monmap changed)...
Feb 20 04:45:00 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring osd.4 (monmap changed)...
Feb 20 04:45:00 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "osd.4"} v 0)
Feb 20 04:45:00 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Feb 20 04:45:00 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 04:45:00 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 04:45:00 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.4 on np0005625203.localdomain
Feb 20 04:45:00 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.4 on np0005625203.localdomain
Feb 20 04:45:00 localhost ceph-mgr[286565]: mgr.server handle_open ignoring open from mon.np0005625203 172.18.0.107:0/2465459824; not ready for session (expect reconnect)
Feb 20 04:45:00 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625203"} v 0)
Feb 20 04:45:00 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 04:45:00 localhost ceph-mgr[286565]: mgr finish mon failed to return metadata for mon.np0005625203: (2) No such file or directory
Feb 20 04:45:01 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0)
Feb 20 04:45:01 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0)
Feb 20 04:45:01 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:45:01 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:45:01 localhost ceph-mon[292786]: Reconfiguring osd.4 (monmap changed)...
Feb 20 04:45:01 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Feb 20 04:45:01 localhost ceph-mon[292786]: Reconfiguring daemon osd.4 on np0005625203.localdomain
Feb 20 04:45:01 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:45:01 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005625203.zsrwgk (monmap changed)...
Feb 20 04:45:01 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005625203.zsrwgk (monmap changed)...
Feb 20 04:45:01 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005625203.zsrwgk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Feb 20 04:45:01 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625203.zsrwgk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 04:45:01 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 04:45:01 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 04:45:01 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005625203.zsrwgk on np0005625203.localdomain
Feb 20 04:45:01 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005625203.zsrwgk on np0005625203.localdomain
Feb 20 04:45:01 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Feb 20 04:45:01 localhost ceph-mgr[286565]: mgr.server handle_open ignoring open from mon.np0005625203 172.18.0.107:0/2465459824; not ready for session (expect reconnect)
Feb 20 04:45:01 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625203"} v 0)
Feb 20 04:45:01 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 04:45:01 localhost ceph-mgr[286565]: mgr finish mon failed to return metadata for mon.np0005625203: (2) No such file or directory
Feb 20 04:45:02 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0)
Feb 20 04:45:02 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0)
Feb 20 04:45:02 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005625203.lonygy (monmap changed)...
Feb 20 04:45:02 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005625203.lonygy (monmap changed)...
Feb 20 04:45:02 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005625203.lonygy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Feb 20 04:45:02 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625203.lonygy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 04:45:02 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb 20 04:45:02 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mgr services"} : dispatch
Feb 20 04:45:02 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 04:45:02 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 04:45:02 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005625203.lonygy on np0005625203.localdomain
Feb 20 04:45:02 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005625203.lonygy on np0005625203.localdomain
Feb 20 04:45:02 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v37: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 04:45:02 localhost ceph-mon[292786]: mon.np0005625202@2(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 04:45:02 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:45:02 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625203.zsrwgk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 04:45:02 localhost ceph-mon[292786]: Reconfiguring mds.mds.np0005625203.zsrwgk (monmap changed)...
Feb 20 04:45:02 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625203.zsrwgk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 04:45:02 localhost ceph-mon[292786]: Reconfiguring daemon mds.mds.np0005625203.zsrwgk on np0005625203.localdomain
Feb 20 04:45:02 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:45:02 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:45:02 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625203.lonygy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 04:45:02 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625203.lonygy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 04:45:02 localhost ceph-mgr[286565]: mgr.server handle_open ignoring open from mon.np0005625203 172.18.0.107:0/2465459824; not ready for session (expect reconnect)
Feb 20 04:45:02 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625203"} v 0)
Feb 20 04:45:02 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 04:45:02 localhost ceph-mgr[286565]: mgr finish mon failed to return metadata for mon.np0005625203: (2) No such file or directory
Feb 20 04:45:02 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0)
Feb 20 04:45:03 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0)
Feb 20 04:45:03 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005625204 (monmap changed)...
Feb 20 04:45:03 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005625204 (monmap changed)...
Feb 20 04:45:03 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005625204", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Feb 20 04:45:03 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625204", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 04:45:03 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 04:45:03 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 04:45:03 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005625204 on np0005625204.localdomain
Feb 20 04:45:03 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005625204 on np0005625204.localdomain
Feb 20 04:45:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.
Feb 20 04:45:03 localhost ceph-mon[292786]: Reconfiguring mgr.np0005625203.lonygy (monmap changed)...
Feb 20 04:45:03 localhost ceph-mon[292786]: Reconfiguring daemon mgr.np0005625203.lonygy on np0005625203.localdomain
Feb 20 04:45:03 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:45:03 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:45:03 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625204", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 04:45:03 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625204", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 04:45:03 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Feb 20 04:45:03 localhost ceph-mgr[286565]: mgr.server handle_open ignoring open from mon.np0005625203 172.18.0.107:0/2465459824; not ready for session (expect reconnect)
Feb 20 04:45:03 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625203"} v 0)
Feb 20 04:45:03 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 04:45:03 localhost ceph-mgr[286565]: mgr finish mon failed to return metadata for mon.np0005625203: (2) No such file or directory
Feb 20 04:45:03 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Feb 20 04:45:03 localhost systemd[1]: tmp-crun.90NTS9.mount: Deactivated successfully.
Feb 20 04:45:03 localhost podman[298744]: 2026-02-20 09:45:03.454932865 +0000 UTC m=+0.093702523 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Feb 20 04:45:03 localhost podman[298744]: 2026-02-20 09:45:03.490101278 +0000 UTC m=+0.128870896 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Feb 20 04:45:03 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully.
Feb 20 04:45:03 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0)
Feb 20 04:45:03 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0)
Feb 20 04:45:03 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)...
Feb 20 04:45:03 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)...
Feb 20 04:45:03 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Feb 20 04:45:03 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Feb 20 04:45:03 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 04:45:03 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 04:45:03 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on np0005625204.localdomain
Feb 20 04:45:03 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on np0005625204.localdomain
Feb 20 04:45:04 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v38: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 04:45:04 localhost ceph-mon[292786]: Reconfiguring crash.np0005625204 (monmap changed)...
Feb 20 04:45:04 localhost ceph-mon[292786]: Reconfiguring daemon crash.np0005625204 on np0005625204.localdomain
Feb 20 04:45:04 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:45:04 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:45:04 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Feb 20 04:45:04 localhost ceph-mgr[286565]: mgr.server handle_open ignoring open from mon.np0005625203 172.18.0.107:0/2465459824; not ready for session (expect reconnect)
Feb 20 04:45:04 localhost ceph-mgr[286565]: mgr finish mon failed to return metadata for mon.np0005625203: (2) No such file or directory
Feb 20 04:45:04 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625203"} v 0)
Feb 20 04:45:04 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 04:45:05 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0)
Feb 20 04:45:05 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0)
Feb 20 04:45:05 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring osd.3 (monmap changed)...
Feb 20 04:45:05 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring osd.3 (monmap changed)...
Feb 20 04:45:05 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "osd.3"} v 0)
Feb 20 04:45:05 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Feb 20 04:45:05 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 04:45:05 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 04:45:05 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.3 on np0005625204.localdomain
Feb 20 04:45:05 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.3 on np0005625204.localdomain
Feb 20 04:45:05 localhost ceph-mon[292786]: Reconfiguring osd.0 (monmap changed)...
Feb 20 04:45:05 localhost ceph-mon[292786]: Reconfiguring daemon osd.0 on np0005625204.localdomain
Feb 20 04:45:05 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:45:05 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:45:05 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Feb 20 04:45:05 localhost ceph-mgr[286565]: mgr.server handle_open ignoring open from mon.np0005625203 172.18.0.107:0/2465459824; not ready for session (expect reconnect)
Feb 20 04:45:05 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625203"} v 0)
Feb 20 04:45:05 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 04:45:05 localhost ceph-mgr[286565]: mgr finish mon failed to return metadata for mon.np0005625203: (2) No such file or directory
Feb 20 04:45:05 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Feb 20 04:45:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:45:05.912 161766 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 20 04:45:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:45:05.913 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 20 04:45:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:45:05.913 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 20 04:45:06 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0)
Feb 20 04:45:06 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0)
Feb 20 04:45:06 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005625204.wnsphl (monmap changed)...
Feb 20 04:45:06 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005625204.wnsphl (monmap changed)...
Feb 20 04:45:06 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005625204.wnsphl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Feb 20 04:45:06 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625204.wnsphl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 04:45:06 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 04:45:06 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 04:45:06 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005625204.wnsphl on np0005625204.localdomain
Feb 20 04:45:06 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005625204.wnsphl on np0005625204.localdomain
Feb 20 04:45:06 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v39: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 04:45:06 localhost ceph-mon[292786]: Reconfiguring osd.3 (monmap changed)...
Feb 20 04:45:06 localhost ceph-mon[292786]: Reconfiguring daemon osd.3 on np0005625204.localdomain
Feb 20 04:45:06 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:45:06 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:45:06 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625204.wnsphl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 04:45:06 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625204.wnsphl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 04:45:06 localhost ceph-mgr[286565]: mgr.server handle_open ignoring open from mon.np0005625203 172.18.0.107:0/2465459824; not ready for session (expect reconnect)
Feb 20 04:45:06 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625203"} v 0)
Feb 20 04:45:06 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 04:45:06 localhost ceph-mgr[286565]: mgr finish mon failed to return metadata for mon.np0005625203: (2) No such file or directory
Feb 20 04:45:07 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0)
Feb 20 04:45:07 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0)
Feb 20 04:45:07 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005625204.exgrzx (monmap changed)...
Feb 20 04:45:07 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005625204.exgrzx (monmap changed)...
Feb 20 04:45:07 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005625204.exgrzx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Feb 20 04:45:07 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625204.exgrzx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 04:45:07 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb 20 04:45:07 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mgr services"} : dispatch
Feb 20 04:45:07 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 04:45:07 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 04:45:07 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005625204.exgrzx on np0005625204.localdomain
Feb 20 04:45:07 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005625204.exgrzx on np0005625204.localdomain
Feb 20 04:45:07 localhost ceph-mon[292786]: mon.np0005625202@2(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 04:45:07 localhost ceph-mgr[286565]: mgr.server handle_open ignoring open from mon.np0005625203 172.18.0.107:0/2465459824; not ready for session (expect reconnect)
Feb 20 04:45:07 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625203"} v 0)
Feb 20 04:45:07 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 04:45:07 localhost ceph-mgr[286565]: mgr finish mon failed to return metadata for mon.np0005625203: (2) No such file or directory
Feb 20 04:45:07 localhost ceph-mon[292786]: Reconfiguring mds.mds.np0005625204.wnsphl (monmap changed)...
Feb 20 04:45:07 localhost ceph-mon[292786]: Reconfiguring daemon mds.mds.np0005625204.wnsphl on np0005625204.localdomain
Feb 20 04:45:07 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:45:07 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:45:07 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625204.exgrzx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 04:45:07 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625204.exgrzx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 04:45:07 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Feb 20 04:45:07 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Feb 20 04:45:08 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0)
Feb 20 04:45:08 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0)
Feb 20 04:45:08 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v40: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 04:45:08 localhost ceph-mgr[286565]: mgr.server handle_open ignoring open from mon.np0005625203 172.18.0.107:0/2465459824; not ready for session (expect reconnect)
Feb 20 04:45:08 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625203"} v 0)
Feb 20 04:45:08 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 04:45:08 localhost ceph-mgr[286565]: mgr finish mon failed to return metadata for mon.np0005625203: (2) No such file or directory
Feb 20 04:45:08 localhost ceph-mon[292786]: Reconfiguring mgr.np0005625204.exgrzx (monmap changed)...
Feb 20 04:45:08 localhost ceph-mon[292786]: Reconfiguring daemon mgr.np0005625204.exgrzx on np0005625204.localdomain
Feb 20 04:45:08 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:45:08 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo'
Feb 20 04:45:09 localhost ceph-mgr[286565]: mgr.server handle_open ignoring open from mon.np0005625203 172.18.0.107:0/2465459824; not ready for session (expect reconnect)
Feb 20 04:45:09 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625203"} v 0)
Feb 20 04:45:09 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 04:45:09 localhost ceph-mgr[286565]: mgr finish mon failed to return metadata for mon.np0005625203: (2) No such file or directory
Feb 20 04:45:09 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Feb 20 04:45:09 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0)
Feb 20 04:45:09 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0)
Feb 20 04:45:10 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v41: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 04:45:10 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Feb 20 04:45:10 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.200:0/2738563585' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Feb 20 04:45:10 localhost ceph-mgr[286565]: mgr.server handle_open ignoring open from mon.np0005625203 172.18.0.107:0/2465459824; not ready for session (expect reconnect)
Feb 20 04:45:10 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625203"} v 0)
Feb 20 04:45:10 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 04:45:10 localhost ceph-mgr[286565]: mgr finish mon failed to return metadata for mon.np0005625203: (2) No such file or directory
Feb 20 04:45:10 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 04:45:10 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 04:45:10 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 20 04:45:10 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 04:45:10 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, 
key=mgr/cephadm/osd_remove_queue}] v 0) Feb 20 04:45:10 localhost ceph-mgr[286565]: [progress INFO root] update: starting ev 91b31404-8dfb-4e76-b1bf-87cbb51d8741 (Updating node-proxy deployment (+4 -> 4)) Feb 20 04:45:10 localhost ceph-mgr[286565]: [progress INFO root] complete: finished ev 91b31404-8dfb-4e76-b1bf-87cbb51d8741 (Updating node-proxy deployment (+4 -> 4)) Feb 20 04:45:10 localhost ceph-mgr[286565]: [progress INFO root] Completed event 91b31404-8dfb-4e76-b1bf-87cbb51d8741 (Updating node-proxy deployment (+4 -> 4)) in 0 seconds Feb 20 04:45:10 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 20 04:45:10 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 20 04:45:10 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:45:10 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:45:10 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 04:45:10 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:45:11 localhost ceph-mgr[286565]: mgr.server handle_open ignoring open from mon.np0005625203 172.18.0.107:0/2465459824; not ready for session (expect reconnect) Feb 20 04:45:11 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625203"} v 0) Feb 20 04:45:11 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch Feb 20 04:45:11 localhost ceph-mgr[286565]: 
mgr finish mon failed to return metadata for mon.np0005625203: (2) No such file or directory Feb 20 04:45:11 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Feb 20 04:45:11 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Feb 20 04:45:12 localhost sshd[298854]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:45:12 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v42: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Feb 20 04:45:12 localhost ceph-mon[292786]: mon.np0005625202@2(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:45:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e. Feb 20 04:45:12 localhost ceph-mgr[286565]: mgr.server handle_open ignoring open from mon.np0005625203 172.18.0.107:0/2465459824; not ready for session (expect reconnect) Feb 20 04:45:12 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625203"} v 0) Feb 20 04:45:12 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch Feb 20 04:45:12 localhost ceph-mgr[286565]: mgr finish mon failed to return metadata for mon.np0005625203: (2) No such file or directory Feb 20 04:45:12 localhost podman[298855]: 2026-02-20 09:45:12.448534077 +0000 UTC m=+0.084734568 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, 
managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 20 04:45:12 localhost podman[298855]: 2026-02-20 09:45:12.460925131 +0000 UTC m=+0.097125652 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', 
'--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 20 04:45:12 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully. Feb 20 04:45:13 localhost ceph-mgr[286565]: mgr.server handle_open ignoring open from mon.np0005625203 172.18.0.107:0/2465459824; not ready for session (expect reconnect) Feb 20 04:45:13 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625203"} v 0) Feb 20 04:45:13 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch Feb 20 04:45:13 localhost ceph-mgr[286565]: mgr finish mon failed to return metadata for mon.np0005625203: (2) No such file or directory Feb 20 04:45:13 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Feb 20 04:45:13 localhost ceph-mgr[286565]: [progress INFO root] Writing back 50 completed events Feb 20 04:45:13 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, 
key=mgr/progress/completed}] v 0) Feb 20 04:45:14 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:45:14 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v43: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Feb 20 04:45:14 localhost ceph-mgr[286565]: mgr.server handle_open ignoring open from mon.np0005625203 172.18.0.107:0/2465459824; not ready for session (expect reconnect) Feb 20 04:45:14 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625203"} v 0) Feb 20 04:45:14 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch Feb 20 04:45:14 localhost ceph-mgr[286565]: mgr finish mon failed to return metadata for mon.np0005625203: (2) No such file or directory Feb 20 04:45:14 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.27552 -' entity='client.admin' cmd=[{"prefix": "orch", "action": "reconfig", "service_name": "osd.default_drive_group", "target": ["mon-mgr", ""]}]: dispatch Feb 20 04:45:14 localhost ceph-mgr[286565]: [cephadm INFO root] Reconfig service osd.default_drive_group Feb 20 04:45:14 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Reconfig service osd.default_drive_group Feb 20 04:45:14 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0) Feb 20 04:45:14 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0) Feb 20 04:45:14 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0) Feb 20 
04:45:14 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0) Feb 20 04:45:14 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0) Feb 20 04:45:14 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0) Feb 20 04:45:14 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0) Feb 20 04:45:14 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0) Feb 20 04:45:14 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0) Feb 20 04:45:14 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0) Feb 20 04:45:14 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0) Feb 20 04:45:14 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0) Feb 20 04:45:15 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:45:15 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:45:15 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:45:15 localhost ceph-mon[292786]: from='mgr.26977 ' 
entity='mgr.np0005625202.arwxwo' Feb 20 04:45:15 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:45:15 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:45:15 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:45:15 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:45:15 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:45:15 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:45:15 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:45:15 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:45:15 localhost ceph-mgr[286565]: mgr.server handle_open ignoring open from mon.np0005625203 172.18.0.107:0/2465459824; not ready for session (expect reconnect) Feb 20 04:45:15 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625203"} v 0) Feb 20 04:45:15 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch Feb 20 04:45:15 localhost ceph-mgr[286565]: mgr finish mon failed to return metadata for mon.np0005625203: (2) No such file or directory Feb 20 04:45:15 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Feb 20 04:45:15 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Feb 20 04:45:16 localhost podman[241347]: time="2026-02-20T09:45:16Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 20 04:45:16 localhost podman[241347]: @ - - [20/Feb/2026:09:45:16 +0000] 
"GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157716 "" "Go-http-client/1.1" Feb 20 04:45:16 localhost ceph-mon[292786]: Reconfig service osd.default_drive_group Feb 20 04:45:16 localhost podman[241347]: @ - - [20/Feb/2026:09:45:16 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18758 "" "Go-http-client/1.1" Feb 20 04:45:16 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v44: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Feb 20 04:45:16 localhost ceph-mgr[286565]: mgr.server handle_open ignoring open from mon.np0005625203 172.18.0.107:0/2465459824; not ready for session (expect reconnect) Feb 20 04:45:16 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625203"} v 0) Feb 20 04:45:16 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch Feb 20 04:45:16 localhost ceph-mgr[286565]: mgr finish mon failed to return metadata for mon.np0005625203: (2) No such file or directory Feb 20 04:45:16 localhost sshd[298879]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:45:16 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0) Feb 20 04:45:16 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0) Feb 20 04:45:16 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:45:16 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' 
entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 04:45:16 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 20 04:45:16 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 04:45:16 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 20 04:45:16 localhost ceph-mgr[286565]: [progress INFO root] update: starting ev 8d98975f-ad17-4ff1-9704-1dae04731c0f (Updating node-proxy deployment (+4 -> 4)) Feb 20 04:45:16 localhost ceph-mgr[286565]: [progress INFO root] complete: finished ev 8d98975f-ad17-4ff1-9704-1dae04731c0f (Updating node-proxy deployment (+4 -> 4)) Feb 20 04:45:16 localhost ceph-mgr[286565]: [progress INFO root] Completed event 8d98975f-ad17-4ff1-9704-1dae04731c0f (Updating node-proxy deployment (+4 -> 4)) in 0 seconds Feb 20 04:45:16 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 20 04:45:16 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 20 04:45:16 localhost ceph-mon[292786]: mon.np0005625202@2(peon).osd e89 e89: 6 total, 6 up, 6 in Feb 20 04:45:16 localhost ceph-mgr[286565]: mgr handle_mgr_map I was active but no longer am Feb 20 04:45:16 localhost ceph-mgr[286565]: mgr respawn e: '/usr/bin/ceph-mgr' Feb 20 04:45:16 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:45:16.592+0000 7f0d635cc640 -1 mgr handle_mgr_map I was active but no longer 
am Feb 20 04:45:16 localhost ceph-mgr[286565]: mgr respawn 0: '/usr/bin/ceph-mgr' Feb 20 04:45:16 localhost ceph-mgr[286565]: mgr respawn 1: '-n' Feb 20 04:45:16 localhost ceph-mgr[286565]: mgr respawn 2: 'mgr.np0005625202.arwxwo' Feb 20 04:45:16 localhost ceph-mgr[286565]: mgr respawn 3: '-f' Feb 20 04:45:16 localhost ceph-mgr[286565]: mgr respawn 4: '--setuser' Feb 20 04:45:16 localhost ceph-mgr[286565]: mgr respawn 5: 'ceph' Feb 20 04:45:16 localhost ceph-mgr[286565]: mgr respawn 6: '--setgroup' Feb 20 04:45:16 localhost ceph-mgr[286565]: mgr respawn 7: 'ceph' Feb 20 04:45:16 localhost ceph-mgr[286565]: mgr respawn 8: '--default-log-to-file=false' Feb 20 04:45:16 localhost ceph-mgr[286565]: mgr respawn 9: '--default-log-to-journald=true' Feb 20 04:45:16 localhost ceph-mgr[286565]: mgr respawn 10: '--default-log-to-stderr=false' Feb 20 04:45:16 localhost ceph-mgr[286565]: mgr respawn respawning with exe /usr/bin/ceph-mgr Feb 20 04:45:16 localhost ceph-mgr[286565]: mgr respawn exe_path /proc/self/exe Feb 20 04:45:16 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625201"} v 0) Feb 20 04:45:16 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625201"} : dispatch Feb 20 04:45:16 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625202"} v 0) Feb 20 04:45:16 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625202"} : dispatch Feb 20 04:45:16 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625204"} v 0) Feb 20 04:45:16 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 
172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625204"} : dispatch Feb 20 04:45:16 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005625202.akhmop"} v 0) Feb 20 04:45:16 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mds metadata", "who": "mds.np0005625202.akhmop"} : dispatch Feb 20 04:45:16 localhost ceph-mon[292786]: mon.np0005625202@2(peon).mds e17 all = 0 Feb 20 04:45:16 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005625204.wnsphl"} v 0) Feb 20 04:45:16 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mds metadata", "who": "mds.np0005625204.wnsphl"} : dispatch Feb 20 04:45:16 localhost ceph-mon[292786]: mon.np0005625202@2(peon).mds e17 all = 0 Feb 20 04:45:16 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005625203.zsrwgk"} v 0) Feb 20 04:45:16 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mds metadata", "who": "mds.np0005625203.zsrwgk"} : dispatch Feb 20 04:45:16 localhost ceph-mon[292786]: mon.np0005625202@2(peon).mds e17 all = 0 Feb 20 04:45:16 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005625203.lonygy", "id": "np0005625203.lonygy"} v 0) Feb 20 04:45:16 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mgr metadata", "who": "np0005625203.lonygy", "id": "np0005625203.lonygy"} : dispatch Feb 
20 04:45:16 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005625204.exgrzx", "id": "np0005625204.exgrzx"} v 0) Feb 20 04:45:16 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mgr metadata", "who": "np0005625204.exgrzx", "id": "np0005625204.exgrzx"} : dispatch Feb 20 04:45:16 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005625200.ypbkax", "id": "np0005625200.ypbkax"} v 0) Feb 20 04:45:16 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mgr metadata", "who": "np0005625200.ypbkax", "id": "np0005625200.ypbkax"} : dispatch Feb 20 04:45:16 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005625201.mtnyvu", "id": "np0005625201.mtnyvu"} v 0) Feb 20 04:45:16 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mgr metadata", "who": "np0005625201.mtnyvu", "id": "np0005625201.mtnyvu"} : dispatch Feb 20 04:45:16 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) Feb 20 04:45:16 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "osd metadata", "id": 0} : dispatch Feb 20 04:45:16 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) Feb 20 04:45:16 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "osd metadata", "id": 1} : 
dispatch Feb 20 04:45:16 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) Feb 20 04:45:16 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "osd metadata", "id": 2} : dispatch Feb 20 04:45:16 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "osd metadata", "id": 3} v 0) Feb 20 04:45:16 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "osd metadata", "id": 3} : dispatch Feb 20 04:45:16 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "osd metadata", "id": 4} v 0) Feb 20 04:45:16 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "osd metadata", "id": 4} : dispatch Feb 20 04:45:16 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "osd metadata", "id": 5} v 0) Feb 20 04:45:16 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "osd metadata", "id": 5} : dispatch Feb 20 04:45:16 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "mds metadata"} v 0) Feb 20 04:45:16 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mds metadata"} : dispatch Feb 20 04:45:16 localhost ceph-mon[292786]: mon.np0005625202@2(peon).mds e17 all = 1 Feb 20 04:45:16 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "osd metadata"} v 0) Feb 20 04:45:16 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : 
from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "osd metadata"} : dispatch Feb 20 04:45:16 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "mon metadata"} v 0) Feb 20 04:45:16 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata"} : dispatch Feb 20 04:45:16 localhost systemd[1]: session-69.scope: Deactivated successfully. Feb 20 04:45:16 localhost systemd[1]: session-69.scope: Consumed 23.272s CPU time. Feb 20 04:45:16 localhost systemd-logind[760]: Session 69 logged out. Waiting for processes to exit. Feb 20 04:45:16 localhost systemd-logind[760]: Removed session 69. Feb 20 04:45:16 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: ignoring --setuser ceph since I am not root Feb 20 04:45:16 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: ignoring --setgroup ceph since I am not root Feb 20 04:45:16 localhost ceph-mgr[286565]: ceph version 18.2.1-381.el9cp (984f410e2a30899deb131725765b62212b1621db) reef (stable), process ceph-mgr, pid 2 Feb 20 04:45:16 localhost ceph-mgr[286565]: pidfile_write: ignore empty --pid-file Feb 20 04:45:16 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix":"config-key del","key":"mgr/cephadm/host.np0005625200.localdomain.devices.0"} v 0) Feb 20 04:45:16 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005625200.localdomain.devices.0"} : dispatch Feb 20 04:45:16 localhost ceph-mgr[286565]: mgr[py] Loading python module 'alerts' Feb 20 04:45:16 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix":"config-key 
del","key":"mgr/cephadm/host.np0005625200.localdomain.devices.0"} v 0) Feb 20 04:45:16 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005625200.localdomain.devices.0"} : dispatch Feb 20 04:45:16 localhost ceph-mgr[286565]: mgr[py] Module alerts has missing NOTIFY_TYPES member Feb 20 04:45:16 localhost ceph-mgr[286565]: mgr[py] Loading python module 'balancer' Feb 20 04:45:16 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:45:16.774+0000 7f74e5939140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member Feb 20 04:45:16 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625203.lonygy/mirror_snapshot_schedule"} v 0) Feb 20 04:45:16 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625203.lonygy/mirror_snapshot_schedule"} : dispatch Feb 20 04:45:16 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625203.lonygy/trash_purge_schedule"} v 0) Feb 20 04:45:16 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625203.lonygy/trash_purge_schedule"} : dispatch Feb 20 04:45:16 localhost ceph-mgr[286565]: mgr[py] Module balancer has missing NOTIFY_TYPES member Feb 20 04:45:16 localhost ceph-mgr[286565]: mgr[py] Loading python module 'cephadm' Feb 20 04:45:16 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:45:16.849+0000 7f74e5939140 -1 mgr[py] 
Module balancer has missing NOTIFY_TYPES member Feb 20 04:45:16 localhost sshd[298905]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:45:17 localhost systemd-logind[760]: New session 70 of user ceph-admin. Feb 20 04:45:17 localhost systemd[1]: Started Session 70 of User ceph-admin. Feb 20 04:45:17 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:45:17 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:45:17 localhost ceph-mon[292786]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 04:45:17 localhost ceph-mon[292786]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' Feb 20 04:45:17 localhost ceph-mon[292786]: from='client.? 172.18.0.200:0/2448153276' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Feb 20 04:45:17 localhost ceph-mon[292786]: Activating manager daemon np0005625203.lonygy Feb 20 04:45:17 localhost ceph-mon[292786]: from='client.? 
172.18.0.200:0/2448153276' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Feb 20 04:45:17 localhost ceph-mon[292786]: Manager daemon np0005625203.lonygy is now available Feb 20 04:45:17 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005625200.localdomain.devices.0"} : dispatch Feb 20 04:45:17 localhost ceph-mon[292786]: removing stray HostCache host record np0005625200.localdomain.devices.0 Feb 20 04:45:17 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005625200.localdomain.devices.0"} : dispatch Feb 20 04:45:17 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005625200.localdomain.devices.0"}]': finished Feb 20 04:45:17 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005625200.localdomain.devices.0"} : dispatch Feb 20 04:45:17 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005625200.localdomain.devices.0"} : dispatch Feb 20 04:45:17 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005625200.localdomain.devices.0"}]': finished Feb 20 04:45:17 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625203.lonygy/mirror_snapshot_schedule"} : dispatch Feb 20 04:45:17 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625203.lonygy/mirror_snapshot_schedule"} : dispatch 
Feb 20 04:45:17 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625203.lonygy/trash_purge_schedule"} : dispatch Feb 20 04:45:17 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625203.lonygy/trash_purge_schedule"} : dispatch Feb 20 04:45:17 localhost ceph-mon[292786]: mon.np0005625202@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:45:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:45:17.261 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:45:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:45:17.261 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:45:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:45:17.262 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:45:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:45:17.262 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:45:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:45:17.262 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:45:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:45:17.262 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:45:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:45:17.262 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:45:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:45:17.262 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:45:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:45:17.262 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:45:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:45:17.262 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:45:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:45:17.263 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:45:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:45:17.263 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 
04:45:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:45:17.263 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:45:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:45:17.263 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:45:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:45:17.263 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:45:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:45:17.263 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:45:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:45:17.263 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:45:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:45:17.264 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:45:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:45:17.264 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:45:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 
09:45:17.264 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:45:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:45:17.264 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:45:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:45:17.264 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:45:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:45:17.264 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:45:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:45:17.264 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:45:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:45:17.264 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:45:17 localhost ceph-mgr[286565]: mgr[py] Loading python module 'crash' Feb 20 04:45:17 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Feb 20 04:45:17 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:45:17.502+0000 7f74e5939140 -1 mgr[py] Module crash has missing 
NOTIFY_TYPES member Feb 20 04:45:17 localhost ceph-mgr[286565]: mgr[py] Module crash has missing NOTIFY_TYPES member Feb 20 04:45:17 localhost ceph-mgr[286565]: mgr[py] Loading python module 'dashboard' Feb 20 04:45:17 localhost sshd[299008]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:45:18 localhost ceph-mgr[286565]: mgr[py] Loading python module 'devicehealth' Feb 20 04:45:18 localhost systemd[1]: tmp-crun.2fUSyu.mount: Deactivated successfully. Feb 20 04:45:18 localhost podman[299022]: 2026-02-20 09:45:18.0632918 +0000 UTC m=+0.101449269 container exec 17bcd3d9a6436ea04160ed4c4e04d839e51c8678402dc4e38301f79a98b3dc30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202, description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, release=1770267347, version=7, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, RELEASE=main, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , vcs-type=git, GIT_BRANCH=main) Feb 20 04:45:18 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 
2026-02-20T09:45:18.066+0000 7f74e5939140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member Feb 20 04:45:18 localhost ceph-mgr[286565]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member Feb 20 04:45:18 localhost ceph-mgr[286565]: mgr[py] Loading python module 'diskprediction_local' Feb 20 04:45:18 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. Feb 20 04:45:18 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
Feb 20 04:45:18 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: from numpy import show_config as show_numpy_config Feb 20 04:45:18 localhost podman[299022]: 2026-02-20 09:45:18.205914053 +0000 UTC m=+0.244071532 container exec_died 17bcd3d9a6436ea04160ed4c4e04d839e51c8678402dc4e38301f79a98b3dc30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.openshift.tags=rhceph ceph, architecture=x86_64, release=1770267347, version=7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , ceph=True, io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, distribution-scope=public, GIT_CLEAN=True, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, name=rhceph, description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers) Feb 20 04:45:18 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:45:18.208+0000 7f74e5939140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member Feb 20 04:45:18 localhost ceph-mgr[286565]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member Feb 20 04:45:18 localhost ceph-mgr[286565]: mgr[py] Loading python module 
'influx' Feb 20 04:45:18 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:45:18.269+0000 7f74e5939140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member Feb 20 04:45:18 localhost ceph-mgr[286565]: mgr[py] Module influx has missing NOTIFY_TYPES member Feb 20 04:45:18 localhost ceph-mgr[286565]: mgr[py] Loading python module 'insights' Feb 20 04:45:18 localhost ceph-mgr[286565]: mgr[py] Loading python module 'iostat' Feb 20 04:45:18 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:45:18.385+0000 7f74e5939140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member Feb 20 04:45:18 localhost ceph-mgr[286565]: mgr[py] Module iostat has missing NOTIFY_TYPES member Feb 20 04:45:18 localhost ceph-mgr[286565]: mgr[py] Loading python module 'k8sevents' Feb 20 04:45:18 localhost ceph-mgr[286565]: mgr[py] Loading python module 'localpool' Feb 20 04:45:18 localhost ceph-mgr[286565]: mgr[py] Loading python module 'mds_autoscaler' Feb 20 04:45:18 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625201.localdomain.devices.0}] v 0) Feb 20 04:45:18 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625201.localdomain}] v 0) Feb 20 04:45:18 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0) Feb 20 04:45:18 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0) Feb 20 04:45:18 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0) Feb 20 04:45:18 localhost ceph-mon[292786]: 
mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0) Feb 20 04:45:18 localhost ceph-mgr[286565]: mgr[py] Loading python module 'mirroring' Feb 20 04:45:18 localhost ceph-mgr[286565]: mgr[py] Loading python module 'nfs' Feb 20 04:45:19 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0) Feb 20 04:45:19 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0) Feb 20 04:45:19 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:45:19.120+0000 7f74e5939140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member Feb 20 04:45:19 localhost ceph-mgr[286565]: mgr[py] Module nfs has missing NOTIFY_TYPES member Feb 20 04:45:19 localhost ceph-mgr[286565]: mgr[py] Loading python module 'orchestrator' Feb 20 04:45:19 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:45:19.317+0000 7f74e5939140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member Feb 20 04:45:19 localhost ceph-mgr[286565]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member Feb 20 04:45:19 localhost ceph-mgr[286565]: mgr[py] Loading python module 'osd_perf_query' Feb 20 04:45:19 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:45:19.388+0000 7f74e5939140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member Feb 20 04:45:19 localhost ceph-mgr[286565]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member Feb 20 04:45:19 localhost ceph-mgr[286565]: mgr[py] Loading python module 'osd_support' Feb 20 04:45:19 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:45:19.446+0000 7f74e5939140 -1 
mgr[py] Module osd_support has missing NOTIFY_TYPES member Feb 20 04:45:19 localhost ceph-mgr[286565]: mgr[py] Module osd_support has missing NOTIFY_TYPES member Feb 20 04:45:19 localhost ceph-mgr[286565]: mgr[py] Loading python module 'pg_autoscaler' Feb 20 04:45:19 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Feb 20 04:45:19 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:45:19.514+0000 7f74e5939140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member Feb 20 04:45:19 localhost ceph-mgr[286565]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member Feb 20 04:45:19 localhost ceph-mgr[286565]: mgr[py] Loading python module 'progress' Feb 20 04:45:19 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Feb 20 04:45:19 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625203"} v 0) Feb 20 04:45:19 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch Feb 20 04:45:19 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:45:19.575+0000 7f74e5939140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member Feb 20 04:45:19 localhost ceph-mgr[286565]: mgr[py] Module progress has missing NOTIFY_TYPES member Feb 20 04:45:19 localhost ceph-mgr[286565]: mgr[py] Loading python module 'prometheus' Feb 20 04:45:19 localhost ceph-mon[292786]: [20/Feb/2026:09:45:18] ENGINE Bus STARTING Feb 20 04:45:19 localhost ceph-mon[292786]: [20/Feb/2026:09:45:18] ENGINE Serving on http://172.18.0.107:8765 Feb 20 04:45:19 localhost ceph-mon[292786]: [20/Feb/2026:09:45:18] ENGINE Serving on 
https://172.18.0.107:7150 Feb 20 04:45:19 localhost ceph-mon[292786]: [20/Feb/2026:09:45:18] ENGINE Bus STARTED Feb 20 04:45:19 localhost ceph-mon[292786]: [20/Feb/2026:09:45:18] ENGINE Client ('172.18.0.107', 58052) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Feb 20 04:45:19 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:19 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:19 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:19 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:19 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:19 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:19 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:19 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:19 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:45:19.872+0000 7f74e5939140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member Feb 20 04:45:19 localhost ceph-mgr[286565]: mgr[py] Module prometheus has missing NOTIFY_TYPES member Feb 20 04:45:19 localhost ceph-mgr[286565]: mgr[py] Loading python module 'rbd_support' Feb 20 04:45:19 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:45:19.953+0000 7f74e5939140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member Feb 20 04:45:19 localhost ceph-mgr[286565]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member Feb 20 04:45:19 localhost ceph-mgr[286565]: mgr[py] Loading python module 'restful' Feb 20 04:45:20 localhost ceph-mgr[286565]: mgr[py] Loading python module 'rgw' Feb 20 04:45:20 localhost 
ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:45:20.270+0000 7f74e5939140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member Feb 20 04:45:20 localhost ceph-mgr[286565]: mgr[py] Module rgw has missing NOTIFY_TYPES member Feb 20 04:45:20 localhost ceph-mgr[286565]: mgr[py] Loading python module 'rook' Feb 20 04:45:20 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0) Feb 20 04:45:20 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0) Feb 20 04:45:20 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0) Feb 20 04:45:20 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625201.localdomain.devices.0}] v 0) Feb 20 04:45:20 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) Feb 20 04:45:20 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Feb 20 04:45:20 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0) Feb 20 04:45:20 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625201.localdomain}] v 0) Feb 20 04:45:20 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "config rm", "who": "osd.4", 
"name": "osd_memory_target"} v 0) Feb 20 04:45:20 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Feb 20 04:45:20 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) Feb 20 04:45:20 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Feb 20 04:45:20 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "config rm", "who": "osd/host:np0005625201", "name": "osd_memory_target"} v 0) Feb 20 04:45:20 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config rm", "who": "osd/host:np0005625201", "name": "osd_memory_target"} : dispatch Feb 20 04:45:20 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Feb 20 04:45:20 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} v 0) Feb 20 04:45:20 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Feb 20 04:45:20 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Feb 20 04:45:20 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625203"} v 0) Feb 20 04:45:20 
localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch Feb 20 04:45:20 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:45:20.663+0000 7f74e5939140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member Feb 20 04:45:20 localhost ceph-mgr[286565]: mgr[py] Module rook has missing NOTIFY_TYPES member Feb 20 04:45:20 localhost ceph-mgr[286565]: mgr[py] Loading python module 'selftest' Feb 20 04:45:20 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0) Feb 20 04:45:20 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:45:20.725+0000 7f74e5939140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member Feb 20 04:45:20 localhost ceph-mgr[286565]: mgr[py] Module selftest has missing NOTIFY_TYPES member Feb 20 04:45:20 localhost ceph-mgr[286565]: mgr[py] Loading python module 'snap_schedule' Feb 20 04:45:20 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0) Feb 20 04:45:20 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) Feb 20 04:45:20 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Feb 20 04:45:20 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} v 0) Feb 20 04:45:20 localhost ceph-mon[292786]: log_channel(audit) log [INF] : 
from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Feb 20 04:45:20 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Feb 20 04:45:20 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:45:20 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 04:45:20 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 20 04:45:20 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 04:45:20 localhost ceph-mgr[286565]: mgr[py] Loading python module 'stats' Feb 20 04:45:20 localhost ceph-mgr[286565]: mgr[py] Loading python module 'status' Feb 20 04:45:20 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:45:20.917+0000 7f74e5939140 -1 mgr[py] Module status has missing NOTIFY_TYPES member Feb 20 04:45:20 localhost ceph-mgr[286565]: mgr[py] Module status has missing NOTIFY_TYPES member Feb 20 04:45:20 localhost ceph-mgr[286565]: mgr[py] Loading python module 'telegraf' Feb 20 04:45:20 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:45:20.979+0000 7f74e5939140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member Feb 20 04:45:20 localhost ceph-mgr[286565]: mgr[py] Module telegraf has missing NOTIFY_TYPES member Feb 20 04:45:20 localhost ceph-mgr[286565]: mgr[py] Loading python module 'telemetry' Feb 20 04:45:21 localhost 
ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:21 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:21 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Feb 20 04:45:21 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:21 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Feb 20 04:45:21 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:21 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:21 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Feb 20 04:45:21 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Feb 20 04:45:21 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Feb 20 04:45:21 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Feb 20 04:45:21 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:21 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config rm", "who": "osd/host:np0005625201", "name": "osd_memory_target"} : dispatch Feb 20 04:45:21 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": 
"config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Feb 20 04:45:21 localhost ceph-mon[292786]: Adjusting osd_memory_target on np0005625203.localdomain to 836.6M Feb 20 04:45:21 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config rm", "who": "osd/host:np0005625201", "name": "osd_memory_target"} : dispatch Feb 20 04:45:21 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Feb 20 04:45:21 localhost ceph-mon[292786]: Unable to set osd_memory_target on np0005625203.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 20 04:45:21 localhost ceph-mon[292786]: Adjusting osd_memory_target on np0005625202.localdomain to 836.6M Feb 20 04:45:21 localhost ceph-mon[292786]: Unable to set osd_memory_target on np0005625202.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 20 04:45:21 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:21 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:21 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Feb 20 04:45:21 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Feb 20 04:45:21 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Feb 20 04:45:21 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config rm", "who": "osd.3", 
"name": "osd_memory_target"} : dispatch Feb 20 04:45:21 localhost ceph-mon[292786]: Adjusting osd_memory_target on np0005625204.localdomain to 836.6M Feb 20 04:45:21 localhost ceph-mon[292786]: Unable to set osd_memory_target on np0005625204.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 20 04:45:21 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 04:45:21 localhost ceph-mon[292786]: Updating np0005625201.localdomain:/etc/ceph/ceph.conf Feb 20 04:45:21 localhost ceph-mon[292786]: Updating np0005625202.localdomain:/etc/ceph/ceph.conf Feb 20 04:45:21 localhost ceph-mon[292786]: Updating np0005625203.localdomain:/etc/ceph/ceph.conf Feb 20 04:45:21 localhost ceph-mon[292786]: Updating np0005625204.localdomain:/etc/ceph/ceph.conf Feb 20 04:45:21 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:45:21.112+0000 7f74e5939140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member Feb 20 04:45:21 localhost ceph-mgr[286565]: mgr[py] Module telemetry has missing NOTIFY_TYPES member Feb 20 04:45:21 localhost ceph-mgr[286565]: mgr[py] Loading python module 'test_orchestrator' Feb 20 04:45:21 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:45:21.258+0000 7f74e5939140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member Feb 20 04:45:21 localhost ceph-mgr[286565]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member Feb 20 04:45:21 localhost ceph-mgr[286565]: mgr[py] Loading python module 'volumes' Feb 20 04:45:21 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:45:21.465+0000 7f74e5939140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member Feb 20 04:45:21 localhost ceph-mgr[286565]: mgr[py] Module volumes has missing 
NOTIFY_TYPES member Feb 20 04:45:21 localhost ceph-mgr[286565]: mgr[py] Loading python module 'zabbix' Feb 20 04:45:21 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:45:21.525+0000 7f74e5939140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member Feb 20 04:45:21 localhost ceph-mgr[286565]: mgr[py] Module zabbix has missing NOTIFY_TYPES member Feb 20 04:45:21 localhost ceph-mgr[286565]: ms_deliver_dispatch: unhandled message 0x55d2fffb51e0 mon_map magic: 0 from mon.2 v2:172.18.0.103:3300/0 Feb 20 04:45:21 localhost ceph-mgr[286565]: client.0 ms_handle_reset on v2:172.18.0.107:6810/2084071713 Feb 20 04:45:21 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Feb 20 04:45:21 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625203"} v 0) Feb 20 04:45:21 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch Feb 20 04:45:21 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Feb 20 04:45:22 localhost ceph-mon[292786]: mon.np0005625202@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:45:22 localhost ceph-mon[292786]: Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:45:22 localhost ceph-mon[292786]: Updating np0005625201.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:45:22 localhost ceph-mon[292786]: Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:45:22 localhost ceph-mon[292786]: Updating 
np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:45:22 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005625202.arwxwo", "id": "np0005625202.arwxwo"} v 0) Feb 20 04:45:22 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mgr metadata", "who": "np0005625202.arwxwo", "id": "np0005625202.arwxwo"} : dispatch Feb 20 04:45:22 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625203"} v 0) Feb 20 04:45:22 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch Feb 20 04:45:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c. Feb 20 04:45:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217. Feb 20 04:45:23 localhost podman[299817]: 2026-02-20 09:45:23.202495709 +0000 UTC m=+0.085899089 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, io.openshift.tags=minimal rhel9, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, io.buildah.version=1.33.7, name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1770267347, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-type=git, version=9.7, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=Red Hat, Inc., managed_by=edpm_ansible) Feb 20 04:45:23 localhost podman[299817]: 2026-02-20 09:45:23.215014366 +0000 UTC m=+0.098417766 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, maintainer=Red Hat, Inc., managed_by=edpm_ansible, version=9.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, name=ubi9/ubi-minimal, com.redhat.component=ubi9-minimal-container, architecture=x86_64, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc.) Feb 20 04:45:23 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. Feb 20 04:45:23 localhost systemd[1]: tmp-crun.AJgQLb.mount: Deactivated successfully. Feb 20 04:45:23 localhost podman[299818]: 2026-02-20 09:45:23.314356256 +0000 UTC m=+0.193159943 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': 
['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute) Feb 20 04:45:23 localhost podman[299818]: 2026-02-20 09:45:23.351120924 +0000 UTC m=+0.229924551 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': 
['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, tcib_managed=true) Feb 20 04:45:23 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully. Feb 20 04:45:23 localhost ceph-mon[292786]: Updating np0005625201.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 20 04:45:23 localhost ceph-mon[292786]: Updating np0005625202.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 20 04:45:23 localhost ceph-mon[292786]: Updating np0005625203.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 20 04:45:23 localhost ceph-mon[292786]: Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring Feb 20 04:45:23 localhost ceph-mon[292786]: Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring Feb 20 04:45:23 localhost ceph-mon[292786]: Updating np0005625204.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 20 04:45:23 localhost ceph-mon[292786]: Updating np0005625201.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring Feb 20 04:45:23 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0) Feb 20 04:45:23 localhost 
ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0) Feb 20 04:45:23 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625201.localdomain.devices.0}] v 0) Feb 20 04:45:23 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0) Feb 20 04:45:23 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0) Feb 20 04:45:23 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625203"} v 0) Feb 20 04:45:23 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch Feb 20 04:45:23 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625201.localdomain}] v 0) Feb 20 04:45:23 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Feb 20 04:45:24 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0) Feb 20 04:45:24 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0) Feb 20 04:45:24 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 20 04:45:24 localhost ceph-mon[292786]: 
mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 20 04:45:24 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 20 04:45:24 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) Feb 20 04:45:24 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Feb 20 04:45:24 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:45:24 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 04:45:24 localhost ceph-mon[292786]: Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring Feb 20 04:45:24 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:24 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:24 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:24 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:24 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:24 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:24 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:24 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 
04:45:24 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:24 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Feb 20 04:45:24 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625203"} v 0) Feb 20 04:45:24 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch Feb 20 04:45:25 localhost podman[300017]: Feb 20 04:45:25 localhost podman[300017]: 2026-02-20 09:45:25.086372204 +0000 UTC m=+0.068860392 container create 6ad4d358f79465f16a71785468d3ba4acfd74381d4fba8a151ffa671dc759969 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_booth, org.opencontainers.image.created=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, GIT_BRANCH=main, RELEASE=main, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, release=1770267347, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=) Feb 20 04:45:25 localhost systemd[1]: Started libpod-conmon-6ad4d358f79465f16a71785468d3ba4acfd74381d4fba8a151ffa671dc759969.scope. Feb 20 04:45:25 localhost systemd[1]: Started libcrun container. Feb 20 04:45:25 localhost podman[300017]: 2026-02-20 09:45:25.157223898 +0000 UTC m=+0.139712086 container init 6ad4d358f79465f16a71785468d3ba4acfd74381d4fba8a151ffa671dc759969 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_booth, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, com.redhat.component=rhceph-container, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_BRANCH=main, version=7, io.openshift.tags=rhceph ceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, release=1770267347, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, maintainer=Guillaume Abrioux , RELEASE=main, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Feb 20 04:45:25 localhost podman[300017]: 2026-02-20 09:45:25.060335923 +0000 UTC m=+0.042824141 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 04:45:25 localhost podman[300017]: 2026-02-20 09:45:25.168602934 +0000 UTC m=+0.151091132 container start 6ad4d358f79465f16a71785468d3ba4acfd74381d4fba8a151ffa671dc759969 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_booth, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, maintainer=Guillaume Abrioux , distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, build-date=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.openshift.tags=rhceph ceph, name=rhceph, architecture=x86_64, version=7, io.buildah.version=1.42.2, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc.) 
Feb 20 04:45:25 localhost podman[300017]: 2026-02-20 09:45:25.169022685 +0000 UTC m=+0.151510913 container attach 6ad4d358f79465f16a71785468d3ba4acfd74381d4fba8a151ffa671dc759969 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_booth, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.buildah.version=1.42.2, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, vcs-type=git, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, vendor=Red Hat, Inc., release=1770267347, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, architecture=x86_64, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph) Feb 20 04:45:25 localhost serene_booth[300033]: 167 167 Feb 20 04:45:25 localhost systemd[1]: libpod-6ad4d358f79465f16a71785468d3ba4acfd74381d4fba8a151ffa671dc759969.scope: Deactivated successfully. 
Feb 20 04:45:25 localhost podman[300017]: 2026-02-20 09:45:25.17442726 +0000 UTC m=+0.156915448 container died 6ad4d358f79465f16a71785468d3ba4acfd74381d4fba8a151ffa671dc759969 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_booth, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, RELEASE=main, ceph=True, version=7, vcs-type=git, io.buildah.version=1.42.2, io.openshift.tags=rhceph ceph, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, org.opencontainers.image.created=2026-02-09T10:25:24Z, distribution-scope=public, io.openshift.expose-services=, release=1770267347, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc.) 
Feb 20 04:45:25 localhost podman[300038]: 2026-02-20 09:45:25.387654191 +0000 UTC m=+0.199612856 container remove 6ad4d358f79465f16a71785468d3ba4acfd74381d4fba8a151ffa671dc759969 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_booth, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, RELEASE=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, release=1770267347, GIT_BRANCH=main, io.buildah.version=1.42.2, maintainer=Guillaume Abrioux , distribution-scope=public, build-date=2026-02-09T10:25:24Z, ceph=True, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, version=7) Feb 20 04:45:25 localhost systemd[1]: libpod-conmon-6ad4d358f79465f16a71785468d3ba4acfd74381d4fba8a151ffa671dc759969.scope: Deactivated successfully. 
Feb 20 04:45:25 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625203"} v 0) Feb 20 04:45:25 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch Feb 20 04:45:25 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0) Feb 20 04:45:25 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Feb 20 04:45:25 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Feb 20 04:45:26 localhost systemd[1]: var-lib-containers-storage-overlay-d77d73e99506a1a9436c482e3875eed0141eb1a21fc187da51718af9d46cedf9-merged.mount: Deactivated successfully. 
Feb 20 04:45:26 localhost ceph-mon[292786]: Reconfiguring daemon osd.2 on np0005625202.localdomain Feb 20 04:45:26 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0) Feb 20 04:45:26 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0) Feb 20 04:45:26 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0) Feb 20 04:45:26 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "osd.5"} v 0) Feb 20 04:45:26 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Feb 20 04:45:26 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:45:26 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 04:45:26 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625203"} v 0) Feb 20 04:45:26 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch Feb 20 04:45:26 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 20 04:45:26 localhost podman[300116]: Feb 20 04:45:26 localhost podman[300116]: 2026-02-20 09:45:26.906939647 +0000 
UTC m=+0.074856544 container create e7aea42811e313c54af470ceaacbea558866fd9eb1f9b05b2661428f5955a4c9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_goodall, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1770267347, io.buildah.version=1.42.2, ceph=True, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , version=7, architecture=x86_64, build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, name=rhceph, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 04:45:26 localhost systemd[1]: Started libpod-conmon-e7aea42811e313c54af470ceaacbea558866fd9eb1f9b05b2661428f5955a4c9.scope. Feb 20 04:45:26 localhost systemd[1]: Started libcrun container. 
Feb 20 04:45:26 localhost podman[300116]: 2026-02-20 09:45:26.875804249 +0000 UTC m=+0.043721136 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 04:45:26 localhost podman[300116]: 2026-02-20 09:45:26.977727989 +0000 UTC m=+0.145644856 container init e7aea42811e313c54af470ceaacbea558866fd9eb1f9b05b2661428f5955a4c9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_goodall, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, RELEASE=main, maintainer=Guillaume Abrioux , io.openshift.expose-services=, GIT_CLEAN=True, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, architecture=x86_64, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., GIT_BRANCH=main, release=1770267347, ceph=True) Feb 20 04:45:26 localhost podman[300116]: 2026-02-20 09:45:26.987162982 +0000 UTC m=+0.155079839 container start e7aea42811e313c54af470ceaacbea558866fd9eb1f9b05b2661428f5955a4c9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_goodall, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, release=1770267347, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, 
vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , build-date=2026-02-09T10:25:24Z, ceph=True, version=7, RELEASE=main, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, name=rhceph, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14) Feb 20 04:45:26 localhost infallible_goodall[300131]: 167 167 Feb 20 04:45:26 localhost podman[300116]: 2026-02-20 09:45:26.989596918 +0000 UTC m=+0.157513785 container attach e7aea42811e313c54af470ceaacbea558866fd9eb1f9b05b2661428f5955a4c9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_goodall, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vcs-type=git, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.42.2, 
release=1770267347, name=rhceph, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, distribution-scope=public, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers) Feb 20 04:45:26 localhost systemd[1]: libpod-e7aea42811e313c54af470ceaacbea558866fd9eb1f9b05b2661428f5955a4c9.scope: Deactivated successfully. Feb 20 04:45:26 localhost podman[300116]: 2026-02-20 09:45:26.993598295 +0000 UTC m=+0.161515192 container died e7aea42811e313c54af470ceaacbea558866fd9eb1f9b05b2661428f5955a4c9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_goodall, maintainer=Guillaume Abrioux , vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, GIT_CLEAN=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, version=7, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container) Feb 20 04:45:27 localhost systemd[1]: 
var-lib-containers-storage-overlay-7f408a633fb77ae96d6ff931be2763008a15a5e081af8824e7056eec0819d3b3-merged.mount: Deactivated successfully. Feb 20 04:45:27 localhost podman[300136]: 2026-02-20 09:45:27.098210887 +0000 UTC m=+0.092100906 container remove e7aea42811e313c54af470ceaacbea558866fd9eb1f9b05b2661428f5955a4c9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_goodall, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, GIT_BRANCH=main, maintainer=Guillaume Abrioux , architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, GIT_CLEAN=True, RELEASE=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.openshift.expose-services=, ceph=True, io.buildah.version=1.42.2, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7) Feb 20 04:45:27 localhost systemd[1]: libpod-conmon-e7aea42811e313c54af470ceaacbea558866fd9eb1f9b05b2661428f5955a4c9.scope: Deactivated successfully. 
Feb 20 04:45:27 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:27 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:27 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:27 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:27 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Feb 20 04:45:27 localhost ceph-mon[292786]: Reconfiguring daemon osd.5 on np0005625202.localdomain Feb 20 04:45:27 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:27 localhost ceph-mon[292786]: mon.np0005625202@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:45:27 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0) Feb 20 04:45:27 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0) Feb 20 04:45:27 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0) Feb 20 04:45:27 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0) Feb 20 04:45:27 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) Feb 20 04:45:27 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": 
"osd.1"} : dispatch Feb 20 04:45:27 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:45:27 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 04:45:27 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625203"} v 0) Feb 20 04:45:27 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch Feb 20 04:45:27 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Feb 20 04:45:27 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Feb 20 04:45:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 04:45:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. 
Feb 20 04:45:27 localhost podman[300160]: 2026-02-20 09:45:27.996787969 +0000 UTC m=+0.085478749 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:45:28 localhost podman[300161]: 2026-02-20 09:45:28.045249982 +0000 UTC m=+0.132386019 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:45:28 localhost podman[300160]: 2026-02-20 09:45:28.06566717 +0000 UTC m=+0.154357970 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base 
Image, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:45:28 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. 
Feb 20 04:45:28 localhost podman[300161]: 2026-02-20 09:45:28.077732264 +0000 UTC m=+0.164868301 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Feb 20 04:45:28 localhost systemd[1]: 
eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. Feb 20 04:45:28 localhost openstack_network_exporter[243776]: ERROR 09:45:28 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 20 04:45:28 localhost openstack_network_exporter[243776]: Feb 20 04:45:28 localhost openstack_network_exporter[243776]: ERROR 09:45:28 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 20 04:45:28 localhost openstack_network_exporter[243776]: Feb 20 04:45:28 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:28 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:28 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:28 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:28 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Feb 20 04:45:28 localhost ceph-mon[292786]: Reconfiguring daemon osd.1 on np0005625203.localdomain Feb 20 04:45:28 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0) Feb 20 04:45:28 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0) Feb 20 04:45:28 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0) Feb 20 04:45:28 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0) Feb 20 04:45:28 localhost ceph-mon[292786]: 
mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "osd.4"} v 0) Feb 20 04:45:28 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Feb 20 04:45:28 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:45:28 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 04:45:28 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625203"} v 0) Feb 20 04:45:28 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch Feb 20 04:45:29 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0) Feb 20 04:45:29 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:29 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:29 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:29 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:29 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Feb 20 04:45:29 localhost ceph-mon[292786]: Reconfiguring daemon osd.4 on np0005625203.localdomain Feb 20 04:45:29 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key 
set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0) Feb 20 04:45:29 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0) Feb 20 04:45:29 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0) Feb 20 04:45:29 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) Feb 20 04:45:29 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Feb 20 04:45:29 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:45:29 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 04:45:29 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625203"} v 0) Feb 20 04:45:29 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch Feb 20 04:45:29 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Feb 20 04:45:29 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Feb 20 04:45:30 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:30 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 
04:45:30 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:30 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:30 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Feb 20 04:45:30 localhost ceph-mon[292786]: Reconfiguring daemon osd.0 on np0005625204.localdomain Feb 20 04:45:30 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0) Feb 20 04:45:30 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0) Feb 20 04:45:30 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0) Feb 20 04:45:30 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0) Feb 20 04:45:30 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625203"} v 0) Feb 20 04:45:30 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch Feb 20 04:45:30 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "osd.3"} v 0) Feb 20 04:45:30 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Feb 20 04:45:30 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command 
mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:45:30 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 04:45:30 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) Feb 20 04:45:31 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:31 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:31 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:31 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:31 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Feb 20 04:45:31 localhost ceph-mon[292786]: Reconfiguring daemon osd.3 on np0005625204.localdomain Feb 20 04:45:31 localhost ceph-mon[292786]: Saving service mon spec with placement label:mon Feb 20 04:45:31 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:31 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625203"} v 0) Feb 20 04:45:31 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch Feb 20 04:45:31 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0) Feb 20 04:45:31 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 
0) Feb 20 04:45:31 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0) Feb 20 04:45:31 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0) Feb 20 04:45:31 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) Feb 20 04:45:31 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 20 04:45:31 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) Feb 20 04:45:31 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch Feb 20 04:45:31 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:45:31 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 04:45:31 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Feb 20 04:45:31 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Feb 20 04:45:32 localhost ceph-mon[292786]: mon.np0005625202@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:45:32 localhost 
ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0) Feb 20 04:45:32 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0) Feb 20 04:45:32 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:45:32 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 04:45:32 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 20 04:45:32 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 04:45:32 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:32 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:32 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:32 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:32 localhost ceph-mon[292786]: Reconfiguring mon.np0005625204 (monmap changed)... 
Feb 20 04:45:32 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 20 04:45:32 localhost ceph-mon[292786]: Reconfiguring daemon mon.np0005625204 on np0005625204.localdomain Feb 20 04:45:32 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:32 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:32 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 04:45:32 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 20 04:45:32 localhost ceph-mon[292786]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0. Feb 20 04:45:32 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:45:32.541497) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Feb 20 04:45:32 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25 Feb 20 04:45:32 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580732541540, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 2846, "num_deletes": 256, "total_data_size": 8904954, "memory_usage": 9185488, "flush_reason": "Manual Compaction"} Feb 20 04:45:32 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started Feb 20 04:45:32 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": 
"np0005625203"} v 0) Feb 20 04:45:32 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch Feb 20 04:45:32 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580732574590, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 26, "file_size": 5327455, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14154, "largest_seqno": 16995, "table_properties": {"data_size": 5315499, "index_size": 7501, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3397, "raw_key_size": 30618, "raw_average_key_size": 22, "raw_value_size": 5289399, "raw_average_value_size": 3920, "num_data_blocks": 326, "num_entries": 1349, "num_filter_entries": 1349, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580670, "oldest_key_time": 1771580670, "file_creation_time": 1771580732, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a5b0c71e-1a28-4ac7-8b68-08edb74002f2", "db_session_id": "54EDA52XUT1SDV7DF7Y7", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}} Feb 20 04:45:32 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 33161 microseconds, and 10659 cpu microseconds. 
Feb 20 04:45:32 localhost ceph-mon[292786]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Feb 20 04:45:32 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:45:32.574651) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 5327455 bytes OK Feb 20 04:45:32 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:45:32.574679) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started Feb 20 04:45:32 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:45:32.577924) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done Feb 20 04:45:32 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:45:32.577947) EVENT_LOG_v1 {"time_micros": 1771580732577939, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Feb 20 04:45:32 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:45:32.577972) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Feb 20 04:45:32 localhost ceph-mon[292786]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 8891040, prev total WAL file size 8891731, number of live WAL files 2. Feb 20 04:45:32 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 20 04:45:32 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:45:32.580018) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130353432' seq:72057594037927935, type:22 .. 
'7061786F73003130373934' seq:0, type:0; will stop at (end) Feb 20 04:45:32 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00 Feb 20 04:45:32 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(5202KB)], [24(14MB)] Feb 20 04:45:32 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580732580425, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 20516665, "oldest_snapshot_seqno": -1} Feb 20 04:45:32 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) Feb 20 04:45:32 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 20 04:45:32 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 20 04:45:32 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 10935 keys, 18161164 bytes, temperature: kUnknown Feb 20 04:45:32 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580732684803, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 18161164, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18098565, "index_size": 34146, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27397, "raw_key_size": 291598, "raw_average_key_size": 26, 
"raw_value_size": 17911901, "raw_average_value_size": 1638, "num_data_blocks": 1308, "num_entries": 10935, "num_filter_entries": 10935, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580572, "oldest_key_time": 0, "file_creation_time": 1771580732, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a5b0c71e-1a28-4ac7-8b68-08edb74002f2", "db_session_id": "54EDA52XUT1SDV7DF7Y7", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}} Feb 20 04:45:32 localhost ceph-mon[292786]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Feb 20 04:45:32 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:45:32.685065) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 18161164 bytes Feb 20 04:45:32 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:45:32.687021) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 196.4 rd, 173.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(5.1, 14.5 +0.0 blob) out(17.3 +0.0 blob), read-write-amplify(7.3) write-amplify(3.4) OK, records in: 11484, records dropped: 549 output_compression: NoCompression Feb 20 04:45:32 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:45:32.687052) EVENT_LOG_v1 {"time_micros": 1771580732687038, "job": 12, "event": "compaction_finished", "compaction_time_micros": 104454, "compaction_time_cpu_micros": 41389, "output_level": 6, "num_output_files": 1, "total_output_size": 18161164, "num_input_records": 11484, "num_output_records": 10935, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Feb 20 04:45:32 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 20 04:45:32 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580732687963, "job": 12, "event": "table_file_deletion", "file_number": 26} Feb 20 04:45:32 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 20 04:45:32 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580732689876, 
"job": 12, "event": "table_file_deletion", "file_number": 24} Feb 20 04:45:32 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:45:32.579785) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 04:45:32 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:45:32.689936) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 04:45:32 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:45:32.689942) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 04:45:32 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:45:32.689944) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 04:45:32 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:45:32.689946) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 04:45:32 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:45:32.689948) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 04:45:32 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) Feb 20 04:45:32 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 20 04:45:32 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) Feb 20 04:45:32 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch Feb 20 04:45:32 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command 
mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:45:32 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 04:45:33 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625203"} v 0) Feb 20 04:45:33 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch Feb 20 04:45:33 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:33 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:33 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 20 04:45:33 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Feb 20 04:45:33 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625201.localdomain.devices.0}] v 0) Feb 20 04:45:33 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625201.localdomain}] v 0) Feb 20 04:45:33 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) Feb 20 04:45:33 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 20 04:45:33 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "config get", 
"who": "mon", "key": "public_network"} v 0) Feb 20 04:45:33 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch Feb 20 04:45:33 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:45:33 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 04:45:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1. Feb 20 04:45:34 localhost podman[300237]: 2026-02-20 09:45:34.086861767 +0000 UTC m=+0.069596192 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 20 04:45:34 localhost podman[300237]: 2026-02-20 09:45:34.095971711 +0000 UTC m=+0.078706156 container exec_died 
90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 20 04:45:34 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully. 
Feb 20 04:45:34 localhost podman[300295]: Feb 20 04:45:34 localhost podman[300295]: 2026-02-20 09:45:34.531862527 +0000 UTC m=+0.077241447 container create 24d6d8195d4f8265e007fb403137342ffed2adf521b3fa6bfea562ffefbaaf0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_chatterjee, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, vcs-type=git, maintainer=Guillaume Abrioux , vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, io.openshift.tags=rhceph ceph, build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., distribution-scope=public, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347) Feb 20 04:45:34 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625203"} v 0) Feb 20 04:45:34 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch Feb 20 04:45:34 localhost ceph-mon[292786]: Reconfiguring mon.np0005625201 (monmap changed)... 
Feb 20 04:45:34 localhost ceph-mon[292786]: Reconfiguring daemon mon.np0005625201 on np0005625201.localdomain Feb 20 04:45:34 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:34 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:34 localhost ceph-mon[292786]: Reconfiguring mon.np0005625202 (monmap changed)... Feb 20 04:45:34 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 20 04:45:34 localhost ceph-mon[292786]: Reconfiguring daemon mon.np0005625202 on np0005625202.localdomain Feb 20 04:45:34 localhost systemd[1]: Started libpod-conmon-24d6d8195d4f8265e007fb403137342ffed2adf521b3fa6bfea562ffefbaaf0e.scope. Feb 20 04:45:34 localhost systemd[1]: Started libcrun container. Feb 20 04:45:34 localhost podman[300295]: 2026-02-20 09:45:34.501080839 +0000 UTC m=+0.046459779 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 04:45:34 localhost podman[300295]: 2026-02-20 09:45:34.608350303 +0000 UTC m=+0.153729223 container init 24d6d8195d4f8265e007fb403137342ffed2adf521b3fa6bfea562ffefbaaf0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_chatterjee, io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, name=rhceph, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, description=Red Hat Ceph Storage 7, vcs-type=git, architecture=x86_64, maintainer=Guillaume Abrioux , release=1770267347, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, vendor=Red Hat, Inc., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, 
summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_BRANCH=main, RELEASE=main, build-date=2026-02-09T10:25:24Z, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 20 04:45:34 localhost podman[300295]: 2026-02-20 09:45:34.621045714 +0000 UTC m=+0.166424624 container start 24d6d8195d4f8265e007fb403137342ffed2adf521b3fa6bfea562ffefbaaf0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_chatterjee, io.buildah.version=1.42.2, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, distribution-scope=public, ceph=True, release=1770267347, GIT_BRANCH=main, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.openshift.tags=rhceph ceph, build-date=2026-02-09T10:25:24Z, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=) Feb 20 04:45:34 localhost podman[300295]: 2026-02-20 09:45:34.621914138 +0000 UTC m=+0.167293108 container attach 
24d6d8195d4f8265e007fb403137342ffed2adf521b3fa6bfea562ffefbaaf0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_chatterjee, vendor=Red Hat, Inc., RELEASE=main, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, CEPH_POINT_RELEASE=, vcs-type=git, build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, version=7, name=rhceph, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, org.opencontainers.image.created=2026-02-09T10:25:24Z, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=) Feb 20 04:45:34 localhost cool_chatterjee[300310]: 167 167 Feb 20 04:45:34 localhost systemd[1]: libpod-24d6d8195d4f8265e007fb403137342ffed2adf521b3fa6bfea562ffefbaaf0e.scope: Deactivated successfully. 
Feb 20 04:45:34 localhost podman[300295]: 2026-02-20 09:45:34.62648514 +0000 UTC m=+0.171864110 container died 24d6d8195d4f8265e007fb403137342ffed2adf521b3fa6bfea562ffefbaaf0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_chatterjee, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, name=rhceph, ceph=True, CEPH_POINT_RELEASE=, GIT_BRANCH=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, distribution-scope=public, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, vcs-type=git, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers) Feb 20 04:45:34 localhost podman[300316]: 2026-02-20 09:45:34.676111074 +0000 UTC m=+0.044422405 container remove 24d6d8195d4f8265e007fb403137342ffed2adf521b3fa6bfea562ffefbaaf0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_chatterjee, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, vcs-type=git, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , version=7, io.buildah.version=1.42.2, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, RELEASE=main, release=1770267347, distribution-scope=public, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, io.openshift.expose-services=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, name=rhceph, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 04:45:34 localhost systemd[1]: libpod-conmon-24d6d8195d4f8265e007fb403137342ffed2adf521b3fa6bfea562ffefbaaf0e.scope: Deactivated successfully. Feb 20 04:45:34 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0) Feb 20 04:45:34 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0) Feb 20 04:45:35 localhost systemd[1]: tmp-crun.ZPhB1e.mount: Deactivated successfully. Feb 20 04:45:35 localhost systemd[1]: var-lib-containers-storage-overlay-2e10e5ae7c8229733cb3095e5ee79b51320091ce2708fb2afbb18c0ba71a7923-merged.mount: Deactivated successfully. 
Feb 20 04:45:35 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625203"} v 0) Feb 20 04:45:35 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch Feb 20 04:45:35 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Feb 20 04:45:35 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Feb 20 04:45:35 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:35 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:36 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625203"} v 0) Feb 20 04:45:36 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch Feb 20 04:45:36 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 20 04:45:37 localhost ceph-mon[292786]: mon.np0005625202@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:45:37 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625203"} v 0) Feb 20 04:45:37 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch Feb 20 04:45:37 localhost ceph-mon[292786]: 
mon.np0005625202@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Feb 20 04:45:37 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:38 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625203"} v 0) Feb 20 04:45:38 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch Feb 20 04:45:39 localhost ceph-mgr[286565]: ms_deliver_dispatch: unhandled message 0x55d2fffb51e0 mon_map magic: 0 from mon.2 v2:172.18.0.103:3300/0 Feb 20 04:45:39 localhost ceph-mon[292786]: mon.np0005625202@2(probing) e12 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625201"} v 0) Feb 20 04:45:39 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625201"} : dispatch Feb 20 04:45:39 localhost ceph-mon[292786]: mon.np0005625202@2(probing) e12 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625202"} v 0) Feb 20 04:45:39 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625202"} : dispatch Feb 20 04:45:39 localhost ceph-mon[292786]: mon.np0005625202@2(probing) e12 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625203"} v 0) Feb 20 04:45:39 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch Feb 20 04:45:39 localhost ceph-mon[292786]: mon.np0005625202@2(probing) e12 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625204"} v 0) Feb 20 
04:45:39 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625204"} : dispatch Feb 20 04:45:39 localhost ceph-mon[292786]: log_channel(cluster) log [INF] : mon.np0005625202 calling monitor election Feb 20 04:45:39 localhost ceph-mon[292786]: paxos.2).electionLogic(46) init, last seen epoch 46 Feb 20 04:45:39 localhost ceph-mon[292786]: mon.np0005625202@2(electing) e12 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 20 04:45:39 localhost ceph-mon[292786]: mon.np0005625202@2(electing) e12 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 20 04:45:39 localhost ceph-mon[292786]: mon.np0005625202@2(electing) e12 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625203"} v 0) Feb 20 04:45:39 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch Feb 20 04:45:40 localhost ceph-mon[292786]: mon.np0005625202@2(electing) e12 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625203"} v 0) Feb 20 04:45:40 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch Feb 20 04:45:41 localhost ceph-mon[292786]: mon.np0005625202@2(electing) e12 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625203"} v 0) Feb 20 04:45:41 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch Feb 20 04:45:42 localhost nova_compute[280804]: 2026-02-20 09:45:42.510 280808 DEBUG oslo_service.periodic_task [None 
req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:45:42 localhost nova_compute[280804]: 2026-02-20 09:45:42.511 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Feb 20 04:45:42 localhost ceph-mon[292786]: mon.np0005625202@2(electing) e12 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625203"} v 0) Feb 20 04:45:42 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch Feb 20 04:45:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e. Feb 20 04:45:43 localhost systemd[293679]: Starting Mark boot as successful... 
Feb 20 04:45:43 localhost podman[300332]: 2026-02-20 09:45:43.452411741 +0000 UTC m=+0.083696010 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 20 04:45:43 localhost systemd[293679]: Finished Mark boot as successful. 
Feb 20 04:45:43 localhost podman[300332]: 2026-02-20 09:45:43.464843845 +0000 UTC m=+0.096128144 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 20 04:45:43 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully. 
Feb 20 04:45:43 localhost ceph-mon[292786]: mon.np0005625202@2(electing) e12 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625203"} v 0) Feb 20 04:45:43 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch Feb 20 04:45:43 localhost ceph-mon[292786]: mon.np0005625202@2(electing) e12 handle_auth_request failed to assign global_id Feb 20 04:45:43 localhost ceph-mon[292786]: mon.np0005625202@2(electing) e12 handle_auth_request failed to assign global_id Feb 20 04:45:44 localhost ceph-mon[292786]: mon.np0005625202@2(electing) e12 handle_auth_request failed to assign global_id Feb 20 04:45:44 localhost ceph-mon[292786]: mon.np0005625202@2(electing) e12 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 20 04:45:44 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e12 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 20 04:45:44 localhost ceph-mon[292786]: mon.np0005625201 calling monitor election Feb 20 04:45:44 localhost ceph-mon[292786]: mon.np0005625204 calling monitor election Feb 20 04:45:44 localhost ceph-mon[292786]: mon.np0005625202 calling monitor election Feb 20 04:45:44 localhost ceph-mon[292786]: mon.np0005625203 calling monitor election Feb 20 04:45:44 localhost ceph-mon[292786]: mon.np0005625201 is new leader, mons np0005625201,np0005625204,np0005625202,np0005625203 in quorum (ranks 0,1,2,3) Feb 20 04:45:44 localhost ceph-mon[292786]: overall HEALTH_OK Feb 20 04:45:44 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e12 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625203"} v 0) Feb 20 04:45:44 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : 
dispatch Feb 20 04:45:45 localhost nova_compute[280804]: 2026-02-20 09:45:45.525 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:45:45 localhost nova_compute[280804]: 2026-02-20 09:45:45.526 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Feb 20 04:45:45 localhost nova_compute[280804]: 2026-02-20 09:45:45.548 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Feb 20 04:45:46 localhost podman[241347]: time="2026-02-20T09:45:46Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 20 04:45:46 localhost podman[241347]: @ - - [20/Feb/2026:09:45:46 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157716 "" "Go-http-client/1.1" Feb 20 04:45:46 localhost podman[241347]: @ - - [20/Feb/2026:09:45:46 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18753 "" "Go-http-client/1.1" Feb 20 04:45:46 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e12 handle_command mon_command({"prefix": "quorum_status"} v 0) Feb 20 04:45:46 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "quorum_status"} : dispatch Feb 20 04:45:46 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e12 handle_command mon_command({"prefix": "mon rm", "name": "np0005625201"} v 0) Feb 20 04:45:46 localhost 
ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon rm", "name": "np0005625201"} : dispatch Feb 20 04:45:46 localhost ceph-mgr[286565]: ms_deliver_dispatch: unhandled message 0x55d2fffb4f20 mon_map magic: 0 from mon.2 v2:172.18.0.103:3300/0 Feb 20 04:45:46 localhost ceph-mon[292786]: mon.np0005625202@2(peon) e13 my rank is now 1 (was 2) Feb 20 04:45:46 localhost ceph-mgr[286565]: client.0 ms_handle_reset on v2:172.18.0.103:3300/0 Feb 20 04:45:46 localhost ceph-mgr[286565]: client.0 ms_handle_reset on v2:172.18.0.103:3300/0 Feb 20 04:45:46 localhost ceph-mgr[286565]: ms_deliver_dispatch: unhandled message 0x55d2fffb5600 mon_map magic: 0 from mon.1 v2:172.18.0.103:3300/0 Feb 20 04:45:46 localhost ceph-mon[292786]: mon.np0005625202@1(probing) e13 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625202"} v 0) Feb 20 04:45:46 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625202"} : dispatch Feb 20 04:45:46 localhost ceph-mon[292786]: mon.np0005625202@1(probing) e13 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625203"} v 0) Feb 20 04:45:46 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch Feb 20 04:45:46 localhost ceph-mon[292786]: mon.np0005625202@1(probing) e13 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625204"} v 0) Feb 20 04:45:46 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625204"} : dispatch Feb 20 04:45:46 localhost ceph-mon[292786]: log_channel(cluster) log [INF] : mon.np0005625202 calling monitor 
election Feb 20 04:45:46 localhost ceph-mon[292786]: paxos.1).electionLogic(50) init, last seen epoch 50 Feb 20 04:45:46 localhost ceph-mon[292786]: mon.np0005625202@1(electing) e13 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 20 04:45:46 localhost ceph-mon[292786]: mon.np0005625202@1(electing) e13 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 20 04:45:46 localhost ceph-mon[292786]: mon.np0005625202@1(electing) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:45:46 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 04:45:46 localhost ceph-osd[31981]: --2- [v2:172.18.0.106:6800/2532773716,v1:172.18.0.106:6801/2532773716] >> [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] conn(0x55d381851400 0x55d380b7e580 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request get_initial_auth_request returned -2 Feb 20 04:45:46 localhost ceph-mon[292786]: mon.np0005625202@1(electing) e13 handle_auth_request failed to assign global_id Feb 20 04:45:47 localhost ceph-mon[292786]: mon.np0005625202@1(electing) e13 handle_auth_request failed to assign global_id Feb 20 04:45:47 localhost ceph-mon[292786]: mon.np0005625202@1(electing) e13 handle_auth_request failed to assign global_id Feb 20 04:45:47 localhost ceph-mon[292786]: mon.np0005625202@1(electing) e13 handle_auth_request failed to assign global_id Feb 20 04:45:47 localhost ceph-mon[292786]: mon.np0005625202@1(electing) e13 handle_auth_request failed to assign global_id Feb 20 04:45:48 localhost sshd[300356]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:45:48 localhost ceph-mon[292786]: mon.np0005625202@1(electing) e13 handle_auth_request failed to assign global_id Feb 20 04:45:48 localhost 
ceph-mon[292786]: mon.np0005625202@1(electing) e13 handle_auth_request failed to assign global_id Feb 20 04:45:48 localhost ceph-mon[292786]: mon.np0005625202@1(electing) e13 handle_auth_request failed to assign global_id Feb 20 04:45:49 localhost ceph-mon[292786]: mon.np0005625202@1(electing) e13 handle_auth_request failed to assign global_id Feb 20 04:45:49 localhost nova_compute[280804]: 2026-02-20 09:45:49.533 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:45:49 localhost ceph-mon[292786]: mon.np0005625202@1(electing) e13 handle_auth_request failed to assign global_id Feb 20 04:45:49 localhost ceph-mon[292786]: mon.np0005625202@1(electing) e13 handle_auth_request failed to assign global_id Feb 20 04:45:50 localhost ceph-mon[292786]: mon.np0005625202@1(electing) e13 handle_auth_request failed to assign global_id Feb 20 04:45:50 localhost ceph-mon[292786]: mon.np0005625202@1(electing) e13 handle_auth_request failed to assign global_id Feb 20 04:45:50 localhost ceph-mon[292786]: mon.np0005625202@1(electing) e13 handle_auth_request failed to assign global_id Feb 20 04:45:50 localhost ceph-mon[292786]: mon.np0005625202@1(electing) e13 handle_auth_request failed to assign global_id Feb 20 04:45:50 localhost nova_compute[280804]: 2026-02-20 09:45:50.506 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:45:50 localhost nova_compute[280804]: 2026-02-20 09:45:50.521 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes 
run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:45:50 localhost nova_compute[280804]: 2026-02-20 09:45:50.521 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 20 04:45:50 localhost nova_compute[280804]: 2026-02-20 09:45:50.522 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:45:50 localhost nova_compute[280804]: 2026-02-20 09:45:50.537 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:45:50 localhost nova_compute[280804]: 2026-02-20 09:45:50.538 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:45:50 localhost nova_compute[280804]: 2026-02-20 09:45:50.538 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:45:50 localhost nova_compute[280804]: 2026-02-20 09:45:50.538 280808 DEBUG nova.compute.resource_tracker [None 
req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Auditing locally available compute resources for np0005625202.localdomain (node: np0005625202.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 20 04:45:50 localhost nova_compute[280804]: 2026-02-20 09:45:50.539 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:45:50 localhost ceph-mon[292786]: mon.np0005625202@1(electing) e13 handle_auth_request failed to assign global_id Feb 20 04:45:50 localhost ceph-mon[292786]: mon.np0005625202@1(electing) e13 handle_auth_request failed to assign global_id Feb 20 04:45:50 localhost ceph-mon[292786]: mon.np0005625202@1(electing) e13 handle_auth_request failed to assign global_id Feb 20 04:45:50 localhost ceph-mon[292786]: mon.np0005625202@1(electing) e13 handle_auth_request failed to assign global_id Feb 20 04:45:51 localhost ceph-mon[292786]: mon.np0005625202@1(electing) e13 handle_auth_request failed to assign global_id Feb 20 04:45:51 localhost ceph-mon[292786]: mon.np0005625202@1(electing) e13 handle_auth_request failed to assign global_id Feb 20 04:45:51 localhost ceph-mon[292786]: mon.np0005625202@1(electing) e13 handle_auth_request failed to assign global_id Feb 20 04:45:51 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e13 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 20 04:45:51 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e13 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 20 04:45:51 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": 
"client.admin"} : dispatch Feb 20 04:45:51 localhost ceph-mon[292786]: Remove daemons mon.np0005625201 Feb 20 04:45:51 localhost ceph-mon[292786]: Safe to remove mon.np0005625201: new quorum should be ['np0005625204', 'np0005625202', 'np0005625203'] (from ['np0005625204', 'np0005625202', 'np0005625203']) Feb 20 04:45:51 localhost ceph-mon[292786]: Removing monitor np0005625201 from monmap... Feb 20 04:45:51 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon rm", "name": "np0005625201"} : dispatch Feb 20 04:45:51 localhost ceph-mon[292786]: Removing daemon mon.np0005625201 from np0005625201.localdomain -- ports [] Feb 20 04:45:51 localhost ceph-mon[292786]: mon.np0005625204 calling monitor election Feb 20 04:45:51 localhost ceph-mon[292786]: mon.np0005625202 calling monitor election Feb 20 04:45:51 localhost ceph-mon[292786]: mon.np0005625204 is new leader, mons np0005625204,np0005625202 in quorum (ranks 0,1) Feb 20 04:45:51 localhost ceph-mon[292786]: Health check failed: 1/3 mons down, quorum np0005625204,np0005625202 (MON_DOWN) Feb 20 04:45:51 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 04:45:51 localhost ceph-mon[292786]: Health detail: HEALTH_WARN 1/3 mons down, quorum np0005625204,np0005625202 Feb 20 04:45:51 localhost ceph-mon[292786]: [WRN] MON_DOWN: 1/3 mons down, quorum np0005625204,np0005625202 Feb 20 04:45:51 localhost ceph-mon[292786]: mon.np0005625203 (rank 2) addr [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] is down (out of quorum) Feb 20 04:45:52 localhost ceph-mon[292786]: mon.np0005625202@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:45:52 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e13 handle_command mon_command({"prefix": "df", "format": "json"} v 0) 
Feb 20 04:45:52 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/3661915845' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 20 04:45:52 localhost nova_compute[280804]: 2026-02-20 09:45:52.405 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.867s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:45:52 localhost nova_compute[280804]: 2026-02-20 09:45:52.538 280808 WARNING nova.virt.libvirt.driver [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 20 04:45:52 localhost nova_compute[280804]: 2026-02-20 09:45:52.539 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Hypervisor/Node resource view: name=np0005625202.localdomain free_ram=11994MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": 
"type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 20 04:45:52 localhost nova_compute[280804]: 2026-02-20 09:45:52.539 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:45:52 localhost nova_compute[280804]: 2026-02-20 09:45:52.539 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:45:52 localhost ceph-mon[292786]: log_channel(cluster) log [INF] : 
mon.np0005625202 calling monitor election Feb 20 04:45:52 localhost ceph-mon[292786]: paxos.1).electionLogic(53) init, last seen epoch 53, mid-election, bumping Feb 20 04:45:52 localhost ceph-mon[292786]: mon.np0005625202@1(electing) e13 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 20 04:45:52 localhost ceph-mon[292786]: mon.np0005625202@1(electing) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0) Feb 20 04:45:52 localhost nova_compute[280804]: 2026-02-20 09:45:52.719 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 20 04:45:52 localhost nova_compute[280804]: 2026-02-20 09:45:52.720 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Final resource view: name=np0005625202.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 20 04:45:52 localhost nova_compute[280804]: 2026-02-20 09:45:52.773 280808 DEBUG nova.scheduler.client.report [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Refreshing inventories for resource provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Feb 20 04:45:52 localhost nova_compute[280804]: 2026-02-20 09:45:52.793 280808 DEBUG nova.scheduler.client.report [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Updating ProviderTree inventory for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 
'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Feb 20 04:45:52 localhost nova_compute[280804]: 2026-02-20 09:45:52.793 280808 DEBUG nova.compute.provider_tree [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Updating inventory in ProviderTree for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Feb 20 04:45:52 localhost nova_compute[280804]: 2026-02-20 09:45:52.832 280808 DEBUG nova.scheduler.client.report [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Refreshing aggregate associations for resource provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Feb 20 04:45:52 localhost ceph-mon[292786]: mon.np0005625202@1(electing) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625201.localdomain.devices.0}] v 0) Feb 20 04:45:52 localhost nova_compute[280804]: 2026-02-20 09:45:52.859 280808 DEBUG nova.scheduler.client.report [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Refreshing trait associations for resource provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6, traits: 
COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_1_2,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE2,HW_CPU_X86_AVX2,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_CLMUL,HW_CPU_X86_F16C,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_FMA3,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_ABM,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Feb 20 04:45:52 localhost nova_compute[280804]: 2026-02-20 09:45:52.873 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:45:52 localhost ceph-mon[292786]: mon.np0005625202@1(electing) e13 handle_command mon_command([{prefix=config-key set, 
key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0) Feb 20 04:45:52 localhost ceph-mon[292786]: mon.np0005625202@1(electing) e13 handle_auth_request failed to assign global_id Feb 20 04:45:53 localhost ceph-mon[292786]: mon.np0005625202@1(electing) e13 handle_auth_request failed to assign global_id Feb 20 04:45:53 localhost ceph-mon[292786]: mon.np0005625202@1(electing) e13 handle_auth_request failed to assign global_id Feb 20 04:45:53 localhost ceph-mon[292786]: mon.np0005625202@1(electing) e13 handle_auth_request failed to assign global_id Feb 20 04:45:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c. Feb 20 04:45:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217. Feb 20 04:45:53 localhost podman[300711]: 2026-02-20 09:45:53.453834405 +0000 UTC m=+0.089535948 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': 
['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, name=ubi9/ubi-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc., config_id=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, managed_by=edpm_ansible, release=1770267347, version=9.7, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z) Feb 20 04:45:53 localhost ceph-mon[292786]: mon.np0005625202@1(electing) e13 handle_auth_request failed to assign global_id Feb 20 04:45:53 localhost podman[300711]: 2026-02-20 09:45:53.475332723 +0000 UTC m=+0.111034326 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.7, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, build-date=2026-02-05T04:57:10Z, maintainer=Red Hat, Inc., distribution-scope=public, vcs-type=git, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., org.opencontainers.image.created=2026-02-05T04:57:10Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=ubi9/ubi-minimal, config_id=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, managed_by=edpm_ansible) Feb 20 04:45:53 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. 
Feb 20 04:45:53 localhost ceph-mon[292786]: mon.np0005625202@1(electing) e13 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 20 04:45:53 localhost podman[300727]: 2026-02-20 09:45:53.547842911 +0000 UTC m=+0.085297893 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, config_id=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2) Feb 20 04:45:53 localhost ceph-mon[292786]: Updating np0005625201.localdomain:/etc/ceph/ceph.conf Feb 20 04:45:53 localhost ceph-mon[292786]: Updating np0005625202.localdomain:/etc/ceph/ceph.conf Feb 20 04:45:53 localhost ceph-mon[292786]: Updating np0005625203.localdomain:/etc/ceph/ceph.conf Feb 20 04:45:53 localhost ceph-mon[292786]: Updating np0005625204.localdomain:/etc/ceph/ceph.conf Feb 20 04:45:53 localhost podman[300727]: 2026-02-20 09:45:53.563459711 +0000 UTC m=+0.100914673 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true) Feb 20 04:45:53 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e13 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 20 04:45:53 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully. Feb 20 04:45:53 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e13 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 20 04:45:53 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/2284102821' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 20 04:45:53 localhost nova_compute[280804]: 2026-02-20 09:45:53.939 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:45:53 localhost nova_compute[280804]: 2026-02-20 09:45:53.945 280808 DEBUG nova.compute.provider_tree [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 20 04:45:53 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e13 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 20 04:45:53 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/3624088180' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 20 04:45:53 localhost nova_compute[280804]: 2026-02-20 09:45:53.959 280808 DEBUG nova.scheduler.client.report [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Inventory has not changed for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 20 04:45:53 localhost nova_compute[280804]: 2026-02-20 09:45:53.961 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Compute_service record updated for np0005625202.localdomain:np0005625202.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 20 04:45:53 localhost nova_compute[280804]: 2026-02-20 09:45:53.961 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.422s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:45:53 localhost nova_compute[280804]: 2026-02-20 09:45:53.962 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:45:54 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e13 
handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0) Feb 20 04:45:54 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625201.localdomain}] v 0) Feb 20 04:45:54 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0) Feb 20 04:45:54 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0) Feb 20 04:45:54 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0) Feb 20 04:45:54 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 20 04:45:54 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e13 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) Feb 20 04:45:54 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 20 04:45:54 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e13 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) Feb 20 04:45:54 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch Feb 20 04:45:54 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:45:54 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 
172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 04:45:54 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) Feb 20 04:45:54 localhost ceph-mon[292786]: mon.np0005625203 calling monitor election Feb 20 04:45:54 localhost ceph-mon[292786]: Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:45:54 localhost ceph-mon[292786]: Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:45:54 localhost ceph-mon[292786]: Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:45:54 localhost ceph-mon[292786]: Updating np0005625201.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:45:54 localhost ceph-mon[292786]: mon.np0005625202 calling monitor election Feb 20 04:45:54 localhost ceph-mon[292786]: mon.np0005625204 calling monitor election Feb 20 04:45:54 localhost ceph-mon[292786]: mon.np0005625204 is new leader, mons np0005625204,np0005625202,np0005625203 in quorum (ranks 0,1,2) Feb 20 04:45:54 localhost ceph-mon[292786]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum np0005625204,np0005625202) Feb 20 04:45:54 localhost ceph-mon[292786]: Cluster is now healthy Feb 20 04:45:54 localhost ceph-mon[292786]: overall HEALTH_OK Feb 20 04:45:54 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:54 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:54 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:54 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:54 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 
20 04:45:54 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:54 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:54 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:54 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:54 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 20 04:45:54 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:54 localhost nova_compute[280804]: 2026-02-20 09:45:54.961 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:45:54 localhost nova_compute[280804]: 2026-02-20 09:45:54.962 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:45:54 localhost nova_compute[280804]: 2026-02-20 09:45:54.962 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 20 04:45:54 localhost nova_compute[280804]: 2026-02-20 09:45:54.962 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 20 04:45:54 localhost nova_compute[280804]: 2026-02-20 09:45:54.981 280808 DEBUG 
nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 20 04:45:54 localhost nova_compute[280804]: 2026-02-20 09:45:54.981 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:45:54 localhost nova_compute[280804]: 2026-02-20 09:45:54.982 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:45:54 localhost nova_compute[280804]: 2026-02-20 09:45:54.983 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:45:54 localhost nova_compute[280804]: 2026-02-20 09:45:54.983 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:45:55 localhost ceph-mon[292786]: Deploying daemon mon.np0005625201 on np0005625201.localdomain Feb 20 04:45:55 localhost ceph-mon[292786]: Removed label mon from host np0005625201.localdomain Feb 20 04:45:55 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) Feb 20 04:45:56 localhost sshd[300759]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:45:56 
localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:45:56 localhost ceph-mon[292786]: Removed label mgr from host np0005625201.localdomain Feb 20 04:45:56 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) Feb 20 04:45:57 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625201.localdomain.devices.0}] v 0) Feb 20 04:45:57 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e13 adding peer [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to list of hints Feb 20 04:45:57 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e13 adding peer [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to list of hints Feb 20 04:45:57 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625201.localdomain}] v 0) Feb 20 04:45:57 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) Feb 20 04:45:57 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) Feb 20 04:45:57 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e13 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 20 04:45:57 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 20 04:45:57 localhost ceph-mon[292786]: mon.np0005625202@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:45:57 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e13 adding peer 
[v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to list of hints Feb 20 04:45:57 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e13 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625201"} v 0) Feb 20 04:45:57 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625201"} : dispatch Feb 20 04:45:57 localhost ceph-mgr[286565]: ms_deliver_dispatch: unhandled message 0x55d2fffb5600 mon_map magic: 0 from mon.1 v2:172.18.0.103:3300/0 Feb 20 04:45:57 localhost ceph-mon[292786]: mon.np0005625202@1(probing) e14 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625201"} v 0) Feb 20 04:45:57 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625201"} : dispatch Feb 20 04:45:57 localhost ceph-mon[292786]: mon.np0005625202@1(probing) e14 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625202"} v 0) Feb 20 04:45:57 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625202"} : dispatch Feb 20 04:45:57 localhost ceph-mon[292786]: log_channel(cluster) log [INF] : mon.np0005625202 calling monitor election Feb 20 04:45:57 localhost ceph-mon[292786]: paxos.1).electionLogic(56) init, last seen epoch 56 Feb 20 04:45:57 localhost ceph-mon[292786]: mon.np0005625202@1(electing) e14 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 20 04:45:57 localhost ceph-mon[292786]: mon.np0005625202@1(electing) e14 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 20 04:45:57 localhost ceph-mon[292786]: mon.np0005625202@1(electing) e14 handle_command mon_command({"prefix": "mon 
metadata", "id": "np0005625203"} v 0) Feb 20 04:45:57 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch Feb 20 04:45:57 localhost ceph-mon[292786]: mon.np0005625202@1(electing) e14 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625204"} v 0) Feb 20 04:45:57 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625204"} : dispatch Feb 20 04:45:58 localhost openstack_network_exporter[243776]: ERROR 09:45:58 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 20 04:45:58 localhost openstack_network_exporter[243776]: Feb 20 04:45:58 localhost openstack_network_exporter[243776]: ERROR 09:45:58 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 20 04:45:58 localhost openstack_network_exporter[243776]: Feb 20 04:45:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 04:45:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. 
Feb 20 04:45:58 localhost podman[300780]: 2026-02-20 09:45:58.442366194 +0000 UTC m=+0.077994647 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 20 04:45:58 localhost podman[300779]: 2026-02-20 09:45:58.501856013 +0000 UTC m=+0.139607403 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 04:45:58 localhost podman[300780]: 2026-02-20 09:45:58.524197004 +0000 UTC m=+0.159825457 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Feb 20 04:45:58 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully.
Feb 20 04:45:58 localhost ceph-mon[292786]: mon.np0005625202@1(electing) e14 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625201"} v 0)
Feb 20 04:45:58 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625201"} : dispatch
Feb 20 04:45:58 localhost podman[300779]: 2026-02-20 09:45:58.566818529 +0000 UTC m=+0.204569869 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127)
Feb 20 04:45:58 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully.
Feb 20 04:45:59 localhost ceph-mon[292786]: mon.np0005625202@1(electing) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625201.localdomain.devices.0}] v 0)
Feb 20 04:45:59 localhost ceph-mon[292786]: mon.np0005625202@1(electing) e14 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625201"} v 0)
Feb 20 04:45:59 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625201"} : dispatch
Feb 20 04:46:00 localhost ceph-mon[292786]: mon.np0005625202@1(electing) e14 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625201"} v 0)
Feb 20 04:46:00 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625201"} : dispatch
Feb 20 04:46:01 localhost ceph-mon[292786]: mon.np0005625202@1(electing) e14 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625201"} v 0)
Feb 20 04:46:01 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625201"} : dispatch
Feb 20 04:46:01 localhost ceph-mon[292786]: mon.np0005625202@1(electing) e14 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb 20 04:46:02 localhost ceph-mon[292786]: mon.np0005625202@1(electing) e14 handle_auth_request failed to assign global_id
Feb 20 04:46:02 localhost ceph-mon[292786]: mon.np0005625202@1(electing) e14 handle_auth_request failed to assign global_id
Feb 20 04:46:02 localhost ceph-mds[283306]: mds.beacon.mds.np0005625202.akhmop missed beacon ack from the monitors
Feb 20 04:46:02 localhost ceph-mon[292786]: mon.np0005625202@1(electing) e14 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625201"} v 0)
Feb 20 04:46:02 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625201"} : dispatch
Feb 20 04:46:02 localhost ceph-mon[292786]: mon.np0005625202@1(electing) e14 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Feb 20 04:46:02 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e14 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Feb 20 04:46:02 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625201.localdomain}] v 0)
Feb 20 04:46:02 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e14 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 04:46:02 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 04:46:02 localhost ceph-mon[292786]: Removed label _admin from host np0005625201.localdomain
Feb 20 04:46:02 localhost ceph-mon[292786]: mon.np0005625204 calling monitor election
Feb 20 04:46:02 localhost ceph-mon[292786]: mon.np0005625202 calling monitor election
Feb 20 04:46:02 localhost ceph-mon[292786]: mon.np0005625203 calling monitor election
Feb 20 04:46:02 localhost ceph-mon[292786]: mon.np0005625201 calling monitor election
Feb 20 04:46:02 localhost ceph-mon[292786]: mon.np0005625204 is new leader, mons np0005625204,np0005625202,np0005625203,np0005625201 in quorum (ranks 0,1,2,3)
Feb 20 04:46:02 localhost ceph-mon[292786]: overall HEALTH_OK
Feb 20 04:46:02 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:02 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:02 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e14 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 20 04:46:02 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 04:46:02 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625201.localdomain.devices.0}] v 0)
Feb 20 04:46:03 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625201.localdomain}] v 0)
Feb 20 04:46:03 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e14 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625201"} v 0)
Feb 20 04:46:03 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625201"} : dispatch
Feb 20 04:46:03 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:03 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 04:46:03 localhost ceph-mon[292786]: Removing np0005625201.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 04:46:03 localhost ceph-mon[292786]: Updating np0005625202.localdomain:/etc/ceph/ceph.conf
Feb 20 04:46:03 localhost ceph-mon[292786]: Updating np0005625203.localdomain:/etc/ceph/ceph.conf
Feb 20 04:46:03 localhost ceph-mon[292786]: Updating np0005625204.localdomain:/etc/ceph/ceph.conf
Feb 20 04:46:03 localhost ceph-mon[292786]: Removing np0005625201.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 04:46:03 localhost ceph-mon[292786]: Removing np0005625201.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 04:46:03 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:03 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:04 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0)
Feb 20 04:46:04 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0)
Feb 20 04:46:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.
Feb 20 04:46:04 localhost podman[301144]: 2026-02-20 09:46:04.436001809 +0000 UTC m=+0.075183502 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 20 04:46:04 localhost podman[301144]: 2026-02-20 09:46:04.44980952 +0000 UTC m=+0.088991233 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 20 04:46:04 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully.
Feb 20 04:46:04 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0)
Feb 20 04:46:04 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0)
Feb 20 04:46:05 localhost ceph-mon[292786]: Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 04:46:05 localhost ceph-mon[292786]: Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 04:46:05 localhost ceph-mon[292786]: Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 04:46:05 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:05 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:05 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0)
Feb 20 04:46:05 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0)
Feb 20 04:46:05 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 20 04:46:05 localhost sshd[301168]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:46:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:46:05.913 161766 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 20 04:46:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:46:05.914 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 20 04:46:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:46:05.914 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 20 04:46:06 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:06 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:06 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:06 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:06 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:06 localhost ceph-mon[292786]: Removing daemon mgr.np0005625201.mtnyvu from np0005625201.localdomain -- ports [8765]
Feb 20 04:46:07 localhost ceph-mon[292786]: mon.np0005625202@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 04:46:08 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e14 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.np0005625201.mtnyvu"} v 0)
Feb 20 04:46:08 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth rm", "entity": "mgr.np0005625201.mtnyvu"} : dispatch
Feb 20 04:46:08 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Feb 20 04:46:08 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Feb 20 04:46:08 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e14 handle_command mon_command({"prefix": "mon ok-to-stop", "ids": ["np0005625201"]} v 0)
Feb 20 04:46:08 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon ok-to-stop", "ids": ["np0005625201"]} : dispatch
Feb 20 04:46:08 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e14 handle_command mon_command({"prefix": "quorum_status"} v 0)
Feb 20 04:46:08 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "quorum_status"} : dispatch
Feb 20 04:46:08 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e14 handle_command mon_command({"prefix": "mon rm", "name": "np0005625201"} v 0)
Feb 20 04:46:08 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon rm", "name": "np0005625201"} : dispatch
Feb 20 04:46:08 localhost ceph-mgr[286565]: ms_deliver_dispatch: unhandled message 0x55d2fffb4f20 mon_map magic: 0 from mon.1 v2:172.18.0.103:3300/0
Feb 20 04:46:08 localhost ceph-mon[292786]: mon.np0005625202@1(probing) e15 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625202"} v 0)
Feb 20 04:46:08 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625202"} : dispatch
Feb 20 04:46:08 localhost ceph-mon[292786]: mon.np0005625202@1(probing) e15 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625203"} v 0)
Feb 20 04:46:08 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 04:46:08 localhost ceph-mon[292786]: log_channel(cluster) log [INF] : mon.np0005625202 calling monitor election
Feb 20 04:46:08 localhost ceph-mon[292786]: paxos.1).electionLogic(60) init, last seen epoch 60
Feb 20 04:46:08 localhost ceph-mon[292786]: mon.np0005625202@1(electing) e15 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Feb 20 04:46:08 localhost ceph-mon[292786]: mon.np0005625202@1(electing) e15 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Feb 20 04:46:08 localhost ceph-mon[292786]: mon.np0005625202@1(electing) e15 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625204"} v 0)
Feb 20 04:46:08 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625204"} : dispatch
Feb 20 04:46:08 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Feb 20 04:46:08 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb 20 04:46:08 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb 20 04:46:09 localhost ceph-mon[292786]: Removing key for mgr.np0005625201.mtnyvu
Feb 20 04:46:09 localhost ceph-mon[292786]: Safe to remove mon.np0005625201: new quorum should be ['np0005625204', 'np0005625202', 'np0005625203'] (from ['np0005625204', 'np0005625202', 'np0005625203'])
Feb 20 04:46:09 localhost ceph-mon[292786]: Removing monitor np0005625201 from monmap...
Feb 20 04:46:09 localhost ceph-mon[292786]: Removing daemon mon.np0005625201 from np0005625201.localdomain -- ports []
Feb 20 04:46:09 localhost ceph-mon[292786]: mon.np0005625204 calling monitor election
Feb 20 04:46:09 localhost ceph-mon[292786]: mon.np0005625203 calling monitor election
Feb 20 04:46:09 localhost ceph-mon[292786]: mon.np0005625202 calling monitor election
Feb 20 04:46:09 localhost ceph-mon[292786]: mon.np0005625204 is new leader, mons np0005625204,np0005625202,np0005625203 in quorum (ranks 0,1,2)
Feb 20 04:46:09 localhost ceph-mon[292786]: overall HEALTH_OK
Feb 20 04:46:09 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:09 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:09 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Feb 20 04:46:09 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Feb 20 04:46:09 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 20 04:46:09 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 04:46:10 localhost ceph-mon[292786]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #28. Immutable memtables: 0.
Feb 20 04:46:10 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:46:10.267881) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 20 04:46:10 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 28
Feb 20 04:46:10 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580770267954, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1274, "num_deletes": 255, "total_data_size": 1571997, "memory_usage": 1602496, "flush_reason": "Manual Compaction"}
Feb 20 04:46:10 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #29: started
Feb 20 04:46:10 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580770276862, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 29, "file_size": 943671, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17000, "largest_seqno": 18269, "table_properties": {"data_size": 938339, "index_size": 2483, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 15078, "raw_average_key_size": 21, "raw_value_size": 925912, "raw_average_value_size": 1311, "num_data_blocks": 103, "num_entries": 706, "num_filter_entries": 706, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580732, "oldest_key_time": 1771580732, "file_creation_time": 1771580770, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a5b0c71e-1a28-4ac7-8b68-08edb74002f2", "db_session_id": "54EDA52XUT1SDV7DF7Y7", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Feb 20 04:46:10 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 9043 microseconds, and 4442 cpu microseconds.
Feb 20 04:46:10 localhost ceph-mon[292786]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 20 04:46:10 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:46:10.276930) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #29: 943671 bytes OK
Feb 20 04:46:10 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:46:10.276958) [db/memtable_list.cc:519] [default] Level-0 commit table #29 started
Feb 20 04:46:10 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:46:10.279293) [db/memtable_list.cc:722] [default] Level-0 commit table #29: memtable #1 done
Feb 20 04:46:10 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:46:10.279321) EVENT_LOG_v1 {"time_micros": 1771580770279313, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 20 04:46:10 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:46:10.279349) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 20 04:46:10 localhost ceph-mon[292786]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1565260, prev total WAL file size 1565260, number of live WAL files 2.
Feb 20 04:46:10 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000025.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 04:46:10 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:46:10.280904) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760031323733' seq:72057594037927935, type:22 .. '6B760031353239' seq:0, type:0; will stop at (end)
Feb 20 04:46:10 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 20 04:46:10 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [29(921KB)], [27(17MB)]
Feb 20 04:46:10 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580770280985, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [29], "files_L6": [27], "score": -1, "input_data_size": 19104835, "oldest_snapshot_seqno": -1}
Feb 20 04:46:10 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #30: 11094 keys, 18043194 bytes, temperature: kUnknown
Feb 20 04:46:10 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580770367018, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 30, "file_size": 18043194, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17980288, "index_size": 34069, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27781, "raw_key_size": 297635, "raw_average_key_size": 26, "raw_value_size": 17791254, "raw_average_value_size": 1603, "num_data_blocks": 1284, "num_entries": 11094, "num_filter_entries": 11094, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580572, "oldest_key_time": 0, "file_creation_time": 1771580770, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a5b0c71e-1a28-4ac7-8b68-08edb74002f2", "db_session_id": "54EDA52XUT1SDV7DF7Y7", "orig_file_number": 30, "seqno_to_time_mapping": "N/A"}}
Feb 20 04:46:10 localhost ceph-mon[292786]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 20 04:46:10 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:46:10.367584) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 18043194 bytes Feb 20 04:46:10 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:46:10.369268) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 221.4 rd, 209.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 17.3 +0.0 blob) out(17.2 +0.0 blob), read-write-amplify(39.4) write-amplify(19.1) OK, records in: 11641, records dropped: 547 output_compression: NoCompression Feb 20 04:46:10 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:46:10.369300) EVENT_LOG_v1 {"time_micros": 1771580770369287, "job": 14, "event": "compaction_finished", "compaction_time_micros": 86301, "compaction_time_cpu_micros": 52874, "output_level": 6, "num_output_files": 1, "total_output_size": 18043194, "num_input_records": 11641, "num_output_records": 11094, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Feb 20 04:46:10 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 20 04:46:10 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580770370160, "job": 14, "event": "table_file_deletion", "file_number": 29} Feb 20 04:46:10 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000027.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 20 04:46:10 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580770373059, 
"job": 14, "event": "table_file_deletion", "file_number": 27} Feb 20 04:46:10 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:46:10.280761) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 04:46:10 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:46:10.373272) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 04:46:10 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:46:10.373281) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 04:46:10 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:46:10.373284) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 04:46:10 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:46:10.373288) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 04:46:10 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:46:10.373291) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 04:46:10 localhost ceph-mon[292786]: Added label _no_schedule to host np0005625201.localdomain Feb 20 04:46:10 localhost ceph-mon[292786]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005625201.localdomain Feb 20 04:46:10 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:10 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:11 localhost nova_compute[280804]: 2026-02-20 09:46:11.115 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:46:11 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command 
mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625201.localdomain.devices.0}] v 0) Feb 20 04:46:11 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625201.localdomain}] v 0) Feb 20 04:46:11 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:46:11 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 04:46:11 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 20 04:46:11 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 04:46:11 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:11 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:11 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 04:46:12 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) Feb 20 04:46:12 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command({"prefix":"config-key del","key":"mgr/cephadm/host.np0005625201.localdomain"} v 0) Feb 20 04:46:12 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005625201.localdomain"} : dispatch Feb 20 04:46:12 localhost 
ceph-mon[292786]: mon.np0005625202@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:46:12 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 20 04:46:12 localhost ceph-mon[292786]: Updating np0005625202.localdomain:/etc/ceph/ceph.conf Feb 20 04:46:12 localhost ceph-mon[292786]: Updating np0005625203.localdomain:/etc/ceph/ceph.conf Feb 20 04:46:12 localhost ceph-mon[292786]: Updating np0005625204.localdomain:/etc/ceph/ceph.conf Feb 20 04:46:12 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:12 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005625201.localdomain"} : dispatch Feb 20 04:46:12 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005625201.localdomain"} : dispatch Feb 20 04:46:12 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005625201.localdomain"}]': finished Feb 20 04:46:12 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:12 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0) Feb 20 04:46:12 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0) Feb 20 04:46:12 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0) Feb 20 04:46:12 localhost ceph-mon[292786]: 
mon.np0005625202@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0) Feb 20 04:46:13 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0) Feb 20 04:46:13 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0) Feb 20 04:46:13 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 20 04:46:13 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 20 04:46:13 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 20 04:46:13 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005625202", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Feb 20 04:46:13 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625202", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 20 04:46:13 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:46:13 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 04:46:13 localhost systemd[1]: Started 
/usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e. Feb 20 04:46:13 localhost podman[301576]: 2026-02-20 09:46:13.932498154 +0000 UTC m=+0.090146393 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 20 04:46:13 localhost podman[301576]: 2026-02-20 09:46:13.941432464 +0000 UTC m=+0.099080713 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, 
config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 20 04:46:13 localhost ceph-mon[292786]: Removed host np0005625201.localdomain Feb 20 04:46:13 localhost ceph-mon[292786]: Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:46:13 localhost ceph-mon[292786]: Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:46:13 localhost ceph-mon[292786]: Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:46:13 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:13 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:13 localhost ceph-mon[292786]: 
from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:13 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:13 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:13 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:13 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:13 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625202", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 20 04:46:13 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625202", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 20 04:46:13 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully. 
Feb 20 04:46:13 localhost podman[301584]: Feb 20 04:46:13 localhost podman[301584]: 2026-02-20 09:46:13.998350165 +0000 UTC m=+0.132094212 container create 3d4a478483de4f1f17f8061415de094a542f0ea961a96adf90f3f12ea271477d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_panini, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.buildah.version=1.42.2, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, release=1770267347, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, RELEASE=main, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7) Feb 20 04:46:14 localhost systemd[1]: Started libpod-conmon-3d4a478483de4f1f17f8061415de094a542f0ea961a96adf90f3f12ea271477d.scope. Feb 20 04:46:14 localhost podman[301584]: 2026-02-20 09:46:13.965731378 +0000 UTC m=+0.099475465 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 04:46:14 localhost systemd[1]: Started libcrun container. 
Feb 20 04:46:14 localhost podman[301584]: 2026-02-20 09:46:14.088043726 +0000 UTC m=+0.221787763 container init 3d4a478483de4f1f17f8061415de094a542f0ea961a96adf90f3f12ea271477d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_panini, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, RELEASE=main, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, distribution-scope=public, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1770267347, io.openshift.expose-services=, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., ceph=True) Feb 20 04:46:14 localhost podman[301584]: 2026-02-20 09:46:14.099245647 +0000 UTC m=+0.232989684 container start 3d4a478483de4f1f17f8061415de094a542f0ea961a96adf90f3f12ea271477d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_panini, name=rhceph, io.openshift.tags=rhceph ceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.42.2, release=1770267347, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., version=7, org.opencontainers.image.created=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, GIT_CLEAN=True, io.openshift.expose-services=, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, distribution-scope=public, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 04:46:14 localhost podman[301584]: 2026-02-20 09:46:14.099542654 +0000 UTC m=+0.233286761 container attach 3d4a478483de4f1f17f8061415de094a542f0ea961a96adf90f3f12ea271477d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_panini, ceph=True, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., release=1770267347, GIT_BRANCH=main, vcs-type=git, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, build-date=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, distribution-scope=public, io.buildah.version=1.42.2, name=rhceph, 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, io.openshift.tags=rhceph ceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2026-02-09T10:25:24Z) Feb 20 04:46:14 localhost boring_panini[301616]: 167 167 Feb 20 04:46:14 localhost podman[301584]: 2026-02-20 09:46:14.104188129 +0000 UTC m=+0.237932186 container died 3d4a478483de4f1f17f8061415de094a542f0ea961a96adf90f3f12ea271477d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_panini, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.buildah.version=1.42.2, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, name=rhceph, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, ceph=True, release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z) Feb 20 04:46:14 localhost systemd[1]: libpod-3d4a478483de4f1f17f8061415de094a542f0ea961a96adf90f3f12ea271477d.scope: Deactivated successfully. 
Feb 20 04:46:14 localhost podman[301621]: 2026-02-20 09:46:14.201929696 +0000 UTC m=+0.084796660 container remove 3d4a478483de4f1f17f8061415de094a542f0ea961a96adf90f3f12ea271477d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_panini, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, ceph=True, GIT_BRANCH=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, description=Red Hat Ceph Storage 7, vcs-type=git, io.openshift.tags=rhceph ceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, GIT_CLEAN=True, version=7, RELEASE=main, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, io.openshift.expose-services=, release=1770267347, distribution-scope=public) Feb 20 04:46:14 localhost systemd[1]: libpod-conmon-3d4a478483de4f1f17f8061415de094a542f0ea961a96adf90f3f12ea271477d.scope: Deactivated successfully. 
Feb 20 04:46:14 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0) Feb 20 04:46:14 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0) Feb 20 04:46:14 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) Feb 20 04:46:14 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Feb 20 04:46:14 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:46:14 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 04:46:14 localhost podman[301690]: Feb 20 04:46:14 localhost podman[301690]: 2026-02-20 09:46:14.911813347 +0000 UTC m=+0.077994178 container create 731040e8238a757fc1ba720dbb6c265cffe0a492749dabd1ee53aa54ca766109 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_heisenberg, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, architecture=x86_64, GIT_CLEAN=True, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, vcs-type=git, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, 
org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, release=1770267347, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Feb 20 04:46:14 localhost systemd[1]: var-lib-containers-storage-overlay-9e4d5a76c520d84849ad916e2ef22b9462071aaf0eee726aa3491ad2b8b3a4ad-merged.mount: Deactivated successfully. Feb 20 04:46:14 localhost systemd[1]: Started libpod-conmon-731040e8238a757fc1ba720dbb6c265cffe0a492749dabd1ee53aa54ca766109.scope. Feb 20 04:46:14 localhost ceph-mon[292786]: Reconfiguring crash.np0005625202 (monmap changed)... Feb 20 04:46:14 localhost ceph-mon[292786]: Reconfiguring daemon crash.np0005625202 on np0005625202.localdomain Feb 20 04:46:14 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:14 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:14 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Feb 20 04:46:14 localhost systemd[1]: Started libcrun container. 
Feb 20 04:46:14 localhost podman[301690]: 2026-02-20 09:46:14.973521235 +0000 UTC m=+0.139702066 container init 731040e8238a757fc1ba720dbb6c265cffe0a492749dabd1ee53aa54ca766109 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_heisenberg, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, vcs-type=git, RELEASE=main, distribution-scope=public, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, io.openshift.expose-services=, maintainer=Guillaume Abrioux , io.buildah.version=1.42.2, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 04:46:14 localhost podman[301690]: 2026-02-20 09:46:14.880624148 +0000 UTC m=+0.046804999 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 04:46:14 localhost podman[301690]: 2026-02-20 09:46:14.982041244 +0000 UTC m=+0.148222065 container start 731040e8238a757fc1ba720dbb6c265cffe0a492749dabd1ee53aa54ca766109 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_heisenberg, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, release=1770267347, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, GIT_BRANCH=main, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, build-date=2026-02-09T10:25:24Z, GIT_CLEAN=True)
Feb 20 04:46:14 localhost podman[301690]: 2026-02-20 09:46:14.982316081 +0000 UTC m=+0.148496912 container attach 731040e8238a757fc1ba720dbb6c265cffe0a492749dabd1ee53aa54ca766109 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_heisenberg, build-date=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, release=1770267347, name=rhceph, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, architecture=x86_64, version=7, RELEASE=main, ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, vcs-type=git, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 04:46:14 localhost friendly_heisenberg[301704]: 167 167
Feb 20 04:46:14 localhost systemd[1]: libpod-731040e8238a757fc1ba720dbb6c265cffe0a492749dabd1ee53aa54ca766109.scope: Deactivated successfully.
Feb 20 04:46:14 localhost podman[301690]: 2026-02-20 09:46:14.987288575 +0000 UTC m=+0.153469466 container died 731040e8238a757fc1ba720dbb6c265cffe0a492749dabd1ee53aa54ca766109 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_heisenberg, version=7, com.redhat.component=rhceph-container, io.openshift.expose-services=, CEPH_POINT_RELEASE=, vcs-type=git, description=Red Hat Ceph Storage 7, distribution-scope=public, ceph=True, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_BRANCH=main, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_CLEAN=True, release=1770267347, RELEASE=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, build-date=2026-02-09T10:25:24Z, architecture=x86_64)
Feb 20 04:46:15 localhost podman[301709]: 2026-02-20 09:46:15.077017657 +0000 UTC m=+0.083190217 container remove 731040e8238a757fc1ba720dbb6c265cffe0a492749dabd1ee53aa54ca766109 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_heisenberg, CEPH_POINT_RELEASE=, io.buildah.version=1.42.2, ceph=True, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2026-02-09T10:25:24Z, name=rhceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, com.redhat.component=rhceph-container, version=7, release=1770267347)
Feb 20 04:46:15 localhost systemd[1]: libpod-conmon-731040e8238a757fc1ba720dbb6c265cffe0a492749dabd1ee53aa54ca766109.scope: Deactivated successfully.
Feb 20 04:46:15 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0)
Feb 20 04:46:15 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0)
Feb 20 04:46:15 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "osd.5"} v 0)
Feb 20 04:46:15 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Feb 20 04:46:15 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 04:46:15 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 04:46:15 localhost podman[301783]:
Feb 20 04:46:15 localhost podman[301783]: 2026-02-20 09:46:15.906360478 +0000 UTC m=+0.077877235 container create 25edd5fdbfac7832efdbecb35c3cce52d8739ca6ba192d718b1ae87a589ba9cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_davinci, architecture=x86_64, RELEASE=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, ceph=True, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, name=rhceph, version=7, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, vcs-type=git, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.)
Feb 20 04:46:15 localhost systemd[1]: var-lib-containers-storage-overlay-819e7a5a83a97d89bc283507cc9757921007af6cb6afb7c05b4b0eca3f47a42e-merged.mount: Deactivated successfully.
Feb 20 04:46:15 localhost systemd[1]: Started libpod-conmon-25edd5fdbfac7832efdbecb35c3cce52d8739ca6ba192d718b1ae87a589ba9cf.scope.
Feb 20 04:46:15 localhost systemd[1]: Started libcrun container.
Feb 20 04:46:15 localhost ceph-mon[292786]: Reconfiguring osd.2 (monmap changed)...
Feb 20 04:46:15 localhost ceph-mon[292786]: Reconfiguring daemon osd.2 on np0005625202.localdomain
Feb 20 04:46:15 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:15 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:15 localhost ceph-mon[292786]: Reconfiguring osd.5 (monmap changed)...
Feb 20 04:46:15 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Feb 20 04:46:15 localhost ceph-mon[292786]: Reconfiguring daemon osd.5 on np0005625202.localdomain
Feb 20 04:46:15 localhost podman[301783]: 2026-02-20 09:46:15.972869525 +0000 UTC m=+0.144386292 container init 25edd5fdbfac7832efdbecb35c3cce52d8739ca6ba192d718b1ae87a589ba9cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_davinci, distribution-scope=public, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, name=rhceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, GIT_BRANCH=main, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, GIT_CLEAN=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, maintainer=Guillaume Abrioux )
Feb 20 04:46:15 localhost podman[301783]: 2026-02-20 09:46:15.877690386 +0000 UTC m=+0.049207203 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 04:46:15 localhost podman[301783]: 2026-02-20 09:46:15.980221353 +0000 UTC m=+0.151738080 container start 25edd5fdbfac7832efdbecb35c3cce52d8739ca6ba192d718b1ae87a589ba9cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_davinci, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, distribution-scope=public, io.buildah.version=1.42.2, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, GIT_BRANCH=main, build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, architecture=x86_64, ceph=True, CEPH_POINT_RELEASE=, release=1770267347, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 04:46:15 localhost podman[301783]: 2026-02-20 09:46:15.980713306 +0000 UTC m=+0.152230073 container attach 25edd5fdbfac7832efdbecb35c3cce52d8739ca6ba192d718b1ae87a589ba9cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_davinci, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, io.buildah.version=1.42.2, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, distribution-scope=public, name=rhceph, architecture=x86_64, build-date=2026-02-09T10:25:24Z)
Feb 20 04:46:15 localhost sharp_davinci[301799]: 167 167
Feb 20 04:46:15 localhost systemd[1]: libpod-25edd5fdbfac7832efdbecb35c3cce52d8739ca6ba192d718b1ae87a589ba9cf.scope: Deactivated successfully.
Feb 20 04:46:15 localhost podman[301783]: 2026-02-20 09:46:15.983443709 +0000 UTC m=+0.154960476 container died 25edd5fdbfac7832efdbecb35c3cce52d8739ca6ba192d718b1ae87a589ba9cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_davinci, GIT_BRANCH=main, release=1770267347, build-date=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, architecture=x86_64, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, distribution-scope=public, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7)
Feb 20 04:46:16 localhost podman[301804]: 2026-02-20 09:46:16.064325203 +0000 UTC m=+0.069462508 container remove 25edd5fdbfac7832efdbecb35c3cce52d8739ca6ba192d718b1ae87a589ba9cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_davinci, io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, GIT_BRANCH=main, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, description=Red Hat Ceph Storage 7, architecture=x86_64, release=1770267347, distribution-scope=public, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git)
Feb 20 04:46:16 localhost systemd[1]: libpod-conmon-25edd5fdbfac7832efdbecb35c3cce52d8739ca6ba192d718b1ae87a589ba9cf.scope: Deactivated successfully.
Feb 20 04:46:16 localhost podman[241347]: time="2026-02-20T09:46:16Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 04:46:16 localhost podman[241347]: @ - - [20/Feb/2026:09:46:16 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157716 "" "Go-http-client/1.1"
Feb 20 04:46:16 localhost podman[241347]: @ - - [20/Feb/2026:09:46:16 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18751 "" "Go-http-client/1.1"
Feb 20 04:46:16 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0)
Feb 20 04:46:16 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0)
Feb 20 04:46:16 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Feb 20 04:46:16 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 04:46:16 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 04:46:16 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 04:46:16 localhost podman[301880]:
Feb 20 04:46:16 localhost podman[301880]: 2026-02-20 09:46:16.847706869 +0000 UTC m=+0.074175485 container create d5629c68b61d2d76e0497591d6ac27893e5a0fcd3d0b3c326da3a0c0c01c0f93 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=frosty_turing, vendor=Red Hat, Inc., RELEASE=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=rhceph, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, vcs-type=git, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, version=7, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux )
Feb 20 04:46:16 localhost systemd[1]: Started libpod-conmon-d5629c68b61d2d76e0497591d6ac27893e5a0fcd3d0b3c326da3a0c0c01c0f93.scope.
Feb 20 04:46:16 localhost systemd[1]: Started libcrun container.
Feb 20 04:46:16 localhost podman[301880]: 2026-02-20 09:46:16.906959941 +0000 UTC m=+0.133428557 container init d5629c68b61d2d76e0497591d6ac27893e5a0fcd3d0b3c326da3a0c0c01c0f93 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=frosty_turing, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, build-date=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , version=7)
Feb 20 04:46:16 localhost podman[301880]: 2026-02-20 09:46:16.915654035 +0000 UTC m=+0.142122661 container start d5629c68b61d2d76e0497591d6ac27893e5a0fcd3d0b3c326da3a0c0c01c0f93 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=frosty_turing, version=7, io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, name=rhceph, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, release=1770267347, ceph=True, vcs-type=git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, vendor=Red Hat, Inc., io.openshift.expose-services=)
Feb 20 04:46:16 localhost podman[301880]: 2026-02-20 09:46:16.916030085 +0000 UTC m=+0.142498751 container attach d5629c68b61d2d76e0497591d6ac27893e5a0fcd3d0b3c326da3a0c0c01c0f93 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=frosty_turing, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, GIT_CLEAN=True, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, release=1770267347, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, build-date=2026-02-09T10:25:24Z, io.openshift.expose-services=, RELEASE=main, GIT_BRANCH=main, io.buildah.version=1.42.2)
Feb 20 04:46:16 localhost podman[301880]: 2026-02-20 09:46:16.817597699 +0000 UTC m=+0.044066345 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 04:46:16 localhost frosty_turing[301895]: 167 167
Feb 20 04:46:16 localhost systemd[1]: libpod-d5629c68b61d2d76e0497591d6ac27893e5a0fcd3d0b3c326da3a0c0c01c0f93.scope: Deactivated successfully.
Feb 20 04:46:16 localhost podman[301880]: 2026-02-20 09:46:16.920165387 +0000 UTC m=+0.146634003 container died d5629c68b61d2d76e0497591d6ac27893e5a0fcd3d0b3c326da3a0c0c01c0f93 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=frosty_turing, GIT_CLEAN=True, ceph=True, io.openshift.expose-services=, com.redhat.component=rhceph-container, name=rhceph, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, CEPH_POINT_RELEASE=, distribution-scope=public, description=Red Hat Ceph Storage 7, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Feb 20 04:46:16 localhost systemd[1]: var-lib-containers-storage-overlay-305a9dbb5b093882da4e230643faaa7fc463469ce32bcb6a0dc5bc0684c2d114-merged.mount: Deactivated successfully.
Feb 20 04:46:17 localhost systemd[1]: var-lib-containers-storage-overlay-f9838180cabcec947bdff28f2836a2391bd0a2ad1c9fb2cf42c6171ebdd799e2-merged.mount: Deactivated successfully.
Feb 20 04:46:17 localhost podman[301900]: 2026-02-20 09:46:17.017364018 +0000 UTC m=+0.088021206 container remove d5629c68b61d2d76e0497591d6ac27893e5a0fcd3d0b3c326da3a0c0c01c0f93 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=frosty_turing, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, version=7, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_BRANCH=main, build-date=2026-02-09T10:25:24Z, ceph=True, io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, GIT_CLEAN=True, distribution-scope=public, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, release=1770267347, RELEASE=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 04:46:17 localhost systemd[1]: libpod-conmon-d5629c68b61d2d76e0497591d6ac27893e5a0fcd3d0b3c326da3a0c0c01c0f93.scope: Deactivated successfully.
Feb 20 04:46:17 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0)
Feb 20 04:46:17 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0)
Feb 20 04:46:17 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005625202.arwxwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Feb 20 04:46:17 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625202.arwxwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 04:46:17 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb 20 04:46:17 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mgr services"} : dispatch
Feb 20 04:46:17 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 04:46:17 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 04:46:17 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:17 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:17 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 04:46:17 localhost ceph-mon[292786]: Reconfiguring mds.mds.np0005625202.akhmop (monmap changed)...
Feb 20 04:46:17 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 04:46:17 localhost ceph-mon[292786]: Reconfiguring daemon mds.mds.np0005625202.akhmop on np0005625202.localdomain
Feb 20 04:46:17 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:17 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:17 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625202.arwxwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 04:46:17 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625202.arwxwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 04:46:17 localhost ceph-mon[292786]: mon.np0005625202@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 04:46:17 localhost podman[301968]:
Feb 20 04:46:17 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb 20 04:46:17 localhost podman[301968]: 2026-02-20 09:46:17.689656008 +0000 UTC m=+0.078875071 container create f880b9e3ce0493dcc5267b89b9fcb82599c7d04d7dde1ddcc5f9eb202344ab3e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_yalow, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, architecture=x86_64, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, GIT_BRANCH=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, description=Red Hat Ceph Storage 7, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, release=1770267347, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 04:46:17 localhost systemd[1]: Started libpod-conmon-f880b9e3ce0493dcc5267b89b9fcb82599c7d04d7dde1ddcc5f9eb202344ab3e.scope.
Feb 20 04:46:17 localhost systemd[1]: Started libcrun container.
Feb 20 04:46:17 localhost podman[301968]: 2026-02-20 09:46:17.75295716 +0000 UTC m=+0.142176223 container init f880b9e3ce0493dcc5267b89b9fcb82599c7d04d7dde1ddcc5f9eb202344ab3e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_yalow, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.42.2, name=rhceph, CEPH_POINT_RELEASE=, release=1770267347, distribution-scope=public, vcs-type=git, ceph=True, build-date=2026-02-09T10:25:24Z, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, GIT_CLEAN=True, io.openshift.expose-services=, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph)
Feb 20 04:46:17 localhost podman[301968]: 2026-02-20 09:46:17.658650935 +0000 UTC m=+0.047870058 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 04:46:17 localhost podman[301968]: 2026-02-20 09:46:17.766051442 +0000 UTC m=+0.155270505 container start f880b9e3ce0493dcc5267b89b9fcb82599c7d04d7dde1ddcc5f9eb202344ab3e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_yalow, org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.buildah.version=1.42.2, name=rhceph, architecture=x86_64, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, distribution-scope=public, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 04:46:17 localhost podman[301968]: 2026-02-20 09:46:17.76633673 +0000 UTC m=+0.155555753 container attach f880b9e3ce0493dcc5267b89b9fcb82599c7d04d7dde1ddcc5f9eb202344ab3e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_yalow, com.redhat.component=rhceph-container, vcs-type=git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, io.buildah.version=1.42.2, vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, distribution-scope=public, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, name=rhceph, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z)
Feb 20 04:46:17 localhost serene_yalow[301981]: 167 167
Feb 20 04:46:17 localhost systemd[1]: libpod-f880b9e3ce0493dcc5267b89b9fcb82599c7d04d7dde1ddcc5f9eb202344ab3e.scope: Deactivated successfully.
Feb 20 04:46:17 localhost podman[301968]: 2026-02-20 09:46:17.769823703 +0000 UTC m=+0.159042816 container died f880b9e3ce0493dcc5267b89b9fcb82599c7d04d7dde1ddcc5f9eb202344ab3e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_yalow, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-09T10:25:24Z, distribution-scope=public, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., release=1770267347, description=Red Hat Ceph Storage 7, architecture=x86_64, io.buildah.version=1.42.2, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, RELEASE=main, GIT_BRANCH=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, io.openshift.expose-services=, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=)
Feb 20 04:46:17 localhost podman[301988]: 2026-02-20 09:46:17.863243634
+0000 UTC m=+0.081400409 container remove f880b9e3ce0493dcc5267b89b9fcb82599c7d04d7dde1ddcc5f9eb202344ab3e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_yalow, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, architecture=x86_64, distribution-scope=public, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, name=rhceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, version=7, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7) Feb 20 04:46:17 localhost systemd[1]: libpod-conmon-f880b9e3ce0493dcc5267b89b9fcb82599c7d04d7dde1ddcc5f9eb202344ab3e.scope: Deactivated successfully. Feb 20 04:46:17 localhost systemd[1]: var-lib-containers-storage-overlay-0e2155efe59aa36ca5115211170efcd1a74787b7a466c0f9cf4d6d05f1599413-merged.mount: Deactivated successfully. 
Feb 20 04:46:17 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0) Feb 20 04:46:17 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0) Feb 20 04:46:17 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) Feb 20 04:46:17 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 20 04:46:17 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) Feb 20 04:46:17 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch Feb 20 04:46:18 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:46:18 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 04:46:18 localhost podman[302057]: Feb 20 04:46:18 localhost podman[302057]: 2026-02-20 09:46:18.589921166 +0000 UTC m=+0.074771941 container create e37da11d305cc845a53844a797d5cb6a473122590e235db229b2895e19f88413 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_elion, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, distribution-scope=public, 
io.buildah.version=1.42.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , RELEASE=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, version=7, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, release=1770267347, ceph=True, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git) Feb 20 04:46:18 localhost systemd[1]: Started libpod-conmon-e37da11d305cc845a53844a797d5cb6a473122590e235db229b2895e19f88413.scope. Feb 20 04:46:18 localhost systemd[1]: Started libcrun container. 
Feb 20 04:46:18 localhost podman[302057]: 2026-02-20 09:46:18.55957078 +0000 UTC m=+0.044421585 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 04:46:18 localhost podman[302057]: 2026-02-20 09:46:18.660764629 +0000 UTC m=+0.145615414 container init e37da11d305cc845a53844a797d5cb6a473122590e235db229b2895e19f88413 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_elion, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2026-02-09T10:25:24Z, RELEASE=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, distribution-scope=public, version=7, ceph=True, name=rhceph, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_CLEAN=True, vcs-type=git, io.buildah.version=1.42.2, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=) Feb 20 04:46:18 localhost podman[302057]: 2026-02-20 09:46:18.670616034 +0000 UTC m=+0.155466809 container start e37da11d305cc845a53844a797d5cb6a473122590e235db229b2895e19f88413 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_elion, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, RELEASE=main, ceph=True, 
architecture=x86_64, build-date=2026-02-09T10:25:24Z, vcs-type=git, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, distribution-scope=public, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, release=1770267347, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph) Feb 20 04:46:18 localhost podman[302057]: 2026-02-20 09:46:18.670904042 +0000 UTC m=+0.155754857 container attach e37da11d305cc845a53844a797d5cb6a473122590e235db229b2895e19f88413 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_elion, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., GIT_CLEAN=True, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , build-date=2026-02-09T10:25:24Z, RELEASE=main, name=rhceph, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, 
org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, version=7, ceph=True, distribution-scope=public, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 04:46:18 localhost sharp_elion[302072]: 167 167 Feb 20 04:46:18 localhost systemd[1]: libpod-e37da11d305cc845a53844a797d5cb6a473122590e235db229b2895e19f88413.scope: Deactivated successfully. Feb 20 04:46:18 localhost podman[302057]: 2026-02-20 09:46:18.67344022 +0000 UTC m=+0.158291025 container died e37da11d305cc845a53844a797d5cb6a473122590e235db229b2895e19f88413 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_elion, vcs-type=git, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, version=7, io.buildah.version=1.42.2, vendor=Red Hat, Inc., name=rhceph, release=1770267347, GIT_CLEAN=True, ceph=True, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2026-02-09T10:25:24Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, RELEASE=main, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7) Feb 20 04:46:18 
localhost ceph-mon[292786]: Reconfiguring mgr.np0005625202.arwxwo (monmap changed)... Feb 20 04:46:18 localhost ceph-mon[292786]: Reconfiguring daemon mgr.np0005625202.arwxwo on np0005625202.localdomain Feb 20 04:46:18 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:18 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:18 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:18 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 20 04:46:18 localhost podman[302077]: 2026-02-20 09:46:18.76495094 +0000 UTC m=+0.082654043 container remove e37da11d305cc845a53844a797d5cb6a473122590e235db229b2895e19f88413 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_elion, ceph=True, io.buildah.version=1.42.2, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, distribution-scope=public, name=rhceph, maintainer=Guillaume Abrioux , version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, release=1770267347, RELEASE=main, architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 
7, GIT_CLEAN=True, description=Red Hat Ceph Storage 7) Feb 20 04:46:18 localhost systemd[1]: libpod-conmon-e37da11d305cc845a53844a797d5cb6a473122590e235db229b2895e19f88413.scope: Deactivated successfully. Feb 20 04:46:18 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0) Feb 20 04:46:18 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0) Feb 20 04:46:18 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005625203", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Feb 20 04:46:18 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625203", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 20 04:46:18 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:46:18 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 04:46:18 localhost systemd[1]: var-lib-containers-storage-overlay-ff2a5c0cb3f9431de0041c795a8625b1977c098531df3498235ccc4ce1efdcec-merged.mount: Deactivated successfully. 
Feb 20 04:46:19 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0) Feb 20 04:46:19 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0) Feb 20 04:46:19 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) Feb 20 04:46:19 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Feb 20 04:46:19 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:46:19 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 04:46:19 localhost ceph-mon[292786]: Reconfiguring mon.np0005625202 (monmap changed)... Feb 20 04:46:19 localhost ceph-mon[292786]: Reconfiguring daemon mon.np0005625202 on np0005625202.localdomain Feb 20 04:46:19 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:19 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:19 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625203", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 20 04:46:19 localhost ceph-mon[292786]: Reconfiguring crash.np0005625203 (monmap changed)... 
Feb 20 04:46:19 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625203", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 20 04:46:19 localhost ceph-mon[292786]: Reconfiguring daemon crash.np0005625203 on np0005625203.localdomain Feb 20 04:46:19 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:19 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:19 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Feb 20 04:46:20 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0) Feb 20 04:46:20 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0) Feb 20 04:46:20 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "osd.4"} v 0) Feb 20 04:46:20 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Feb 20 04:46:20 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:46:20 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 04:46:20 localhost ceph-mon[292786]: Reconfiguring osd.1 (monmap changed)... 
Feb 20 04:46:20 localhost ceph-mon[292786]: Reconfiguring daemon osd.1 on np0005625203.localdomain Feb 20 04:46:20 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:20 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:20 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Feb 20 04:46:21 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) Feb 20 04:46:21 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0) Feb 20 04:46:21 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0) Feb 20 04:46:21 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005625203.zsrwgk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) Feb 20 04:46:21 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625203.zsrwgk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 20 04:46:21 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:46:21 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 04:46:21 localhost ceph-mon[292786]: Reconfiguring osd.4 (monmap 
changed)... Feb 20 04:46:21 localhost ceph-mon[292786]: Reconfiguring daemon osd.4 on np0005625203.localdomain Feb 20 04:46:21 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:21 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:21 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:21 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625203.zsrwgk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 20 04:46:21 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625203.zsrwgk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 20 04:46:22 localhost ceph-mon[292786]: mon.np0005625202@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:46:22 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0) Feb 20 04:46:22 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0) Feb 20 04:46:22 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005625203.lonygy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Feb 20 04:46:22 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625203.lonygy", "caps": ["mon", "profile mgr", 
"osd", "allow *", "mds", "allow *"]} : dispatch Feb 20 04:46:22 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command({"prefix": "mgr services"} v 0) Feb 20 04:46:22 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mgr services"} : dispatch Feb 20 04:46:22 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:46:22 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 04:46:22 localhost ceph-mon[292786]: Saving service mon spec with placement label:mon Feb 20 04:46:22 localhost ceph-mon[292786]: Reconfiguring mds.mds.np0005625203.zsrwgk (monmap changed)... Feb 20 04:46:22 localhost ceph-mon[292786]: Reconfiguring daemon mds.mds.np0005625203.zsrwgk on np0005625203.localdomain Feb 20 04:46:22 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:22 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:22 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625203.lonygy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 20 04:46:22 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625203.lonygy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 20 04:46:23 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0) Feb 20 04:46:23 localhost ceph-mon[292786]: 
mon.np0005625202@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0) Feb 20 04:46:23 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005625204", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Feb 20 04:46:23 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625204", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 20 04:46:23 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:46:23 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 04:46:23 localhost ceph-mon[292786]: Reconfiguring mgr.np0005625203.lonygy (monmap changed)... 
Feb 20 04:46:23 localhost ceph-mon[292786]: Reconfiguring daemon mgr.np0005625203.lonygy on np0005625203.localdomain Feb 20 04:46:23 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:23 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:23 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625204", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 20 04:46:23 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625204", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 20 04:46:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c. Feb 20 04:46:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217. 
Feb 20 04:46:24 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command({"prefix": "quorum_status"} v 0)
Feb 20 04:46:24 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "quorum_status"} : dispatch
Feb 20 04:46:24 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e15 handle_command mon_command({"prefix": "mon rm", "name": "np0005625204"} v 0)
Feb 20 04:46:24 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon rm", "name": "np0005625204"} : dispatch
Feb 20 04:46:24 localhost ceph-mgr[286565]: ms_deliver_dispatch: unhandled message 0x55d2fffb51e0 mon_map magic: 0 from mon.1 v2:172.18.0.103:3300/0
Feb 20 04:46:24 localhost ceph-mon[292786]: mon.np0005625202@1(peon) e16 my rank is now 0 (was 1)
Feb 20 04:46:24 localhost ceph-mgr[286565]: client.0 ms_handle_reset on v2:172.18.0.103:3300/0
Feb 20 04:46:24 localhost ceph-mgr[286565]: client.0 ms_handle_reset on v2:172.18.0.103:3300/0
Feb 20 04:46:24 localhost ceph-mgr[286565]: ms_deliver_dispatch: unhandled message 0x55d2fffb5600 mon_map magic: 0 from mon.0 v2:172.18.0.103:3300/0
Feb 20 04:46:24 localhost podman[302094]: 2026-02-20 09:46:24.462646671 +0000 UTC m=+0.093549824 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, version=9.7, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, name=ubi9/ubi-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.33.7, architecture=x86_64, release=1770267347, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, build-date=2026-02-05T04:57:10Z, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, org.opencontainers.image.created=2026-02-05T04:57:10Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.expose-services=)
Feb 20 04:46:24 localhost podman[302094]: 2026-02-20 09:46:24.48267332 +0000 UTC m=+0.113576473 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-type=git, com.redhat.component=ubi9-minimal-container, version=9.7, io.openshift.tags=minimal rhel9, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, container_name=openstack_network_exporter, release=1770267347, managed_by=edpm_ansible, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, io.buildah.version=1.33.7, config_id=openstack_network_exporter)
Feb 20 04:46:24 localhost podman[302095]: 2026-02-20 09:46:24.523201929 +0000 UTC m=+0.154106192 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Feb 20 04:46:24 localhost podman[302095]: 2026-02-20 09:46:24.533411363 +0000 UTC m=+0.164315626 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260127, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Feb 20 04:46:24 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully.
Feb 20 04:46:24 localhost ceph-mon[292786]: log_channel(cluster) log [INF] : mon.np0005625202 calling monitor election
Feb 20 04:46:24 localhost ceph-mon[292786]: paxos.0).electionLogic(62) init, last seen epoch 62
Feb 20 04:46:24 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully.
Feb 20 04:46:24 localhost ceph-mon[292786]: mon.np0005625202@0(electing) e16 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Feb 20 04:46:24 localhost ceph-mon[292786]: log_channel(cluster) log [INF] : mon.np0005625202 is new leader, mons np0005625202,np0005625203 in quorum (ranks 0,1)
Feb 20 04:46:24 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : monmap epoch 16
Feb 20 04:46:24 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 04:46:24 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : last_changed 2026-02-20T09:46:24.360760+0000
Feb 20 04:46:24 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : created 2026-02-20T07:36:51.191305+0000
Feb 20 04:46:24 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : min_mon_release 18 (reef)
Feb 20 04:46:24 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : election_strategy: 1
Feb 20 04:46:24 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : 0: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005625202
Feb 20 04:46:24 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : 1: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005625203
Feb 20 04:46:24 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Feb 20 04:46:24 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=mds.np0005625203.zsrwgk=up:active} 2 up:standby
Feb 20 04:46:24 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e89: 6 total, 6 up, 6 in
Feb 20 04:46:24 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : mgrmap e36: np0005625203.lonygy(active, since 68s), standbys: np0005625204.exgrzx, np0005625201.mtnyvu, np0005625202.arwxwo
Feb 20 04:46:24 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0)
Feb 20 04:46:24 localhost ceph-mon[292786]: log_channel(cluster) log [INF] : overall HEALTH_OK
Feb 20 04:46:24 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:24 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0)
Feb 20 04:46:24 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:24 localhost ceph-mon[292786]: Reconfiguring crash.np0005625204 (monmap changed)...
Feb 20 04:46:24 localhost ceph-mon[292786]: Reconfiguring daemon crash.np0005625204 on np0005625204.localdomain
Feb 20 04:46:24 localhost ceph-mon[292786]: Remove daemons mon.np0005625204
Feb 20 04:46:24 localhost ceph-mon[292786]: Safe to remove mon.np0005625204: new quorum should be ['np0005625202', 'np0005625203'] (from ['np0005625202', 'np0005625203'])
Feb 20 04:46:24 localhost ceph-mon[292786]: Removing monitor np0005625204 from monmap...
Feb 20 04:46:24 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon rm", "name": "np0005625204"} : dispatch
Feb 20 04:46:24 localhost ceph-mon[292786]: Removing daemon mon.np0005625204 from np0005625204.localdomain -- ports []
Feb 20 04:46:24 localhost ceph-mon[292786]: mon.np0005625203 calling monitor election
Feb 20 04:46:24 localhost ceph-mon[292786]: mon.np0005625202 calling monitor election
Feb 20 04:46:24 localhost ceph-mon[292786]: mon.np0005625202 is new leader, mons np0005625202,np0005625203 in quorum (ranks 0,1)
Feb 20 04:46:24 localhost ceph-mon[292786]: overall HEALTH_OK
Feb 20 04:46:24 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:24 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:24 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Feb 20 04:46:25 localhost ceph-mon[292786]: Reconfiguring osd.0 (monmap changed)...
Feb 20 04:46:25 localhost ceph-mon[292786]: Reconfiguring daemon osd.0 on np0005625204.localdomain
Feb 20 04:46:26 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0)
Feb 20 04:46:26 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:26 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0)
Feb 20 04:46:26 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:27 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 04:46:27 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:27 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:27 localhost ceph-mon[292786]: Reconfiguring osd.3 (monmap changed)...
Feb 20 04:46:27 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Feb 20 04:46:27 localhost ceph-mon[292786]: Reconfiguring daemon osd.3 on np0005625204.localdomain
Feb 20 04:46:27 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0)
Feb 20 04:46:27 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:27 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0)
Feb 20 04:46:27 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:27 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005625204.wnsphl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Feb 20 04:46:27 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625204.wnsphl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 04:46:27 localhost sshd[302132]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:46:28 localhost openstack_network_exporter[243776]: ERROR 09:46:28 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 04:46:28 localhost openstack_network_exporter[243776]:
Feb 20 04:46:28 localhost openstack_network_exporter[243776]: ERROR 09:46:28 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 04:46:28 localhost openstack_network_exporter[243776]:
Feb 20 04:46:28 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0)
Feb 20 04:46:28 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:28 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0)
Feb 20 04:46:28 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:28 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005625204.exgrzx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Feb 20 04:46:28 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625204.exgrzx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 04:46:28 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:28 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:28 localhost ceph-mon[292786]: Reconfiguring mds.mds.np0005625204.wnsphl (monmap changed)...
Feb 20 04:46:28 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625204.wnsphl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 04:46:28 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625204.wnsphl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 04:46:28 localhost ceph-mon[292786]: Reconfiguring daemon mds.mds.np0005625204.wnsphl on np0005625204.localdomain
Feb 20 04:46:28 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:28 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:28 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625204.exgrzx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 04:46:28 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625204.exgrzx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 04:46:29 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0)
Feb 20 04:46:29 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:29 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0)
Feb 20 04:46:29 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.
Feb 20 04:46:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.
Feb 20 04:46:29 localhost podman[302153]: 2026-02-20 09:46:29.447418791 +0000 UTC m=+0.088670653 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Feb 20 04:46:29 localhost podman[302153]: 2026-02-20 09:46:29.478090346 +0000 UTC m=+0.119342188 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3)
Feb 20 04:46:29 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully.
Feb 20 04:46:29 localhost podman[302152]: 2026-02-20 09:46:29.500608591 +0000 UTC m=+0.143672652 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 20 04:46:29 localhost ceph-mon[292786]: Reconfiguring mgr.np0005625204.exgrzx (monmap changed)...
Feb 20 04:46:29 localhost ceph-mon[292786]: Reconfiguring daemon mgr.np0005625204.exgrzx on np0005625204.localdomain
Feb 20 04:46:29 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:29 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:29 localhost podman[302152]: 2026-02-20 09:46:29.6099411 +0000 UTC m=+0.253005191 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 04:46:29 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully.
Feb 20 04:46:30 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0)
Feb 20 04:46:30 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:30 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0)
Feb 20 04:46:30 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:31 localhost sshd[302244]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:46:31 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:31 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:31 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 04:46:32 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 04:46:32 localhost ceph-mon[292786]: Updating np0005625202.localdomain:/etc/ceph/ceph.conf
Feb 20 04:46:32 localhost ceph-mon[292786]: Updating np0005625203.localdomain:/etc/ceph/ceph.conf
Feb 20 04:46:32 localhost ceph-mon[292786]: Updating np0005625204.localdomain:/etc/ceph/ceph.conf
Feb 20 04:46:33 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0)
Feb 20 04:46:33 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0)
Feb 20 04:46:33 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:33 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:33 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0)
Feb 20 04:46:33 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0)
Feb 20 04:46:33 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:33 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:33 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0)
Feb 20 04:46:33 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:33 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0)
Feb 20 04:46:33 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:33 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 20 04:46:33 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:33 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005625202", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Feb 20 04:46:33 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625202", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 04:46:33 localhost ceph-mon[292786]: Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 04:46:33 localhost ceph-mon[292786]: Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 04:46:33 localhost ceph-mon[292786]: Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 04:46:33 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:33 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:33 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:33 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:33 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:33 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:33 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:33 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625202", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 04:46:33 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625202", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 04:46:34 localhost podman[302637]:
Feb 20 04:46:34 localhost podman[302637]: 2026-02-20 09:46:34.206617069 +0000 UTC m=+0.080873805 container create fda394dc38bd7812284f791a995bbe9592c4978720737b61f0a5937c2e63fd6a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_ardinghelli, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , build-date=2026-02-09T10:25:24Z, architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., GIT_BRANCH=main, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, ceph=True, release=1770267347, distribution-scope=public, io.openshift.tags=rhceph ceph)
Feb 20 04:46:34 localhost systemd[1]: Started libpod-conmon-fda394dc38bd7812284f791a995bbe9592c4978720737b61f0a5937c2e63fd6a.scope.
Feb 20 04:46:34 localhost systemd[1]: Started libcrun container.
Feb 20 04:46:34 localhost podman[302637]: 2026-02-20 09:46:34.174008353 +0000 UTC m=+0.048265069 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 04:46:34 localhost podman[302637]: 2026-02-20 09:46:34.287977096 +0000 UTC m=+0.162233782 container init fda394dc38bd7812284f791a995bbe9592c4978720737b61f0a5937c2e63fd6a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_ardinghelli, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, build-date=2026-02-09T10:25:24Z, GIT_CLEAN=True, io.openshift.expose-services=, io.buildah.version=1.42.2, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_BRANCH=main, com.redhat.component=rhceph-container, version=7, RELEASE=main, name=rhceph, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git) Feb 20 04:46:34 localhost podman[302637]: 2026-02-20 09:46:34.29930302 +0000 UTC m=+0.173559706 container start fda394dc38bd7812284f791a995bbe9592c4978720737b61f0a5937c2e63fd6a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_ardinghelli, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, maintainer=Guillaume Abrioux , ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, distribution-scope=public, com.redhat.component=rhceph-container, GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, RELEASE=main, name=rhceph, vendor=Red Hat, Inc.) Feb 20 04:46:34 localhost podman[302637]: 2026-02-20 09:46:34.299722912 +0000 UTC m=+0.173979648 container attach fda394dc38bd7812284f791a995bbe9592c4978720737b61f0a5937c2e63fd6a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_ardinghelli, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, version=7, maintainer=Guillaume Abrioux , build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git, 
org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main) Feb 20 04:46:34 localhost practical_ardinghelli[302652]: 167 167 Feb 20 04:46:34 localhost systemd[1]: libpod-fda394dc38bd7812284f791a995bbe9592c4978720737b61f0a5937c2e63fd6a.scope: Deactivated successfully. Feb 20 04:46:34 localhost podman[302637]: 2026-02-20 09:46:34.303601106 +0000 UTC m=+0.177857832 container died fda394dc38bd7812284f791a995bbe9592c4978720737b61f0a5937c2e63fd6a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_ardinghelli, io.buildah.version=1.42.2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, ceph=True, CEPH_POINT_RELEASE=, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , name=rhceph, GIT_CLEAN=True) 
Feb 20 04:46:34 localhost podman[302657]: 2026-02-20 09:46:34.400339356 +0000 UTC m=+0.088150210 container remove fda394dc38bd7812284f791a995bbe9592c4978720737b61f0a5937c2e63fd6a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_ardinghelli, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, version=7, io.openshift.tags=rhceph ceph, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, RELEASE=main, GIT_BRANCH=main, ceph=True, io.openshift.expose-services=, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, release=1770267347, com.redhat.component=rhceph-container, GIT_CLEAN=True, distribution-scope=public, vendor=Red Hat, Inc.) Feb 20 04:46:34 localhost systemd[1]: libpod-conmon-fda394dc38bd7812284f791a995bbe9592c4978720737b61f0a5937c2e63fd6a.scope: Deactivated successfully. 
Feb 20 04:46:34 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0) Feb 20 04:46:34 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:34 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0) Feb 20 04:46:34 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1. Feb 20 04:46:34 localhost podman[302694]: 2026-02-20 09:46:34.685915981 +0000 UTC m=+0.084257485 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 20 04:46:34 localhost podman[302694]: 2026-02-20 09:46:34.701885551 +0000 UTC m=+0.100227115 container exec_died 
90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 20 04:46:34 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully. Feb 20 04:46:34 localhost ceph-mon[292786]: Reconfiguring crash.np0005625202 (monmap changed)... 
Feb 20 04:46:34 localhost ceph-mon[292786]: Reconfiguring daemon crash.np0005625202 on np0005625202.localdomain Feb 20 04:46:34 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:34 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:34 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Feb 20 04:46:35 localhost podman[302753]: Feb 20 04:46:35 localhost podman[302753]: 2026-02-20 09:46:35.157421345 +0000 UTC m=+0.064313350 container create b2063866270d51202e950b57e494bb26bb77eac411d7e988c82f4b3bf897f197 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_maxwell, name=rhceph, io.openshift.expose-services=, distribution-scope=public, RELEASE=main, CEPH_POINT_RELEASE=, release=1770267347, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, GIT_BRANCH=main, io.buildah.version=1.42.2, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., GIT_CLEAN=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14) Feb 20 04:46:35 localhost systemd[1]: Started 
libpod-conmon-b2063866270d51202e950b57e494bb26bb77eac411d7e988c82f4b3bf897f197.scope. Feb 20 04:46:35 localhost systemd[1]: var-lib-containers-storage-overlay-afe8bd78f068e2efae9f37751d06e14d63c0051415ed54d7db0db21e53f172df-merged.mount: Deactivated successfully. Feb 20 04:46:35 localhost systemd[1]: Started libcrun container. Feb 20 04:46:35 localhost podman[302753]: 2026-02-20 09:46:35.233917051 +0000 UTC m=+0.140809056 container init b2063866270d51202e950b57e494bb26bb77eac411d7e988c82f4b3bf897f197 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_maxwell, io.openshift.tags=rhceph ceph, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2026-02-09T10:25:24Z, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, vcs-type=git, CEPH_POINT_RELEASE=, io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , name=rhceph, ceph=True, vendor=Red Hat, Inc., GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers) Feb 20 04:46:35 localhost podman[302753]: 2026-02-20 09:46:35.134720575 +0000 UTC m=+0.041612570 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 04:46:35 localhost podman[302753]: 2026-02-20 09:46:35.243936761 +0000 UTC m=+0.150828796 container start 
b2063866270d51202e950b57e494bb26bb77eac411d7e988c82f4b3bf897f197 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_maxwell, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2026-02-09T10:25:24Z, architecture=x86_64, distribution-scope=public, GIT_CLEAN=True, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Feb 20 04:46:35 localhost podman[302753]: 2026-02-20 09:46:35.244195537 +0000 UTC m=+0.151087562 container attach b2063866270d51202e950b57e494bb26bb77eac411d7e988c82f4b3bf897f197 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_maxwell, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1770267347, build-date=2026-02-09T10:25:24Z, distribution-scope=public, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, name=rhceph, ceph=True, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, 
vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., GIT_BRANCH=main, io.buildah.version=1.42.2, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7) Feb 20 04:46:35 localhost crazy_maxwell[302768]: 167 167 Feb 20 04:46:35 localhost systemd[1]: libpod-b2063866270d51202e950b57e494bb26bb77eac411d7e988c82f4b3bf897f197.scope: Deactivated successfully. Feb 20 04:46:35 localhost podman[302753]: 2026-02-20 09:46:35.246763357 +0000 UTC m=+0.153655422 container died b2063866270d51202e950b57e494bb26bb77eac411d7e988c82f4b3bf897f197 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_maxwell, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1770267347, GIT_BRANCH=main, vendor=Red Hat, Inc., io.buildah.version=1.42.2, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, RELEASE=main, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, architecture=x86_64, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-09T10:25:24Z, name=rhceph, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, distribution-scope=public) Feb 20 04:46:35 localhost systemd[1]: var-lib-containers-storage-overlay-83d072b81d834c8ab1e973fd1dfdad5f3b5822b82723c7116c7ebe47469ce1e8-merged.mount: Deactivated successfully. Feb 20 04:46:35 localhost podman[302773]: 2026-02-20 09:46:35.334102424 +0000 UTC m=+0.078244844 container remove b2063866270d51202e950b57e494bb26bb77eac411d7e988c82f4b3bf897f197 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_maxwell, version=7, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1770267347, RELEASE=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., name=rhceph, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, io.openshift.expose-services=, GIT_BRANCH=main, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, build-date=2026-02-09T10:25:24Z, GIT_CLEAN=True, ceph=True, distribution-scope=public, io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, org.opencontainers.image.created=2026-02-09T10:25:24Z) Feb 20 04:46:35 localhost systemd[1]: libpod-conmon-b2063866270d51202e950b57e494bb26bb77eac411d7e988c82f4b3bf897f197.scope: 
Deactivated successfully. Feb 20 04:46:35 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0) Feb 20 04:46:35 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:35 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0) Feb 20 04:46:35 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:35 localhost ceph-mon[292786]: Reconfiguring osd.2 (monmap changed)... Feb 20 04:46:35 localhost ceph-mon[292786]: Reconfiguring daemon osd.2 on np0005625202.localdomain Feb 20 04:46:35 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:35 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:35 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Feb 20 04:46:36 localhost podman[302849]: Feb 20 04:46:36 localhost podman[302849]: 2026-02-20 09:46:36.06800415 +0000 UTC m=+0.057655712 container create 69116d6e76a3e2de28d927d3bd04d4aad7aea3186bdd45010a8a42d2dccfe869 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_diffie, description=Red Hat Ceph Storage 7, version=7, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, architecture=x86_64, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, distribution-scope=public, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, 
GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.openshift.tags=rhceph ceph) Feb 20 04:46:36 localhost systemd[1]: Started libpod-conmon-69116d6e76a3e2de28d927d3bd04d4aad7aea3186bdd45010a8a42d2dccfe869.scope. Feb 20 04:46:36 localhost systemd[1]: Started libcrun container. Feb 20 04:46:36 localhost podman[302849]: 2026-02-20 09:46:36.123576843 +0000 UTC m=+0.113228395 container init 69116d6e76a3e2de28d927d3bd04d4aad7aea3186bdd45010a8a42d2dccfe869 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_diffie, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, distribution-scope=public, GIT_CLEAN=True, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, build-date=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, io.buildah.version=1.42.2, com.redhat.component=rhceph-container, release=1770267347, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, name=rhceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, description=Red 
Hat Ceph Storage 7, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git) Feb 20 04:46:36 localhost podman[302849]: 2026-02-20 09:46:36.037066568 +0000 UTC m=+0.026718140 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 04:46:36 localhost lucid_diffie[302864]: 167 167 Feb 20 04:46:36 localhost systemd[1]: libpod-69116d6e76a3e2de28d927d3bd04d4aad7aea3186bdd45010a8a42d2dccfe869.scope: Deactivated successfully. Feb 20 04:46:36 localhost podman[302849]: 2026-02-20 09:46:36.150319422 +0000 UTC m=+0.139970964 container start 69116d6e76a3e2de28d927d3bd04d4aad7aea3186bdd45010a8a42d2dccfe869 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_diffie, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, name=rhceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, GIT_CLEAN=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, ceph=True, io.openshift.tags=rhceph ceph, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=) Feb 20 04:46:36 localhost 
podman[302849]: 2026-02-20 09:46:36.150757543 +0000 UTC m=+0.140409085 container attach 69116d6e76a3e2de28d927d3bd04d4aad7aea3186bdd45010a8a42d2dccfe869 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_diffie, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , RELEASE=main, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, com.redhat.component=rhceph-container, version=7, vendor=Red Hat, Inc., release=1770267347, ceph=True, io.buildah.version=1.42.2, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z) Feb 20 04:46:36 localhost podman[302849]: 2026-02-20 09:46:36.153860916 +0000 UTC m=+0.143512458 container died 69116d6e76a3e2de28d927d3bd04d4aad7aea3186bdd45010a8a42d2dccfe869 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_diffie, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, version=7, vcs-type=git, build-date=2026-02-09T10:25:24Z, release=1770267347, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_BRANCH=main) Feb 20 04:46:36 localhost systemd[1]: var-lib-containers-storage-overlay-14b466892eafdccb3313420ba24546d8511e8df95caf1ad2c9392d1ce19e1a61-merged.mount: Deactivated successfully. Feb 20 04:46:36 localhost podman[302869]: 2026-02-20 09:46:36.248758697 +0000 UTC m=+0.086766503 container remove 69116d6e76a3e2de28d927d3bd04d4aad7aea3186bdd45010a8a42d2dccfe869 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_diffie, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, name=rhceph, architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.42.2, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, vcs-type=git, summary=Provides the latest 
Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, release=1770267347, RELEASE=main) Feb 20 04:46:36 localhost systemd[1]: libpod-conmon-69116d6e76a3e2de28d927d3bd04d4aad7aea3186bdd45010a8a42d2dccfe869.scope: Deactivated successfully. Feb 20 04:46:36 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0) Feb 20 04:46:36 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:36 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0) Feb 20 04:46:36 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:36 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) Feb 20 04:46:36 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 20 04:46:36 localhost sshd[302941]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:46:36 localhost ceph-mon[292786]: Reconfiguring osd.5 (monmap changed)... 
Feb 20 04:46:36 localhost ceph-mon[292786]: Reconfiguring daemon osd.5 on np0005625202.localdomain Feb 20 04:46:36 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:36 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:36 localhost ceph-mon[292786]: Reconfiguring mds.mds.np0005625202.akhmop (monmap changed)... Feb 20 04:46:36 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 20 04:46:36 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 20 04:46:36 localhost ceph-mon[292786]: Reconfiguring daemon mds.mds.np0005625202.akhmop on np0005625202.localdomain Feb 20 04:46:37 localhost podman[302947]: Feb 20 04:46:37 localhost podman[302947]: 2026-02-20 09:46:37.04802996 +0000 UTC m=+0.065555763 container create 690797c901cc626368701c9e370bcaa4035722d4569e46f95f28035bacc13f00 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_archimedes, architecture=x86_64, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.buildah.version=1.42.2, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, release=1770267347, io.openshift.expose-services=, com.redhat.component=rhceph-container, 
build-date=2026-02-09T10:25:24Z, distribution-scope=public, ceph=True, version=7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux ) Feb 20 04:46:37 localhost systemd[1]: Started libpod-conmon-690797c901cc626368701c9e370bcaa4035722d4569e46f95f28035bacc13f00.scope. Feb 20 04:46:37 localhost systemd[1]: Started libcrun container. Feb 20 04:46:37 localhost podman[302947]: 2026-02-20 09:46:37.11237129 +0000 UTC m=+0.129897093 container init 690797c901cc626368701c9e370bcaa4035722d4569e46f95f28035bacc13f00 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_archimedes, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., RELEASE=main, version=7, name=rhceph, io.openshift.tags=rhceph ceph, distribution-scope=public, ceph=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, CEPH_POINT_RELEASE=, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.k8s.description=Red Hat Ceph Storage 7, release=1770267347, vcs-type=git, description=Red Hat Ceph Storage 7, GIT_BRANCH=main) Feb 20 04:46:37 localhost podman[302947]: 2026-02-20 09:46:37.016735519 +0000 UTC m=+0.034261352 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 04:46:37 localhost podman[302947]: 2026-02-20 09:46:37.122225144 +0000 UTC m=+0.139750977 container start 690797c901cc626368701c9e370bcaa4035722d4569e46f95f28035bacc13f00 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_archimedes, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, maintainer=Guillaume Abrioux , name=rhceph, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, vendor=Red Hat, Inc., GIT_BRANCH=main, ceph=True, com.redhat.component=rhceph-container, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git) Feb 20 04:46:37 localhost podman[302947]: 2026-02-20 09:46:37.122777459 +0000 UTC m=+0.140303302 container attach 690797c901cc626368701c9e370bcaa4035722d4569e46f95f28035bacc13f00 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_archimedes, name=rhceph, distribution-scope=public, 
io.buildah.version=1.42.2, io.openshift.expose-services=, RELEASE=main, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, architecture=x86_64, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., GIT_CLEAN=True, description=Red Hat Ceph Storage 7, release=1770267347, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z) Feb 20 04:46:37 localhost strange_archimedes[302962]: 167 167 Feb 20 04:46:37 localhost systemd[1]: libpod-690797c901cc626368701c9e370bcaa4035722d4569e46f95f28035bacc13f00.scope: Deactivated successfully. 
Feb 20 04:46:37 localhost podman[302947]: 2026-02-20 09:46:37.126130649 +0000 UTC m=+0.143656482 container died 690797c901cc626368701c9e370bcaa4035722d4569e46f95f28035bacc13f00 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_archimedes, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, io.openshift.tags=rhceph ceph, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, release=1770267347, com.redhat.component=rhceph-container, GIT_BRANCH=main, build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, vcs-type=git, description=Red Hat Ceph Storage 7, distribution-scope=public, CEPH_POINT_RELEASE=, io.buildah.version=1.42.2, io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7) Feb 20 04:46:37 localhost systemd[1]: var-lib-containers-storage-overlay-2fb93ef45842bf7d64fe70e197c470cf9622bd47e675980d9207f26a4518de9d-merged.mount: Deactivated successfully. 
Feb 20 04:46:37 localhost podman[302967]: 2026-02-20 09:46:37.221783791 +0000 UTC m=+0.087306929 container remove 690797c901cc626368701c9e370bcaa4035722d4569e46f95f28035bacc13f00 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_archimedes, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, distribution-scope=public, name=rhceph, RELEASE=main, architecture=x86_64, io.openshift.expose-services=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., version=7, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, vcs-type=git, GIT_CLEAN=True) Feb 20 04:46:37 localhost systemd[1]: libpod-conmon-690797c901cc626368701c9e370bcaa4035722d4569e46f95f28035bacc13f00.scope: Deactivated successfully. Feb 20 04:46:37 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:46:37 localhost ceph-mon[292786]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #31. Immutable memtables: 0. 
Feb 20 04:46:37 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:46:37.293909) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Feb 20 04:46:37 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 31 Feb 20 04:46:37 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580797293993, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1268, "num_deletes": 257, "total_data_size": 1935753, "memory_usage": 1960208, "flush_reason": "Manual Compaction"} Feb 20 04:46:37 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #32: started Feb 20 04:46:37 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0) Feb 20 04:46:37 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580797306090, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 32, "file_size": 1274768, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18274, "largest_seqno": 19537, "table_properties": {"data_size": 1268917, "index_size": 3001, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 15418, "raw_average_key_size": 21, "raw_value_size": 1255990, "raw_average_value_size": 1756, "num_data_blocks": 131, "num_entries": 715, "num_filter_entries": 715, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", 
"merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580770, "oldest_key_time": 1771580770, "file_creation_time": 1771580797, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a5b0c71e-1a28-4ac7-8b68-08edb74002f2", "db_session_id": "54EDA52XUT1SDV7DF7Y7", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}} Feb 20 04:46:37 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 12221 microseconds, and 4403 cpu microseconds. Feb 20 04:46:37 localhost ceph-mon[292786]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Feb 20 04:46:37 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:46:37.306139) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #32: 1274768 bytes OK Feb 20 04:46:37 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:46:37.306161) [db/memtable_list.cc:519] [default] Level-0 commit table #32 started Feb 20 04:46:37 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:46:37.308448) [db/memtable_list.cc:722] [default] Level-0 commit table #32: memtable #1 done Feb 20 04:46:37 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:46:37.308466) EVENT_LOG_v1 {"time_micros": 1771580797308461, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Feb 20 04:46:37 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:46:37.308489) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 
files[1 0 0 0 0 0 1] max score 0.25 Feb 20 04:46:37 localhost ceph-mon[292786]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 1929224, prev total WAL file size 1929548, number of live WAL files 2. Feb 20 04:46:37 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000028.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 20 04:46:37 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:46:37.309175) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033353139' seq:72057594037927935, type:22 .. '6C6F676D0033373732' seq:0, type:0; will stop at (end) Feb 20 04:46:37 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00 Feb 20 04:46:37 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [32(1244KB)], [30(17MB)] Feb 20 04:46:37 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580797309216, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [32], "files_L6": [30], "score": -1, "input_data_size": 19317962, "oldest_snapshot_seqno": -1} Feb 20 04:46:37 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:37 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0) Feb 20 04:46:37 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:37 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command mon_command({"prefix": "auth get-or-create", "entity": 
"mgr.np0005625202.arwxwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Feb 20 04:46:37 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625202.arwxwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 20 04:46:37 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #33: 11265 keys, 19176918 bytes, temperature: kUnknown Feb 20 04:46:37 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580797394510, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 33, "file_size": 19176918, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 19111195, "index_size": 36438, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28229, "raw_key_size": 303020, "raw_average_key_size": 26, "raw_value_size": 18917542, "raw_average_value_size": 1679, "num_data_blocks": 1386, "num_entries": 11265, "num_filter_entries": 11265, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580572, "oldest_key_time": 0, "file_creation_time": 1771580797, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": 
"a5b0c71e-1a28-4ac7-8b68-08edb74002f2", "db_session_id": "54EDA52XUT1SDV7DF7Y7", "orig_file_number": 33, "seqno_to_time_mapping": "N/A"}} Feb 20 04:46:37 localhost ceph-mon[292786]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Feb 20 04:46:37 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:46:37.394834) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 19176918 bytes Feb 20 04:46:37 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:46:37.396527) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 226.2 rd, 224.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 17.2 +0.0 blob) out(18.3 +0.0 blob), read-write-amplify(30.2) write-amplify(15.0) OK, records in: 11809, records dropped: 544 output_compression: NoCompression Feb 20 04:46:37 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:46:37.396555) EVENT_LOG_v1 {"time_micros": 1771580797396543, "job": 16, "event": "compaction_finished", "compaction_time_micros": 85394, "compaction_time_cpu_micros": 47942, "output_level": 6, "num_output_files": 1, "total_output_size": 19176918, "num_input_records": 11809, "num_output_records": 11265, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Feb 20 04:46:37 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 20 04:46:37 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580797396883, "job": 16, "event": "table_file_deletion", "file_number": 32} Feb 20 04:46:37 localhost 
ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000030.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 20 04:46:37 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580797399664, "job": 16, "event": "table_file_deletion", "file_number": 30} Feb 20 04:46:37 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:46:37.309106) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 04:46:37 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:46:37.399756) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 04:46:37 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:46:37.399762) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 04:46:37 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:46:37.399765) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 04:46:37 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:46:37.399767) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 04:46:37 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:46:37.399769) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 04:46:37 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) Feb 20 04:46:37 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:37 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 20 04:46:37 localhost ceph-mon[292786]: log_channel(audit) log [INF] : 
from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:37 localhost podman[303036]: Feb 20 04:46:37 localhost podman[303036]: 2026-02-20 09:46:37.929344078 +0000 UTC m=+0.076249651 container create 0b64aa634b0d2a7d6433c13595bbee16a849f7178b6624bb5bf096b618bd2ca3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_bassi, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.42.2, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux , vcs-type=git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, version=7, ceph=True, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_CLEAN=True, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git) Feb 20 04:46:37 localhost systemd[1]: Started libpod-conmon-0b64aa634b0d2a7d6433c13595bbee16a849f7178b6624bb5bf096b618bd2ca3.scope. Feb 20 04:46:37 localhost systemd[1]: Started libcrun container. 
Feb 20 04:46:37 localhost podman[303036]: 2026-02-20 09:46:37.898631162 +0000 UTC m=+0.045536785 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 04:46:38 localhost podman[303036]: 2026-02-20 09:46:38.008856315 +0000 UTC m=+0.155761928 container init 0b64aa634b0d2a7d6433c13595bbee16a849f7178b6624bb5bf096b618bd2ca3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_bassi, RELEASE=main, io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, release=1770267347, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, maintainer=Guillaume Abrioux , architecture=x86_64, name=rhceph, com.redhat.component=rhceph-container, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True) Feb 20 04:46:38 localhost podman[303036]: 2026-02-20 09:46:38.020449317 +0000 UTC m=+0.167354880 container start 0b64aa634b0d2a7d6433c13595bbee16a849f7178b6624bb5bf096b618bd2ca3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_bassi, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, RELEASE=main, vcs-type=git, CEPH_POINT_RELEASE=, io.openshift.expose-services=, architecture=x86_64, 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, version=7, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vendor=Red Hat, Inc., release=1770267347, description=Red Hat Ceph Storage 7, name=rhceph) Feb 20 04:46:38 localhost podman[303036]: 2026-02-20 09:46:38.021642408 +0000 UTC m=+0.168548031 container attach 0b64aa634b0d2a7d6433c13595bbee16a849f7178b6624bb5bf096b618bd2ca3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_bassi, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, RELEASE=main, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, version=7, 
org.opencontainers.image.created=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., vcs-type=git, release=1770267347, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , name=rhceph, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7) Feb 20 04:46:38 localhost busy_bassi[303051]: 167 167 Feb 20 04:46:38 localhost systemd[1]: libpod-0b64aa634b0d2a7d6433c13595bbee16a849f7178b6624bb5bf096b618bd2ca3.scope: Deactivated successfully. Feb 20 04:46:38 localhost podman[303036]: 2026-02-20 09:46:38.024716141 +0000 UTC m=+0.171621774 container died 0b64aa634b0d2a7d6433c13595bbee16a849f7178b6624bb5bf096b618bd2ca3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_bassi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=rhceph-container, ceph=True, vcs-type=git, description=Red Hat Ceph Storage 7, RELEASE=main, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1770267347, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., io.buildah.version=1.42.2, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, architecture=x86_64, name=rhceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, io.openshift.tags=rhceph ceph) Feb 20 04:46:38 
localhost podman[303057]: 2026-02-20 09:46:38.119305324 +0000 UTC m=+0.084606375 container remove 0b64aa634b0d2a7d6433c13595bbee16a849f7178b6624bb5bf096b618bd2ca3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_bassi, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, vcs-type=git, release=1770267347, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_BRANCH=main, distribution-scope=public, version=7, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , name=rhceph) Feb 20 04:46:38 localhost systemd[1]: libpod-conmon-0b64aa634b0d2a7d6433c13595bbee16a849f7178b6624bb5bf096b618bd2ca3.scope: Deactivated successfully. 
Feb 20 04:46:38 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0) Feb 20 04:46:38 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:38 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0) Feb 20 04:46:38 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:38 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005625203", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Feb 20 04:46:38 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625203", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 20 04:46:38 localhost systemd[1]: var-lib-containers-storage-overlay-67037c53a03bc5f6ef09ce89c03d02c724d0e8c96c01c9159841a031f0090cbb-merged.mount: Deactivated successfully. Feb 20 04:46:38 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:38 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:38 localhost ceph-mon[292786]: Reconfiguring mgr.np0005625202.arwxwo (monmap changed)... 
Feb 20 04:46:38 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625202.arwxwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 20 04:46:38 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625202.arwxwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 20 04:46:38 localhost ceph-mon[292786]: Reconfiguring daemon mgr.np0005625202.arwxwo on np0005625202.localdomain Feb 20 04:46:38 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:38 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 20 04:46:38 localhost ceph-mon[292786]: Deploying daemon mon.np0005625204 on np0005625204.localdomain Feb 20 04:46:38 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:38 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:38 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:38 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625203", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 20 04:46:38 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625203", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 20 04:46:39 localhost sshd[303075]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:46:39 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command 
mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0) Feb 20 04:46:39 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:39 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0) Feb 20 04:46:39 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:39 localhost ceph-mon[292786]: Reconfiguring crash.np0005625203 (monmap changed)... Feb 20 04:46:39 localhost ceph-mon[292786]: Reconfiguring daemon crash.np0005625203 on np0005625203.localdomain Feb 20 04:46:39 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:39 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:39 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Feb 20 04:46:39 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0) Feb 20 04:46:39 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:39 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0) Feb 20 04:46:39 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:40 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0) Feb 20 04:46:40 localhost ceph-mon[292786]: log_channel(audit) log [INF] : 
from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:40 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0) Feb 20 04:46:40 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:40 localhost ceph-mon[292786]: Reconfiguring osd.1 (monmap changed)... Feb 20 04:46:40 localhost ceph-mon[292786]: Reconfiguring daemon osd.1 on np0005625203.localdomain Feb 20 04:46:40 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:40 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:40 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:40 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:40 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Feb 20 04:46:41 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0) Feb 20 04:46:41 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:41 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0) Feb 20 04:46:41 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:41 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005625203.zsrwgk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) Feb 20 04:46:41 
localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625203.zsrwgk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 20 04:46:41 localhost ceph-mon[292786]: Reconfiguring osd.4 (monmap changed)... Feb 20 04:46:41 localhost ceph-mon[292786]: Reconfiguring daemon osd.4 on np0005625203.localdomain Feb 20 04:46:41 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:41 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:41 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625203.zsrwgk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 20 04:46:41 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625203.zsrwgk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 20 04:46:42 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:46:42 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0) Feb 20 04:46:42 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:42 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0) Feb 20 04:46:42 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' 
entity='mgr.np0005625203.lonygy' Feb 20 04:46:42 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005625203.lonygy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Feb 20 04:46:42 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625203.lonygy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 20 04:46:42 localhost ceph-mon[292786]: Reconfiguring mds.mds.np0005625203.zsrwgk (monmap changed)... Feb 20 04:46:42 localhost ceph-mon[292786]: Reconfiguring daemon mds.mds.np0005625203.zsrwgk on np0005625203.localdomain Feb 20 04:46:42 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:42 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:42 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625203.lonygy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 20 04:46:42 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625203.lonygy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 20 04:46:43 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0) Feb 20 04:46:43 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:43 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0) Feb 20 
04:46:43 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:43 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005625204", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Feb 20 04:46:43 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625204", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 20 04:46:43 localhost ceph-mon[292786]: Reconfiguring mgr.np0005625203.lonygy (monmap changed)... Feb 20 04:46:43 localhost ceph-mon[292786]: Reconfiguring daemon mgr.np0005625203.lonygy on np0005625203.localdomain Feb 20 04:46:43 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:43 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:43 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625204", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 20 04:46:43 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625204", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 20 04:46:44 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0) Feb 20 04:46:44 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:44 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command mon_command([{prefix=config-key set, 
key=mgr/cephadm/host.np0005625204.localdomain}] v 0) Feb 20 04:46:44 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e. Feb 20 04:46:44 localhost podman[303077]: 2026-02-20 09:46:44.452032993 +0000 UTC m=+0.086090165 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 20 04:46:44 localhost podman[303077]: 2026-02-20 09:46:44.463315657 +0000 UTC m=+0.097372779 
container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 20 04:46:44 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully. Feb 20 04:46:44 localhost ceph-mon[292786]: Reconfiguring crash.np0005625204 (monmap changed)... 
Feb 20 04:46:44 localhost ceph-mon[292786]: Reconfiguring daemon crash.np0005625204 on np0005625204.localdomain Feb 20 04:46:44 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:44 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:45 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0) Feb 20 04:46:45 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:45 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0) Feb 20 04:46:46 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:46 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 20 04:46:46 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:46 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 adding peer [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to list of hints Feb 20 04:46:46 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 adding peer [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to list of hints Feb 20 04:46:46 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 adding peer [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to list of hints Feb 20 04:46:46 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 adding peer [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to list of hints Feb 20 04:46:46 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 adding peer [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to list of hints Feb 20 04:46:46 
localhost podman[241347]: time="2026-02-20T09:46:46Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 20 04:46:46 localhost podman[241347]: @ - - [20/Feb/2026:09:46:46 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157716 "" "Go-http-client/1.1" Feb 20 04:46:46 localhost podman[241347]: @ - - [20/Feb/2026:09:46:46 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18746 "" "Go-http-client/1.1" Feb 20 04:46:46 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e16 adding peer [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to list of hints Feb 20 04:46:46 localhost ceph-mon[292786]: mon.np0005625202@0(leader).monmap v16 adding/updating np0005625204 at [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to monitor cluster Feb 20 04:46:46 localhost ceph-mgr[286565]: ms_deliver_dispatch: unhandled message 0x55d2fffb51e0 mon_map magic: 0 from mon.0 v2:172.18.0.103:3300/0 Feb 20 04:46:46 localhost ceph-mon[292786]: log_channel(cluster) log [INF] : mon.np0005625202 calling monitor election Feb 20 04:46:46 localhost ceph-mon[292786]: paxos.0).electionLogic(64) init, last seen epoch 64 Feb 20 04:46:46 localhost ceph-mon[292786]: mon.np0005625202@0(electing) e17 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 20 04:46:47 localhost ceph-mon[292786]: mon.np0005625202@0(electing) e17 handle_auth_request failed to assign global_id Feb 20 04:46:47 localhost ceph-mon[292786]: mon.np0005625202@0(electing) e17 handle_auth_request failed to assign global_id Feb 20 04:46:47 localhost ceph-mon[292786]: mon.np0005625202@0(electing) e17 handle_auth_request failed to assign global_id Feb 20 04:46:48 localhost ceph-mon[292786]: mon.np0005625202@0(electing) e17 handle_auth_request failed to assign global_id Feb 20 04:46:48 localhost ceph-mon[292786]: mon.np0005625202@0(electing) e17 
handle_auth_request failed to assign global_id Feb 20 04:46:48 localhost ceph-mon[292786]: mon.np0005625202@0(electing) e17 handle_auth_request failed to assign global_id Feb 20 04:46:48 localhost ceph-mon[292786]: mon.np0005625202@0(electing) e17 handle_auth_request failed to assign global_id Feb 20 04:46:49 localhost ceph-mon[292786]: mon.np0005625202@0(electing) e17 handle_auth_request failed to assign global_id Feb 20 04:46:50 localhost ceph-mon[292786]: mon.np0005625202@0(electing) e17 handle_auth_request failed to assign global_id Feb 20 04:46:50 localhost ceph-mon[292786]: mon.np0005625202@0(electing) e17 handle_auth_request failed to assign global_id Feb 20 04:46:50 localhost ceph-mon[292786]: mon.np0005625202@0(electing) e17 handle_auth_request failed to assign global_id Feb 20 04:46:51 localhost ceph-mon[292786]: mon.np0005625202@0(electing) e17 handle_auth_request failed to assign global_id Feb 20 04:46:51 localhost ceph-mon[292786]: mon.np0005625202@0(electing) e17 handle_auth_request failed to assign global_id Feb 20 04:46:51 localhost ceph-mon[292786]: mon.np0005625202@0(electing) e17 handle_auth_request failed to assign global_id Feb 20 04:46:51 localhost ceph-mon[292786]: mon.np0005625202@0(electing) e17 handle_auth_request failed to assign global_id Feb 20 04:46:51 localhost ceph-mon[292786]: mon.np0005625202@0(electing) e17 handle_auth_request failed to assign global_id Feb 20 04:46:51 localhost nova_compute[280804]: 2026-02-20 09:46:51.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:46:51 localhost nova_compute[280804]: 2026-02-20 09:46:51.511 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache 
/usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 20 04:46:51 localhost nova_compute[280804]: 2026-02-20 09:46:51.511 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 20 04:46:51 localhost nova_compute[280804]: 2026-02-20 09:46:51.526 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 20 04:46:51 localhost nova_compute[280804]: 2026-02-20 09:46:51.526 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:46:51 localhost nova_compute[280804]: 2026-02-20 09:46:51.527 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:46:51 localhost nova_compute[280804]: 2026-02-20 09:46:51.544 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:46:51 localhost nova_compute[280804]: 2026-02-20 09:46:51.544 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: 
waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:46:51 localhost nova_compute[280804]: 2026-02-20 09:46:51.545 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:46:51 localhost nova_compute[280804]: 2026-02-20 09:46:51.545 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Auditing locally available compute resources for np0005625202.localdomain (node: np0005625202.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 20 04:46:51 localhost nova_compute[280804]: 2026-02-20 09:46:51.545 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:46:51 localhost ceph-mon[292786]: paxos.0).electionLogic(65) init, last seen epoch 65, mid-election, bumping Feb 20 04:46:51 localhost ceph-mon[292786]: mon.np0005625202@0(electing) e17 handle_auth_request failed to assign global_id Feb 20 04:46:51 localhost ceph-mon[292786]: mon.np0005625202@0(electing) e17 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 20 04:46:51 localhost ceph-mon[292786]: log_channel(cluster) log [INF] : mon.np0005625202 is new leader, mons np0005625202,np0005625203,np0005625204 in quorum (ranks 0,1,2) Feb 20 04:46:51 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : monmap epoch 17 Feb 20 04:46:51 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : fsid 
a8557ee9-b55d-5519-942c-cf8f6172f1d8 Feb 20 04:46:51 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : last_changed 2026-02-20T09:46:46.606881+0000 Feb 20 04:46:51 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : created 2026-02-20T07:36:51.191305+0000 Feb 20 04:46:51 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : min_mon_release 18 (reef) Feb 20 04:46:51 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : election_strategy: 1 Feb 20 04:46:51 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : 0: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005625202 Feb 20 04:46:51 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : 1: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005625203 Feb 20 04:46:51 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : 2: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005625204 Feb 20 04:46:51 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Feb 20 04:46:51 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=mds.np0005625203.zsrwgk=up:active} 2 up:standby Feb 20 04:46:51 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e89: 6 total, 6 up, 6 in Feb 20 04:46:51 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : mgrmap e36: np0005625203.lonygy(active, since 95s), standbys: np0005625204.exgrzx, np0005625201.mtnyvu, np0005625202.arwxwo Feb 20 04:46:51 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 20 04:46:51 localhost ceph-mon[292786]: log_channel(cluster) log [INF] : overall HEALTH_OK Feb 20 04:46:51 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:51 localhost ceph-mon[292786]: mon.np0005625202 calling monitor election 
Feb 20 04:46:51 localhost ceph-mon[292786]: mon.np0005625203 calling monitor election
Feb 20 04:46:51 localhost ceph-mon[292786]: mon.np0005625204 calling monitor election
Feb 20 04:46:51 localhost ceph-mon[292786]: mon.np0005625202 is new leader, mons np0005625202,np0005625203,np0005625204 in quorum (ranks 0,1,2)
Feb 20 04:46:51 localhost ceph-mon[292786]: overall HEALTH_OK
Feb 20 04:46:51 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:51 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : mgrmap e37: np0005625203.lonygy(active, since 95s), standbys: np0005625204.exgrzx, np0005625201.mtnyvu, np0005625202.arwxwo
Feb 20 04:46:52 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 04:46:52 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/2223140922' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 04:46:52 localhost nova_compute[280804]: 2026-02-20 09:46:52.029 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 20 04:46:52 localhost nova_compute[280804]: 2026-02-20 09:46:52.211 280808 WARNING nova.virt.libvirt.driver [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] This host appears to have multiple sockets per NUMA node.
The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 20 04:46:52 localhost nova_compute[280804]: 2026-02-20 09:46:52.213 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Hypervisor/Node resource view: name=np0005625202.localdomain free_ram=11979MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", 
"product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 20 04:46:52 localhost nova_compute[280804]: 2026-02-20 09:46:52.213 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 20 04:46:52 localhost nova_compute[280804]: 2026-02-20 09:46:52.213 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 20 04:46:52 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 04:46:52 localhost nova_compute[280804]: 2026-02-20 09:46:52.327 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 20 04:46:52 localhost nova_compute[280804]: 2026-02-20 09:46:52.327 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Final resource view: name=np0005625202.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[]
_report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 20 04:46:52 localhost nova_compute[280804]: 2026-02-20 09:46:52.351 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 20 04:46:52 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 04:46:52 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/1604508930' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 04:46:52 localhost nova_compute[280804]: 2026-02-20 09:46:52.788 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 20 04:46:52 localhost nova_compute[280804]: 2026-02-20 09:46:52.796 280808 DEBUG nova.compute.provider_tree [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 20 04:46:52 localhost nova_compute[280804]: 2026-02-20 09:46:52.811 280808 DEBUG nova.scheduler.client.report [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Inventory has not changed for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1,
'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 20 04:46:52 localhost nova_compute[280804]: 2026-02-20 09:46:52.814 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Compute_service record updated for np0005625202.localdomain:np0005625202.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 20 04:46:52 localhost nova_compute[280804]: 2026-02-20 09:46:52.814 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 20 04:46:53 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0)
Feb 20 04:46:53 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:53 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0)
Feb 20 04:46:53 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:53 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0)
Feb 20 04:46:53 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:53 localhost ceph-mon[292786]: mon.np0005625202@0(leader)
e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0)
Feb 20 04:46:53 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:53 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0)
Feb 20 04:46:53 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:53 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0)
Feb 20 04:46:53 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:53 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0)
Feb 20 04:46:53 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:53 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0)
Feb 20 04:46:53 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:53 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0)
Feb 20 04:46:53 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:53 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0)
Feb 20 04:46:53
localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:53 localhost ceph-mon[292786]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #34. Immutable memtables: 0.
Feb 20 04:46:53 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:46:53.518413) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 20 04:46:53 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 34
Feb 20 04:46:53 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580813518454, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 764, "num_deletes": 251, "total_data_size": 1245498, "memory_usage": 1267216, "flush_reason": "Manual Compaction"}
Feb 20 04:46:53 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #35: started
Feb 20 04:46:53 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580813526353, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 35, "file_size": 1121553, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19538, "largest_seqno": 20301, "table_properties": {"data_size": 1117446, "index_size": 1770, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10663, "raw_average_key_size": 21, "raw_value_size": 1108686, "raw_average_value_size": 2239, "num_data_blocks": 75, "num_entries": 495, "num_filter_entries": 495, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name":
"default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580797, "oldest_key_time": 1771580797, "file_creation_time": 1771580813, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a5b0c71e-1a28-4ac7-8b68-08edb74002f2", "db_session_id": "54EDA52XUT1SDV7DF7Y7", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Feb 20 04:46:53 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 8012 microseconds, and 3781 cpu microseconds.
Feb 20 04:46:53 localhost ceph-mon[292786]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 20 04:46:53 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:46:53.526428) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #35: 1121553 bytes OK
Feb 20 04:46:53 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:46:53.526451) [db/memtable_list.cc:519] [default] Level-0 commit table #35 started
Feb 20 04:46:53 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:46:53.528708) [db/memtable_list.cc:722] [default] Level-0 commit table #35: memtable #1 done
Feb 20 04:46:53 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:46:53.528728) EVENT_LOG_v1 {"time_micros": 1771580813528722, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 20 04:46:53 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:46:53.528750) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 20 04:46:53 localhost ceph-mon[292786]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 1241339, prev total WAL file size 1241339, number of live WAL files 2.
Feb 20 04:46:53 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000031.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 04:46:53 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0)
Feb 20 04:46:53 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:46:53.529310) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130373933' seq:72057594037927935, type:22 ..
'7061786F73003131303435' seq:0, type:0; will stop at (end)
Feb 20 04:46:53 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 20 04:46:53 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [35(1095KB)], [33(18MB)]
Feb 20 04:46:53 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580813529347, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [35], "files_L6": [33], "score": -1, "input_data_size": 20298471, "oldest_snapshot_seqno": -1}
Feb 20 04:46:53 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:53 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0)
Feb 20 04:46:53 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #36: 11224 keys, 16369271 bytes, temperature: kUnknown
Feb 20 04:46:53 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580813598962, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 36, "file_size": 16369271, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16305594, "index_size": 34520, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28101, "raw_key_size": 302953, "raw_average_key_size": 26, "raw_value_size": 16114388, "raw_average_value_size": 1435, "num_data_blocks": 1304, "num_entries": 11224, "num_filter_entries": 11224, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0,
"filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580572, "oldest_key_time": 0, "file_creation_time": 1771580813, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a5b0c71e-1a28-4ac7-8b68-08edb74002f2", "db_session_id": "54EDA52XUT1SDV7DF7Y7", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Feb 20 04:46:53 localhost ceph-mon[292786]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 20 04:46:53 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:46:53.599362) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 16369271 bytes
Feb 20 04:46:53 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:46:53.601248) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 291.1 rd, 234.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 18.3 +0.0 blob) out(15.6 +0.0 blob), read-write-amplify(32.7) write-amplify(14.6) OK, records in: 11760, records dropped: 536 output_compression: NoCompression
Feb 20 04:46:53 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:46:53.601280) EVENT_LOG_v1 {"time_micros": 1771580813601264, "job": 18, "event": "compaction_finished", "compaction_time_micros": 69740, "compaction_time_cpu_micros": 36060, "output_level": 6, "num_output_files": 1, "total_output_size": 16369271, "num_input_records": 11760,
"num_output_records": 11224, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 20 04:46:53 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 04:46:53 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580813601655, "job": 18, "event": "table_file_deletion", "file_number": 35}
Feb 20 04:46:53 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:53 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000033.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 04:46:53 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580813604765, "job": 18, "event": "table_file_deletion", "file_number": 33}
Feb 20 04:46:53 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:46:53.529238) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 04:46:53 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:46:53.604832) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 04:46:53 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:46:53.604839) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 04:46:53 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:46:53.604841) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 04:46:53 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:46:53.604843) [db/db_impl/db_impl_compaction_flush.cc:1903]
[default] Manual compaction starting
Feb 20 04:46:53 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:46:53.604845) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 04:46:53 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:53 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:53 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 04:46:53 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:53 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:53 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:53 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:53 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:53 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:53 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:53 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:53 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:53 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:53 localhost nova_compute[280804]: 2026-02-20 09:46:53.798 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 20 04:46:53 localhost nova_compute[280804]: 2026-02-20 09:46:53.799 280808 DEBUG oslo_service.periodic_task [None
req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 20 04:46:53 localhost nova_compute[280804]: 2026-02-20 09:46:53.799 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 20 04:46:53 localhost nova_compute[280804]: 2026-02-20 09:46:53.800 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 20 04:46:53 localhost nova_compute[280804]: 2026-02-20 09:46:53.800 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 20 04:46:54 localhost nova_compute[280804]: 2026-02-20 09:46:54.512 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 20 04:46:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.
Feb 20 04:46:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.
Feb 20 04:46:54 localhost podman[303449]: 2026-02-20 09:46:54.734959353 +0000 UTC m=+0.088418547 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, 
org.label-schema.schema-version=1.0) Feb 20 04:46:54 localhost podman[303449]: 2026-02-20 09:46:54.745109406 +0000 UTC m=+0.098568560 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ceilometer_agent_compute, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Feb 20 04:46:54 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully.
Feb 20 04:46:54 localhost ceph-mon[292786]: Reconfig service osd.default_drive_group
Feb 20 04:46:54 localhost ceph-mon[292786]: Updating np0005625202.localdomain:/etc/ceph/ceph.conf
Feb 20 04:46:54 localhost ceph-mon[292786]: Updating np0005625203.localdomain:/etc/ceph/ceph.conf
Feb 20 04:46:54 localhost ceph-mon[292786]: Updating np0005625204.localdomain:/etc/ceph/ceph.conf
Feb 20 04:46:54 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0)
Feb 20 04:46:54 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:54 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0)
Feb 20 04:46:54 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0)
Feb 20 04:46:54 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:54 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy'
Feb 20 04:46:54 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0)
Feb 20 04:46:54 localhost podman[303448]: 2026-02-20 09:46:54.845500144 +0000 UTC m=+0.200759497 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c
(image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, config_id=openstack_network_exporter, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.7, build-date=2026-02-05T04:57:10Z, release=1770267347, vendor=Red Hat, Inc., architecture=x86_64, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, distribution-scope=public, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 20 04:46:54 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:54 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0) Feb 20 04:46:54 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:54 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0) Feb 20 04:46:54 localhost podman[303448]: 2026-02-20 09:46:54.888921971 +0000 UTC m=+0.244181284 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, 
io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, version=9.7, build-date=2026-02-05T04:57:10Z, name=ubi9/ubi-minimal, vcs-type=git, architecture=x86_64, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, managed_by=edpm_ansible, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc.) Feb 20 04:46:54 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:54 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 20 04:46:54 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. Feb 20 04:46:54 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:55 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005625202", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Feb 20 04:46:55 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625202", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 20 04:46:55 localhost nova_compute[280804]: 2026-02-20 09:46:55.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:46:55 localhost podman[303595]: Feb 20 04:46:55 localhost podman[303595]: 2026-02-20 09:46:55.755903264 +0000 UTC m=+0.079085727 container create 
16307e8a5a5d10eaa37f5fbf89d050ed5c5c2b060dcbb293867be18ba819c3d6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_shockley, CEPH_POINT_RELEASE=, RELEASE=main, release=1770267347, ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, version=7, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, io.buildah.version=1.42.2, vcs-type=git, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux ) Feb 20 04:46:55 localhost systemd[1]: Started libpod-conmon-16307e8a5a5d10eaa37f5fbf89d050ed5c5c2b060dcbb293867be18ba819c3d6.scope. Feb 20 04:46:55 localhost systemd[1]: Started libcrun container. 
Feb 20 04:46:55 localhost ceph-mon[292786]: Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:46:55 localhost ceph-mon[292786]: Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:46:55 localhost ceph-mon[292786]: Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:46:55 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:55 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:55 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:55 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:55 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:55 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:55 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:55 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625202", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 20 04:46:55 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625202", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 20 04:46:55 localhost podman[303595]: 2026-02-20 09:46:55.723739599 +0000 UTC m=+0.046922062 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 04:46:55 localhost podman[303595]: 2026-02-20 09:46:55.826044109 +0000 UTC m=+0.149226572 container init 16307e8a5a5d10eaa37f5fbf89d050ed5c5c2b060dcbb293867be18ba819c3d6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, 
name=nice_shockley, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, name=rhceph, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z, distribution-scope=public, vendor=Red Hat, Inc., RELEASE=main, maintainer=Guillaume Abrioux , vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, release=1770267347, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, GIT_BRANCH=main, ceph=True, version=7, io.openshift.tags=rhceph ceph, vcs-type=git) Feb 20 04:46:55 localhost podman[303595]: 2026-02-20 09:46:55.83950533 +0000 UTC m=+0.162687793 container start 16307e8a5a5d10eaa37f5fbf89d050ed5c5c2b060dcbb293867be18ba819c3d6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_shockley, version=7, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, release=1770267347, io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red 
Hat Ceph Storage 7, GIT_CLEAN=True, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git) Feb 20 04:46:55 localhost podman[303595]: 2026-02-20 09:46:55.839987754 +0000 UTC m=+0.163170277 container attach 16307e8a5a5d10eaa37f5fbf89d050ed5c5c2b060dcbb293867be18ba819c3d6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_shockley, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, io.buildah.version=1.42.2, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., RELEASE=main, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, GIT_BRANCH=main, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, distribution-scope=public) Feb 20 04:46:55 localhost nice_shockley[303610]: 167 167 
Feb 20 04:46:55 localhost systemd[1]: libpod-16307e8a5a5d10eaa37f5fbf89d050ed5c5c2b060dcbb293867be18ba819c3d6.scope: Deactivated successfully. Feb 20 04:46:55 localhost podman[303595]: 2026-02-20 09:46:55.844599837 +0000 UTC m=+0.167782290 container died 16307e8a5a5d10eaa37f5fbf89d050ed5c5c2b060dcbb293867be18ba819c3d6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_shockley, GIT_BRANCH=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, architecture=x86_64, release=1770267347, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, maintainer=Guillaume Abrioux , build-date=2026-02-09T10:25:24Z, distribution-scope=public) Feb 20 04:46:55 localhost podman[303615]: 2026-02-20 09:46:55.950428192 +0000 UTC m=+0.092217850 container remove 16307e8a5a5d10eaa37f5fbf89d050ed5c5c2b060dcbb293867be18ba819c3d6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_shockley, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, 
io.openshift.expose-services=, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_BRANCH=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, build-date=2026-02-09T10:25:24Z, version=7, RELEASE=main, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, io.buildah.version=1.42.2, vcs-type=git, release=1770267347, GIT_CLEAN=True) Feb 20 04:46:55 localhost systemd[1]: libpod-conmon-16307e8a5a5d10eaa37f5fbf89d050ed5c5c2b060dcbb293867be18ba819c3d6.scope: Deactivated successfully. 
Feb 20 04:46:56 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0) Feb 20 04:46:56 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:56 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0) Feb 20 04:46:56 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:56 localhost podman[303686]: Feb 20 04:46:56 localhost podman[303686]: 2026-02-20 09:46:56.639607916 +0000 UTC m=+0.066804057 container create 625213652e7158d6c273709f60e0bb272fc9d5511b51dd599d735893f4aed4f6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_bhaskara, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, build-date=2026-02-09T10:25:24Z, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, release=1770267347, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, distribution-scope=public, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
architecture=x86_64, ceph=True, maintainer=Guillaume Abrioux ) Feb 20 04:46:56 localhost systemd[1]: Started libpod-conmon-625213652e7158d6c273709f60e0bb272fc9d5511b51dd599d735893f4aed4f6.scope. Feb 20 04:46:56 localhost systemd[1]: Started libcrun container. Feb 20 04:46:56 localhost podman[303686]: 2026-02-20 09:46:56.694917962 +0000 UTC m=+0.122114103 container init 625213652e7158d6c273709f60e0bb272fc9d5511b51dd599d735893f4aed4f6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_bhaskara, version=7, GIT_CLEAN=True, release=1770267347, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-09T10:25:24Z, distribution-scope=public, build-date=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux , name=rhceph, vendor=Red Hat, Inc., vcs-type=git, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2) Feb 20 04:46:56 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 20 04:46:56 localhost podman[303686]: 2026-02-20 09:46:56.701480539 +0000 UTC m=+0.128676690 container start 625213652e7158d6c273709f60e0bb272fc9d5511b51dd599d735893f4aed4f6 
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_bhaskara, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-09T10:25:24Z, name=rhceph, release=1770267347, GIT_CLEAN=True, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, io.openshift.expose-services=, vcs-type=git, ceph=True, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Feb 20 04:46:56 localhost podman[303686]: 2026-02-20 09:46:56.701712455 +0000 UTC m=+0.128908636 container attach 625213652e7158d6c273709f60e0bb272fc9d5511b51dd599d735893f4aed4f6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_bhaskara, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, ceph=True, distribution-scope=public, description=Red Hat Ceph Storage 7, vcs-type=git, GIT_CLEAN=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, 
architecture=x86_64, RELEASE=main, vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, io.buildah.version=1.42.2, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, name=rhceph) Feb 20 04:46:56 localhost trusting_bhaskara[303702]: 167 167 Feb 20 04:46:56 localhost systemd[1]: libpod-625213652e7158d6c273709f60e0bb272fc9d5511b51dd599d735893f4aed4f6.scope: Deactivated successfully. Feb 20 04:46:56 localhost podman[303686]: 2026-02-20 09:46:56.705418485 +0000 UTC m=+0.132614696 container died 625213652e7158d6c273709f60e0bb272fc9d5511b51dd599d735893f4aed4f6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_bhaskara, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, release=1770267347, vendor=Red Hat, Inc., RELEASE=main, io.buildah.version=1.42.2, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, distribution-scope=public, build-date=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.openshift.expose-services=) Feb 20 04:46:56 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:56 localhost podman[303686]: 2026-02-20 09:46:56.615909109 +0000 UTC m=+0.043105250 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 04:46:56 localhost systemd[1]: var-lib-containers-storage-overlay-fea50678cf96d6d85cbae3cd2281d8770dad28c6c8c68a2616ec8efaa11c15a3-merged.mount: Deactivated successfully. Feb 20 04:46:56 localhost systemd[1]: tmp-crun.DRQDL2.mount: Deactivated successfully. Feb 20 04:46:56 localhost systemd[1]: var-lib-containers-storage-overlay-a096301d56af156e7ef90039acd3a9298da86877d7a76496fdb58286d131fa3d-merged.mount: Deactivated successfully. Feb 20 04:46:56 localhost podman[303707]: 2026-02-20 09:46:56.806144182 +0000 UTC m=+0.087642907 container remove 625213652e7158d6c273709f60e0bb272fc9d5511b51dd599d735893f4aed4f6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_bhaskara, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, architecture=x86_64, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, distribution-scope=public, version=7, maintainer=Guillaume Abrioux , build-date=2026-02-09T10:25:24Z, GIT_CLEAN=True, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, release=1770267347, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base 
image., vendor=Red Hat, Inc., ceph=True, io.buildah.version=1.42.2, name=rhceph, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 04:46:56 localhost systemd[1]: libpod-conmon-625213652e7158d6c273709f60e0bb272fc9d5511b51dd599d735893f4aed4f6.scope: Deactivated successfully. Feb 20 04:46:56 localhost ceph-mon[292786]: Reconfiguring crash.np0005625202 (monmap changed)... Feb 20 04:46:56 localhost ceph-mon[292786]: Reconfiguring daemon crash.np0005625202 on np0005625202.localdomain Feb 20 04:46:56 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:56 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:56 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Feb 20 04:46:56 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:56 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0) Feb 20 04:46:56 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:57 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0) Feb 20 04:46:57 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:57 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, 
key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0) Feb 20 04:46:57 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:57 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0) Feb 20 04:46:57 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:57 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:46:57 localhost podman[303784]: Feb 20 04:46:57 localhost podman[303784]: 2026-02-20 09:46:57.703661805 +0000 UTC m=+0.062753657 container create 1141d620cc880c29a4bec14ad7bc86f255ecc56bbae26d2ee73079e2c8a738e0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_taussig, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, distribution-scope=public, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, version=7, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1770267347, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, build-date=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, RELEASE=main, ceph=True, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, 
vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64) Feb 20 04:46:57 localhost systemd[1]: Started libpod-conmon-1141d620cc880c29a4bec14ad7bc86f255ecc56bbae26d2ee73079e2c8a738e0.scope. Feb 20 04:46:57 localhost systemd[1]: Started libcrun container. Feb 20 04:46:57 localhost podman[303784]: 2026-02-20 09:46:57.765433555 +0000 UTC m=+0.124525437 container init 1141d620cc880c29a4bec14ad7bc86f255ecc56bbae26d2ee73079e2c8a738e0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_taussig, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, name=rhceph, io.openshift.expose-services=, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , version=7, GIT_BRANCH=main, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., io.buildah.version=1.42.2) Feb 20 04:46:57 localhost systemd[1]: tmp-crun.wptAyk.mount: Deactivated successfully. 
Feb 20 04:46:57 localhost podman[303784]: 2026-02-20 09:46:57.677814401 +0000 UTC m=+0.036906323 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 04:46:57 localhost podman[303784]: 2026-02-20 09:46:57.778013403 +0000 UTC m=+0.137105275 container start 1141d620cc880c29a4bec14ad7bc86f255ecc56bbae26d2ee73079e2c8a738e0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_taussig, vendor=Red Hat, Inc., name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, distribution-scope=public, RELEASE=main, release=1770267347, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , io.buildah.version=1.42.2, version=7, ceph=True, GIT_BRANCH=main, description=Red Hat Ceph Storage 7) Feb 20 04:46:57 localhost podman[303784]: 2026-02-20 09:46:57.778407095 +0000 UTC m=+0.137498977 container attach 1141d620cc880c29a4bec14ad7bc86f255ecc56bbae26d2ee73079e2c8a738e0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_taussig, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-09T10:25:24Z, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, architecture=x86_64, name=rhceph, vcs-type=git, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, version=7, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, release=1770267347, vendor=Red Hat, Inc., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7) Feb 20 04:46:57 localhost elastic_taussig[303799]: 167 167 Feb 20 04:46:57 localhost systemd[1]: libpod-1141d620cc880c29a4bec14ad7bc86f255ecc56bbae26d2ee73079e2c8a738e0.scope: Deactivated successfully. 
Feb 20 04:46:57 localhost podman[303784]: 2026-02-20 09:46:57.781775195 +0000 UTC m=+0.140867117 container died 1141d620cc880c29a4bec14ad7bc86f255ecc56bbae26d2ee73079e2c8a738e0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_taussig, RELEASE=main, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, com.redhat.component=rhceph-container, architecture=x86_64, io.buildah.version=1.42.2, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, maintainer=Guillaume Abrioux , org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, distribution-scope=public, name=rhceph, vcs-type=git, release=1770267347, vendor=Red Hat, Inc., version=7, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Feb 20 04:46:57 localhost ceph-mon[292786]: Reconfiguring osd.2 (monmap changed)... 
Feb 20 04:46:57 localhost ceph-mon[292786]: Reconfiguring daemon osd.2 on np0005625202.localdomain Feb 20 04:46:57 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:57 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:57 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:57 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:57 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Feb 20 04:46:57 localhost systemd[1]: var-lib-containers-storage-overlay-dd5aa4e2e8cc78d1b77889788257e2f552e594bb0a4e4dd9416d0d8c9733a5e9-merged.mount: Deactivated successfully. Feb 20 04:46:57 localhost podman[303804]: 2026-02-20 09:46:57.875332389 +0000 UTC m=+0.084971535 container remove 1141d620cc880c29a4bec14ad7bc86f255ecc56bbae26d2ee73079e2c8a738e0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_taussig, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, maintainer=Guillaume Abrioux , RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, com.redhat.component=rhceph-container, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, 
name=rhceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, GIT_CLEAN=True, distribution-scope=public, build-date=2026-02-09T10:25:24Z, ceph=True) Feb 20 04:46:57 localhost systemd[1]: libpod-conmon-1141d620cc880c29a4bec14ad7bc86f255ecc56bbae26d2ee73079e2c8a738e0.scope: Deactivated successfully. Feb 20 04:46:58 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0) Feb 20 04:46:58 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:58 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0) Feb 20 04:46:58 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:58 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0) Feb 20 04:46:58 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:58 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0) Feb 20 04:46:58 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:58 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) Feb 20 04:46:58 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.26986 ' 
entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 20 04:46:58 localhost openstack_network_exporter[243776]: ERROR 09:46:58 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 20 04:46:58 localhost openstack_network_exporter[243776]: Feb 20 04:46:58 localhost openstack_network_exporter[243776]: ERROR 09:46:58 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 20 04:46:58 localhost openstack_network_exporter[243776]: Feb 20 04:46:58 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "mgr fail"} v 0) Feb 20 04:46:58 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='client.? 172.18.0.200:0/1025406798' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Feb 20 04:46:58 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e89 do_prune osdmap full prune enabled Feb 20 04:46:58 localhost ceph-mon[292786]: log_channel(cluster) log [INF] : Activating manager daemon np0005625204.exgrzx Feb 20 04:46:58 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e90 e90: 6 total, 6 up, 6 in Feb 20 04:46:58 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e90: 6 total, 6 up, 6 in Feb 20 04:46:58 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='client.? 172.18.0.200:0/1025406798' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Feb 20 04:46:58 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : mgrmap e38: np0005625204.exgrzx(active, starting, since 0.0312476s), standbys: np0005625201.mtnyvu, np0005625202.arwxwo Feb 20 04:46:58 localhost ceph-mon[292786]: log_channel(cluster) log [INF] : Manager daemon np0005625204.exgrzx is now available Feb 20 04:46:58 localhost systemd-logind[760]: Session 70 logged out. 
Waiting for processes to exit. Feb 20 04:46:58 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"config-key del","key":"mgr/cephadm/host.np0005625201.localdomain.devices.0"} v 0) Feb 20 04:46:58 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005625201.localdomain.devices.0"} : dispatch Feb 20 04:46:58 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005625201.localdomain.devices.0"}]': finished Feb 20 04:46:58 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"config-key del","key":"mgr/cephadm/host.np0005625201.localdomain.devices.0"} v 0) Feb 20 04:46:58 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005625201.localdomain.devices.0"} : dispatch Feb 20 04:46:58 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005625201.localdomain.devices.0"}]': finished Feb 20 04:46:58 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625204.exgrzx/mirror_snapshot_schedule"} v 0) Feb 20 04:46:58 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625204.exgrzx/mirror_snapshot_schedule"} : dispatch Feb 20 04:46:58 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"config 
rm","who":"mgr","name":"mgr/rbd_support/np0005625204.exgrzx/trash_purge_schedule"} v 0) Feb 20 04:46:58 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625204.exgrzx/trash_purge_schedule"} : dispatch Feb 20 04:46:58 localhost sshd[303876]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:46:58 localhost podman[303882]: Feb 20 04:46:58 localhost systemd-logind[760]: New session 71 of user ceph-admin. Feb 20 04:46:58 localhost podman[303882]: 2026-02-20 09:46:58.730333009 +0000 UTC m=+0.074046281 container create a1faa34a40c2774e5c813deb9340224c3feffa04effc9cb2222c67b8e36aecc5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_jang, architecture=x86_64, distribution-scope=public, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, build-date=2026-02-09T10:25:24Z, GIT_CLEAN=True, name=rhceph, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7) Feb 20 04:46:58 localhost systemd[1]: Started Session 71 of User 
ceph-admin. Feb 20 04:46:58 localhost systemd[1]: Started libpod-conmon-a1faa34a40c2774e5c813deb9340224c3feffa04effc9cb2222c67b8e36aecc5.scope. Feb 20 04:46:58 localhost systemd[1]: Started libcrun container. Feb 20 04:46:58 localhost podman[303882]: 2026-02-20 09:46:58.790673112 +0000 UTC m=+0.134386384 container init a1faa34a40c2774e5c813deb9340224c3feffa04effc9cb2222c67b8e36aecc5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_jang, name=rhceph, com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, version=7, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2026-02-09T10:25:24Z, RELEASE=main, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, distribution-scope=public, ceph=True, io.buildah.version=1.42.2, GIT_CLEAN=True, vcs-type=git) Feb 20 04:46:58 localhost podman[303882]: 2026-02-20 09:46:58.702426859 +0000 UTC m=+0.046140181 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 04:46:58 localhost podman[303882]: 2026-02-20 09:46:58.80254387 +0000 UTC m=+0.146257132 container start a1faa34a40c2774e5c813deb9340224c3feffa04effc9cb2222c67b8e36aecc5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, 
name=adoring_jang, distribution-scope=public, RELEASE=main, GIT_BRANCH=main, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., io.buildah.version=1.42.2, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.openshift.expose-services=, version=7, release=1770267347, org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7) Feb 20 04:46:58 localhost podman[303882]: 2026-02-20 09:46:58.804254437 +0000 UTC m=+0.147967699 container attach a1faa34a40c2774e5c813deb9340224c3feffa04effc9cb2222c67b8e36aecc5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_jang, io.openshift.expose-services=, RELEASE=main, name=rhceph, release=1770267347, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , version=7, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, io.buildah.version=1.42.2, GIT_CLEAN=True, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, distribution-scope=public, ceph=True, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-09T10:25:24Z, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14) Feb 20 04:46:58 localhost adoring_jang[303898]: 167 167 Feb 20 04:46:58 localhost systemd[1]: libpod-a1faa34a40c2774e5c813deb9340224c3feffa04effc9cb2222c67b8e36aecc5.scope: Deactivated successfully. Feb 20 04:46:58 localhost podman[303882]: 2026-02-20 09:46:58.806457356 +0000 UTC m=+0.150170648 container died a1faa34a40c2774e5c813deb9340224c3feffa04effc9cb2222c67b8e36aecc5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_jang, maintainer=Guillaume Abrioux , RELEASE=main, io.buildah.version=1.42.2, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, ceph=True, GIT_CLEAN=True, name=rhceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, version=7, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, distribution-scope=public, CEPH_POINT_RELEASE=) Feb 20 04:46:58 localhost ceph-mon[292786]: Reconfiguring osd.5 (monmap changed)... Feb 20 04:46:58 localhost ceph-mon[292786]: Reconfiguring daemon osd.5 on np0005625202.localdomain Feb 20 04:46:58 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:58 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:58 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:58 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' Feb 20 04:46:58 localhost ceph-mon[292786]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 20 04:46:58 localhost ceph-mon[292786]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 20 04:46:58 localhost ceph-mon[292786]: from='client.? 172.18.0.200:0/1025406798' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Feb 20 04:46:58 localhost ceph-mon[292786]: Activating manager daemon np0005625204.exgrzx Feb 20 04:46:58 localhost ceph-mon[292786]: from='client.? 
172.18.0.200:0/1025406798' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Feb 20 04:46:58 localhost ceph-mon[292786]: Manager daemon np0005625204.exgrzx is now available Feb 20 04:46:58 localhost ceph-mon[292786]: removing stray HostCache host record np0005625201.localdomain.devices.0 Feb 20 04:46:58 localhost ceph-mon[292786]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005625201.localdomain.devices.0"} : dispatch Feb 20 04:46:58 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005625201.localdomain.devices.0"} : dispatch Feb 20 04:46:58 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005625201.localdomain.devices.0"}]': finished Feb 20 04:46:58 localhost ceph-mon[292786]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005625201.localdomain.devices.0"} : dispatch Feb 20 04:46:58 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005625201.localdomain.devices.0"} : dispatch Feb 20 04:46:58 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005625201.localdomain.devices.0"}]': finished Feb 20 04:46:58 localhost ceph-mon[292786]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625204.exgrzx/mirror_snapshot_schedule"} : dispatch Feb 20 04:46:58 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625204.exgrzx/mirror_snapshot_schedule"} : dispatch 
Feb 20 04:46:58 localhost ceph-mon[292786]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625204.exgrzx/trash_purge_schedule"} : dispatch Feb 20 04:46:58 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625204.exgrzx/trash_purge_schedule"} : dispatch Feb 20 04:46:58 localhost podman[303908]: 2026-02-20 09:46:58.918553039 +0000 UTC m=+0.100396060 container remove a1faa34a40c2774e5c813deb9340224c3feffa04effc9cb2222c67b8e36aecc5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_jang, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, RELEASE=main, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, GIT_CLEAN=True) Feb 20 04:46:58 localhost systemd[1]: libpod-conmon-a1faa34a40c2774e5c813deb9340224c3feffa04effc9cb2222c67b8e36aecc5.scope: Deactivated successfully. 
Feb 20 04:46:58 localhost systemd[1]: session-70.scope: Deactivated successfully. Feb 20 04:46:58 localhost systemd[1]: session-70.scope: Consumed 27.056s CPU time. Feb 20 04:46:58 localhost systemd-logind[760]: Removed session 70. Feb 20 04:46:59 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : mgrmap e39: np0005625204.exgrzx(active, since 1.04393s), standbys: np0005625201.mtnyvu, np0005625202.arwxwo Feb 20 04:46:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. Feb 20 04:46:59 localhost podman[303999]: 2026-02-20 09:46:59.648993241 +0000 UTC m=+0.092004664 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3) Feb 20 04:46:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 04:46:59 localhost podman[303999]: 2026-02-20 09:46:59.680719984 +0000 UTC m=+0.123731467 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Feb 20 04:46:59 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. Feb 20 04:46:59 localhost systemd[1]: var-lib-containers-storage-overlay-97d83089caf119073eeed6647e7c3cb03e6357694cbd6ab728464f073f537e89-merged.mount: Deactivated successfully. Feb 20 04:46:59 localhost podman[304034]: 2026-02-20 09:46:59.78215182 +0000 UTC m=+0.093152544 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:46:59 localhost systemd[1]: tmp-crun.1kZUnd.mount: Deactivated successfully. Feb 20 04:46:59 localhost podman[304034]: 2026-02-20 09:46:59.895756563 +0000 UTC m=+0.206757267 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, 
org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:46:59 localhost podman[304068]: 2026-02-20 09:46:59.896090523 +0000 UTC m=+0.093444933 container exec 17bcd3d9a6436ea04160ed4c4e04d839e51c8678402dc4e38301f79a98b3dc30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202, version=7, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., GIT_BRANCH=main, RELEASE=main, io.buildah.version=1.42.2, io.openshift.expose-services=, vcs-type=git, ceph=True, distribution-scope=public, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, build-date=2026-02-09T10:25:24Z, release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 20 04:46:59 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. 
Feb 20 04:47:00 localhost podman[304068]: 2026-02-20 09:47:00.022425629 +0000 UTC m=+0.219780069 container exec_died 17bcd3d9a6436ea04160ed4c4e04d839e51c8678402dc4e38301f79a98b3dc30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202, vcs-type=git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, io.openshift.expose-services=, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , build-date=2026-02-09T10:25:24Z, org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, GIT_CLEAN=True, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, architecture=x86_64, description=Red Hat Ceph Storage 7) Feb 20 04:47:00 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0) Feb 20 04:47:00 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:00 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0) Feb 20 04:47:00 localhost ceph-mon[292786]: 
log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:00 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0) Feb 20 04:47:00 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:00 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0) Feb 20 04:47:00 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:00 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0) Feb 20 04:47:00 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:00 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0) Feb 20 04:47:00 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:01 localhost ceph-mon[292786]: [20/Feb/2026:09:46:59] ENGINE Bus STARTING Feb 20 04:47:01 localhost ceph-mon[292786]: [20/Feb/2026:09:46:59] ENGINE Serving on http://172.18.0.108:8765 Feb 20 04:47:01 localhost ceph-mon[292786]: [20/Feb/2026:09:46:59] ENGINE Serving on https://172.18.0.108:7150 Feb 20 04:47:01 localhost ceph-mon[292786]: [20/Feb/2026:09:46:59] ENGINE Bus STARTED Feb 20 04:47:01 localhost ceph-mon[292786]: [20/Feb/2026:09:46:59] ENGINE Client ('172.18.0.108', 51320) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Feb 20 04:47:01 localhost ceph-mon[292786]: 
from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:01 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:01 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:01 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:01 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:01 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:01 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : mgrmap e40: np0005625204.exgrzx(active, since 3s), standbys: np0005625201.mtnyvu, np0005625202.arwxwo Feb 20 04:47:02 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0) Feb 20 04:47:02 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0) Feb 20 04:47:02 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:02 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:02 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0) Feb 20 04:47:02 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0) Feb 20 04:47:02 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:02 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 20 04:47:02 localhost 
ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3721573066' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 20 04:47:02 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) Feb 20 04:47:02 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Feb 20 04:47:02 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 20 04:47:02 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3721573066' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 20 04:47:02 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:02 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) Feb 20 04:47:02 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Feb 20 04:47:02 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} v 0) Feb 20 04:47:02 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Feb 20 04:47:02 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 
0) Feb 20 04:47:02 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} v 0) Feb 20 04:47:02 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Feb 20 04:47:02 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Feb 20 04:47:02 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0) Feb 20 04:47:02 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:02 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0) Feb 20 04:47:02 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:02 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) Feb 20 04:47:02 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Feb 20 04:47:02 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} v 0) Feb 20 04:47:02 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Feb 20 04:47:02 localhost ceph-mon[292786]: 
mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Feb 20 04:47:02 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:47:02 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:02 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:02 localhost ceph-mon[292786]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Feb 20 04:47:02 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:02 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Feb 20 04:47:02 localhost ceph-mon[292786]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Feb 20 04:47:02 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:02 localhost ceph-mon[292786]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Feb 20 04:47:02 localhost ceph-mon[292786]: Adjusting osd_memory_target on np0005625202.localdomain to 836.6M Feb 20 04:47:02 localhost ceph-mon[292786]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Feb 20 04:47:02 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Feb 20 04:47:02 localhost ceph-mon[292786]: 
Unable to set osd_memory_target on np0005625202.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 20 04:47:02 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Feb 20 04:47:02 localhost ceph-mon[292786]: Adjusting osd_memory_target on np0005625203.localdomain to 836.6M Feb 20 04:47:02 localhost ceph-mon[292786]: Unable to set osd_memory_target on np0005625203.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 20 04:47:02 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Feb 20 04:47:02 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:02 localhost ceph-mon[292786]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Feb 20 04:47:02 localhost ceph-mon[292786]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Feb 20 04:47:02 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:02 localhost ceph-mon[292786]: Adjusting osd_memory_target on np0005625204.localdomain to 836.6M Feb 20 04:47:02 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Feb 20 04:47:02 localhost ceph-mon[292786]: Unable to set osd_memory_target on np0005625204.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 20 04:47:02 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "config rm", "who": "osd.3", 
"name": "osd_memory_target"} : dispatch Feb 20 04:47:02 localhost ceph-mon[292786]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 04:47:02 localhost ceph-mon[292786]: Updating np0005625202.localdomain:/etc/ceph/ceph.conf Feb 20 04:47:02 localhost ceph-mon[292786]: Updating np0005625203.localdomain:/etc/ceph/ceph.conf Feb 20 04:47:02 localhost ceph-mon[292786]: Updating np0005625204.localdomain:/etc/ceph/ceph.conf Feb 20 04:47:03 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : Standby manager daemon np0005625203.lonygy started Feb 20 04:47:04 localhost ceph-mon[292786]: Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:47:04 localhost ceph-mon[292786]: Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:47:04 localhost ceph-mon[292786]: Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:47:04 localhost ceph-mon[292786]: Updating np0005625202.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 20 04:47:04 localhost ceph-mon[292786]: Updating np0005625204.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 20 04:47:04 localhost ceph-mon[292786]: Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring Feb 20 04:47:04 localhost ceph-mon[292786]: Updating np0005625203.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 20 04:47:04 localhost ceph-mon[292786]: Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring Feb 20 04:47:04 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : mgrmap e41: np0005625204.exgrzx(active, since 6s), standbys: np0005625201.mtnyvu, np0005625202.arwxwo, np0005625203.lonygy Feb 20 04:47:04 localhost ceph-mon[292786]: 
mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0) Feb 20 04:47:04 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:04 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0) Feb 20 04:47:04 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:04 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0) Feb 20 04:47:04 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:04 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0) Feb 20 04:47:04 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1. 
Feb 20 04:47:05 localhost podman[304959]: 2026-02-20 09:47:05.444562633 +0000 UTC m=+0.078491771 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 20 04:47:05 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0) Feb 20 04:47:05 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:05 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0) Feb 20 04:47:05 localhost podman[304959]: 2026-02-20 09:47:05.484008113 +0000 UTC m=+0.117937301 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 
'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 20 04:47:05 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:05 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 20 04:47:05 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully. 
Feb 20 04:47:05 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:05 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:05 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:05 localhost ceph-mon[292786]: Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring Feb 20 04:47:05 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:05 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:05 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:05 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:05 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:05 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) Feb 20 04:47:05 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 20 04:47:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:47:05.914 161766 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:47:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:47:05.914 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by 
"neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:47:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:47:05.914 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:47:06 localhost podman[305052]: Feb 20 04:47:06 localhost podman[305052]: 2026-02-20 09:47:06.351931661 +0000 UTC m=+0.076928788 container create 024f298676de07cdefa45de5456889c6b6586c9165ffb6042a91808513234814 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=mystifying_wescoff, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, release=1770267347, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, com.redhat.component=rhceph-container, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.openshift.expose-services=, GIT_CLEAN=True, maintainer=Guillaume Abrioux , io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2026-02-09T10:25:24Z) Feb 20 04:47:06 localhost systemd[1]: Started 
libpod-conmon-024f298676de07cdefa45de5456889c6b6586c9165ffb6042a91808513234814.scope. Feb 20 04:47:06 localhost systemd[1]: Started libcrun container. Feb 20 04:47:06 localhost podman[305052]: 2026-02-20 09:47:06.318830191 +0000 UTC m=+0.043827318 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 04:47:06 localhost podman[305052]: 2026-02-20 09:47:06.422496157 +0000 UTC m=+0.147493284 container init 024f298676de07cdefa45de5456889c6b6586c9165ffb6042a91808513234814 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=mystifying_wescoff, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, architecture=x86_64, description=Red Hat Ceph Storage 7, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, vendor=Red Hat, Inc., GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 20 04:47:06 localhost podman[305052]: 2026-02-20 09:47:06.432331962 +0000 UTC m=+0.157329099 container start 024f298676de07cdefa45de5456889c6b6586c9165ffb6042a91808513234814 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=mystifying_wescoff, vcs-type=git, 
distribution-scope=public, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1770267347, name=rhceph, CEPH_POINT_RELEASE=, ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, RELEASE=main, com.redhat.component=rhceph-container, architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, version=7, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.42.2, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7) Feb 20 04:47:06 localhost podman[305052]: 2026-02-20 09:47:06.43264452 +0000 UTC m=+0.157641687 container attach 024f298676de07cdefa45de5456889c6b6586c9165ffb6042a91808513234814 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=mystifying_wescoff, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, RELEASE=main, GIT_BRANCH=main, io.buildah.version=1.42.2, org.opencontainers.image.created=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, 
architecture=x86_64, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., distribution-scope=public, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, release=1770267347, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Feb 20 04:47:06 localhost mystifying_wescoff[305068]: 167 167 Feb 20 04:47:06 localhost systemd[1]: libpod-024f298676de07cdefa45de5456889c6b6586c9165ffb6042a91808513234814.scope: Deactivated successfully. Feb 20 04:47:06 localhost podman[305052]: 2026-02-20 09:47:06.435444326 +0000 UTC m=+0.160441483 container died 024f298676de07cdefa45de5456889c6b6586c9165ffb6042a91808513234814 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=mystifying_wescoff, version=7, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1770267347, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , RELEASE=main, io.openshift.tags=rhceph ceph) Feb 20 04:47:06 localhost ceph-mon[292786]: log_channel(cluster) log [WRN] : Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON) Feb 20 04:47:06 localhost ceph-mon[292786]: log_channel(cluster) log [WRN] : Health check failed: 1 stray host(s) with 1 daemon(s) not managed by cephadm (CEPHADM_STRAY_HOST) Feb 20 04:47:06 localhost systemd[1]: tmp-crun.8n10o9.mount: Deactivated successfully. Feb 20 04:47:06 localhost systemd[1]: var-lib-containers-storage-overlay-61ded1cf8f8ef2e0ff64d581a0b77e53c88857f563959e0886d3b85745d09808-merged.mount: Deactivated successfully. Feb 20 04:47:06 localhost podman[305073]: 2026-02-20 09:47:06.537120868 +0000 UTC m=+0.093064732 container remove 024f298676de07cdefa45de5456889c6b6586c9165ffb6042a91808513234814 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=mystifying_wescoff, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.component=rhceph-container, ceph=True, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, vendor=Red Hat, Inc., vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, version=7, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, 
org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, io.buildah.version=1.42.2, name=rhceph, io.openshift.tags=rhceph ceph, GIT_BRANCH=main) Feb 20 04:47:06 localhost systemd[1]: libpod-conmon-024f298676de07cdefa45de5456889c6b6586c9165ffb6042a91808513234814.scope: Deactivated successfully. Feb 20 04:47:06 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0) Feb 20 04:47:06 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:06 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0) Feb 20 04:47:06 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:06 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005625202.arwxwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Feb 20 04:47:06 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625202.arwxwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 20 04:47:06 localhost ceph-mon[292786]: Reconfiguring mds.mds.np0005625202.akhmop (monmap changed)... 
Feb 20 04:47:06 localhost ceph-mon[292786]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 20 04:47:06 localhost ceph-mon[292786]: Reconfiguring daemon mds.mds.np0005625202.akhmop on np0005625202.localdomain Feb 20 04:47:06 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 20 04:47:06 localhost ceph-mon[292786]: Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON) Feb 20 04:47:06 localhost ceph-mon[292786]: Health check failed: 1 stray host(s) with 1 daemon(s) not managed by cephadm (CEPHADM_STRAY_HOST) Feb 20 04:47:06 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:06 localhost ceph-mon[292786]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625202.arwxwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 20 04:47:06 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:06 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625202.arwxwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 20 04:47:07 localhost podman[305143]: Feb 20 04:47:07 localhost podman[305143]: 2026-02-20 09:47:07.274893238 +0000 UTC m=+0.076924769 container create ed890517dcf8a97735b279fe3f9b24a9fcbc0950ac60b7c7f29c11697b493c4b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_sinoussi, io.buildah.version=1.42.2, description=Red 
Hat Ceph Storage 7, version=7, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2026-02-09T10:25:24Z, ceph=True, architecture=x86_64, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, CEPH_POINT_RELEASE=, GIT_CLEAN=True, RELEASE=main) Feb 20 04:47:07 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:47:07 localhost systemd[1]: Started libpod-conmon-ed890517dcf8a97735b279fe3f9b24a9fcbc0950ac60b7c7f29c11697b493c4b.scope. Feb 20 04:47:07 localhost systemd[1]: Started libcrun container. 
Feb 20 04:47:07 localhost podman[305143]: 2026-02-20 09:47:07.340010758 +0000 UTC m=+0.142042289 container init ed890517dcf8a97735b279fe3f9b24a9fcbc0950ac60b7c7f29c11697b493c4b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_sinoussi, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, build-date=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph) Feb 20 04:47:07 localhost podman[305143]: 2026-02-20 09:47:07.244947273 +0000 UTC m=+0.046978824 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 04:47:07 localhost podman[305143]: 2026-02-20 09:47:07.349541954 +0000 UTC m=+0.151573485 container start ed890517dcf8a97735b279fe3f9b24a9fcbc0950ac60b7c7f29c11697b493c4b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_sinoussi, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, 
org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, version=7, architecture=x86_64, io.buildah.version=1.42.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, vcs-type=git, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, GIT_BRANCH=main, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , distribution-scope=public, ceph=True) Feb 20 04:47:07 localhost podman[305143]: 2026-02-20 09:47:07.349860063 +0000 UTC m=+0.151891674 container attach ed890517dcf8a97735b279fe3f9b24a9fcbc0950ac60b7c7f29c11697b493c4b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_sinoussi, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, release=1770267347, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.42.2, org.opencontainers.image.created=2026-02-09T10:25:24Z, architecture=x86_64, CEPH_POINT_RELEASE=, version=7, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, GIT_CLEAN=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, GIT_BRANCH=main, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public) Feb 20 04:47:07 localhost lucid_sinoussi[305158]: 167 167 Feb 20 04:47:07 localhost systemd[1]: libpod-ed890517dcf8a97735b279fe3f9b24a9fcbc0950ac60b7c7f29c11697b493c4b.scope: Deactivated successfully. Feb 20 04:47:07 localhost podman[305143]: 2026-02-20 09:47:07.354156249 +0000 UTC m=+0.156187820 container died ed890517dcf8a97735b279fe3f9b24a9fcbc0950ac60b7c7f29c11697b493c4b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_sinoussi, RELEASE=main, architecture=x86_64, description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, release=1770267347, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, GIT_BRANCH=main, build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, CEPH_POINT_RELEASE=, io.openshift.expose-services=, maintainer=Guillaume Abrioux , io.buildah.version=1.42.2, ceph=True, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 20 04:47:07 
localhost podman[305163]: 2026-02-20 09:47:07.440709065 +0000 UTC m=+0.077561276 container remove ed890517dcf8a97735b279fe3f9b24a9fcbc0950ac60b7c7f29c11697b493c4b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_sinoussi, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, com.redhat.component=rhceph-container, io.openshift.expose-services=, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, CEPH_POINT_RELEASE=, distribution-scope=public, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, io.buildah.version=1.42.2, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git) Feb 20 04:47:07 localhost systemd[1]: libpod-conmon-ed890517dcf8a97735b279fe3f9b24a9fcbc0950ac60b7c7f29c11697b493c4b.scope: Deactivated successfully. Feb 20 04:47:07 localhost systemd[1]: var-lib-containers-storage-overlay-9fe7a03f463ea15060d3e311038ad1116ece688e7e02c86d86a8aafea8d319c6-merged.mount: Deactivated successfully. 
Feb 20 04:47:07 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0) Feb 20 04:47:07 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:07 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0) Feb 20 04:47:07 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:07 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005625203", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Feb 20 04:47:07 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625203", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 20 04:47:08 localhost ceph-mon[292786]: Reconfiguring mgr.np0005625202.arwxwo (monmap changed)... 
Feb 20 04:47:08 localhost ceph-mon[292786]: Reconfiguring daemon mgr.np0005625202.arwxwo on np0005625202.localdomain Feb 20 04:47:08 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:08 localhost ceph-mon[292786]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625203", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 20 04:47:08 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:08 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625203", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 20 04:47:08 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 20 04:47:08 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0) Feb 20 04:47:08 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:08 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:08 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0) Feb 20 04:47:08 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:09 localhost ceph-mon[292786]: Reconfiguring crash.np0005625203 (monmap changed)... 
Feb 20 04:47:09 localhost ceph-mon[292786]: Reconfiguring daemon crash.np0005625203 on np0005625203.localdomain Feb 20 04:47:09 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:09 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:09 localhost ceph-mon[292786]: Reconfiguring osd.1 (monmap changed)... Feb 20 04:47:09 localhost ceph-mon[292786]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Feb 20 04:47:09 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:09 localhost ceph-mon[292786]: Reconfiguring daemon osd.1 on np0005625203.localdomain Feb 20 04:47:09 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0) Feb 20 04:47:09 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:09 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0) Feb 20 04:47:09 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:09 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0) Feb 20 04:47:09 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:09 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0) Feb 20 04:47:09 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 04:47:09 localhost sshd[305179]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:47:10 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0) Feb 20 04:47:10 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:10 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:10 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:10 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:10 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:10 localhost ceph-mon[292786]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Feb 20 04:47:10 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0) Feb 20 04:47:10 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:10 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0) Feb 20 04:47:10 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:10 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0) Feb 20 04:47:10 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:10 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": 
"auth get-or-create", "entity": "mds.mds.np0005625203.zsrwgk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) Feb 20 04:47:10 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625203.zsrwgk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 20 04:47:11 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0) Feb 20 04:47:11 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:11 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0) Feb 20 04:47:11 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:11 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005625203.lonygy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Feb 20 04:47:11 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625203.lonygy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 20 04:47:11 localhost ceph-mon[292786]: Reconfiguring osd.4 (monmap changed)... 
Feb 20 04:47:11 localhost ceph-mon[292786]: Reconfiguring daemon osd.4 on np0005625203.localdomain Feb 20 04:47:11 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:11 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:11 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:11 localhost ceph-mon[292786]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625203.zsrwgk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 20 04:47:11 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:11 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625203.zsrwgk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 20 04:47:11 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:11 localhost ceph-mon[292786]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625203.lonygy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 20 04:47:11 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:11 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625203.lonygy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 20 04:47:12 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:47:12 localhost ceph-mon[292786]: 
mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0) Feb 20 04:47:12 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:12 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0) Feb 20 04:47:12 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:12 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005625204", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Feb 20 04:47:12 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625204", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 20 04:47:12 localhost ceph-mon[292786]: Reconfiguring mds.mds.np0005625203.zsrwgk (monmap changed)... Feb 20 04:47:12 localhost ceph-mon[292786]: Reconfiguring daemon mds.mds.np0005625203.zsrwgk on np0005625203.localdomain Feb 20 04:47:12 localhost ceph-mon[292786]: Reconfiguring mgr.np0005625203.lonygy (monmap changed)... Feb 20 04:47:12 localhost ceph-mon[292786]: Reconfiguring daemon mgr.np0005625203.lonygy on np0005625203.localdomain Feb 20 04:47:12 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:12 localhost ceph-mon[292786]: Reconfiguring crash.np0005625204 (monmap changed)... 
Feb 20 04:47:12 localhost ceph-mon[292786]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625204", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 20 04:47:12 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:12 localhost ceph-mon[292786]: Reconfiguring daemon crash.np0005625204 on np0005625204.localdomain Feb 20 04:47:12 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625204", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Feb 20 04:47:13 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0) Feb 20 04:47:13 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:13 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0) Feb 20 04:47:13 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:14 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0) Feb 20 04:47:14 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:14 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0) Feb 20 04:47:14 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:14 localhost ceph-mon[292786]: 
from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:14 localhost ceph-mon[292786]: Reconfiguring osd.0 (monmap changed)... Feb 20 04:47:14 localhost ceph-mon[292786]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Feb 20 04:47:14 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:14 localhost ceph-mon[292786]: Reconfiguring daemon osd.0 on np0005625204.localdomain Feb 20 04:47:14 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:14 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0) Feb 20 04:47:14 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:14 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0) Feb 20 04:47:14 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:15 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:15 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:15 localhost ceph-mon[292786]: Reconfiguring osd.3 (monmap changed)... 
Feb 20 04:47:15 localhost ceph-mon[292786]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Feb 20 04:47:15 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:15 localhost ceph-mon[292786]: Reconfiguring daemon osd.3 on np0005625204.localdomain Feb 20 04:47:15 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0) Feb 20 04:47:15 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e. Feb 20 04:47:15 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0) Feb 20 04:47:15 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:15 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0) Feb 20 04:47:15 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:15 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0) Feb 20 04:47:15 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:15 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005625204.wnsphl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", 
"mds", "allow"]} v 0) Feb 20 04:47:15 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625204.wnsphl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 20 04:47:15 localhost podman[305181]: 2026-02-20 09:47:15.450308813 +0000 UTC m=+0.085161287 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 20 04:47:15 localhost podman[305181]: 2026-02-20 09:47:15.489917951 +0000 UTC m=+0.124770445 
container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 20 04:47:15 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully. 
Feb 20 04:47:15 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) Feb 20 04:47:15 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:16 localhost podman[241347]: time="2026-02-20T09:47:16Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 20 04:47:16 localhost podman[241347]: @ - - [20/Feb/2026:09:47:16 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157716 "" "Go-http-client/1.1" Feb 20 04:47:16 localhost podman[241347]: @ - - [20/Feb/2026:09:47:16 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18756 "" "Go-http-client/1.1" Feb 20 04:47:16 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:16 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:16 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:16 localhost ceph-mon[292786]: Reconfiguring mds.mds.np0005625204.wnsphl (monmap changed)... 
Feb 20 04:47:16 localhost ceph-mon[292786]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625204.wnsphl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 20 04:47:16 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:16 localhost ceph-mon[292786]: Reconfiguring daemon mds.mds.np0005625204.wnsphl on np0005625204.localdomain Feb 20 04:47:16 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625204.wnsphl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Feb 20 04:47:16 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:16 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0) Feb 20 04:47:16 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:16 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0) Feb 20 04:47:16 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:16 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005625204.exgrzx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Feb 20 04:47:16 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625204.exgrzx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow 
*"]} : dispatch Feb 20 04:47:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:47:17.261 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:47:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:47:17.262 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:47:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:47:17.262 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:47:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:47:17.262 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:47:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:47:17.262 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:47:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:47:17.262 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:47:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:47:17.263 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:47:17 localhost 
ceilometer_agent_compute[236653]: 2026-02-20 09:47:17.263 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:47:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:47:17.263 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:47:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:47:17.263 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:47:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:47:17.263 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:47:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:47:17.263 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:47:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:47:17.263 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:47:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:47:17.264 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:47:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 
09:47:17.264 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:47:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:47:17.264 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:47:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:47:17.264 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:47:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:47:17.264 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:47:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:47:17.264 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:47:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:47:17.264 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:47:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:47:17.265 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:47:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:47:17.265 12 DEBUG ceilometer.polling.manager [-] Skip pollster 
network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:47:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:47:17.265 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:47:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:47:17.265 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:47:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:47:17.265 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:47:17 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:47:17 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0) Feb 20 04:47:17 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:17 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0) Feb 20 04:47:17 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:17 localhost ceph-mon[292786]: Saving service mon spec with placement label:mon Feb 20 04:47:17 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:17 localhost 
ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:17 localhost ceph-mon[292786]: Reconfiguring mgr.np0005625204.exgrzx (monmap changed)... Feb 20 04:47:17 localhost ceph-mon[292786]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625204.exgrzx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 20 04:47:17 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625204.exgrzx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Feb 20 04:47:17 localhost ceph-mon[292786]: Reconfiguring daemon mgr.np0005625204.exgrzx on np0005625204.localdomain Feb 20 04:47:17 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:18 localhost ceph-mon[292786]: Reconfiguring mon.np0005625204 (monmap changed)... Feb 20 04:47:18 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:18 localhost ceph-mon[292786]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 20 04:47:18 localhost ceph-mon[292786]: Reconfiguring daemon mon.np0005625204 on np0005625204.localdomain Feb 20 04:47:18 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0) Feb 20 04:47:18 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:18 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0) Feb 20 04:47:18 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 04:47:18 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 20 04:47:18 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:18 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) Feb 20 04:47:18 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:19 localhost podman[305273]: Feb 20 04:47:19 localhost podman[305273]: 2026-02-20 09:47:19.38814413 +0000 UTC m=+0.080361958 container create a133cd75da2a28f10190f0f96a0c3e4efefd9782123245d8c6da3c05fdafbf2a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_archimedes, vendor=Red Hat, Inc., GIT_BRANCH=main, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, GIT_CLEAN=True, architecture=x86_64, build-date=2026-02-09T10:25:24Z, release=1770267347, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, name=rhceph, io.openshift.expose-services=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 20 04:47:19 localhost systemd[1]: Started libpod-conmon-a133cd75da2a28f10190f0f96a0c3e4efefd9782123245d8c6da3c05fdafbf2a.scope. Feb 20 04:47:19 localhost systemd[1]: Started libcrun container. Feb 20 04:47:19 localhost podman[305273]: 2026-02-20 09:47:19.354548423 +0000 UTC m=+0.046766261 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Feb 20 04:47:19 localhost podman[305273]: 2026-02-20 09:47:19.456504156 +0000 UTC m=+0.148721984 container init a133cd75da2a28f10190f0f96a0c3e4efefd9782123245d8c6da3c05fdafbf2a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_archimedes, maintainer=Guillaume Abrioux , GIT_BRANCH=main, CEPH_POINT_RELEASE=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, architecture=x86_64, release=1770267347, ceph=True, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, version=7, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-09T10:25:24Z, name=rhceph, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Feb 20 04:47:19 localhost podman[305273]: 2026-02-20 09:47:19.466612027 +0000 UTC m=+0.158829865 container start a133cd75da2a28f10190f0f96a0c3e4efefd9782123245d8c6da3c05fdafbf2a 
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_archimedes, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, RELEASE=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, GIT_CLEAN=True, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, distribution-scope=public, description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=rhceph-container, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , architecture=x86_64) Feb 20 04:47:19 localhost podman[305273]: 2026-02-20 09:47:19.466882424 +0000 UTC m=+0.159100292 container attach a133cd75da2a28f10190f0f96a0c3e4efefd9782123245d8c6da3c05fdafbf2a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_archimedes, io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.openshift.expose-services=, GIT_BRANCH=main, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, release=1770267347, build-date=2026-02-09T10:25:24Z, 
CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, vcs-type=git, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 20 04:47:19 localhost trusting_archimedes[305287]: 167 167 Feb 20 04:47:19 localhost systemd[1]: libpod-a133cd75da2a28f10190f0f96a0c3e4efefd9782123245d8c6da3c05fdafbf2a.scope: Deactivated successfully. Feb 20 04:47:19 localhost podman[305273]: 2026-02-20 09:47:19.472439552 +0000 UTC m=+0.164657430 container died a133cd75da2a28f10190f0f96a0c3e4efefd9782123245d8c6da3c05fdafbf2a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_archimedes, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, GIT_BRANCH=main, GIT_CLEAN=True, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, RELEASE=main, io.openshift.expose-services=, 
maintainer=Guillaume Abrioux , version=7, distribution-scope=public, vcs-type=git, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z) Feb 20 04:47:19 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:19 localhost ceph-mon[292786]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 04:47:19 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:19 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:19 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:19 localhost ceph-mon[292786]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 20 04:47:19 localhost podman[305292]: 2026-02-20 09:47:19.566602628 +0000 UTC m=+0.081053537 container remove a133cd75da2a28f10190f0f96a0c3e4efefd9782123245d8c6da3c05fdafbf2a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_archimedes, build-date=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, distribution-scope=public, io.buildah.version=1.42.2, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, GIT_BRANCH=main, architecture=x86_64, summary=Provides the 
latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, ceph=True, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, release=1770267347, name=rhceph) Feb 20 04:47:19 localhost systemd[1]: libpod-conmon-a133cd75da2a28f10190f0f96a0c3e4efefd9782123245d8c6da3c05fdafbf2a.scope: Deactivated successfully. Feb 20 04:47:19 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0) Feb 20 04:47:19 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:19 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0) Feb 20 04:47:19 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:20 localhost systemd[1]: var-lib-containers-storage-overlay-a64c1e0b6f5f0e866b92f80b95e020802fcd4a7d6e726338620613ac7db8056a-merged.mount: Deactivated successfully. Feb 20 04:47:20 localhost ceph-mon[292786]: Reconfiguring mon.np0005625202 (monmap changed)... 
Feb 20 04:47:20 localhost ceph-mon[292786]: Reconfiguring daemon mon.np0005625202 on np0005625202.localdomain Feb 20 04:47:20 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:20 localhost ceph-mon[292786]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Feb 20 04:47:20 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:20 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0) Feb 20 04:47:20 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:20 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0) Feb 20 04:47:20 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:21 localhost ceph-mon[292786]: Reconfiguring mon.np0005625203 (monmap changed)... 
Feb 20 04:47:21 localhost ceph-mon[292786]: Reconfiguring daemon mon.np0005625203 on np0005625203.localdomain Feb 20 04:47:21 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:21 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:22 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:47:22 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : mgrmap e42: np0005625204.exgrzx(active, since 24s), standbys: np0005625202.arwxwo, np0005625203.lonygy Feb 20 04:47:22 localhost sshd[305308]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:47:23 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 20 04:47:23 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:24 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:47:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c. Feb 20 04:47:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217. 
Feb 20 04:47:25 localhost podman[305310]: 2026-02-20 09:47:25.451449414 +0000 UTC m=+0.087118429 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, build-date=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, distribution-scope=public, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.7, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, config_id=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) 
Feb 20 04:47:25 localhost podman[305310]: 2026-02-20 09:47:25.493818145 +0000 UTC m=+0.129487100 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z, version=9.7, distribution-scope=public, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, release=1770267347, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, name=ubi9/ubi-minimal, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc.) 
Feb 20 04:47:25 localhost podman[305311]: 2026-02-20 09:47:25.508828756 +0000 UTC m=+0.141706926 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, 
org.label-schema.schema-version=1.0) Feb 20 04:47:25 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. Feb 20 04:47:25 localhost podman[305311]: 2026-02-20 09:47:25.522781179 +0000 UTC m=+0.155659279 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Feb 20 04:47:25 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully. Feb 20 04:47:26 localhost sshd[305349]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:47:27 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:47:28 localhost openstack_network_exporter[243776]: ERROR 09:47:28 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 20 04:47:28 localhost openstack_network_exporter[243776]: Feb 20 04:47:28 localhost openstack_network_exporter[243776]: ERROR 09:47:28 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 20 04:47:28 localhost openstack_network_exporter[243776]: Feb 20 04:47:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 04:47:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. 
Feb 20 04:47:30 localhost podman[305352]: 2026-02-20 09:47:30.445446618 +0000 UTC m=+0.080812760 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true) Feb 20 04:47:30 localhost 
podman[305352]: 2026-02-20 09:47:30.453812232 +0000 UTC m=+0.089178364 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible) Feb 20 04:47:30 localhost systemd[1]: 
eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. Feb 20 04:47:30 localhost podman[305351]: 2026-02-20 09:47:30.542404618 +0000 UTC m=+0.180596785 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20260127, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:47:30 localhost podman[305351]: 2026-02-20 09:47:30.604652441 +0000 UTC m=+0.242844588 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, 
config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller) Feb 20 04:47:30 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. Feb 20 04:47:32 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:47:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1. 
Feb 20 04:47:36 localhost podman[305395]: 2026-02-20 09:47:36.446707512 +0000 UTC m=+0.079843404 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 20 04:47:36 localhost podman[305395]: 2026-02-20 09:47:36.483751272 +0000 UTC m=+0.116887124 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 20 04:47:36 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully. Feb 20 04:47:37 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:47:42 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:47:46 localhost podman[241347]: time="2026-02-20T09:47:46Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 20 04:47:46 localhost podman[241347]: @ - - [20/Feb/2026:09:47:46 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157716 "" "Go-http-client/1.1" Feb 20 04:47:46 localhost podman[241347]: @ - - [20/Feb/2026:09:47:46 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18757 "" "Go-http-client/1.1" Feb 20 04:47:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e. 
Feb 20 04:47:46 localhost podman[305419]: 2026-02-20 09:47:46.439261873 +0000 UTC m=+0.079241438 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 20 04:47:46 localhost podman[305419]: 2026-02-20 09:47:46.447790701 +0000 UTC m=+0.087770256 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': 
['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 20 04:47:46 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully. 
Feb 20 04:47:47 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:47:51 localhost nova_compute[280804]: 2026-02-20 09:47:51.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:47:51 localhost nova_compute[280804]: 2026-02-20 09:47:51.537 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:47:51 localhost nova_compute[280804]: 2026-02-20 09:47:51.537 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:47:51 localhost nova_compute[280804]: 2026-02-20 09:47:51.538 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:47:51 localhost nova_compute[280804]: 2026-02-20 09:47:51.538 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Auditing locally available compute resources for np0005625202.localdomain (node: np0005625202.localdomain) update_available_resource 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 20 04:47:51 localhost nova_compute[280804]: 2026-02-20 09:47:51.538 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:47:51 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 20 04:47:51 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/1447312241' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 20 04:47:51 localhost nova_compute[280804]: 2026-02-20 09:47:51.975 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:47:52 localhost nova_compute[280804]: 2026-02-20 09:47:52.170 280808 WARNING nova.virt.libvirt.driver [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 20 04:47:52 localhost nova_compute[280804]: 2026-02-20 09:47:52.172 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Hypervisor/Node resource view: name=np0005625202.localdomain free_ram=11966MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", 
"product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 20 04:47:52 localhost nova_compute[280804]: 2026-02-20 09:47:52.173 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:47:52 localhost nova_compute[280804]: 2026-02-20 09:47:52.174 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:47:52 localhost nova_compute[280804]: 2026-02-20 09:47:52.252 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 20 04:47:52 localhost nova_compute[280804]: 2026-02-20 09:47:52.253 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Final resource view: name=np0005625202.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 20 04:47:52 localhost nova_compute[280804]: 2026-02-20 09:47:52.276 280808 DEBUG 
oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:47:52 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:47:52 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 20 04:47:52 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/3443497250' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 20 04:47:52 localhost nova_compute[280804]: 2026-02-20 09:47:52.772 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:47:52 localhost nova_compute[280804]: 2026-02-20 09:47:52.778 280808 DEBUG nova.compute.provider_tree [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 20 04:47:52 localhost nova_compute[280804]: 2026-02-20 09:47:52.799 280808 DEBUG nova.scheduler.client.report [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Inventory has not changed for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 
1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 20 04:47:52 localhost nova_compute[280804]: 2026-02-20 09:47:52.801 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Compute_service record updated for np0005625202.localdomain:np0005625202.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 20 04:47:52 localhost nova_compute[280804]: 2026-02-20 09:47:52.801 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:47:53 localhost nova_compute[280804]: 2026-02-20 09:47:53.798 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:47:53 localhost nova_compute[280804]: 2026-02-20 09:47:53.814 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:47:53 localhost nova_compute[280804]: 2026-02-20 09:47:53.815 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 20 04:47:53 localhost nova_compute[280804]: 2026-02-20 09:47:53.815 
280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 20 04:47:53 localhost nova_compute[280804]: 2026-02-20 09:47:53.831 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 20 04:47:53 localhost nova_compute[280804]: 2026-02-20 09:47:53.831 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:47:53 localhost nova_compute[280804]: 2026-02-20 09:47:53.832 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:47:53 localhost nova_compute[280804]: 2026-02-20 09:47:53.832 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 20 04:47:54 localhost nova_compute[280804]: 2026-02-20 09:47:54.511 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:47:55 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "status", "format": "json"} v 0) Feb 20 04:47:55 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.200:0/3650643353' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch Feb 20 04:47:55 localhost nova_compute[280804]: 2026-02-20 09:47:55.506 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:47:55 localhost nova_compute[280804]: 2026-02-20 09:47:55.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:47:55 localhost nova_compute[280804]: 2026-02-20 09:47:55.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:47:55 localhost sshd[305486]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:47:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c. 
Feb 20 04:47:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217. Feb 20 04:47:56 localhost podman[305488]: 2026-02-20 09:47:56.443331291 +0000 UTC m=+0.081796396 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, container_name=openstack_network_exporter, release=1770267347, maintainer=Red Hat, Inc., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, vcs-type=git, org.opencontainers.image.created=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., 
description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.7, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, build-date=2026-02-05T04:57:10Z, io.buildah.version=1.33.7, name=ubi9/ubi-minimal) Feb 20 04:47:56 localhost podman[305488]: 2026-02-20 09:47:56.484985814 +0000 UTC m=+0.123450989 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, architecture=x86_64, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, config_id=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, managed_by=edpm_ansible, vendor=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, 
distribution-scope=public, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9/ubi-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1770267347, vcs-type=git, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., build-date=2026-02-05T04:57:10Z, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.openshift.expose-services=, 
url=https://catalog.redhat.com/en/search?searchType=containers) Feb 20 04:47:56 localhost systemd[1]: tmp-crun.i6AksS.mount: Deactivated successfully. Feb 20 04:47:56 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. Feb 20 04:47:56 localhost podman[305489]: 2026-02-20 09:47:56.507004473 +0000 UTC m=+0.141952794 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Feb 20 04:47:56 localhost podman[305489]: 2026-02-20 09:47:56.542141731 +0000 UTC m=+0.177090102 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:47:56 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully. Feb 20 04:47:57 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:47:57 localhost nova_compute[280804]: 2026-02-20 09:47:57.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:47:58 localhost openstack_network_exporter[243776]: ERROR 09:47:58 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 20 04:47:58 localhost openstack_network_exporter[243776]: Feb 20 04:47:58 localhost openstack_network_exporter[243776]: ERROR 09:47:58 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 20 04:47:58 localhost openstack_network_exporter[243776]: Feb 20 04:48:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 04:48:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. 
Feb 20 04:48:01 localhost podman[305525]: 2026-02-20 09:48:01.448433964 +0000 UTC m=+0.080366209 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS) Feb 20 04:48:01 localhost podman[305526]: 2026-02-20 09:48:01.512772302 +0000 UTC m=+0.141327637 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127) Feb 20 04:48:01 localhost podman[305526]: 2026-02-20 09:48:01.549818782 +0000 UTC m=+0.178374077 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, 
config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:48:01 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. 
Feb 20 04:48:01 localhost podman[305525]: 2026-02-20 09:48:01.568220573 +0000 UTC m=+0.200152858 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:48:01 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. Feb 20 04:48:02 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 20 04:48:02 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/2727431459' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 20 04:48:02 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 20 04:48:02 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2727431459' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 20 04:48:02 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:48:04 localhost sshd[305568]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:48:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:48:05.914 161766 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:48:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:48:05.915 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:48:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:48:05.915 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:48:06 localhost sshd[305570]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:48:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1. 
Feb 20 04:48:07 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:48:07 localhost podman[305572]: 2026-02-20 09:48:07.450740207 +0000 UTC m=+0.087181040 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 20 04:48:07 localhost podman[305572]: 2026-02-20 09:48:07.48378697 +0000 UTC m=+0.120227833 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 
'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 20 04:48:07 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully. Feb 20 04:48:07 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) Feb 20 04:48:07 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.200:0/2546430745' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch Feb 20 04:48:12 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:48:16 localhost podman[241347]: time="2026-02-20T09:48:16Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 20 04:48:16 localhost podman[241347]: @ - - [20/Feb/2026:09:48:16 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157716 "" "Go-http-client/1.1" Feb 20 04:48:16 localhost podman[241347]: @ - - [20/Feb/2026:09:48:16 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18758 "" "Go-http-client/1.1" Feb 20 04:48:17 localhost sshd[305595]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:48:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e. 
Feb 20 04:48:17 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:48:17 localhost podman[305597]: 2026-02-20 09:48:17.448421534 +0000 UTC m=+0.085046832 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 20 04:48:17 localhost podman[305597]: 2026-02-20 09:48:17.460859167 +0000 UTC m=+0.097484435 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e 
(image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 20 04:48:17 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully. 
Feb 20 04:48:20 localhost sshd[305621]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:48:21 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 20 04:48:21 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:48:22 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:48:22 localhost ceph-mon[292786]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 04:48:22 localhost ceph-mon[292786]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' Feb 20 04:48:23 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "mgr fail"} v 0) Feb 20 04:48:23 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='client.? 172.18.0.200:0/2835510203' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Feb 20 04:48:23 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e90 do_prune osdmap full prune enabled Feb 20 04:48:23 localhost ceph-mon[292786]: log_channel(cluster) log [INF] : Activating manager daemon np0005625202.arwxwo Feb 20 04:48:23 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e91 e91: 6 total, 6 up, 6 in Feb 20 04:48:23 localhost ceph-mgr[286565]: mgr handle_mgr_map Activating! Feb 20 04:48:23 localhost ceph-mgr[286565]: mgr handle_mgr_map I am now activating Feb 20 04:48:23 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e91: 6 total, 6 up, 6 in Feb 20 04:48:23 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='client.? 
172.18.0.200:0/2835510203' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Feb 20 04:48:23 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : mgrmap e43: np0005625202.arwxwo(active, starting, since 0.0325637s), standbys: np0005625203.lonygy Feb 20 04:48:23 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625202"} v 0) Feb 20 04:48:23 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625202"} : dispatch Feb 20 04:48:23 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625203"} v 0) Feb 20 04:48:23 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch Feb 20 04:48:23 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625204"} v 0) Feb 20 04:48:23 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625204"} : dispatch Feb 20 04:48:23 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005625202.akhmop"} v 0) Feb 20 04:48:23 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mds metadata", "who": "mds.np0005625202.akhmop"} : dispatch Feb 20 04:48:23 localhost ceph-mon[292786]: mon.np0005625202@0(leader).mds e17 all = 0 Feb 20 04:48:23 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "mds metadata", "who": 
"mds.np0005625204.wnsphl"} v 0) Feb 20 04:48:23 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mds metadata", "who": "mds.np0005625204.wnsphl"} : dispatch Feb 20 04:48:23 localhost ceph-mon[292786]: mon.np0005625202@0(leader).mds e17 all = 0 Feb 20 04:48:23 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005625203.zsrwgk"} v 0) Feb 20 04:48:23 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mds metadata", "who": "mds.np0005625203.zsrwgk"} : dispatch Feb 20 04:48:23 localhost ceph-mon[292786]: mon.np0005625202@0(leader).mds e17 all = 0 Feb 20 04:48:23 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005625202.arwxwo", "id": "np0005625202.arwxwo"} v 0) Feb 20 04:48:23 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mgr metadata", "who": "np0005625202.arwxwo", "id": "np0005625202.arwxwo"} : dispatch Feb 20 04:48:23 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005625203.lonygy", "id": "np0005625203.lonygy"} v 0) Feb 20 04:48:23 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mgr metadata", "who": "np0005625203.lonygy", "id": "np0005625203.lonygy"} : dispatch Feb 20 04:48:23 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) Feb 20 04:48:23 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.44375 172.18.0.106:0/1082098019' 
entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd metadata", "id": 0} : dispatch Feb 20 04:48:23 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) Feb 20 04:48:23 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd metadata", "id": 1} : dispatch Feb 20 04:48:23 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) Feb 20 04:48:23 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd metadata", "id": 2} : dispatch Feb 20 04:48:23 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "osd metadata", "id": 3} v 0) Feb 20 04:48:23 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd metadata", "id": 3} : dispatch Feb 20 04:48:23 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "osd metadata", "id": 4} v 0) Feb 20 04:48:23 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd metadata", "id": 4} : dispatch Feb 20 04:48:23 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "osd metadata", "id": 5} v 0) Feb 20 04:48:23 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd metadata", "id": 5} : dispatch Feb 20 04:48:23 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "mds metadata"} v 0) Feb 20 04:48:23 localhost ceph-mon[292786]: 
log_channel(audit) log [DBG] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mds metadata"} : dispatch Feb 20 04:48:23 localhost ceph-mon[292786]: mon.np0005625202@0(leader).mds e17 all = 1 Feb 20 04:48:23 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "osd metadata"} v 0) Feb 20 04:48:23 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd metadata"} : dispatch Feb 20 04:48:23 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "mon metadata"} v 0) Feb 20 04:48:23 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata"} : dispatch Feb 20 04:48:23 localhost ceph-mgr[286565]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 20 04:48:23 localhost ceph-mgr[286565]: mgr load Constructed class from module: balancer Feb 20 04:48:23 localhost ceph-mgr[286565]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 20 04:48:23 localhost ceph-mon[292786]: log_channel(cluster) log [INF] : Manager daemon np0005625202.arwxwo is now available Feb 20 04:48:23 localhost ceph-mgr[286565]: [balancer INFO root] Starting Feb 20 04:48:23 localhost ceph-mgr[286565]: [balancer INFO root] Optimize plan auto_2026-02-20_09:48:23 Feb 20 04:48:23 localhost ceph-mgr[286565]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Feb 20 04:48:23 localhost ceph-mgr[286565]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later Feb 20 04:48:23 localhost systemd[1]: session-71.scope: Deactivated successfully. Feb 20 04:48:23 localhost systemd[1]: session-71.scope: Consumed 8.780s CPU time. Feb 20 04:48:23 localhost systemd-logind[760]: Session 71 logged out. Waiting for processes to exit. 
Feb 20 04:48:23 localhost ceph-mgr[286565]: mgr load Constructed class from module: cephadm Feb 20 04:48:23 localhost ceph-mgr[286565]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 20 04:48:23 localhost systemd-logind[760]: Removed session 71. Feb 20 04:48:23 localhost ceph-mgr[286565]: mgr load Constructed class from module: crash Feb 20 04:48:23 localhost ceph-mgr[286565]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 20 04:48:23 localhost ceph-mgr[286565]: mgr load Constructed class from module: devicehealth Feb 20 04:48:23 localhost ceph-mgr[286565]: [devicehealth INFO root] Starting Feb 20 04:48:23 localhost ceph-mgr[286565]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 20 04:48:23 localhost ceph-mgr[286565]: mgr load Constructed class from module: iostat Feb 20 04:48:23 localhost ceph-mgr[286565]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 20 04:48:23 localhost ceph-mgr[286565]: mgr load Constructed class from module: nfs Feb 20 04:48:23 localhost ceph-mgr[286565]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 20 04:48:23 localhost ceph-mgr[286565]: mgr load Constructed class from module: orchestrator Feb 20 04:48:23 localhost ceph-mgr[286565]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 20 04:48:23 localhost ceph-mgr[286565]: mgr load Constructed class from module: pg_autoscaler Feb 20 04:48:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] _maybe_adjust Feb 20 04:48:23 localhost ceph-mgr[286565]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 20 04:48:23 localhost ceph-mgr[286565]: mgr load Constructed class from module: progress Feb 20 04:48:23 localhost ceph-mgr[286565]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 20 04:48:23 localhost ceph-mgr[286565]: [progress INFO root] Loading... 
Feb 20 04:48:23 localhost ceph-mgr[286565]: [progress INFO root] Loaded [, , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ] historic events Feb 20 04:48:23 localhost ceph-mgr[286565]: [progress INFO root] Loaded OSDMap, ready. Feb 20 04:48:23 localhost ceph-mgr[286565]: [rbd_support INFO root] recovery thread starting Feb 20 04:48:23 localhost ceph-mgr[286565]: [rbd_support INFO root] starting setup Feb 20 04:48:23 localhost ceph-mgr[286565]: mgr load Constructed class from module: rbd_support Feb 20 04:48:23 localhost ceph-mgr[286565]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 20 04:48:23 localhost ceph-mgr[286565]: mgr load Constructed class from module: restful Feb 20 04:48:23 localhost ceph-mgr[286565]: [restful INFO root] server_addr: :: server_port: 8003 Feb 20 04:48:23 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625202.arwxwo/mirror_snapshot_schedule"} v 0) Feb 20 04:48:23 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625202.arwxwo/mirror_snapshot_schedule"} : dispatch Feb 20 04:48:23 localhost ceph-mgr[286565]: [restful WARNING root] server not running: no certificate configured Feb 20 04:48:23 localhost ceph-mgr[286565]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 20 04:48:23 localhost ceph-mgr[286565]: mgr load Constructed class from module: status Feb 20 04:48:23 localhost ceph-mgr[286565]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Feb 20 04:48:23 localhost ceph-mgr[286565]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 20 04:48:23 localhost ceph-mon[292786]: from='client.? 
172.18.0.200:0/2835510203' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Feb 20 04:48:23 localhost ceph-mgr[286565]: mgr load Constructed class from module: telemetry Feb 20 04:48:23 localhost ceph-mon[292786]: Activating manager daemon np0005625202.arwxwo Feb 20 04:48:23 localhost ceph-mon[292786]: from='client.? 172.18.0.200:0/2835510203' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Feb 20 04:48:23 localhost ceph-mon[292786]: Manager daemon np0005625202.arwxwo is now available Feb 20 04:48:23 localhost ceph-mgr[286565]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5) Feb 20 04:48:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 20 04:48:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 20 04:48:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: images, start_after= Feb 20 04:48:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 20 04:48:23 localhost ceph-mgr[286565]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting Feb 20 04:48:23 localhost ceph-mgr[286565]: [rbd_support INFO root] PerfHandler: starting Feb 20 04:48:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_task_task: vms, start_after= Feb 20 04:48:23 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 04:48:23 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 04:48:23 localhost ceph-mgr[286565]: mgr load Constructed class from module: volumes Feb 20 04:48:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_task_task: volumes, start_after= Feb 20 04:48:23 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists Feb 20 04:48:23 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists Feb 20 
04:48:23 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists Feb 20 04:48:23 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists Feb 20 04:48:23 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists Feb 20 04:48:23 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:48:23.488+0000 7f74594e2640 -1 client.0 error registering admin socket command: (17) File exists Feb 20 04:48:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_task_task: images, start_after= Feb 20 04:48:23 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:48:23.488+0000 7f74594e2640 -1 client.0 error registering admin socket command: (17) File exists Feb 20 04:48:23 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:48:23.488+0000 7f74594e2640 -1 client.0 error registering admin socket command: (17) File exists Feb 20 04:48:23 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:48:23.488+0000 7f74594e2640 -1 client.0 error registering admin socket command: (17) File exists Feb 20 04:48:23 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:48:23.488+0000 7f74594e2640 -1 client.0 error registering admin socket command: (17) File exists Feb 20 04:48:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_task_task: backups, start_after= Feb 20 04:48:23 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists Feb 20 04:48:23 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists Feb 20 04:48:23 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists Feb 20 04:48:23 localhost ceph-mgr[286565]: client.0 error registering 
admin socket command: (17) File exists Feb 20 04:48:23 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists Feb 20 04:48:23 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:48:23.491+0000 7f7455cdb640 -1 client.0 error registering admin socket command: (17) File exists Feb 20 04:48:23 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:48:23.491+0000 7f7455cdb640 -1 client.0 error registering admin socket command: (17) File exists Feb 20 04:48:23 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:48:23.491+0000 7f7455cdb640 -1 client.0 error registering admin socket command: (17) File exists Feb 20 04:48:23 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:48:23.491+0000 7f7455cdb640 -1 client.0 error registering admin socket command: (17) File exists Feb 20 04:48:23 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:48:23.491+0000 7f7455cdb640 -1 client.0 error registering admin socket command: (17) File exists Feb 20 04:48:23 localhost ceph-mgr[286565]: [rbd_support INFO root] TaskHandler: starting Feb 20 04:48:23 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625202.arwxwo/trash_purge_schedule"} v 0) Feb 20 04:48:23 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625202.arwxwo/trash_purge_schedule"} : dispatch Feb 20 04:48:23 localhost ceph-mgr[286565]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Feb 20 04:48:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 20 
04:48:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 20 04:48:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: images, start_after= Feb 20 04:48:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 20 04:48:23 localhost ceph-mgr[286565]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting Feb 20 04:48:23 localhost ceph-mgr[286565]: [rbd_support INFO root] setup complete Feb 20 04:48:23 localhost sshd[305846]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:48:23 localhost systemd-logind[760]: New session 72 of user ceph-admin. Feb 20 04:48:23 localhost systemd[1]: Started Session 72 of User ceph-admin. Feb 20 04:48:24 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : mgrmap e44: np0005625202.arwxwo(active, since 1.04816s), standbys: np0005625203.lonygy Feb 20 04:48:24 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v3: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail Feb 20 04:48:24 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625202.arwxwo/mirror_snapshot_schedule"} : dispatch Feb 20 04:48:24 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625202.arwxwo/trash_purge_schedule"} : dispatch Feb 20 04:48:24 localhost podman[305959]: 2026-02-20 09:48:24.715029777 +0000 UTC m=+0.089312027 container exec 17bcd3d9a6436ea04160ed4c4e04d839e51c8678402dc4e38301f79a98b3dc30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202, release=1770267347, version=7, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 
in a fully featured and supported base image., vendor=Red Hat, Inc., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, GIT_CLEAN=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z, architecture=x86_64, CEPH_POINT_RELEASE=, io.openshift.expose-services=, RELEASE=main, name=rhceph, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.42.2, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux ) Feb 20 04:48:24 localhost podman[305959]: 2026-02-20 09:48:24.824032819 +0000 UTC m=+0.198315069 container exec_died 17bcd3d9a6436ea04160ed4c4e04d839e51c8678402dc4e38301f79a98b3dc30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.42.2, version=7, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, maintainer=Guillaume Abrioux , release=1770267347, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, name=rhceph, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, GIT_BRANCH=main, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z, RELEASE=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, distribution-scope=public) Feb 20 04:48:24 localhost ceph-mgr[286565]: [cephadm INFO cherrypy.error] [20/Feb/2026:09:48:24] ENGINE Bus STARTING Feb 20 04:48:24 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : [20/Feb/2026:09:48:24] ENGINE Bus STARTING Feb 20 04:48:25 localhost ceph-mgr[286565]: [cephadm INFO cherrypy.error] [20/Feb/2026:09:48:25] ENGINE Serving on http://172.18.0.106:8765 Feb 20 04:48:25 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : [20/Feb/2026:09:48:25] ENGINE Serving on http://172.18.0.106:8765 Feb 20 04:48:25 localhost ceph-mgr[286565]: [cephadm INFO cherrypy.error] [20/Feb/2026:09:48:25] ENGINE Serving on https://172.18.0.106:7150 Feb 20 04:48:25 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : [20/Feb/2026:09:48:25] ENGINE Serving on https://172.18.0.106:7150 Feb 20 04:48:25 localhost ceph-mgr[286565]: [cephadm INFO cherrypy.error] [20/Feb/2026:09:48:25] ENGINE Client ('172.18.0.106', 54518) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Feb 20 04:48:25 localhost ceph-mgr[286565]: [cephadm INFO cherrypy.error] [20/Feb/2026:09:48:25] ENGINE Bus STARTED Feb 20 04:48:25 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : [20/Feb/2026:09:48:25] ENGINE Client ('172.18.0.106', 54518) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Feb 20 04:48:25 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : [20/Feb/2026:09:48:25] 
ENGINE Bus STARTED Feb 20 04:48:25 localhost ceph-mon[292786]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_STRAY_DAEMON (was: 1 stray daemon(s) not managed by cephadm) Feb 20 04:48:25 localhost ceph-mon[292786]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_STRAY_HOST (was: 1 stray host(s) with 1 daemon(s) not managed by cephadm) Feb 20 04:48:25 localhost ceph-mon[292786]: log_channel(cluster) log [INF] : Cluster is now healthy Feb 20 04:48:25 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v4: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail Feb 20 04:48:25 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0) Feb 20 04:48:25 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0) Feb 20 04:48:25 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:48:25 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0) Feb 20 04:48:26 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:48:26 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0) Feb 20 04:48:26 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:48:26 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:48:26 
localhost ceph-mgr[286565]: [devicehealth INFO root] Check health Feb 20 04:48:26 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0) Feb 20 04:48:26 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:48:26 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0) Feb 20 04:48:26 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:48:26 localhost ceph-mon[292786]: [20/Feb/2026:09:48:24] ENGINE Bus STARTING Feb 20 04:48:26 localhost ceph-mon[292786]: [20/Feb/2026:09:48:25] ENGINE Serving on http://172.18.0.106:8765 Feb 20 04:48:26 localhost ceph-mon[292786]: [20/Feb/2026:09:48:25] ENGINE Serving on https://172.18.0.106:7150 Feb 20 04:48:26 localhost ceph-mon[292786]: [20/Feb/2026:09:48:25] ENGINE Client ('172.18.0.106', 54518) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Feb 20 04:48:26 localhost ceph-mon[292786]: [20/Feb/2026:09:48:25] ENGINE Bus STARTED Feb 20 04:48:26 localhost ceph-mon[292786]: Health check cleared: CEPHADM_STRAY_DAEMON (was: 1 stray daemon(s) not managed by cephadm) Feb 20 04:48:26 localhost ceph-mon[292786]: Health check cleared: CEPHADM_STRAY_HOST (was: 1 stray host(s) with 1 daemon(s) not managed by cephadm) Feb 20 04:48:26 localhost ceph-mon[292786]: Cluster is now healthy Feb 20 04:48:26 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:48:26 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:48:26 localhost ceph-mon[292786]: 
from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:48:26 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:48:26 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:48:26 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:48:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c. Feb 20 04:48:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217. Feb 20 04:48:27 localhost systemd[1]: tmp-crun.Cb3VW4.mount: Deactivated successfully. Feb 20 04:48:27 localhost podman[306199]: 2026-02-20 09:48:27.251055962 +0000 UTC m=+0.090714735 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, io.openshift.tags=minimal rhel9, build-date=2026-02-05T04:57:10Z, release=1770267347, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 
'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, name=ubi9/ubi-minimal, vcs-type=git, version=9.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, distribution-scope=public, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) 
Feb 20 04:48:27 localhost podman[306199]: 2026-02-20 09:48:27.292992202 +0000 UTC m=+0.132651055 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, version=9.7, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, managed_by=edpm_ansible, org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, release=1770267347, vendor=Red Hat, Inc., architecture=x86_64, build-date=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, config_id=openstack_network_exporter, container_name=openstack_network_exporter, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Feb 20 04:48:27 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. 
Feb 20 04:48:27 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v5: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail Feb 20 04:48:27 localhost podman[306200]: 2026-02-20 09:48:27.298845648 +0000 UTC m=+0.138124221 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, 
org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, tcib_managed=true) Feb 20 04:48:27 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:48:27 localhost podman[306200]: 2026-02-20 09:48:27.378487417 +0000 UTC m=+0.217765920 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_id=ceilometer_agent_compute) Feb 20 04:48:27 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully. Feb 20 04:48:27 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0) Feb 20 04:48:27 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:48:27 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0) Feb 20 04:48:27 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:48:27 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) Feb 20 04:48:27 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Feb 20 04:48:27 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} v 0) Feb 20 04:48:27 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.3", 
"name": "osd_memory_target"} : dispatch Feb 20 04:48:27 localhost ceph-mgr[286565]: [cephadm INFO root] Adjusting osd_memory_target on np0005625204.localdomain to 836.6M Feb 20 04:48:27 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005625204.localdomain to 836.6M Feb 20 04:48:27 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Feb 20 04:48:27 localhost ceph-mgr[286565]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005625204.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 20 04:48:27 localhost ceph-mgr[286565]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005625204.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 20 04:48:27 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0) Feb 20 04:48:27 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0) Feb 20 04:48:27 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:48:27 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0) Feb 20 04:48:27 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:48:27 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0) Feb 20 04:48:27 localhost ceph-mon[292786]: 
log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:48:27 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) Feb 20 04:48:27 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Feb 20 04:48:27 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:48:27 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} v 0) Feb 20 04:48:27 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Feb 20 04:48:27 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) Feb 20 04:48:27 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Feb 20 04:48:27 localhost ceph-mgr[286565]: [cephadm INFO root] Adjusting osd_memory_target on np0005625202.localdomain to 836.6M Feb 20 04:48:27 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005625202.localdomain to 836.6M Feb 20 04:48:27 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Feb 20 04:48:27 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 
handle_command mon_command({"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} v 0) Feb 20 04:48:27 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Feb 20 04:48:27 localhost ceph-mgr[286565]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005625202.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 20 04:48:27 localhost ceph-mgr[286565]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005625202.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 20 04:48:27 localhost ceph-mgr[286565]: [cephadm INFO root] Adjusting osd_memory_target on np0005625203.localdomain to 836.6M Feb 20 04:48:27 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005625203.localdomain to 836.6M Feb 20 04:48:27 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Feb 20 04:48:27 localhost ceph-mgr[286565]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005625203.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 20 04:48:27 localhost ceph-mgr[286565]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005625203.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 20 04:48:27 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:48:27 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 
04:48:27 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 20 04:48:27 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 04:48:27 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Updating np0005625202.localdomain:/etc/ceph/ceph.conf Feb 20 04:48:27 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Updating np0005625202.localdomain:/etc/ceph/ceph.conf Feb 20 04:48:27 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Updating np0005625203.localdomain:/etc/ceph/ceph.conf Feb 20 04:48:27 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Updating np0005625203.localdomain:/etc/ceph/ceph.conf Feb 20 04:48:27 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Updating np0005625204.localdomain:/etc/ceph/ceph.conf Feb 20 04:48:27 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Updating np0005625204.localdomain:/etc/ceph/ceph.conf Feb 20 04:48:28 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:48:28 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:48:28 localhost openstack_network_exporter[243776]: ERROR 09:48:28 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 20 04:48:28 localhost openstack_network_exporter[243776]: Feb 20 04:48:28 localhost openstack_network_exporter[243776]: ERROR 09:48:28 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 20 04:48:28 localhost openstack_network_exporter[243776]: Feb 20 04:48:28 localhost ceph-mgr[286565]: [cephadm INFO 
cephadm.serve] Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:48:28 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:48:28 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:48:28 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:48:28 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:48:28 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:48:28 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Feb 20 04:48:28 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Feb 20 04:48:28 localhost ceph-mon[292786]: Adjusting osd_memory_target on np0005625204.localdomain to 836.6M Feb 20 04:48:28 localhost ceph-mon[292786]: Unable to set osd_memory_target on np0005625204.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 20 04:48:28 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:48:28 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:48:28 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:48:28 localhost ceph-mon[292786]: 
from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Feb 20 04:48:28 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:48:28 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Feb 20 04:48:28 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Feb 20 04:48:28 localhost ceph-mon[292786]: Adjusting osd_memory_target on np0005625202.localdomain to 836.6M Feb 20 04:48:28 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Feb 20 04:48:28 localhost ceph-mon[292786]: Unable to set osd_memory_target on np0005625202.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 20 04:48:28 localhost ceph-mon[292786]: Adjusting osd_memory_target on np0005625203.localdomain to 836.6M Feb 20 04:48:28 localhost ceph-mon[292786]: Unable to set osd_memory_target on np0005625203.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 20 04:48:28 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 04:48:28 localhost ceph-mon[292786]: Updating np0005625202.localdomain:/etc/ceph/ceph.conf Feb 20 04:48:28 localhost ceph-mon[292786]: Updating np0005625203.localdomain:/etc/ceph/ceph.conf Feb 20 04:48:28 localhost ceph-mon[292786]: Updating np0005625204.localdomain:/etc/ceph/ceph.conf Feb 20 04:48:28 localhost 
ceph-mon[292786]: log_channel(cluster) log [DBG] : Standby manager daemon np0005625204.exgrzx started Feb 20 04:48:28 localhost ceph-mgr[286565]: mgr.server handle_open ignoring open from mgr.np0005625204.exgrzx 172.18.0.108:0/3524806190; not ready for session (expect reconnect) Feb 20 04:48:28 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : mgrmap e45: np0005625202.arwxwo(active, since 5s), standbys: np0005625203.lonygy, np0005625204.exgrzx Feb 20 04:48:28 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005625204.exgrzx", "id": "np0005625204.exgrzx"} v 0) Feb 20 04:48:28 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mgr metadata", "who": "np0005625204.exgrzx", "id": "np0005625204.exgrzx"} : dispatch Feb 20 04:48:28 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Updating np0005625202.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 20 04:48:28 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Updating np0005625202.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 20 04:48:29 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Updating np0005625203.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 20 04:48:29 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Updating np0005625203.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 20 04:48:29 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Updating np0005625204.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 20 04:48:29 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Updating np0005625204.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 20 04:48:29 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v6: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail Feb 20 04:48:29 localhost ceph-mgr[286565]: [cephadm INFO 
cephadm.serve] Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring Feb 20 04:48:29 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring Feb 20 04:48:29 localhost ceph-mon[292786]: Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:48:29 localhost ceph-mon[292786]: Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:48:29 localhost ceph-mon[292786]: Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf Feb 20 04:48:29 localhost ceph-mon[292786]: Updating np0005625202.localdomain:/etc/ceph/ceph.client.admin.keyring Feb 20 04:48:29 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring Feb 20 04:48:29 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring Feb 20 04:48:29 localhost ceph-mgr[286565]: [cephadm INFO cephadm.serve] Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring Feb 20 04:48:29 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring Feb 20 04:48:30 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0) Feb 20 04:48:30 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 
04:48:30 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0)
Feb 20 04:48:30 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo'
Feb 20 04:48:30 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0)
Feb 20 04:48:30 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo'
Feb 20 04:48:30 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0)
Feb 20 04:48:30 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo'
Feb 20 04:48:30 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0)
Feb 20 04:48:30 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo'
Feb 20 04:48:30 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0)
Feb 20 04:48:30 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo'
Feb 20 04:48:30 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 20 04:48:30 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo'
Feb 20 04:48:30 localhost ceph-mgr[286565]: [progress INFO root] update: starting ev 84d27699-fe31-486b-9376-f8c05b078986 (Updating node-proxy deployment (+3 -> 3))
Feb 20 04:48:30 localhost ceph-mgr[286565]: [progress INFO root] complete: finished ev 84d27699-fe31-486b-9376-f8c05b078986 (Updating node-proxy deployment (+3 -> 3))
Feb 20 04:48:30 localhost ceph-mgr[286565]: [progress INFO root] Completed event 84d27699-fe31-486b-9376-f8c05b078986 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Feb 20 04:48:30 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 20 04:48:30 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 04:48:30 localhost ceph-mon[292786]: Updating np0005625203.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 04:48:30 localhost ceph-mon[292786]: Updating np0005625204.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 04:48:30 localhost ceph-mon[292786]: Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 04:48:30 localhost ceph-mon[292786]: Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 04:48:30 localhost ceph-mon[292786]: Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 04:48:30 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo'
Feb 20 04:48:30 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo'
Feb 20 04:48:30 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo'
Feb 20 04:48:30 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo'
Feb 20 04:48:30 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo'
Feb 20 04:48:30 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo'
Feb 20 04:48:30 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo'
Feb 20 04:48:30 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 04:48:30 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 04:48:30 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 20 04:48:30 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 04:48:30 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 20 04:48:30 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo'
Feb 20 04:48:30 localhost ceph-mgr[286565]: [progress INFO root] update: starting ev af52a30e-d158-40d8-86a7-3bd21a87647a (Updating node-proxy deployment (+3 -> 3))
Feb 20 04:48:30 localhost ceph-mgr[286565]: [progress INFO root] complete: finished ev af52a30e-d158-40d8-86a7-3bd21a87647a (Updating node-proxy deployment (+3 -> 3))
Feb 20 04:48:30 localhost ceph-mgr[286565]: [progress INFO root] Completed event af52a30e-d158-40d8-86a7-3bd21a87647a (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Feb 20 04:48:30 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 20 04:48:30 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 04:48:31 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v7: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail; 30 KiB/s rd, 0 B/s wr, 16 op/s
Feb 20 04:48:31 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 04:48:31 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo'
Feb 20 04:48:31 localhost ceph-mon[292786]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #37. Immutable memtables: 0.
Feb 20 04:48:31 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:48:31.930616) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 20 04:48:31 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 37
Feb 20 04:48:31 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580911930682, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 2468, "num_deletes": 256, "total_data_size": 5781578, "memory_usage": 6133328, "flush_reason": "Manual Compaction"}
Feb 20 04:48:31 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #38: started
Feb 20 04:48:31 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580911953992, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 38, "file_size": 5220534, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20302, "largest_seqno": 22769, "table_properties": {"data_size": 5209498, "index_size": 6901, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3077, "raw_key_size": 27059, "raw_average_key_size": 22, "raw_value_size": 5185912, "raw_average_value_size": 4264, "num_data_blocks": 299, "num_entries": 1216, "num_filter_entries": 1216, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580813, "oldest_key_time": 1771580813, "file_creation_time": 1771580911, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a5b0c71e-1a28-4ac7-8b68-08edb74002f2", "db_session_id": "54EDA52XUT1SDV7DF7Y7", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Feb 20 04:48:31 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 23473 microseconds, and 12705 cpu microseconds.
Feb 20 04:48:31 localhost ceph-mon[292786]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 20 04:48:31 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:48:31.954084) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #38: 5220534 bytes OK
Feb 20 04:48:31 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:48:31.954115) [db/memtable_list.cc:519] [default] Level-0 commit table #38 started
Feb 20 04:48:31 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:48:31.956083) [db/memtable_list.cc:722] [default] Level-0 commit table #38: memtable #1 done
Feb 20 04:48:31 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:48:31.956109) EVENT_LOG_v1 {"time_micros": 1771580911956101, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 20 04:48:31 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:48:31.956135) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 20 04:48:31 localhost ceph-mon[292786]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 5770322, prev total WAL file size 5770603, number of live WAL files 2.
Feb 20 04:48:31 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000034.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 04:48:31 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:48:31.957854) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131303434' seq:72057594037927935, type:22 .. '7061786F73003131323936' seq:0, type:0; will stop at (end)
Feb 20 04:48:31 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 20 04:48:31 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [38(5098KB)], [36(15MB)]
Feb 20 04:48:31 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580911957923, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [38], "files_L6": [36], "score": -1, "input_data_size": 21589805, "oldest_snapshot_seqno": -1}
Feb 20 04:48:32 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #39: 11900 keys, 18468430 bytes, temperature: kUnknown
Feb 20 04:48:32 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580912040805, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 39, "file_size": 18468430, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18399698, "index_size": 37911, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29765, "raw_key_size": 319025, "raw_average_key_size": 26, "raw_value_size": 18196165, "raw_average_value_size": 1529, "num_data_blocks": 1448, "num_entries": 11900, "num_filter_entries": 11900, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580572, "oldest_key_time": 0, "file_creation_time": 1771580911, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a5b0c71e-1a28-4ac7-8b68-08edb74002f2", "db_session_id": "54EDA52XUT1SDV7DF7Y7", "orig_file_number": 39, "seqno_to_time_mapping": "N/A"}}
Feb 20 04:48:32 localhost ceph-mon[292786]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 20 04:48:32 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:48:32.041203) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 18468430 bytes
Feb 20 04:48:32 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:48:32.042988) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 260.0 rd, 222.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(5.0, 15.6 +0.0 blob) out(17.6 +0.0 blob), read-write-amplify(7.7) write-amplify(3.5) OK, records in: 12440, records dropped: 540 output_compression: NoCompression
Feb 20 04:48:32 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:48:32.043019) EVENT_LOG_v1 {"time_micros": 1771580912043005, "job": 20, "event": "compaction_finished", "compaction_time_micros": 83042, "compaction_time_cpu_micros": 50854, "output_level": 6, "num_output_files": 1, "total_output_size": 18468430, "num_input_records": 12440, "num_output_records": 11900, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 20 04:48:32 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 04:48:32 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580912044053, "job": 20, "event": "table_file_deletion", "file_number": 38}
Feb 20 04:48:32 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000036.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 04:48:32 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580912046329, "job": 20, "event": "table_file_deletion", "file_number": 36}
Feb 20 04:48:32 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:48:31.957737) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 04:48:32 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:48:32.046408) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 04:48:32 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:48:32.046413) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 04:48:32 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:48:32.046416) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 04:48:32 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:48:32.046419) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 04:48:32 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:48:32.046422) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 04:48:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.
Feb 20 04:48:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.
Feb 20 04:48:32 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 04:48:32 localhost podman[306951]: 2026-02-20 09:48:32.457736799 +0000 UTC m=+0.085428343 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Feb 20 04:48:32 localhost podman[306952]: 2026-02-20 09:48:32.508146446 +0000 UTC m=+0.135221273 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible)
Feb 20 04:48:32 localhost podman[306951]: 2026-02-20 09:48:32.518261857 +0000 UTC m=+0.145953381 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 04:48:32 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully.
Feb 20 04:48:32 localhost podman[306952]: 2026-02-20 09:48:32.537659304 +0000 UTC m=+0.164734081 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Feb 20 04:48:32 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully.
Feb 20 04:48:33 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v8: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail; 23 KiB/s rd, 0 B/s wr, 12 op/s
Feb 20 04:48:33 localhost ceph-mgr[286565]: [progress INFO root] Writing back 50 completed events
Feb 20 04:48:33 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb 20 04:48:33 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo'
Feb 20 04:48:34 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo'
Feb 20 04:48:35 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v9: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s
Feb 20 04:48:37 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v10: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s
Feb 20 04:48:37 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 04:48:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.
Feb 20 04:48:38 localhost podman[306994]: 2026-02-20 09:48:38.455692416 +0000 UTC m=+0.093245282 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 20 04:48:38 localhost podman[306994]: 2026-02-20 09:48:38.489681394 +0000 UTC m=+0.127234240 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 20 04:48:38 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully.
Feb 20 04:48:39 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v11: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s
Feb 20 04:48:41 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v12: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s
Feb 20 04:48:42 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 04:48:43 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v13: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 04:48:45 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v14: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 04:48:46 localhost podman[241347]: time="2026-02-20T09:48:46Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 04:48:46 localhost podman[241347]: @ - - [20/Feb/2026:09:48:46 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157716 "" "Go-http-client/1.1"
Feb 20 04:48:46 localhost podman[241347]: @ - - [20/Feb/2026:09:48:46 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18761 "" "Go-http-client/1.1"
Feb 20 04:48:47 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v15: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 04:48:47 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 04:48:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.
Feb 20 04:48:48 localhost systemd[293679]: Created slice User Background Tasks Slice.
Feb 20 04:48:48 localhost systemd[293679]: Starting Cleanup of User's Temporary Files and Directories...
Feb 20 04:48:48 localhost podman[307017]: 2026-02-20 09:48:48.444227711 +0000 UTC m=+0.082647869 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 20 04:48:48 localhost systemd[293679]: Finished Cleanup of User's Temporary Files and Directories.
Feb 20 04:48:48 localhost podman[307017]: 2026-02-20 09:48:48.48384698 +0000 UTC m=+0.122267138 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 20 04:48:48 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully.
Feb 20 04:48:49 localhost sshd[307041]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:48:49 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v16: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 04:48:51 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v17: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 04:48:52 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 04:48:52 localhost nova_compute[280804]: 2026-02-20 09:48:52.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 20 04:48:52 localhost nova_compute[280804]: 2026-02-20 09:48:52.511 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 20 04:48:52 localhost nova_compute[280804]: 2026-02-20 09:48:52.528 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 20 04:48:52 localhost nova_compute[280804]: 2026-02-20 09:48:52.529 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 20 04:48:52 localhost nova_compute[280804]: 2026-02-20 09:48:52.529 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:48:52 localhost nova_compute[280804]: 2026-02-20 09:48:52.529 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Auditing locally available compute resources for np0005625202.localdomain (node: np0005625202.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 20 04:48:52 localhost nova_compute[280804]: 2026-02-20 09:48:52.530 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:48:52 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 20 04:48:52 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/3903298265' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 20 04:48:52 localhost nova_compute[280804]: 2026-02-20 09:48:52.954 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:48:53 localhost nova_compute[280804]: 2026-02-20 09:48:53.096 280808 WARNING nova.virt.libvirt.driver [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 20 04:48:53 localhost nova_compute[280804]: 2026-02-20 09:48:53.097 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Hypervisor/Node resource view: name=np0005625202.localdomain free_ram=11938MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": 
"1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 20 04:48:53 localhost nova_compute[280804]: 2026-02-20 09:48:53.098 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:48:53 localhost nova_compute[280804]: 2026-02-20 09:48:53.098 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:48:53 localhost nova_compute[280804]: 2026-02-20 09:48:53.160 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - 
-] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 20 04:48:53 localhost nova_compute[280804]: 2026-02-20 09:48:53.160 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Final resource view: name=np0005625202.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 20 04:48:53 localhost nova_compute[280804]: 2026-02-20 09:48:53.176 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:48:53 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v18: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail Feb 20 04:48:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 04:48:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:48:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 04:48:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:48:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 04:48:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:48:53 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 20 04:48:53 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/1696424530' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 20 04:48:53 localhost nova_compute[280804]: 2026-02-20 09:48:53.602 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:48:53 localhost nova_compute[280804]: 2026-02-20 09:48:53.610 280808 DEBUG nova.compute.provider_tree [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 20 04:48:53 localhost nova_compute[280804]: 2026-02-20 09:48:53.626 280808 DEBUG nova.scheduler.client.report [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Inventory has not changed for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 20 04:48:53 localhost nova_compute[280804]: 2026-02-20 09:48:53.629 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Compute_service record updated for np0005625202.localdomain:np0005625202.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 20 04:48:53 localhost nova_compute[280804]: 2026-02-20 09:48:53.629 280808 DEBUG 
oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.531s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:48:55 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v19: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail Feb 20 04:48:55 localhost nova_compute[280804]: 2026-02-20 09:48:55.629 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:48:55 localhost nova_compute[280804]: 2026-02-20 09:48:55.630 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 20 04:48:55 localhost nova_compute[280804]: 2026-02-20 09:48:55.630 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 20 04:48:55 localhost nova_compute[280804]: 2026-02-20 09:48:55.651 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 20 04:48:55 localhost nova_compute[280804]: 2026-02-20 09:48:55.652 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:48:55 localhost nova_compute[280804]: 2026-02-20 09:48:55.653 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:48:55 localhost nova_compute[280804]: 2026-02-20 09:48:55.653 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:48:55 localhost nova_compute[280804]: 2026-02-20 09:48:55.653 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 20 04:48:56 localhost nova_compute[280804]: 2026-02-20 09:48:56.512 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:48:56 localhost nova_compute[280804]: 2026-02-20 09:48:56.513 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:48:57 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v20: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail Feb 20 04:48:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c. Feb 20 04:48:57 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:48:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217. 
Feb 20 04:48:57 localhost podman[307087]: 2026-02-20 09:48:57.459632224 +0000 UTC m=+0.100989359 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, org.opencontainers.image.created=2026-02-05T04:57:10Z, build-date=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9/ubi-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, container_name=openstack_network_exporter, 
release=1770267347, version=9.7, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc.) Feb 20 04:48:57 localhost podman[307087]: 2026-02-20 09:48:57.498309548 +0000 UTC m=+0.139666673 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.buildah.version=1.33.7, distribution-scope=public, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=openstack_network_exporter, vcs-type=git, name=ubi9/ubi-minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 
'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7, release=1770267347, vendor=Red Hat, Inc., build-date=2026-02-05T04:57:10Z, managed_by=edpm_ansible, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) 
Feb 20 04:48:57 localhost podman[307105]: 2026-02-20 09:48:57.53437667 +0000 UTC m=+0.069804035 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, managed_by=edpm_ansible, 
container_name=ceilometer_agent_compute) Feb 20 04:48:57 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. Feb 20 04:48:57 localhost podman[307105]: 2026-02-20 09:48:57.571898823 +0000 UTC m=+0.107326158 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:48:57 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully. Feb 20 04:48:58 localhost openstack_network_exporter[243776]: ERROR 09:48:58 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 20 04:48:58 localhost openstack_network_exporter[243776]: Feb 20 04:48:58 localhost openstack_network_exporter[243776]: ERROR 09:48:58 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 20 04:48:58 localhost openstack_network_exporter[243776]: Feb 20 04:48:59 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v21: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail Feb 20 04:48:59 localhost nova_compute[280804]: 2026-02-20 09:48:59.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:49:01 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v22: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail Feb 20 04:49:02 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:49:03 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v23: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail Feb 20 04:49:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. 
Feb 20 04:49:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. Feb 20 04:49:03 localhost systemd[1]: tmp-crun.cdZJgk.mount: Deactivated successfully. Feb 20 04:49:03 localhost systemd[1]: tmp-crun.NMokyU.mount: Deactivated successfully. Feb 20 04:49:03 localhost podman[307127]: 2026-02-20 09:49:03.458926588 +0000 UTC m=+0.094501616 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, 
org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2) Feb 20 04:49:03 localhost podman[307126]: 2026-02-20 09:49:03.434193087 +0000 UTC m=+0.076412843 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller) Feb 20 04:49:03 localhost podman[307127]: 2026-02-20 09:49:03.48969262 +0000 UTC m=+0.125267658 container exec_died 
eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:49:03 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. 
Feb 20 04:49:03 localhost podman[307126]: 2026-02-20 09:49:03.513108566 +0000 UTC m=+0.155328312 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3) Feb 20 04:49:03 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. 
Feb 20 04:49:05 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v24: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail Feb 20 04:49:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:49:05.916 161766 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:49:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:49:05.916 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:49:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:49:05.917 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:49:07 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v25: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail Feb 20 04:49:07 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:49:08 localhost sshd[307170]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:49:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1. 
Feb 20 04:49:08 localhost podman[307172]: 2026-02-20 09:49:08.766564663 +0000 UTC m=+0.082558017 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 20 04:49:08 localhost podman[307172]: 2026-02-20 09:49:08.77922725 +0000 UTC m=+0.095220674 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 
'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 20 04:49:08 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully. Feb 20 04:49:09 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v26: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail Feb 20 04:49:11 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v27: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail Feb 20 04:49:12 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:49:13 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v28: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail Feb 20 04:49:15 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v29: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail Feb 20 04:49:16 localhost podman[241347]: time="2026-02-20T09:49:16Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 20 04:49:16 localhost podman[241347]: @ - - [20/Feb/2026:09:49:16 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157716 "" "Go-http-client/1.1" Feb 20 04:49:16 localhost podman[241347]: @ - - [20/Feb/2026:09:49:16 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18759 "" "Go-http-client/1.1" Feb 20 04:49:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:49:17.261 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:49:17 localhost 
ceilometer_agent_compute[236653]: 2026-02-20 09:49:17.262 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:49:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:49:17.262 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:49:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:49:17.263 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:49:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:49:17.263 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:49:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:49:17.263 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:49:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:49:17.263 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:49:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:49:17.264 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:49:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:49:17.264 12 DEBUG 
ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:49:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:49:17.264 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:49:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:49:17.264 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:49:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:49:17.265 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:49:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:49:17.265 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:49:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:49:17.265 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:49:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:49:17.265 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:49:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:49:17.265 12 DEBUG ceilometer.polling.manager [-] Skip pollster 
network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:49:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:49:17.266 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:49:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:49:17.266 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:49:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:49:17.266 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:49:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:49:17.266 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:49:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:49:17.267 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:49:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:49:17.267 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:49:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:49:17.267 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle 
poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:49:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:49:17.267 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:49:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:49:17.268 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:49:17 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v30: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail Feb 20 04:49:17 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:49:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e. 
Feb 20 04:49:19 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v31: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail Feb 20 04:49:19 localhost podman[307195]: 2026-02-20 09:49:19.443479148 +0000 UTC m=+0.085711120 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 20 04:49:19 localhost podman[307195]: 2026-02-20 09:49:19.455796278 +0000 UTC m=+0.098028240 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e 
(image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 20 04:49:19 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully. 
Feb 20 04:49:21 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v32: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail Feb 20 04:49:22 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:49:23 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v33: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail Feb 20 04:49:23 localhost ceph-mgr[286565]: [balancer INFO root] Optimize plan auto_2026-02-20_09:49:23 Feb 20 04:49:23 localhost ceph-mgr[286565]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Feb 20 04:49:23 localhost ceph-mgr[286565]: [balancer INFO root] do_upmap Feb 20 04:49:23 localhost ceph-mgr[286565]: [balancer INFO root] pools ['manila_metadata', 'manila_data', 'backups', '.mgr', 'volumes', 'vms', 'images'] Feb 20 04:49:23 localhost ceph-mgr[286565]: [balancer INFO root] prepared 0/10 changes Feb 20 04:49:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] _maybe_adjust Feb 20 04:49:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:49:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Feb 20 04:49:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:49:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003325274375348967 of space, bias 1.0, pg target 0.6650548750697934 quantized to 32 (current 32) Feb 20 04:49:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:49:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 
quantized to 32 (current 32) Feb 20 04:49:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:49:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014449417225013959 of space, bias 1.0, pg target 0.2885066972594454 quantized to 32 (current 32) Feb 20 04:49:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:49:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 20 04:49:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:49:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 20 04:49:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:49:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.0019596681323283084 quantized to 16 (current 16) Feb 20 04:49:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 04:49:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:49:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 04:49:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:49:23 localhost ceph-mgr[286565]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Feb 20 04:49:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 20 04:49:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 20 04:49:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:49:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 20 04:49:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: images, start_after= Feb 20 04:49:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 20 04:49:23 localhost ceph-mgr[286565]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Feb 20 04:49:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 20 04:49:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 20 04:49:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: images, start_after= Feb 20 04:49:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 20 04:49:24 localhost sshd[307218]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:49:26 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v34: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail Feb 20 04:49:27 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v35: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail Feb 20 04:49:27 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:49:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c. Feb 20 04:49:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217. Feb 20 04:49:27 localhost systemd[1]: tmp-crun.fpquqn.mount: Deactivated successfully. 
Feb 20 04:49:27 localhost podman[307221]: 2026-02-20 09:49:27.997527535 +0000 UTC m=+0.081233564 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, 
container_name=ceilometer_agent_compute) Feb 20 04:49:28 localhost podman[307221]: 2026-02-20 09:49:28.008808547 +0000 UTC m=+0.092514616 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Feb 20 04:49:28 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully. Feb 20 04:49:28 localhost systemd[1]: tmp-crun.Vyi7rA.mount: Deactivated successfully. Feb 20 04:49:28 localhost podman[307220]: 2026-02-20 09:49:28.101326862 +0000 UTC m=+0.189718747 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z, architecture=x86_64, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, release=1770267347, io.buildah.version=1.33.7, name=ubi9/ubi-minimal, distribution-scope=public, config_id=openstack_network_exporter, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, org.opencontainers.image.created=2026-02-05T04:57:10Z) Feb 20 04:49:28 localhost podman[307220]: 2026-02-20 09:49:28.116953029 +0000 UTC m=+0.205344954 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, build-date=2026-02-05T04:57:10Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-05T04:57:10Z, 
io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.buildah.version=1.33.7, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, managed_by=edpm_ansible, version=9.7, 
architecture=x86_64, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, name=ubi9/ubi-minimal, io.openshift.expose-services=, config_id=openstack_network_exporter, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container) Feb 20 04:49:28 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. Feb 20 04:49:28 localhost openstack_network_exporter[243776]: ERROR 09:49:28 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 20 04:49:28 localhost openstack_network_exporter[243776]: Feb 20 04:49:28 localhost openstack_network_exporter[243776]: ERROR 09:49:28 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 20 04:49:28 localhost openstack_network_exporter[243776]: Feb 20 04:49:29 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v36: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail Feb 20 04:49:31 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v37: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail Feb 20 04:49:32 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:49:32 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 04:49:32 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 20 04:49:32 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 04:49:32 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command 
mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 20 04:49:32 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:49:32 localhost ceph-mgr[286565]: [progress INFO root] update: starting ev 6e5df31e-c721-4d2f-b04a-659d3f8f9628 (Updating node-proxy deployment (+3 -> 3)) Feb 20 04:49:32 localhost ceph-mgr[286565]: [progress INFO root] complete: finished ev 6e5df31e-c721-4d2f-b04a-659d3f8f9628 (Updating node-proxy deployment (+3 -> 3)) Feb 20 04:49:32 localhost ceph-mgr[286565]: [progress INFO root] Completed event 6e5df31e-c721-4d2f-b04a-659d3f8f9628 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Feb 20 04:49:32 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 20 04:49:32 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 20 04:49:32 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:49:32 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 04:49:32 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:49:33 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v38: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail Feb 20 04:49:33 localhost ceph-mgr[286565]: [progress INFO root] Writing back 50 completed events Feb 20 04:49:33 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command 
mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 20 04:49:33 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:49:34 localhost sshd[307344]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:49:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 04:49:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. Feb 20 04:49:34 localhost systemd[1]: tmp-crun.SbhZ1K.mount: Deactivated successfully. Feb 20 04:49:34 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:49:34 localhost podman[307346]: 2026-02-20 09:49:34.473206654 +0000 UTC m=+0.099128652 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:49:34 localhost podman[307346]: 2026-02-20 09:49:34.506532946 +0000 UTC m=+0.132454904 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true) Feb 20 04:49:34 localhost podman[307345]: 2026-02-20 09:49:34.520346605 +0000 UTC m=+0.144369592 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, 
tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:49:34 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. Feb 20 04:49:34 localhost podman[307345]: 2026-02-20 09:49:34.598819915 +0000 UTC m=+0.222842952 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Feb 20 04:49:34 localhost 
systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. Feb 20 04:49:35 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v39: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail Feb 20 04:49:37 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v40: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail Feb 20 04:49:37 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:49:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1. Feb 20 04:49:39 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v41: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail Feb 20 04:49:39 localhost systemd[1]: tmp-crun.Ok4mVg.mount: Deactivated successfully. 
Feb 20 04:49:39 localhost podman[307390]: 2026-02-20 09:49:39.447116963 +0000 UTC m=+0.082869008 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 20 04:49:39 localhost podman[307390]: 2026-02-20 09:49:39.484984125 +0000 UTC m=+0.120736170 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 20 04:49:39 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully. Feb 20 04:49:41 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v42: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail Feb 20 04:49:42 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:49:43 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v43: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail Feb 20 04:49:45 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v44: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail Feb 20 04:49:46 localhost podman[241347]: time="2026-02-20T09:49:46Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 20 04:49:46 localhost podman[241347]: @ - - [20/Feb/2026:09:49:46 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157716 "" "Go-http-client/1.1" Feb 20 04:49:46 localhost podman[241347]: @ - - [20/Feb/2026:09:49:46 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18759 "" "Go-http-client/1.1" Feb 20 04:49:47 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v45: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail Feb 20 04:49:47 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:49:47 localhost ceph-osd[31981]: rocksdb: 
[db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 20 04:49:47 localhost ceph-osd[31981]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7800.1 total, 600.0 interval#012Cumulative writes: 5199 writes, 23K keys, 5199 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5199 writes, 708 syncs, 7.34 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 88 writes, 307 keys, 88 commit groups, 1.0 writes per commit group, ingest: 0.37 MB, 0.00 MB/s#012Interval WAL: 88 writes, 36 syncs, 2.44 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Feb 20 04:49:48 localhost sshd[307412]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:49:49 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v46: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail Feb 20 04:49:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e. 
Feb 20 04:49:49 localhost podman[307414]: 2026-02-20 09:49:49.840645734 +0000 UTC m=+0.087665726 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 20 04:49:49 localhost podman[307414]: 2026-02-20 09:49:49.851294709 +0000 UTC m=+0.098314681 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': 
['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 20 04:49:49 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully. 
Feb 20 04:49:50 localhost sshd[307437]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:49:51 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v47: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail Feb 20 04:49:51 localhost ceph-osd[32921]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 20 04:49:51 localhost ceph-osd[32921]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7800.1 total, 600.0 interval#012Cumulative writes: 5796 writes, 25K keys, 5796 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5796 writes, 879 syncs, 6.59 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 177 writes, 363 keys, 177 commit groups, 1.0 writes per commit group, ingest: 0.34 MB, 0.00 MB/s#012Interval WAL: 177 writes, 86 syncs, 2.06 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Feb 20 04:49:52 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:49:52 localhost ovn_metadata_agent[161761]: 2026-02-20 09:49:52.973 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'c2:c2:14', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '46:d0:69:88:97:47'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:49:52 localhost ovn_metadata_agent[161761]: 2026-02-20 09:49:52.975 161766 
DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Feb 20 04:49:53 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v48: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail Feb 20 04:49:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 04:49:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:49:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 04:49:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', )] Feb 20 04:49:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Feb 20 04:49:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 04:49:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', )] Feb 20 04:49:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Feb 20 04:49:53 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : mgrmap e46: np0005625202.arwxwo(active, since 90s), standbys: np0005625203.lonygy, np0005625204.exgrzx Feb 20 04:49:54 localhost nova_compute[280804]: 2026-02-20 09:49:54.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:49:54 localhost nova_compute[280804]: 2026-02-20 09:49:54.511 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 
04:49:54 localhost nova_compute[280804]: 2026-02-20 09:49:54.511 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:49:54 localhost nova_compute[280804]: 2026-02-20 09:49:54.548 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:49:54 localhost nova_compute[280804]: 2026-02-20 09:49:54.549 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:49:54 localhost nova_compute[280804]: 2026-02-20 09:49:54.549 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:49:54 localhost nova_compute[280804]: 2026-02-20 09:49:54.549 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Auditing locally available compute resources for np0005625202.localdomain (node: np0005625202.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 20 04:49:54 localhost nova_compute[280804]: 2026-02-20 09:49:54.550 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - 
- - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:49:54 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 20 04:49:54 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/2042522384' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 20 04:49:54 localhost nova_compute[280804]: 2026-02-20 09:49:54.998 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:49:55 localhost nova_compute[280804]: 2026-02-20 09:49:55.216 280808 WARNING nova.virt.libvirt.driver [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 20 04:49:55 localhost nova_compute[280804]: 2026-02-20 09:49:55.218 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Hypervisor/Node resource view: name=np0005625202.localdomain free_ram=11927MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", 
"product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 20 04:49:55 localhost nova_compute[280804]: 2026-02-20 09:49:55.218 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:49:55 localhost nova_compute[280804]: 2026-02-20 09:49:55.219 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:49:55 localhost nova_compute[280804]: 2026-02-20 09:49:55.323 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 20 04:49:55 localhost nova_compute[280804]: 2026-02-20 09:49:55.324 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Final resource view: name=np0005625202.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 20 04:49:55 localhost nova_compute[280804]: 2026-02-20 09:49:55.360 280808 DEBUG 
oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:49:55 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v49: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s Feb 20 04:49:55 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 20 04:49:55 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/4152313636' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 20 04:49:55 localhost nova_compute[280804]: 2026-02-20 09:49:55.821 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:49:55 localhost nova_compute[280804]: 2026-02-20 09:49:55.828 280808 DEBUG nova.compute.provider_tree [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 20 04:49:55 localhost nova_compute[280804]: 2026-02-20 09:49:55.852 280808 DEBUG nova.scheduler.client.report [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Inventory has not changed for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 
1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 20 04:49:55 localhost nova_compute[280804]: 2026-02-20 09:49:55.854 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Compute_service record updated for np0005625202.localdomain:np0005625202.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 20 04:49:55 localhost nova_compute[280804]: 2026-02-20 09:49:55.855 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:49:56 localhost nova_compute[280804]: 2026-02-20 09:49:56.854 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:49:56 localhost nova_compute[280804]: 2026-02-20 09:49:56.918 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:49:56 localhost nova_compute[280804]: 2026-02-20 09:49:56.918 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 20 04:49:56 localhost nova_compute[280804]: 2026-02-20 09:49:56.919 
280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 20 04:49:56 localhost nova_compute[280804]: 2026-02-20 09:49:56.945 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 20 04:49:56 localhost nova_compute[280804]: 2026-02-20 09:49:56.946 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:49:56 localhost nova_compute[280804]: 2026-02-20 09:49:56.946 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:49:56 localhost nova_compute[280804]: 2026-02-20 09:49:56.946 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 20 04:49:57 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v50: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s Feb 20 04:49:57 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:49:57 localhost nova_compute[280804]: 2026-02-20 09:49:57.598 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:49:58 localhost openstack_network_exporter[243776]: ERROR 09:49:58 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 20 04:49:58 localhost openstack_network_exporter[243776]: Feb 20 04:49:58 localhost openstack_network_exporter[243776]: ERROR 09:49:58 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 20 04:49:58 localhost openstack_network_exporter[243776]: Feb 20 04:49:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c. Feb 20 04:49:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217. 
Feb 20 04:49:58 localhost podman[307483]: 2026-02-20 09:49:58.45915539 +0000 UTC m=+0.090377490 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, version=9.7, org.opencontainers.image.created=2026-02-05T04:57:10Z, architecture=x86_64, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, build-date=2026-02-05T04:57:10Z, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.openshift.expose-services=, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, release=1770267347, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible) Feb 20 04:49:58 localhost podman[307483]: 2026-02-20 09:49:58.47373858 +0000 UTC m=+0.104960740 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, release=1770267347, build-date=2026-02-05T04:57:10Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=The Universal Base Image Minimal 
is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, version=9.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., config_id=openstack_network_exporter, distribution-scope=public, io.buildah.version=1.33.7, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc.) 
Feb 20 04:49:58 localhost nova_compute[280804]: 2026-02-20 09:49:58.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:49:58 localhost podman[307484]: 2026-02-20 09:49:58.518654081 +0000 UTC m=+0.148103263 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Feb 20 04:49:58 localhost podman[307484]: 2026-02-20 09:49:58.53503999 +0000 UTC m=+0.164489162 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:49:58 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. Feb 20 04:49:58 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully. Feb 20 04:49:58 localhost ovn_metadata_agent[161761]: 2026-02-20 09:49:58.977 161766 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0a83b6be-9fe2-42ef-8768-88847d97b165, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 20 04:49:59 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v51: 177 pgs: 177 active+clean; 121 MiB data, 624 MiB used, 41 GiB / 42 GiB avail; 8.2 KiB/s rd, 1.3 MiB/s wr, 12 op/s Feb 20 04:49:59 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e91 do_prune osdmap full prune enabled Feb 20 04:49:59 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e92 e92: 6 total, 6 up, 6 in Feb 20 04:49:59 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e92: 6 total, 6 up, 6 in Feb 20 04:50:00 localhost ceph-mon[292786]: log_channel(cluster) log [INF] : overall HEALTH_OK Feb 20 04:50:00 localhost nova_compute[280804]: 2026-02-20 09:50:00.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task 
ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:50:00 localhost ceph-mon[292786]: overall HEALTH_OK Feb 20 04:50:00 localhost sshd[307525]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:50:01 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v53: 177 pgs: 177 active+clean; 125 MiB data, 632 MiB used, 41 GiB / 42 GiB avail; 9.9 KiB/s rd, 2.0 MiB/s wr, 14 op/s Feb 20 04:50:01 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e92 do_prune osdmap full prune enabled Feb 20 04:50:01 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e93 e93: 6 total, 6 up, 6 in Feb 20 04:50:01 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e93: 6 total, 6 up, 6 in Feb 20 04:50:02 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:50:03 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v55: 177 pgs: 177 active+clean; 125 MiB data, 632 MiB used, 41 GiB / 42 GiB avail; 12 KiB/s rd, 2.6 MiB/s wr, 18 op/s Feb 20 04:50:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 04:50:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. 
Feb 20 04:50:05 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v56: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s Feb 20 04:50:05 localhost podman[307527]: 2026-02-20 09:50:05.447779204 +0000 UTC m=+0.083607998 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Feb 20 04:50:05 localhost podman[307528]: 2026-02-20 09:50:05.501072369 +0000 UTC m=+0.133872042 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef 
(image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:50:05 localhost podman[307527]: 2026-02-20 09:50:05.515549816 +0000 UTC m=+0.151378650 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:50:05 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. 
Feb 20 04:50:05 localhost podman[307528]: 2026-02-20 09:50:05.537845623 +0000 UTC m=+0.170645276 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent) Feb 20 04:50:05 localhost systemd[1]: 
eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. Feb 20 04:50:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:50:05.916 161766 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:50:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:50:05.917 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:50:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:50:05.917 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:50:07 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v57: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 3.1 MiB/s wr, 29 op/s Feb 20 04:50:07 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:50:09 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v58: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 2.1 MiB/s wr, 24 op/s Feb 20 04:50:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1. Feb 20 04:50:10 localhost systemd[1]: tmp-crun.lilI1u.mount: Deactivated successfully. 
Feb 20 04:50:10 localhost podman[307570]: 2026-02-20 09:50:10.436591809 +0000 UTC m=+0.074894395 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 20 04:50:10 localhost podman[307570]: 2026-02-20 09:50:10.472825698 +0000 UTC m=+0.111128284 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 20 04:50:10 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully. Feb 20 04:50:11 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v59: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 16 KiB/s rd, 2.0 MiB/s wr, 23 op/s Feb 20 04:50:11 localhost sshd[307593]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:50:12 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:50:13 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v60: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 14 KiB/s rd, 1.8 MiB/s wr, 20 op/s Feb 20 04:50:15 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v61: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 14 KiB/s rd, 1.7 MiB/s wr, 19 op/s Feb 20 04:50:16 localhost podman[241347]: time="2026-02-20T09:50:16Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 20 04:50:16 localhost podman[241347]: @ - - [20/Feb/2026:09:50:16 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157716 "" "Go-http-client/1.1" Feb 20 04:50:16 localhost podman[241347]: @ - - [20/Feb/2026:09:50:16 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18757 "" "Go-http-client/1.1" Feb 20 04:50:17 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v62: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail Feb 20 04:50:17 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e93 _set_new_cache_sizes 
cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:50:19 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v63: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail Feb 20 04:50:20 localhost sshd[307595]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:50:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e. Feb 20 04:50:20 localhost systemd[1]: tmp-crun.aTkxk2.mount: Deactivated successfully. Feb 20 04:50:20 localhost podman[307596]: 2026-02-20 09:50:20.448061806 +0000 UTC m=+0.086982208 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', 
'/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 20 04:50:20 localhost podman[307596]: 2026-02-20 09:50:20.457764416 +0000 UTC m=+0.096684807 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 20 04:50:20 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully. 
Feb 20 04:50:21 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v64: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail Feb 20 04:50:22 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:50:23 localhost ceph-mgr[286565]: [balancer INFO root] Optimize plan auto_2026-02-20_09:50:23 Feb 20 04:50:23 localhost ceph-mgr[286565]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Feb 20 04:50:23 localhost ceph-mgr[286565]: [balancer INFO root] do_upmap Feb 20 04:50:23 localhost ceph-mgr[286565]: [balancer INFO root] pools ['images', 'vms', 'volumes', 'manila_metadata', '.mgr', 'manila_data', 'backups'] Feb 20 04:50:23 localhost ceph-mgr[286565]: [balancer INFO root] prepared 0/10 changes Feb 20 04:50:23 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v65: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail Feb 20 04:50:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 20 04:50:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:50:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] _maybe_adjust Feb 20 04:50:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:50:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Feb 20 04:50:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:50:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003325274375348967 of space, bias 1.0, pg target 0.6650548750697934 quantized to 32 (current 32) Feb 20 04:50:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:50:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 20 04:50:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:50:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Feb 20 04:50:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:50:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 20 04:50:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:50:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 20 04:50:23 
localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:50:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.001953125 quantized to 16 (current 16) Feb 20 04:50:23 localhost ceph-mgr[286565]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Feb 20 04:50:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 20 04:50:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 20 04:50:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: images, start_after= Feb 20 04:50:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 20 04:50:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 04:50:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:50:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 20 04:50:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:50:23 localhost ceph-mgr[286565]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Feb 20 04:50:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 20 04:50:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 20 04:50:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: images, start_after= Feb 20 04:50:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 20 04:50:25 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v66: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail Feb 20 04:50:26 localhost ovn_controller[155916]: 2026-02-20T09:50:26Z|00040|memory_trim|INFO|Detected inactivity (last active 30020 ms ago): trimming memory Feb 20 04:50:27 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v67: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail Feb 20 04:50:27 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:50:27 localhost nova_compute[280804]: 2026-02-20 09:50:27.664 280808 DEBUG oslo_concurrency.processutils [None req-ce8f7f17-22b4-49a9-9d51-7d8d25968619 db2b8b7703fb412cb340d24e060343b8 9f81aa88c8464be9a5dafd4fb785ee4c - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:50:27 localhost nova_compute[280804]: 2026-02-20 09:50:27.682 280808 DEBUG oslo_concurrency.processutils [None req-ce8f7f17-22b4-49a9-9d51-7d8d25968619 db2b8b7703fb412cb340d24e060343b8 9f81aa88c8464be9a5dafd4fb785ee4c - - default default] CMD "env LANG=C uptime" returned: 0 in 0.018s execute 
/usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:50:28 localhost openstack_network_exporter[243776]: ERROR 09:50:28 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 20 04:50:28 localhost openstack_network_exporter[243776]: Feb 20 04:50:28 localhost openstack_network_exporter[243776]: ERROR 09:50:28 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 20 04:50:28 localhost openstack_network_exporter[243776]: Feb 20 04:50:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c. Feb 20 04:50:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217. Feb 20 04:50:29 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v68: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail Feb 20 04:50:29 localhost podman[307622]: 2026-02-20 09:50:29.46131042 +0000 UTC m=+0.097031946 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9/ubi-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, maintainer=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1770267347, version=9.7, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., managed_by=edpm_ansible, vcs-type=git, container_name=openstack_network_exporter, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-05T04:57:10Z, org.opencontainers.image.created=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c) Feb 20 04:50:29 localhost podman[307623]: 2026-02-20 09:50:29.504680481 +0000 UTC m=+0.136107362 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible) Feb 20 04:50:29 localhost podman[307622]: 2026-02-20 09:50:29.527901462 +0000 UTC m=+0.163622978 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., 
build-date=2026-02-05T04:57:10Z, managed_by=edpm_ansible, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, org.opencontainers.image.created=2026-02-05T04:57:10Z, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, vcs-type=git, version=9.7, config_id=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) Feb 20 04:50:29 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. 
Feb 20 04:50:29 localhost podman[307623]: 2026-02-20 09:50:29.543852489 +0000 UTC m=+0.175279380 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3) Feb 
20 04:50:29 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully. Feb 20 04:50:31 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v69: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail Feb 20 04:50:32 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:50:33 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:50:33 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 04:50:33 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 20 04:50:33 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 04:50:33 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 20 04:50:33 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:50:33 localhost ceph-mgr[286565]: [progress INFO root] update: starting ev e25be535-a22a-413a-864d-5a2293813f80 (Updating node-proxy deployment (+3 -> 3)) Feb 20 04:50:33 localhost ceph-mgr[286565]: [progress INFO root] complete: finished ev e25be535-a22a-413a-864d-5a2293813f80 (Updating node-proxy deployment (+3 -> 3)) Feb 20 04:50:33 localhost ceph-mgr[286565]: [progress INFO root] Completed event e25be535-a22a-413a-864d-5a2293813f80 (Updating 
node-proxy deployment (+3 -> 3)) in 0 seconds Feb 20 04:50:33 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 20 04:50:33 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 20 04:50:33 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v70: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail Feb 20 04:50:33 localhost ceph-mgr[286565]: [progress INFO root] Writing back 50 completed events Feb 20 04:50:33 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 20 04:50:33 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:50:34 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 04:50:34 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:50:34 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:50:35 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v71: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail Feb 20 04:50:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 04:50:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. 
Feb 20 04:50:36 localhost podman[307744]: 2026-02-20 09:50:36.452094134 +0000 UTC m=+0.087762980 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_controller) Feb 20 04:50:36 localhost podman[307744]: 2026-02-20 09:50:36.492235577 +0000 UTC m=+0.127904413 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, 
config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127) Feb 20 04:50:36 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. 
Feb 20 04:50:36 localhost podman[307745]: 2026-02-20 09:50:36.508222005 +0000 UTC m=+0.141388274 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Feb 20 04:50:36 localhost 
podman[307745]: 2026-02-20 09:50:36.536264495 +0000 UTC m=+0.169430804 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:50:36 localhost systemd[1]: 
eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. Feb 20 04:50:37 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v72: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail Feb 20 04:50:37 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:50:39 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v73: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail Feb 20 04:50:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1. Feb 20 04:50:41 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v74: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail Feb 20 04:50:41 localhost podman[307785]: 2026-02-20 09:50:41.434756705 +0000 UTC m=+0.074657418 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid 
Yaghoobi ) Feb 20 04:50:41 localhost podman[307785]: 2026-02-20 09:50:41.472878975 +0000 UTC m=+0.112779698 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 20 04:50:41 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully. 
Feb 20 04:50:42 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:50:43 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v75: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail Feb 20 04:50:45 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v76: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail Feb 20 04:50:45 localhost nova_compute[280804]: 2026-02-20 09:50:45.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:50:45 localhost nova_compute[280804]: 2026-02-20 09:50:45.511 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Feb 20 04:50:46 localhost podman[241347]: time="2026-02-20T09:50:46Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 20 04:50:46 localhost podman[241347]: @ - - [20/Feb/2026:09:50:46 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157716 "" "Go-http-client/1.1" Feb 20 04:50:46 localhost podman[241347]: @ - - [20/Feb/2026:09:50:46 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18756 "" "Go-http-client/1.1" Feb 20 04:50:46 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:50:46.384 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, 
binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:50:46Z, description=, device_id=b5d0538d-0ab3-4d2a-a4dc-0d49c2ca7aa5, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=0c16b2c2-05e5-4d22-a4cb-a494fa193633, ip_allocation=immediate, mac_address=fa:16:3e:11:2a:a4, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T08:22:50Z, description=, dns_domain=, id=84efa4de-646c-469c-b16c-6ab7c3e948cf, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=91bce661d685472eb3e7cacab17bf52a, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['38eee90c-2cdd-4118-ba55-5401f7e61e1e'], tags=[], tenant_id=91bce661d685472eb3e7cacab17bf52a, updated_at=2026-02-20T08:22:56Z, vlan_transparent=None, network_id=84efa4de-646c-469c-b16c-6ab7c3e948cf, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=297, status=DOWN, tags=[], tenant_id=, updated_at=2026-02-20T09:50:46Z on network 84efa4de-646c-469c-b16c-6ab7c3e948cf#033[00m Feb 20 04:50:46 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 2 addresses Feb 20 04:50:46 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:50:46 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:50:46 localhost podman[307824]: 2026-02-20 09:50:46.652599397 +0000 UTC m=+0.059453971 container kill 
d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127) Feb 20 04:50:46 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:50:46.874 263745 INFO neutron.agent.dhcp.agent [None req-f54049a1-d16a-4a4b-be21-9e84561b6687 - - - - - -] DHCP configuration for ports {'0c16b2c2-05e5-4d22-a4cb-a494fa193633'} is completed#033[00m Feb 20 04:50:47 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v77: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail Feb 20 04:50:47 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:50:48 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:50:48.113 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:50:47Z, description=, device_id=38311c27-406d-4e99-b88f-f014ece8535b, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=314ddedf-d4e5-425c-ae66-2f3369cb45aa, ip_allocation=immediate, mac_address=fa:16:3e:02:f9:6f, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T08:22:50Z, description=, dns_domain=, id=84efa4de-646c-469c-b16c-6ab7c3e948cf, ipv4_address_scope=None, 
ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=91bce661d685472eb3e7cacab17bf52a, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['38eee90c-2cdd-4118-ba55-5401f7e61e1e'], tags=[], tenant_id=91bce661d685472eb3e7cacab17bf52a, updated_at=2026-02-20T08:22:56Z, vlan_transparent=None, network_id=84efa4de-646c-469c-b16c-6ab7c3e948cf, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=314, status=DOWN, tags=[], tenant_id=, updated_at=2026-02-20T09:50:47Z on network 84efa4de-646c-469c-b16c-6ab7c3e948cf#033[00m Feb 20 04:50:48 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 3 addresses Feb 20 04:50:48 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:50:48 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:50:48 localhost podman[307858]: 2026-02-20 09:50:48.355107152 +0000 UTC m=+0.049424134 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2) Feb 20 04:50:48 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:50:48.524 263745 INFO 
neutron.agent.dhcp.agent [None req-7b534a3f-6cad-4b13-922f-d00a48faf1d1 - - - - - -] DHCP configuration for ports {'314ddedf-d4e5-425c-ae66-2f3369cb45aa'} is completed#033[00m Feb 20 04:50:49 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v78: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail Feb 20 04:50:50 localhost sshd[307880]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:50:51 localhost sshd[307881]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:50:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e. Feb 20 04:50:51 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v79: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail Feb 20 04:50:51 localhost podman[307884]: 2026-02-20 09:50:51.445309359 +0000 UTC m=+0.083991568 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 
'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 20 04:50:51 localhost podman[307884]: 2026-02-20 09:50:51.483822139 +0000 UTC m=+0.122504318 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 20 04:50:51 localhost systemd[1]: 
894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully. Feb 20 04:50:51 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:50:51.514 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:50:51Z, description=, device_id=0c77eb17-66e6-4aa0-8b78-169b259339e9, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=1e03ac6e-b4ad-4935-9986-6f7029a289e6, ip_allocation=immediate, mac_address=fa:16:3e:cf:03:e3, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T08:22:50Z, description=, dns_domain=, id=84efa4de-646c-469c-b16c-6ab7c3e948cf, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=91bce661d685472eb3e7cacab17bf52a, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['38eee90c-2cdd-4118-ba55-5401f7e61e1e'], tags=[], tenant_id=91bce661d685472eb3e7cacab17bf52a, updated_at=2026-02-20T08:22:56Z, vlan_transparent=None, network_id=84efa4de-646c-469c-b16c-6ab7c3e948cf, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=349, status=DOWN, tags=[], tenant_id=, updated_at=2026-02-20T09:50:51Z on network 84efa4de-646c-469c-b16c-6ab7c3e948cf#033[00m Feb 20 04:50:51 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 4 addresses Feb 20 04:50:51 localhost dnsmasq-dhcp[264017]: read 
/var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:50:51 localhost podman[307922]: 2026-02-20 09:50:51.71600508 +0000 UTC m=+0.058101705 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Feb 20 04:50:51 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:50:51 localhost systemd[1]: tmp-crun.Im7ijs.mount: Deactivated successfully. Feb 20 04:50:51 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:50:51.916 263745 INFO neutron.agent.dhcp.agent [None req-420e9e41-0523-4bc0-bcd3-7a461e59f788 - - - - - -] DHCP configuration for ports {'1e03ac6e-b4ad-4935-9986-6f7029a289e6'} is completed#033[00m Feb 20 04:50:52 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 20 04:50:52 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/2038880473' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 20 04:50:52 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:50:53 localhost ovn_metadata_agent[161761]: 2026-02-20 09:50:53.152 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'c2:c2:14', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '46:d0:69:88:97:47'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:50:53 localhost ovn_metadata_agent[161761]: 2026-02-20 09:50:53.154 161766 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Feb 20 04:50:53 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v80: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail Feb 20 04:50:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 04:50:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:50:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 04:50:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:50:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 20 04:50:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:50:55 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v81: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail Feb 20 04:50:55 localhost nova_compute[280804]: 2026-02-20 09:50:55.532 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:50:55 localhost nova_compute[280804]: 2026-02-20 09:50:55.533 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 20 04:50:55 localhost nova_compute[280804]: 2026-02-20 09:50:55.533 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 20 04:50:55 localhost nova_compute[280804]: 2026-02-20 09:50:55.557 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 20 04:50:55 localhost nova_compute[280804]: 2026-02-20 09:50:55.558 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:50:56 localhost nova_compute[280804]: 2026-02-20 09:50:56.495 280808 DEBUG oslo_concurrency.lockutils [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Acquiring lock "43720f70-168d-461a-8b52-ba71de6033a0" by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:50:56 localhost nova_compute[280804]: 2026-02-20 09:50:56.496 280808 DEBUG oslo_concurrency.lockutils [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Lock "43720f70-168d-461a-8b52-ba71de6033a0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:50:56 localhost nova_compute[280804]: 2026-02-20 09:50:56.509 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:50:56 localhost nova_compute[280804]: 2026-02-20 09:50:56.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:50:56 localhost nova_compute[280804]: 2026-02-20 09:50:56.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:50:56 localhost nova_compute[280804]: 2026-02-20 09:50:56.517 280808 DEBUG nova.compute.manager [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m Feb 20 04:50:56 localhost nova_compute[280804]: 2026-02-20 09:50:56.547 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:50:56 localhost nova_compute[280804]: 2026-02-20 09:50:56.547 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:50:56 localhost nova_compute[280804]: 2026-02-20 09:50:56.548 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:50:56 localhost nova_compute[280804]: 2026-02-20 
09:50:56.548 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Auditing locally available compute resources for np0005625202.localdomain (node: np0005625202.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 20 04:50:56 localhost nova_compute[280804]: 2026-02-20 09:50:56.548 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:50:56 localhost nova_compute[280804]: 2026-02-20 09:50:56.774 280808 DEBUG oslo_concurrency.lockutils [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:50:56 localhost nova_compute[280804]: 2026-02-20 09:50:56.775 280808 DEBUG oslo_concurrency.lockutils [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:50:56 localhost nova_compute[280804]: 2026-02-20 09:50:56.781 280808 DEBUG nova.virt.hardware [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Require both a host and instance NUMA topology to fit instance on host. 
numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m Feb 20 04:50:56 localhost nova_compute[280804]: 2026-02-20 09:50:56.781 280808 INFO nova.compute.claims [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Claim successful on node np0005625202.localdomain#033[00m Feb 20 04:50:56 localhost nova_compute[280804]: 2026-02-20 09:50:56.952 280808 DEBUG nova.scheduler.client.report [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Refreshing inventories for resource provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Feb 20 04:50:57 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 20 04:50:57 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/2966131561' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 20 04:50:57 localhost nova_compute[280804]: 2026-02-20 09:50:57.017 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:50:57 localhost nova_compute[280804]: 2026-02-20 09:50:57.028 280808 DEBUG nova.scheduler.client.report [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Updating ProviderTree inventory for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Feb 20 04:50:57 localhost nova_compute[280804]: 2026-02-20 09:50:57.029 280808 DEBUG nova.compute.provider_tree [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Updating inventory in ProviderTree for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory 
/usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Feb 20 04:50:57 localhost nova_compute[280804]: 2026-02-20 09:50:57.069 280808 DEBUG nova.scheduler.client.report [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Refreshing aggregate associations for resource provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Feb 20 04:50:57 localhost nova_compute[280804]: 2026-02-20 09:50:57.099 280808 DEBUG nova.scheduler.client.report [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Refreshing trait associations for resource provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6, traits: COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_1_2,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE2,HW_CPU_X86_AVX2,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_CLMUL,HW_CPU_X86_F16C,HW_CPU_X86_BMI2,COMPU
TE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_FMA3,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_ABM,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Feb 20 04:50:57 localhost nova_compute[280804]: 2026-02-20 09:50:57.154 280808 DEBUG oslo_concurrency.processutils [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:50:57 localhost nova_compute[280804]: 2026-02-20 09:50:57.258 280808 WARNING nova.virt.libvirt.driver [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 20 04:50:57 localhost nova_compute[280804]: 2026-02-20 09:50:57.260 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Hypervisor/Node resource view: name=np0005625202.localdomain free_ram=11911MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", 
"product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 20 04:50:57 localhost nova_compute[280804]: 2026-02-20 09:50:57.261 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:50:57 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v82: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail Feb 20 04:50:57 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:50:57 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 20 04:50:57 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/3661911017' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 20 04:50:57 localhost nova_compute[280804]: 2026-02-20 09:50:57.602 280808 DEBUG oslo_concurrency.processutils [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:50:57 localhost nova_compute[280804]: 2026-02-20 09:50:57.607 280808 DEBUG nova.compute.provider_tree [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Inventory has not changed in ProviderTree for provider: 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 20 04:50:57 localhost nova_compute[280804]: 2026-02-20 09:50:57.628 280808 DEBUG nova.scheduler.client.report [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Inventory has not changed for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 20 04:50:57 localhost nova_compute[280804]: 2026-02-20 09:50:57.656 280808 DEBUG oslo_concurrency.lockutils [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default 
default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.882s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:50:57 localhost nova_compute[280804]: 2026-02-20 09:50:57.657 280808 DEBUG nova.compute.manager [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m Feb 20 04:50:57 localhost nova_compute[280804]: 2026-02-20 09:50:57.659 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.398s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:50:57 localhost nova_compute[280804]: 2026-02-20 09:50:57.726 280808 DEBUG nova.compute.manager [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m Feb 20 04:50:57 localhost nova_compute[280804]: 2026-02-20 09:50:57.740 280808 INFO nova.virt.libvirt.driver [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Ignoring supplied device name: /dev/vda. 
Libvirt can't honour user-supplied dev names#033[00m Feb 20 04:50:57 localhost nova_compute[280804]: 2026-02-20 09:50:57.743 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Instance 43720f70-168d-461a-8b52-ba71de6033a0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Feb 20 04:50:57 localhost nova_compute[280804]: 2026-02-20 09:50:57.743 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 20 04:50:57 localhost nova_compute[280804]: 2026-02-20 09:50:57.743 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Final resource view: name=np0005625202.localdomain phys_ram=15738MB used_ram=640MB phys_disk=41GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 20 04:50:57 localhost nova_compute[280804]: 2026-02-20 09:50:57.756 280808 DEBUG nova.compute.manager [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Start building block device mappings for instance. 
_build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m Feb 20 04:50:57 localhost nova_compute[280804]: 2026-02-20 09:50:57.796 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:50:57 localhost nova_compute[280804]: 2026-02-20 09:50:57.883 280808 DEBUG nova.compute.manager [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m Feb 20 04:50:57 localhost nova_compute[280804]: 2026-02-20 09:50:57.886 280808 DEBUG nova.virt.libvirt.driver [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m Feb 20 04:50:57 localhost nova_compute[280804]: 2026-02-20 09:50:57.887 280808 INFO nova.virt.libvirt.driver [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Creating image(s)#033[00m Feb 20 04:50:57 localhost nova_compute[280804]: 2026-02-20 09:50:57.943 280808 DEBUG nova.storage.rbd_utils [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] rbd image 43720f70-168d-461a-8b52-ba71de6033a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 20 04:50:57 
localhost podman[308031]: 2026-02-20 09:50:57.989944175 +0000 UTC m=+0.041623025 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Feb 20 04:50:57 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 3 addresses Feb 20 04:50:57 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:50:57 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:50:57 localhost nova_compute[280804]: 2026-02-20 09:50:57.998 280808 DEBUG nova.storage.rbd_utils [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] rbd image 43720f70-168d-461a-8b52-ba71de6033a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 20 04:50:58 localhost nova_compute[280804]: 2026-02-20 09:50:58.026 280808 DEBUG nova.storage.rbd_utils [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] rbd image 43720f70-168d-461a-8b52-ba71de6033a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 20 04:50:58 localhost nova_compute[280804]: 2026-02-20 09:50:58.032 280808 DEBUG oslo_concurrency.lockutils [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 
ff4cacca21b64031adfd6cb25f7e62fc - - default default] Acquiring lock "3692da63af034f7d594aac7c4b8eda10742f09b0" by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:50:58 localhost nova_compute[280804]: 2026-02-20 09:50:58.033 280808 DEBUG oslo_concurrency.lockutils [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Lock "3692da63af034f7d594aac7c4b8eda10742f09b0" acquired by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:50:58 localhost openstack_network_exporter[243776]: ERROR 09:50:58 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 20 04:50:58 localhost openstack_network_exporter[243776]: Feb 20 04:50:58 localhost openstack_network_exporter[243776]: ERROR 09:50:58 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 20 04:50:58 localhost openstack_network_exporter[243776]: Feb 20 04:50:58 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 20 04:50:58 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/306112953' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 20 04:50:58 localhost nova_compute[280804]: 2026-02-20 09:50:58.245 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:50:58 localhost nova_compute[280804]: 2026-02-20 09:50:58.251 280808 DEBUG nova.compute.provider_tree [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 20 04:50:58 localhost nova_compute[280804]: 2026-02-20 09:50:58.293 280808 DEBUG nova.scheduler.client.report [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Inventory has not changed for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 20 04:50:58 localhost nova_compute[280804]: 2026-02-20 09:50:58.296 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Compute_service record updated for np0005625202.localdomain:np0005625202.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 20 04:50:58 localhost nova_compute[280804]: 2026-02-20 09:50:58.297 280808 DEBUG 
oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:50:58 localhost nova_compute[280804]: 2026-02-20 09:50:58.681 280808 DEBUG nova.virt.libvirt.imagebackend [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Image locations are: [{'url': 'rbd://a8557ee9-b55d-5519-942c-cf8f6172f1d8/images/06bd71fd-c415-45d9-b669-46209b7ca2f4/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://a8557ee9-b55d-5519-942c-cf8f6172f1d8/images/06bd71fd-c415-45d9-b669-46209b7ca2f4/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m Feb 20 04:50:58 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:50:58.773 263745 INFO neutron.agent.linux.ip_lib [None req-da3916ae-1115-4b61-b0a2-3d340d808a10 - - - - - -] Device tap51f0d734-55 cannot be used as it has no MAC address#033[00m Feb 20 04:50:58 localhost kernel: device tap51f0d734-55 entered promiscuous mode Feb 20 04:50:58 localhost NetworkManager[5967]: [1771581058.8053] manager: (tap51f0d734-55): new Generic device (/org/freedesktop/NetworkManager/Devices/15) Feb 20 04:50:58 localhost ovn_controller[155916]: 2026-02-20T09:50:58Z|00041|binding|INFO|Claiming lport 51f0d734-55af-400a-9d9e-de45acced278 for this chassis. Feb 20 04:50:58 localhost ovn_controller[155916]: 2026-02-20T09:50:58Z|00042|binding|INFO|51f0d734-55af-400a-9d9e-de45acced278: Claiming unknown Feb 20 04:50:58 localhost systemd-udevd[308112]: Network interface NamePolicy= disabled on kernel command line. 
Feb 20 04:50:58 localhost ovn_metadata_agent[161761]: 2026-02-20 09:50:58.819 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp77c820b4-9646-5dae-aabe-b13a62b01d06-fbb168cc-bb2d-404a-a8d2-bf0aaa99fcf4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbb168cc-bb2d-404a-a8d2-bf0aaa99fcf4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '27062c1e0b7d442e92145af8caaac310', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a7cfb952-aa76-40f1-97c6-9336f13048f7, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=51f0d734-55af-400a-9d9e-de45acced278) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:50:58 localhost ovn_metadata_agent[161761]: 2026-02-20 09:50:58.820 161766 INFO neutron.agent.ovn.metadata.agent [-] Port 51f0d734-55af-400a-9d9e-de45acced278 in datapath fbb168cc-bb2d-404a-a8d2-bf0aaa99fcf4 bound to our chassis#033[00m Feb 20 04:50:58 localhost ovn_metadata_agent[161761]: 2026-02-20 09:50:58.822 161766 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network fbb168cc-bb2d-404a-a8d2-bf0aaa99fcf4 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 20 04:50:58 localhost ovn_metadata_agent[161761]: 2026-02-20 09:50:58.823 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[e2b5709b-496f-405d-a54c-c59a5f1b3090]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:50:58 localhost journal[229367]: ethtool ioctl error on tap51f0d734-55: No such device Feb 20 04:50:58 localhost journal[229367]: ethtool ioctl error on tap51f0d734-55: No such device Feb 20 04:50:58 localhost ovn_controller[155916]: 2026-02-20T09:50:58Z|00043|binding|INFO|Setting lport 51f0d734-55af-400a-9d9e-de45acced278 ovn-installed in OVS Feb 20 04:50:58 localhost ovn_controller[155916]: 2026-02-20T09:50:58Z|00044|binding|INFO|Setting lport 51f0d734-55af-400a-9d9e-de45acced278 up in Southbound Feb 20 04:50:58 localhost journal[229367]: ethtool ioctl error on tap51f0d734-55: No such device Feb 20 04:50:58 localhost journal[229367]: ethtool ioctl error on tap51f0d734-55: No such device Feb 20 04:50:58 localhost journal[229367]: ethtool ioctl error on tap51f0d734-55: No such device Feb 20 04:50:58 localhost journal[229367]: ethtool ioctl error on tap51f0d734-55: No such device Feb 20 04:50:58 localhost journal[229367]: ethtool ioctl error on tap51f0d734-55: No such device Feb 20 04:50:58 localhost journal[229367]: ethtool ioctl error on tap51f0d734-55: No such device Feb 20 04:50:59 localhost nova_compute[280804]: 2026-02-20 09:50:59.293 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:50:59 localhost nova_compute[280804]: 2026-02-20 09:50:59.294 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes 
run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:50:59 localhost nova_compute[280804]: 2026-02-20 09:50:59.295 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 20 04:50:59 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v83: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s Feb 20 04:50:59 localhost nova_compute[280804]: 2026-02-20 09:50:59.534 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:50:59 localhost nova_compute[280804]: 2026-02-20 09:50:59.536 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Feb 20 04:50:59 localhost nova_compute[280804]: 2026-02-20 09:50:59.550 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Feb 20 04:50:59 localhost nova_compute[280804]: 2026-02-20 09:50:59.551 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:50:59 localhost nova_compute[280804]: 2026-02-20 09:50:59.585 280808 DEBUG oslo_concurrency.processutils [None 
req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3692da63af034f7d594aac7c4b8eda10742f09b0.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:50:59 localhost nova_compute[280804]: 2026-02-20 09:50:59.658 280808 DEBUG oslo_concurrency.processutils [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3692da63af034f7d594aac7c4b8eda10742f09b0.part --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:50:59 localhost nova_compute[280804]: 2026-02-20 09:50:59.660 280808 DEBUG nova.virt.images [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] 06bd71fd-c415-45d9-b669-46209b7ca2f4 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m Feb 20 04:50:59 localhost nova_compute[280804]: 2026-02-20 09:50:59.662 280808 DEBUG nova.privsep.utils [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m Feb 20 04:50:59 localhost nova_compute[280804]: 2026-02-20 09:50:59.662 280808 DEBUG oslo_concurrency.processutils [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 
ff4cacca21b64031adfd6cb25f7e62fc - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/3692da63af034f7d594aac7c4b8eda10742f09b0.part /var/lib/nova/instances/_base/3692da63af034f7d594aac7c4b8eda10742f09b0.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:50:59 localhost podman[308189]: Feb 20 04:50:59 localhost podman[308189]: 2026-02-20 09:50:59.796458302 +0000 UTC m=+0.105227197 container create 4afced22f2964dc28f76598144d788c251e22d9df2849730be9fadc9c78ec6bd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fbb168cc-bb2d-404a-a8d2-bf0aaa99fcf4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2) Feb 20 04:50:59 localhost podman[308189]: 2026-02-20 09:50:59.728276438 +0000 UTC m=+0.037045363 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 20 04:50:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c. Feb 20 04:50:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217. Feb 20 04:50:59 localhost systemd[1]: Started libpod-conmon-4afced22f2964dc28f76598144d788c251e22d9df2849730be9fadc9c78ec6bd.scope. Feb 20 04:50:59 localhost systemd[1]: Started libcrun container. 
Feb 20 04:50:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aa41850f94ac33bd3503c44890f7c58c23d31663bce144580d72343fc7fc57d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 20 04:50:59 localhost podman[308189]: 2026-02-20 09:50:59.882426922 +0000 UTC m=+0.191195847 container init 4afced22f2964dc28f76598144d788c251e22d9df2849730be9fadc9c78ec6bd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fbb168cc-bb2d-404a-a8d2-bf0aaa99fcf4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Feb 20 04:50:59 localhost nova_compute[280804]: 2026-02-20 09:50:59.879 280808 DEBUG oslo_concurrency.processutils [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/3692da63af034f7d594aac7c4b8eda10742f09b0.part /var/lib/nova/instances/_base/3692da63af034f7d594aac7c4b8eda10742f09b0.converted" returned: 0 in 0.217s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:50:59 localhost nova_compute[280804]: 2026-02-20 09:50:59.884 280808 DEBUG oslo_concurrency.processutils [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3692da63af034f7d594aac7c4b8eda10742f09b0.converted --force-share --output=json execute 
/usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 04:50:59 localhost systemd[1]: tmp-crun.F9DzyK.mount: Deactivated successfully.
Feb 20 04:50:59 localhost dnsmasq[308235]: started, version 2.85 cachesize 150
Feb 20 04:50:59 localhost dnsmasq[308235]: DNS service limited to local subnets
Feb 20 04:50:59 localhost dnsmasq[308235]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 04:50:59 localhost dnsmasq[308235]: warning: no upstream servers configured
Feb 20 04:50:59 localhost dnsmasq-dhcp[308235]: DHCP, static leases only on 10.100.0.0, lease time 1d
Feb 20 04:50:59 localhost dnsmasq[308235]: read /var/lib/neutron/dhcp/fbb168cc-bb2d-404a-a8d2-bf0aaa99fcf4/addn_hosts - 0 addresses
Feb 20 04:50:59 localhost dnsmasq-dhcp[308235]: read /var/lib/neutron/dhcp/fbb168cc-bb2d-404a-a8d2-bf0aaa99fcf4/host
Feb 20 04:50:59 localhost dnsmasq-dhcp[308235]: read /var/lib/neutron/dhcp/fbb168cc-bb2d-404a-a8d2-bf0aaa99fcf4/opts
Feb 20 04:50:59 localhost podman[308211]: 2026-02-20 09:50:59.920294954 +0000 UTC m=+0.074441152 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260127, config_id=ceilometer_agent_compute)
Feb 20 04:50:59 localhost podman[308189]: 2026-02-20 09:50:59.947448631 +0000 UTC m=+0.256217496 container start 4afced22f2964dc28f76598144d788c251e22d9df2849730be9fadc9c78ec6bd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fbb168cc-bb2d-404a-a8d2-bf0aaa99fcf4, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 20 04:50:59 localhost nova_compute[280804]: 2026-02-20 09:50:59.959 280808 DEBUG oslo_concurrency.processutils [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3692da63af034f7d594aac7c4b8eda10742f09b0.converted --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 04:50:59 localhost nova_compute[280804]: 2026-02-20 09:50:59.960 280808 DEBUG oslo_concurrency.lockutils [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Lock "3692da63af034f7d594aac7c4b8eda10742f09b0" "released" by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" :: held 1.927s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 04:50:59 localhost podman[308210]: 2026-02-20 09:50:59.987274696 +0000 UTC m=+0.140645273 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, version=9.7, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, architecture=x86_64, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, name=ubi9/ubi-minimal, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., container_name=openstack_network_exporter, release=1770267347, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, managed_by=edpm_ansible, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=)
Feb 20 04:50:59 localhost nova_compute[280804]: 2026-02-20 09:50:59.998 280808 DEBUG nova.storage.rbd_utils [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] rbd image 43720f70-168d-461a-8b52-ba71de6033a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 20 04:51:00 localhost podman[308210]: 2026-02-20 09:51:00.002645898 +0000 UTC m=+0.156016485 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, io.openshift.expose-services=, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.created=2026-02-05T04:57:10Z, distribution-scope=public, vendor=Red Hat, Inc., version=9.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1770267347, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9/ubi-minimal, vcs-type=git, container_name=openstack_network_exporter)
Feb 20 04:51:00 localhost nova_compute[280804]: 2026-02-20 09:51:00.003 280808 DEBUG oslo_concurrency.processutils [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/3692da63af034f7d594aac7c4b8eda10742f09b0 43720f70-168d-461a-8b52-ba71de6033a0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 04:51:00 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully.
Feb 20 04:51:00 localhost podman[308211]: 2026-02-20 09:51:00.053335254 +0000 UTC m=+0.207481442 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 20 04:51:00 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully.
Feb 20 04:51:00 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:51:00.117 263745 INFO neutron.agent.dhcp.agent [None req-7adf69a4-159b-4a7c-b94b-52e474c9a5b1 - - - - - -] DHCP configuration for ports {'005c0353-bbd7-4e33-a120-36e062c5ba6d'} is completed
Feb 20 04:51:00 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:00.155 161766 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0a83b6be-9fe2-42ef-8768-88847d97b165, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 20 04:51:00 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:51:00.495 263745 INFO neutron.agent.linux.ip_lib [None req-377a7e0b-afe4-499a-b415-9583ae0307e2 - - - - - -] Device tap9b59fcac-19 cannot be used as it has no MAC address
Feb 20 04:51:00 localhost kernel: device tap9b59fcac-19 entered promiscuous mode
Feb 20 04:51:00 localhost ovn_controller[155916]: 2026-02-20T09:51:00Z|00045|binding|INFO|Claiming lport 9b59fcac-1972-4a35-9126-1aeb8964e5f2 for this chassis.
Feb 20 04:51:00 localhost ovn_controller[155916]: 2026-02-20T09:51:00Z|00046|binding|INFO|9b59fcac-1972-4a35-9126-1aeb8964e5f2: Claiming unknown
Feb 20 04:51:00 localhost NetworkManager[5967]: [1771581060.5273] manager: (tap9b59fcac-19): new Generic device (/org/freedesktop/NetworkManager/Devices/16)
Feb 20 04:51:00 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:00.533 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp77c820b4-9646-5dae-aabe-b13a62b01d06-2d38d28f-6e3b-40d7-8d0c-e95c89b81845', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2d38d28f-6e3b-40d7-8d0c-e95c89b81845', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5605ba7cb0df4223b48ebf8a1894cdf1', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=09397b48-122f-4b17-963b-b7ec7e0b12d2, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=9b59fcac-1972-4a35-9126-1aeb8964e5f2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 04:51:00 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:00.535 161766 INFO neutron.agent.ovn.metadata.agent [-] Port 9b59fcac-1972-4a35-9126-1aeb8964e5f2 in datapath 2d38d28f-6e3b-40d7-8d0c-e95c89b81845 bound to our chassis
Feb 20 04:51:00 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:00.537 161766 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 2d38d28f-6e3b-40d7-8d0c-e95c89b81845 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 04:51:00 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:00.538 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[10976d71-f33f-49cc-bbf1-abbe389ca67b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 04:51:00 localhost journal[229367]: ethtool ioctl error on tap9b59fcac-19: No such device
Feb 20 04:51:00 localhost journal[229367]: ethtool ioctl error on tap9b59fcac-19: No such device
Feb 20 04:51:00 localhost ovn_controller[155916]: 2026-02-20T09:51:00Z|00047|binding|INFO|Setting lport 9b59fcac-1972-4a35-9126-1aeb8964e5f2 ovn-installed in OVS
Feb 20 04:51:00 localhost ovn_controller[155916]: 2026-02-20T09:51:00Z|00048|binding|INFO|Setting lport 9b59fcac-1972-4a35-9126-1aeb8964e5f2 up in Southbound
Feb 20 04:51:00 localhost journal[229367]: ethtool ioctl error on tap9b59fcac-19: No such device
Feb 20 04:51:00 localhost journal[229367]: ethtool ioctl error on tap9b59fcac-19: No such device
Feb 20 04:51:00 localhost journal[229367]: ethtool ioctl error on tap9b59fcac-19: No such device
Feb 20 04:51:00 localhost journal[229367]: ethtool ioctl error on tap9b59fcac-19: No such device
Feb 20 04:51:00 localhost journal[229367]: ethtool ioctl error on tap9b59fcac-19: No such device
Feb 20 04:51:00 localhost journal[229367]: ethtool ioctl error on tap9b59fcac-19: No such device
Feb 20 04:51:00 localhost nova_compute[280804]: 2026-02-20 09:51:00.573 280808 DEBUG oslo_concurrency.processutils [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/3692da63af034f7d594aac7c4b8eda10742f09b0 43720f70-168d-461a-8b52-ba71de6033a0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 04:51:00 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:51:00.607 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:51:00Z, description=, device_id=b20d7241-b6a4-465d-a2a5-bcf872718400, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=aabe60f0-9166-485a-814f-993892de1c69, ip_allocation=immediate, mac_address=fa:16:3e:af:56:91, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T08:22:50Z, description=, dns_domain=, id=84efa4de-646c-469c-b16c-6ab7c3e948cf, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=91bce661d685472eb3e7cacab17bf52a, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['38eee90c-2cdd-4118-ba55-5401f7e61e1e'], tags=[], tenant_id=91bce661d685472eb3e7cacab17bf52a, updated_at=2026-02-20T08:22:56Z, vlan_transparent=None, network_id=84efa4de-646c-469c-b16c-6ab7c3e948cf, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=437, status=DOWN, tags=[], tenant_id=, updated_at=2026-02-20T09:51:00Z on network 84efa4de-646c-469c-b16c-6ab7c3e948cf
Feb 20 04:51:00 localhost nova_compute[280804]: 2026-02-20 09:51:00.611 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 04:51:00 localhost nova_compute[280804]: 2026-02-20 09:51:00.652 280808 DEBUG nova.storage.rbd_utils [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] resizing rbd image 43720f70-168d-461a-8b52-ba71de6033a0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 20 04:51:00 localhost nova_compute[280804]: 2026-02-20 09:51:00.772 280808 DEBUG nova.objects.instance [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Lazy-loading 'migration_context' on Instance uuid 43720f70-168d-461a-8b52-ba71de6033a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 20 04:51:00 localhost nova_compute[280804]: 2026-02-20 09:51:00.787 280808 DEBUG nova.virt.libvirt.driver [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 20 04:51:00 localhost nova_compute[280804]: 2026-02-20 09:51:00.788 280808 DEBUG nova.virt.libvirt.driver [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Ensure instance console log exists: /var/lib/nova/instances/43720f70-168d-461a-8b52-ba71de6033a0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 20 04:51:00 localhost nova_compute[280804]: 2026-02-20 09:51:00.788 280808 DEBUG oslo_concurrency.lockutils [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 04:51:00 localhost nova_compute[280804]: 2026-02-20 09:51:00.789 280808 DEBUG oslo_concurrency.lockutils [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 04:51:00 localhost nova_compute[280804]: 2026-02-20 09:51:00.789 280808 DEBUG oslo_concurrency.lockutils [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 04:51:00 localhost nova_compute[280804]: 2026-02-20 09:51:00.791 280808 DEBUG nova.virt.libvirt.driver [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-20T09:49:57Z,direct_url=,disk_format='qcow2',id=06bd71fd-c415-45d9-b669-46209b7ca2f4,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='91bce661d685472eb3e7cacab17bf52a',properties=ImageMetaProps,protected=,size=21430272,status='active',tags=,updated_at=2026-02-20T09:49:59Z,virtual_size=,visibility=) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '06bd71fd-c415-45d9-b669-46209b7ca2f4'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 20 04:51:00 localhost nova_compute[280804]: 2026-02-20 09:51:00.796 280808 WARNING nova.virt.libvirt.driver [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 20 04:51:00 localhost nova_compute[280804]: 2026-02-20 09:51:00.800 280808 DEBUG nova.virt.libvirt.host [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Searching host: 'np0005625202.localdomain' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 20 04:51:00 localhost nova_compute[280804]: 2026-02-20 09:51:00.801 280808 DEBUG nova.virt.libvirt.host [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 20 04:51:00 localhost nova_compute[280804]: 2026-02-20 09:51:00.802 280808 DEBUG nova.virt.libvirt.host [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Searching host: 'np0005625202.localdomain' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 20 04:51:00 localhost nova_compute[280804]: 2026-02-20 09:51:00.802 280808 DEBUG nova.virt.libvirt.host [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 20 04:51:00 localhost nova_compute[280804]: 2026-02-20 09:51:00.803 280808 DEBUG nova.virt.libvirt.driver [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 20 04:51:00 localhost nova_compute[280804]: 2026-02-20 09:51:00.803 280808 DEBUG nova.virt.hardware [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-20T09:49:56Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='40a6f41a-8891-4900-942e-688a656af142',id=5,is_public=True,memory_mb=128,name='m1.nano',projects=,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-20T09:49:57Z,direct_url=,disk_format='qcow2',id=06bd71fd-c415-45d9-b669-46209b7ca2f4,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='91bce661d685472eb3e7cacab17bf52a',properties=ImageMetaProps,protected=,size=21430272,status='active',tags=,updated_at=2026-02-20T09:49:59Z,virtual_size=,visibility=), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 20 04:51:00 localhost nova_compute[280804]: 2026-02-20 09:51:00.804 280808 DEBUG nova.virt.hardware [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 20 04:51:00 localhost nova_compute[280804]: 2026-02-20 09:51:00.804 280808 DEBUG nova.virt.hardware [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 20 04:51:00 localhost nova_compute[280804]: 2026-02-20 09:51:00.804 280808 DEBUG nova.virt.hardware [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 20 04:51:00 localhost nova_compute[280804]: 2026-02-20 09:51:00.804 280808 DEBUG nova.virt.hardware [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 20 04:51:00 localhost nova_compute[280804]: 2026-02-20 09:51:00.804 280808 DEBUG nova.virt.hardware [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 20 04:51:00 localhost nova_compute[280804]: 2026-02-20 09:51:00.805 280808 DEBUG nova.virt.hardware [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 20 04:51:00 localhost nova_compute[280804]: 2026-02-20 09:51:00.805 280808 DEBUG nova.virt.hardware [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 20 04:51:00 localhost nova_compute[280804]: 2026-02-20 09:51:00.805 280808 DEBUG nova.virt.hardware [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 20 04:51:00 localhost nova_compute[280804]: 2026-02-20 09:51:00.805 280808 DEBUG nova.virt.hardware [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 20 04:51:00 localhost nova_compute[280804]: 2026-02-20 09:51:00.806 280808 DEBUG nova.virt.hardware [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 20 04:51:00 localhost nova_compute[280804]: 2026-02-20 09:51:00.809 280808 DEBUG nova.privsep.utils [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Feb 20 04:51:00 localhost nova_compute[280804]: 2026-02-20 09:51:00.809 280808 DEBUG oslo_concurrency.processutils [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 04:51:00 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 4 addresses
Feb 20 04:51:00 localhost systemd[1]: tmp-crun.yV9qaR.mount: Deactivated successfully.
Feb 20 04:51:00 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:51:00 localhost podman[308424]: 2026-02-20 09:51:00.838650191 +0000 UTC m=+0.040507935 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:51:00 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:51:01 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:51:01.085 263745 INFO neutron.agent.dhcp.agent [None req-eef963d7-0f98-4631-9ad1-e1cc77e77fe3 - - - - - -] DHCP configuration for ports {'aabe60f0-9166-485a-814f-993892de1c69'} is completed#033[00m Feb 20 04:51:01 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Feb 20 04:51:01 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/4020676950' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Feb 20 04:51:01 localhost nova_compute[280804]: 2026-02-20 09:51:01.199 280808 DEBUG oslo_concurrency.processutils [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.390s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:51:01 localhost nova_compute[280804]: 2026-02-20 09:51:01.233 280808 DEBUG nova.storage.rbd_utils [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] rbd image 43720f70-168d-461a-8b52-ba71de6033a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 20 04:51:01 localhost nova_compute[280804]: 2026-02-20 09:51:01.238 280808 DEBUG oslo_concurrency.processutils [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:51:01 localhost podman[308519]: Feb 20 04:51:01 localhost podman[308519]: 2026-02-20 09:51:01.362565116 +0000 UTC m=+0.091018406 container create c0430f42a70aa6b9c8035a1c37becbc5d719d9da192673809f6e7a117fac24ab (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2d38d28f-6e3b-40d7-8d0c-e95c89b81845, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator 
team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Feb 20 04:51:01 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v84: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s Feb 20 04:51:01 localhost systemd[1]: Started libpod-conmon-c0430f42a70aa6b9c8035a1c37becbc5d719d9da192673809f6e7a117fac24ab.scope. Feb 20 04:51:01 localhost systemd[1]: Started libcrun container. Feb 20 04:51:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0e079002679090358ee2b7d0036a683ba73bf16dd2abaa467d159d20555eedd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 20 04:51:01 localhost podman[308519]: 2026-02-20 09:51:01.320020919 +0000 UTC m=+0.048474309 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 20 04:51:01 localhost podman[308519]: 2026-02-20 09:51:01.42812128 +0000 UTC m=+0.156574610 container init c0430f42a70aa6b9c8035a1c37becbc5d719d9da192673809f6e7a117fac24ab (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2d38d28f-6e3b-40d7-8d0c-e95c89b81845, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2) Feb 20 04:51:01 localhost podman[308519]: 2026-02-20 09:51:01.436837673 +0000 UTC m=+0.165290993 container start c0430f42a70aa6b9c8035a1c37becbc5d719d9da192673809f6e7a117fac24ab (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2d38d28f-6e3b-40d7-8d0c-e95c89b81845, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Feb 20 04:51:01 localhost dnsmasq[308558]: started, version 2.85 cachesize 150 Feb 20 04:51:01 localhost dnsmasq[308558]: DNS service limited to local subnets Feb 20 04:51:01 localhost dnsmasq[308558]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 20 04:51:01 localhost dnsmasq[308558]: warning: no upstream servers configured Feb 20 04:51:01 localhost dnsmasq-dhcp[308558]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 20 04:51:01 localhost dnsmasq[308558]: read /var/lib/neutron/dhcp/2d38d28f-6e3b-40d7-8d0c-e95c89b81845/addn_hosts - 0 addresses Feb 20 04:51:01 localhost dnsmasq-dhcp[308558]: read /var/lib/neutron/dhcp/2d38d28f-6e3b-40d7-8d0c-e95c89b81845/host Feb 20 04:51:01 localhost dnsmasq-dhcp[308558]: read /var/lib/neutron/dhcp/2d38d28f-6e3b-40d7-8d0c-e95c89b81845/opts Feb 20 04:51:01 localhost nova_compute[280804]: 2026-02-20 09:51:01.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:51:01 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:51:01.632 263745 INFO neutron.agent.dhcp.agent [None req-c4e25afc-f12d-45dc-bf67-0af89bf94d0e - - - - - -] DHCP configuration for ports {'e1f29399-9d5b-462d-ade8-d209e83a9a55'} is completed#033[00m Feb 20 04:51:01 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Feb 20 04:51:01 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/1455703923' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Feb 20 04:51:01 localhost nova_compute[280804]: 2026-02-20 09:51:01.748 280808 DEBUG oslo_concurrency.processutils [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:51:01 localhost nova_compute[280804]: 2026-02-20 09:51:01.753 280808 DEBUG nova.objects.instance [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Lazy-loading 'pci_devices' on Instance uuid 43720f70-168d-461a-8b52-ba71de6033a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Feb 20 04:51:01 localhost nova_compute[280804]: 2026-02-20 09:51:01.777 280808 DEBUG nova.virt.libvirt.driver [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] End _get_guest_xml xml= [guest domain XML elided: angle-bracket markup was stripped in this capture, leaving only bare syslog continuation prefixes; recoverable element values were: uuid 43720f70-168d-461a-8b52-ba71de6033a0, name instance-00000006, memory 131072, 1 vcpu, nova:name tempest-UnshelveToHostMultiNodesTest-server-1846377785, creationTime 2026-02-20 09:51:00, owner tempest-UnshelveToHostMultiNodesTest-1217794180-project-member / project tempest-UnshelveToHostMultiNodesTest-1217794180, sysinfo manufacturer RDO, product OpenStack Compute, version 27.5.2-0.20260127144738.eaa65f0.el9, os type hvm, rng backend /dev/urandom] _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m Feb 20 04:51:01 localhost nova_compute[280804]: 2026-02-20 09:51:01.824 280808 DEBUG nova.virt.libvirt.driver [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] No BDM found with device name vda, not building metadata.
_build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Feb 20 04:51:01 localhost nova_compute[280804]: 2026-02-20 09:51:01.824 280808 DEBUG nova.virt.libvirt.driver [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Feb 20 04:51:01 localhost nova_compute[280804]: 2026-02-20 09:51:01.825 280808 INFO nova.virt.libvirt.driver [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Using config drive#033[00m Feb 20 04:51:01 localhost nova_compute[280804]: 2026-02-20 09:51:01.860 280808 DEBUG nova.storage.rbd_utils [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] rbd image 43720f70-168d-461a-8b52-ba71de6033a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 20 04:51:02 localhost nova_compute[280804]: 2026-02-20 09:51:02.299 280808 INFO nova.virt.libvirt.driver [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Creating config drive at /var/lib/nova/instances/43720f70-168d-461a-8b52-ba71de6033a0/disk.config#033[00m Feb 20 04:51:02 localhost nova_compute[280804]: 2026-02-20 09:51:02.305 280808 DEBUG oslo_concurrency.processutils [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/43720f70-168d-461a-8b52-ba71de6033a0/disk.config -ldots -allow-lowercase 
-allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpx6xuqgnn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:51:02 localhost nova_compute[280804]: 2026-02-20 09:51:02.427 280808 DEBUG oslo_concurrency.processutils [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/43720f70-168d-461a-8b52-ba71de6033a0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpx6xuqgnn" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:51:02 localhost nova_compute[280804]: 2026-02-20 09:51:02.465 280808 DEBUG nova.storage.rbd_utils [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] rbd image 43720f70-168d-461a-8b52-ba71de6033a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 20 04:51:02 localhost nova_compute[280804]: 2026-02-20 09:51:02.470 280808 DEBUG oslo_concurrency.processutils [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/43720f70-168d-461a-8b52-ba71de6033a0/disk.config 43720f70-168d-461a-8b52-ba71de6033a0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:51:02 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:51:02 localhost 
nova_compute[280804]: 2026-02-20 09:51:02.688 280808 DEBUG oslo_concurrency.processutils [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/43720f70-168d-461a-8b52-ba71de6033a0/disk.config 43720f70-168d-461a-8b52-ba71de6033a0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.219s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:51:02 localhost nova_compute[280804]: 2026-02-20 09:51:02.690 280808 INFO nova.virt.libvirt.driver [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Deleting local config drive /var/lib/nova/instances/43720f70-168d-461a-8b52-ba71de6033a0/disk.config because it was imported into RBD.#033[00m Feb 20 04:51:02 localhost systemd[1]: Started libvirt secret daemon. Feb 20 04:51:02 localhost systemd-machined[205856]: New machine qemu-1-instance-00000006. Feb 20 04:51:02 localhost systemd[1]: Started Virtual Machine qemu-1-instance-00000006. 
Feb 20 04:51:03 localhost nova_compute[280804]: 2026-02-20 09:51:03.135 280808 DEBUG nova.virt.driver [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] Emitting event Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Feb 20 04:51:03 localhost nova_compute[280804]: 2026-02-20 09:51:03.137 280808 INFO nova.compute.manager [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] VM Resumed (Lifecycle Event)#033[00m Feb 20 04:51:03 localhost nova_compute[280804]: 2026-02-20 09:51:03.140 280808 DEBUG nova.compute.manager [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Instance event wait completed in 0 seconds for wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m Feb 20 04:51:03 localhost nova_compute[280804]: 2026-02-20 09:51:03.141 280808 DEBUG nova.virt.libvirt.driver [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m Feb 20 04:51:03 localhost nova_compute[280804]: 2026-02-20 09:51:03.145 280808 INFO nova.virt.libvirt.driver [-] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Instance spawned successfully.#033[00m Feb 20 04:51:03 localhost nova_compute[280804]: 2026-02-20 09:51:03.146 280808 DEBUG nova.virt.libvirt.driver [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 
'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m Feb 20 04:51:03 localhost nova_compute[280804]: 2026-02-20 09:51:03.163 280808 DEBUG nova.compute.manager [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 20 04:51:03 localhost nova_compute[280804]: 2026-02-20 09:51:03.173 280808 DEBUG nova.compute.manager [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Feb 20 04:51:03 localhost nova_compute[280804]: 2026-02-20 09:51:03.179 280808 DEBUG nova.virt.libvirt.driver [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Feb 20 04:51:03 localhost nova_compute[280804]: 2026-02-20 09:51:03.180 280808 DEBUG nova.virt.libvirt.driver [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Feb 20 04:51:03 localhost nova_compute[280804]: 2026-02-20 09:51:03.181 280808 DEBUG nova.virt.libvirt.driver [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 
ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Feb 20 04:51:03 localhost nova_compute[280804]: 2026-02-20 09:51:03.181 280808 DEBUG nova.virt.libvirt.driver [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Feb 20 04:51:03 localhost nova_compute[280804]: 2026-02-20 09:51:03.182 280808 DEBUG nova.virt.libvirt.driver [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Feb 20 04:51:03 localhost nova_compute[280804]: 2026-02-20 09:51:03.183 280808 DEBUG nova.virt.libvirt.driver [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Feb 20 04:51:03 localhost nova_compute[280804]: 2026-02-20 09:51:03.211 280808 INFO nova.compute.manager [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] During sync_power_state the instance has a pending task (spawning). 
Skip.#033[00m Feb 20 04:51:03 localhost nova_compute[280804]: 2026-02-20 09:51:03.211 280808 DEBUG nova.virt.driver [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] Emitting event Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Feb 20 04:51:03 localhost nova_compute[280804]: 2026-02-20 09:51:03.212 280808 INFO nova.compute.manager [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] VM Started (Lifecycle Event)#033[00m Feb 20 04:51:03 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:51:03.221 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:51:02Z, description=, device_id=8324c5e0-60f9-4e20-b458-6e55d48eccd9, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=63d4c87d-a4e3-43b4-83aa-61fe4c1cd199, ip_allocation=immediate, mac_address=fa:16:3e:fd:e8:da, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T08:22:50Z, description=, dns_domain=, id=84efa4de-646c-469c-b16c-6ab7c3e948cf, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=91bce661d685472eb3e7cacab17bf52a, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['38eee90c-2cdd-4118-ba55-5401f7e61e1e'], tags=[], tenant_id=91bce661d685472eb3e7cacab17bf52a, updated_at=2026-02-20T08:22:56Z, vlan_transparent=None, network_id=84efa4de-646c-469c-b16c-6ab7c3e948cf, port_security_enabled=False, project_id=, 
qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=450, status=DOWN, tags=[], tenant_id=, updated_at=2026-02-20T09:51:02Z on network 84efa4de-646c-469c-b16c-6ab7c3e948cf#033[00m Feb 20 04:51:03 localhost nova_compute[280804]: 2026-02-20 09:51:03.248 280808 DEBUG nova.compute.manager [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 20 04:51:03 localhost nova_compute[280804]: 2026-02-20 09:51:03.251 280808 DEBUG nova.compute.manager [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Feb 20 04:51:03 localhost nova_compute[280804]: 2026-02-20 09:51:03.274 280808 INFO nova.compute.manager [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Took 5.39 seconds to spawn the instance on the hypervisor.#033[00m Feb 20 04:51:03 localhost nova_compute[280804]: 2026-02-20 09:51:03.275 280808 DEBUG nova.compute.manager [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 20 04:51:03 localhost nova_compute[280804]: 2026-02-20 09:51:03.276 280808 INFO nova.compute.manager [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] [instance: 
43720f70-168d-461a-8b52-ba71de6033a0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m Feb 20 04:51:03 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:51:03.325 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:51:02Z, description=, device_id=b20d7241-b6a4-465d-a2a5-bcf872718400, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=4bf8af85-400a-40fa-b850-cc8c5305bcbd, ip_allocation=immediate, mac_address=fa:16:3e:11:87:b0, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:50:56Z, description=, dns_domain=, id=fbb168cc-bb2d-404a-a8d2-bf0aaa99fcf4, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPsNegativeTestJSON-1302609657-network, port_security_enabled=True, project_id=27062c1e0b7d442e92145af8caaac310, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=55398, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=394, status=ACTIVE, subnets=['52e053c8-5a17-4381-a98c-cfbfa691a088'], tags=[], tenant_id=27062c1e0b7d442e92145af8caaac310, updated_at=2026-02-20T09:50:57Z, vlan_transparent=None, network_id=fbb168cc-bb2d-404a-a8d2-bf0aaa99fcf4, port_security_enabled=False, project_id=27062c1e0b7d442e92145af8caaac310, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=452, status=DOWN, tags=[], tenant_id=27062c1e0b7d442e92145af8caaac310, updated_at=2026-02-20T09:51:02Z on network fbb168cc-bb2d-404a-a8d2-bf0aaa99fcf4#033[00m Feb 20 04:51:03 localhost nova_compute[280804]: 2026-02-20 09:51:03.339 280808 INFO 
nova.compute.manager [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Took 6.71 seconds to build instance.#033[00m Feb 20 04:51:03 localhost nova_compute[280804]: 2026-02-20 09:51:03.359 280808 DEBUG oslo_concurrency.lockutils [None req-7d963cde-60a2-4808-9fd5-7d0c3ee49cf0 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Lock "43720f70-168d-461a-8b52-ba71de6033a0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" :: held 6.863s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:51:03 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v85: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s Feb 20 04:51:03 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 5 addresses Feb 20 04:51:03 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:51:03 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:51:03 localhost podman[308713]: 2026-02-20 09:51:03.46056929 +0000 UTC m=+0.071150265 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Feb 20 04:51:03 localhost 
podman[308745]: 2026-02-20 09:51:03.579099301 +0000 UTC m=+0.048692694 container kill 4afced22f2964dc28f76598144d788c251e22d9df2849730be9fadc9c78ec6bd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fbb168cc-bb2d-404a-a8d2-bf0aaa99fcf4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:51:03 localhost dnsmasq[308235]: read /var/lib/neutron/dhcp/fbb168cc-bb2d-404a-a8d2-bf0aaa99fcf4/addn_hosts - 1 addresses Feb 20 04:51:03 localhost dnsmasq-dhcp[308235]: read /var/lib/neutron/dhcp/fbb168cc-bb2d-404a-a8d2-bf0aaa99fcf4/host Feb 20 04:51:03 localhost dnsmasq-dhcp[308235]: read /var/lib/neutron/dhcp/fbb168cc-bb2d-404a-a8d2-bf0aaa99fcf4/opts Feb 20 04:51:03 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:51:03.703 263745 INFO neutron.agent.dhcp.agent [None req-1e46da55-6fd3-4a75-bd7c-55a783a44280 - - - - - -] DHCP configuration for ports {'63d4c87d-a4e3-43b4-83aa-61fe4c1cd199'} is completed#033[00m Feb 20 04:51:03 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:51:03.849 263745 INFO neutron.agent.dhcp.agent [None req-89c68b9c-9df6-4f94-88b4-f80225a1d611 - - - - - -] DHCP configuration for ports {'4bf8af85-400a-40fa-b850-cc8c5305bcbd'} is completed#033[00m Feb 20 04:51:04 localhost nova_compute[280804]: 2026-02-20 09:51:04.645 280808 DEBUG oslo_concurrency.lockutils [None req-103f7c14-f009-43fd-a4d6-94ba0f8d20aa 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Acquiring lock "43720f70-168d-461a-8b52-ba71de6033a0" by "nova.compute.manager.ComputeManager.shelve_instance..do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 
04:51:04 localhost nova_compute[280804]: 2026-02-20 09:51:04.645 280808 DEBUG oslo_concurrency.lockutils [None req-103f7c14-f009-43fd-a4d6-94ba0f8d20aa 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Lock "43720f70-168d-461a-8b52-ba71de6033a0" acquired by "nova.compute.manager.ComputeManager.shelve_instance..do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:51:04 localhost nova_compute[280804]: 2026-02-20 09:51:04.646 280808 INFO nova.compute.manager [None req-103f7c14-f009-43fd-a4d6-94ba0f8d20aa 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Shelving#033[00m Feb 20 04:51:04 localhost nova_compute[280804]: 2026-02-20 09:51:04.667 280808 DEBUG nova.virt.libvirt.driver [None req-103f7c14-f009-43fd-a4d6-94ba0f8d20aa 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m Feb 20 04:51:05 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:51:05.205 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:51:04Z, description=, device_id=bf5809dc-d63e-49cb-96cb-266b8d503e70, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=0cd72fd4-13af-4a23-b4f8-8b30cd5e4e23, ip_allocation=immediate, mac_address=fa:16:3e:a4:08:6e, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T08:22:50Z, description=, dns_domain=, id=84efa4de-646c-469c-b16c-6ab7c3e948cf, 
ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=91bce661d685472eb3e7cacab17bf52a, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['38eee90c-2cdd-4118-ba55-5401f7e61e1e'], tags=[], tenant_id=91bce661d685472eb3e7cacab17bf52a, updated_at=2026-02-20T08:22:56Z, vlan_transparent=None, network_id=84efa4de-646c-469c-b16c-6ab7c3e948cf, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=469, status=DOWN, tags=[], tenant_id=, updated_at=2026-02-20T09:51:04Z on network 84efa4de-646c-469c-b16c-6ab7c3e948cf#033[00m Feb 20 04:51:05 localhost sshd[308773]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:51:05 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v86: 177 pgs: 177 active+clean; 192 MiB data, 775 MiB used, 41 GiB / 42 GiB avail; 3.1 MiB/s rd, 1.8 MiB/s wr, 89 op/s Feb 20 04:51:05 localhost systemd[1]: tmp-crun.4s2NNS.mount: Deactivated successfully. 
Feb 20 04:51:05 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 6 addresses Feb 20 04:51:05 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:51:05 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:51:05 localhost podman[308788]: 2026-02-20 09:51:05.442490348 +0000 UTC m=+0.057755745 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3) Feb 20 04:51:05 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:51:05.725 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:51:02Z, description=, device_id=b20d7241-b6a4-465d-a2a5-bcf872718400, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=4bf8af85-400a-40fa-b850-cc8c5305bcbd, ip_allocation=immediate, mac_address=fa:16:3e:11:87:b0, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:50:56Z, description=, dns_domain=, id=fbb168cc-bb2d-404a-a8d2-bf0aaa99fcf4, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPsNegativeTestJSON-1302609657-network, 
port_security_enabled=True, project_id=27062c1e0b7d442e92145af8caaac310, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=55398, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=394, status=ACTIVE, subnets=['52e053c8-5a17-4381-a98c-cfbfa691a088'], tags=[], tenant_id=27062c1e0b7d442e92145af8caaac310, updated_at=2026-02-20T09:50:57Z, vlan_transparent=None, network_id=fbb168cc-bb2d-404a-a8d2-bf0aaa99fcf4, port_security_enabled=False, project_id=27062c1e0b7d442e92145af8caaac310, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=452, status=DOWN, tags=[], tenant_id=27062c1e0b7d442e92145af8caaac310, updated_at=2026-02-20T09:51:02Z on network fbb168cc-bb2d-404a-a8d2-bf0aaa99fcf4#033[00m Feb 20 04:51:05 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:51:05.740 263745 INFO neutron.agent.dhcp.agent [None req-cf255e01-8a60-49e4-a32c-066d184b9fad - - - - - -] DHCP configuration for ports {'0cd72fd4-13af-4a23-b4f8-8b30cd5e4e23'} is completed#033[00m Feb 20 04:51:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:05.917 161766 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:51:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:05.918 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:51:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:05.918 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by 
"neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:51:05 localhost dnsmasq[308235]: read /var/lib/neutron/dhcp/fbb168cc-bb2d-404a-a8d2-bf0aaa99fcf4/addn_hosts - 1 addresses Feb 20 04:51:05 localhost dnsmasq-dhcp[308235]: read /var/lib/neutron/dhcp/fbb168cc-bb2d-404a-a8d2-bf0aaa99fcf4/host Feb 20 04:51:05 localhost podman[308826]: 2026-02-20 09:51:05.935941039 +0000 UTC m=+0.059805011 container kill 4afced22f2964dc28f76598144d788c251e22d9df2849730be9fadc9c78ec6bd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fbb168cc-bb2d-404a-a8d2-bf0aaa99fcf4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3) Feb 20 04:51:05 localhost dnsmasq-dhcp[308235]: read /var/lib/neutron/dhcp/fbb168cc-bb2d-404a-a8d2-bf0aaa99fcf4/opts Feb 20 04:51:06 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:51:06.221 263745 INFO neutron.agent.dhcp.agent [None req-9308fcdb-ea11-4a33-883e-845f392cbeef - - - - - -] DHCP configuration for ports {'4bf8af85-400a-40fa-b850-cc8c5305bcbd'} is completed#033[00m Feb 20 04:51:06 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:51:06.464 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:51:05Z, description=, device_id=8324c5e0-60f9-4e20-b458-6e55d48eccd9, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], 
id=1a2e1397-d5fd-40b5-93a9-29fa9ee856d6, ip_allocation=immediate, mac_address=fa:16:3e:e2:39:6a, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:50:58Z, description=, dns_domain=, id=2d38d28f-6e3b-40d7-8d0c-e95c89b81845, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-LiveAutoBlockMigrationV225Test-1950505918-network, port_security_enabled=True, project_id=5605ba7cb0df4223b48ebf8a1894cdf1, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=17964, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=420, status=ACTIVE, subnets=['22e14b0e-85bb-49a7-9abb-df5d7712a89c'], tags=[], tenant_id=5605ba7cb0df4223b48ebf8a1894cdf1, updated_at=2026-02-20T09:50:59Z, vlan_transparent=None, network_id=2d38d28f-6e3b-40d7-8d0c-e95c89b81845, port_security_enabled=False, project_id=5605ba7cb0df4223b48ebf8a1894cdf1, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=471, status=DOWN, tags=[], tenant_id=5605ba7cb0df4223b48ebf8a1894cdf1, updated_at=2026-02-20T09:51:06Z on network 2d38d28f-6e3b-40d7-8d0c-e95c89b81845#033[00m Feb 20 04:51:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 04:51:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. 
Feb 20 04:51:06 localhost podman[308864]: 2026-02-20 09:51:06.706134502 +0000 UTC m=+0.074421971 container kill c0430f42a70aa6b9c8035a1c37becbc5d719d9da192673809f6e7a117fac24ab (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2d38d28f-6e3b-40d7-8d0c-e95c89b81845, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:51:06 localhost dnsmasq[308558]: read /var/lib/neutron/dhcp/2d38d28f-6e3b-40d7-8d0c-e95c89b81845/addn_hosts - 1 addresses Feb 20 04:51:06 localhost dnsmasq-dhcp[308558]: read /var/lib/neutron/dhcp/2d38d28f-6e3b-40d7-8d0c-e95c89b81845/host Feb 20 04:51:06 localhost dnsmasq-dhcp[308558]: read /var/lib/neutron/dhcp/2d38d28f-6e3b-40d7-8d0c-e95c89b81845/opts Feb 20 04:51:06 localhost podman[308875]: 2026-02-20 09:51:06.773512324 +0000 UTC m=+0.071566325 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Feb 20 04:51:06 localhost ovn_controller[155916]: 2026-02-20T09:51:06Z|00049|memory|INFO|peak resident set size grew 58% in last 2132.3 seconds, from 13016 kB to 20616 kB Feb 20 04:51:06 localhost ovn_controller[155916]: 2026-02-20T09:51:06Z|00050|memory|INFO|idl-cells-OVN_Southbound:8893 idl-cells-Open_vSwitch:1155 if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:1 lflow-cache-entries-cache-expr:269 lflow-cache-entries-cache-matches:254 lflow-cache-size-KB:1254 local_datapath_usage-KB:2 ofctrl_desired_flow_usage-KB:482 ofctrl_installed_flow_usage-KB:354 ofctrl_sb_flow_ref_usage-KB:182 Feb 20 04:51:06 localhost podman[308876]: 2026-02-20 09:51:06.835188905 +0000 UTC m=+0.133437591 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Feb 20 04:51:06 localhost podman[308876]: 2026-02-20 09:51:06.845685935 +0000 UTC m=+0.143934601 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 
'6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Feb 20 04:51:06 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. 
Feb 20 04:51:06 localhost podman[308875]: 2026-02-20 09:51:06.859615758 +0000 UTC m=+0.157669739 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:51:06 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. 
Feb 20 04:51:06 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:51:06.973 263745 INFO neutron.agent.dhcp.agent [None req-dc9c34f8-506d-40b5-aa3e-fc25596968a2 - - - - - -] DHCP configuration for ports {'1a2e1397-d5fd-40b5-93a9-29fa9ee856d6'} is completed#033[00m Feb 20 04:51:07 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:51:07.265 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:51:06Z, description=, device_id=11869463-2b1a-4016-a65b-70d38a714c73, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=7dd5df7d-c7e7-4641-91f6-7db2c4a98ac2, ip_allocation=immediate, mac_address=fa:16:3e:3d:c6:a3, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T08:22:50Z, description=, dns_domain=, id=84efa4de-646c-469c-b16c-6ab7c3e948cf, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=91bce661d685472eb3e7cacab17bf52a, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['38eee90c-2cdd-4118-ba55-5401f7e61e1e'], tags=[], tenant_id=91bce661d685472eb3e7cacab17bf52a, updated_at=2026-02-20T08:22:56Z, vlan_transparent=None, network_id=84efa4de-646c-469c-b16c-6ab7c3e948cf, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=472, status=DOWN, tags=[], tenant_id=, updated_at=2026-02-20T09:51:06Z on network 84efa4de-646c-469c-b16c-6ab7c3e948cf#033[00m Feb 20 04:51:07 localhost 
ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v87: 177 pgs: 177 active+clean; 192 MiB data, 775 MiB used, 41 GiB / 42 GiB avail; 3.1 MiB/s rd, 1.8 MiB/s wr, 89 op/s Feb 20 04:51:07 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 7 addresses Feb 20 04:51:07 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:51:07 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:51:07 localhost podman[308944]: 2026-02-20 09:51:07.478323659 +0000 UTC m=+0.066487780 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127) Feb 20 04:51:07 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:51:07 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:51:07.862 263745 INFO neutron.agent.dhcp.agent [None req-56d03c48-2ab9-4264-bd8a-48c08cd35a94 - - - - - -] DHCP configuration for ports {'7dd5df7d-c7e7-4641-91f6-7db2c4a98ac2'} is completed#033[00m Feb 20 04:51:07 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:51:07.994 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:51:05Z, 
description=, device_id=8324c5e0-60f9-4e20-b458-6e55d48eccd9, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=1a2e1397-d5fd-40b5-93a9-29fa9ee856d6, ip_allocation=immediate, mac_address=fa:16:3e:e2:39:6a, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:50:58Z, description=, dns_domain=, id=2d38d28f-6e3b-40d7-8d0c-e95c89b81845, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-LiveAutoBlockMigrationV225Test-1950505918-network, port_security_enabled=True, project_id=5605ba7cb0df4223b48ebf8a1894cdf1, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=17964, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=420, status=ACTIVE, subnets=['22e14b0e-85bb-49a7-9abb-df5d7712a89c'], tags=[], tenant_id=5605ba7cb0df4223b48ebf8a1894cdf1, updated_at=2026-02-20T09:50:59Z, vlan_transparent=None, network_id=2d38d28f-6e3b-40d7-8d0c-e95c89b81845, port_security_enabled=False, project_id=5605ba7cb0df4223b48ebf8a1894cdf1, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=471, status=DOWN, tags=[], tenant_id=5605ba7cb0df4223b48ebf8a1894cdf1, updated_at=2026-02-20T09:51:06Z on network 2d38d28f-6e3b-40d7-8d0c-e95c89b81845#033[00m Feb 20 04:51:08 localhost dnsmasq[308558]: read /var/lib/neutron/dhcp/2d38d28f-6e3b-40d7-8d0c-e95c89b81845/addn_hosts - 1 addresses Feb 20 04:51:08 localhost podman[308982]: 2026-02-20 09:51:08.194023365 +0000 UTC m=+0.047860492 container kill c0430f42a70aa6b9c8035a1c37becbc5d719d9da192673809f6e7a117fac24ab (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2d38d28f-6e3b-40d7-8d0c-e95c89b81845, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true) Feb 20 04:51:08 localhost dnsmasq-dhcp[308558]: read /var/lib/neutron/dhcp/2d38d28f-6e3b-40d7-8d0c-e95c89b81845/host Feb 20 04:51:08 localhost dnsmasq-dhcp[308558]: read /var/lib/neutron/dhcp/2d38d28f-6e3b-40d7-8d0c-e95c89b81845/opts Feb 20 04:51:08 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:51:08.417 263745 INFO neutron.agent.dhcp.agent [None req-a612ece0-fc9f-4430-b223-d2da801004e0 - - - - - -] DHCP configuration for ports {'1a2e1397-d5fd-40b5-93a9-29fa9ee856d6'} is completed#033[00m Feb 20 04:51:09 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v88: 177 pgs: 177 active+clean; 192 MiB data, 775 MiB used, 41 GiB / 42 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 107 op/s Feb 20 04:51:11 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v89: 177 pgs: 177 active+clean; 192 MiB data, 775 MiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s Feb 20 04:51:11 localhost neutron_sriov_agent[256551]: 2026-02-20 09:51:11.701 2 INFO neutron.agent.securitygroups_rpc [None req-12bd9327-2dd3-43c4-b987-ac4cbf3c449a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Security group member updated ['07d2fe18-fbbf-4547-931e-bb55f378bade']#033[00m Feb 20 04:51:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1. 
Feb 20 04:51:11 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:51:11.941 263745 INFO neutron.agent.linux.ip_lib [None req-5af1adce-6502-43b4-9367-927e04b9c35a - - - - - -] Device tapce80821b-ff cannot be used as it has no MAC address#033[00m Feb 20 04:51:11 localhost podman[309006]: 2026-02-20 09:51:11.955463087 +0000 UTC m=+0.086644299 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 20 04:51:11 localhost kernel: device tapce80821b-ff entered promiscuous mode Feb 20 04:51:11 localhost NetworkManager[5967]: [1771581071.9764] manager: (tapce80821b-ff): new Generic device (/org/freedesktop/NetworkManager/Devices/17) Feb 20 04:51:11 localhost ovn_controller[155916]: 2026-02-20T09:51:11Z|00051|binding|INFO|Claiming lport ce80821b-ff48-45be-bf0a-b5c50acb4f30 for this chassis. Feb 20 04:51:11 localhost ovn_controller[155916]: 2026-02-20T09:51:11Z|00052|binding|INFO|ce80821b-ff48-45be-bf0a-b5c50acb4f30: Claiming unknown Feb 20 04:51:11 localhost systemd-udevd[309038]: Network interface NamePolicy= disabled on kernel command line. 
Feb 20 04:51:11 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:11.988 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp77c820b4-9646-5dae-aabe-b13a62b01d06-765f9af1-1e08-4e8a-9dd3-51ccbce49ec5', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-765f9af1-1e08-4e8a-9dd3-51ccbce49ec5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b76abbcaaa0b49b3abfcdaf1607439fd', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=543d5e20-d525-4044-b1a1-7b9f1b7dfe15, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=ce80821b-ff48-45be-bf0a-b5c50acb4f30) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:51:11 localhost podman[309006]: 2026-02-20 09:51:11.990824863 +0000 UTC m=+0.122006125 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 20 04:51:11 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:11.994 161766 INFO neutron.agent.ovn.metadata.agent [-] Port ce80821b-ff48-45be-bf0a-b5c50acb4f30 in datapath 765f9af1-1e08-4e8a-9dd3-51ccbce49ec5 bound to our chassis#033[00m Feb 20 04:51:11 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:11.997 161766 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 765f9af1-1e08-4e8a-9dd3-51ccbce49ec5 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 20 04:51:12 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:11.997 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[a4f72263-d464-452f-86c7-05c57874f93e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:12 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully. 
Feb 20 04:51:12 localhost ovn_controller[155916]: 2026-02-20T09:51:12Z|00053|binding|INFO|Setting lport ce80821b-ff48-45be-bf0a-b5c50acb4f30 ovn-installed in OVS Feb 20 04:51:12 localhost ovn_controller[155916]: 2026-02-20T09:51:12Z|00054|binding|INFO|Setting lport ce80821b-ff48-45be-bf0a-b5c50acb4f30 up in Southbound Feb 20 04:51:12 localhost dnsmasq[308235]: read /var/lib/neutron/dhcp/fbb168cc-bb2d-404a-a8d2-bf0aaa99fcf4/addn_hosts - 0 addresses Feb 20 04:51:12 localhost dnsmasq-dhcp[308235]: read /var/lib/neutron/dhcp/fbb168cc-bb2d-404a-a8d2-bf0aaa99fcf4/host Feb 20 04:51:12 localhost dnsmasq-dhcp[308235]: read /var/lib/neutron/dhcp/fbb168cc-bb2d-404a-a8d2-bf0aaa99fcf4/opts Feb 20 04:51:12 localhost podman[309070]: 2026-02-20 09:51:12.316861655 +0000 UTC m=+0.067039415 container kill 4afced22f2964dc28f76598144d788c251e22d9df2849730be9fadc9c78ec6bd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fbb168cc-bb2d-404a-a8d2-bf0aaa99fcf4, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127) Feb 20 04:51:12 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:51:12 localhost ovn_controller[155916]: 2026-02-20T09:51:12Z|00055|binding|INFO|Releasing lport 51f0d734-55af-400a-9d9e-de45acced278 from this chassis (sb_readonly=0) Feb 20 04:51:12 localhost ovn_controller[155916]: 2026-02-20T09:51:12Z|00056|binding|INFO|Setting lport 51f0d734-55af-400a-9d9e-de45acced278 down in Southbound Feb 20 04:51:12 localhost kernel: device tap51f0d734-55 left promiscuous mode Feb 20 04:51:12 localhost 
ovn_metadata_agent[161761]: 2026-02-20 09:51:12.541 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp77c820b4-9646-5dae-aabe-b13a62b01d06-fbb168cc-bb2d-404a-a8d2-bf0aaa99fcf4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbb168cc-bb2d-404a-a8d2-bf0aaa99fcf4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '27062c1e0b7d442e92145af8caaac310', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625202.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a7cfb952-aa76-40f1-97c6-9336f13048f7, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=51f0d734-55af-400a-9d9e-de45acced278) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:51:12 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:12.543 161766 INFO neutron.agent.ovn.metadata.agent [-] Port 51f0d734-55af-400a-9d9e-de45acced278 in datapath fbb168cc-bb2d-404a-a8d2-bf0aaa99fcf4 unbound from our chassis#033[00m Feb 20 04:51:12 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:12.547 161766 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fbb168cc-bb2d-404a-a8d2-bf0aaa99fcf4, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 20 04:51:12 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:12.548 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[4da23d1c-3f6a-48d7-a8f7-1ccda84fcc26]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:12 localhost podman[309129]: Feb 20 04:51:12 localhost podman[309129]: 2026-02-20 09:51:12.94132558 +0000 UTC m=+0.093719128 container create f671d58bc6bd8824a346a90f3b9f8dc8d1508dd847c6fbb467f7e1ea16f2842a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-765f9af1-1e08-4e8a-9dd3-51ccbce49ec5, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:51:12 localhost podman[309129]: 2026-02-20 09:51:12.892564956 +0000 UTC m=+0.044958534 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 20 04:51:13 localhost systemd[1]: Started libpod-conmon-f671d58bc6bd8824a346a90f3b9f8dc8d1508dd847c6fbb467f7e1ea16f2842a.scope. Feb 20 04:51:13 localhost systemd[1]: tmp-crun.w3e7vj.mount: Deactivated successfully. Feb 20 04:51:13 localhost systemd[1]: Started libcrun container. 
Feb 20 04:51:13 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/439ae5d9e6f896254e6ddcbcc8f5cdbe007b8c5e98e6c8514f066e5915d7adc7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 20 04:51:13 localhost podman[309129]: 2026-02-20 09:51:13.062025819 +0000 UTC m=+0.214419377 container init f671d58bc6bd8824a346a90f3b9f8dc8d1508dd847c6fbb467f7e1ea16f2842a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-765f9af1-1e08-4e8a-9dd3-51ccbce49ec5, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS) Feb 20 04:51:13 localhost dnsmasq[309147]: started, version 2.85 cachesize 150 Feb 20 04:51:13 localhost dnsmasq[309147]: DNS service limited to local subnets Feb 20 04:51:13 localhost dnsmasq[309147]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 20 04:51:13 localhost dnsmasq[309147]: warning: no upstream servers configured Feb 20 04:51:13 localhost dnsmasq-dhcp[309147]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 20 04:51:13 localhost dnsmasq[309147]: read /var/lib/neutron/dhcp/765f9af1-1e08-4e8a-9dd3-51ccbce49ec5/addn_hosts - 0 addresses Feb 20 04:51:13 localhost dnsmasq-dhcp[309147]: read /var/lib/neutron/dhcp/765f9af1-1e08-4e8a-9dd3-51ccbce49ec5/host Feb 20 04:51:13 localhost podman[309129]: 2026-02-20 09:51:13.084144841 +0000 UTC m=+0.236538399 container start f671d58bc6bd8824a346a90f3b9f8dc8d1508dd847c6fbb467f7e1ea16f2842a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-765f9af1-1e08-4e8a-9dd3-51ccbce49ec5, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Feb 20 04:51:13 localhost dnsmasq-dhcp[309147]: read /var/lib/neutron/dhcp/765f9af1-1e08-4e8a-9dd3-51ccbce49ec5/opts Feb 20 04:51:13 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:51:13.275 263745 INFO neutron.agent.dhcp.agent [None req-0abbade6-ba15-4c95-ad32-f77c6fa4846a - - - - - -] DHCP configuration for ports {'36704aed-de77-44b3-a30f-5112ad78a960'} is completed#033[00m Feb 20 04:51:13 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v90: 177 pgs: 177 active+clean; 192 MiB data, 775 MiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s Feb 20 04:51:13 localhost systemd[1]: tmp-crun.miFU1f.mount: Deactivated successfully. 
Feb 20 04:51:14 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:51:14.302 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:51:13Z, description=, device_id=0c6fe606-410d-41ea-af07-351a110d6c70, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=862ef07c-c6d5-41d7-89c1-af40b39fcde5, ip_allocation=immediate, mac_address=fa:16:3e:02:4d:9f, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T08:22:50Z, description=, dns_domain=, id=84efa4de-646c-469c-b16c-6ab7c3e948cf, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=91bce661d685472eb3e7cacab17bf52a, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['38eee90c-2cdd-4118-ba55-5401f7e61e1e'], tags=[], tenant_id=91bce661d685472eb3e7cacab17bf52a, updated_at=2026-02-20T08:22:56Z, vlan_transparent=None, network_id=84efa4de-646c-469c-b16c-6ab7c3e948cf, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=516, status=DOWN, tags=[], tenant_id=, updated_at=2026-02-20T09:51:13Z on network 84efa4de-646c-469c-b16c-6ab7c3e948cf#033[00m Feb 20 04:51:14 localhost podman[309166]: 2026-02-20 09:51:14.528336615 +0000 UTC m=+0.064696581 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Feb 20 04:51:14 localhost systemd[1]: tmp-crun.ZrwK7W.mount: Deactivated successfully. Feb 20 04:51:14 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 8 addresses Feb 20 04:51:14 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:51:14 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:51:14 localhost nova_compute[280804]: 2026-02-20 09:51:14.707 280808 DEBUG nova.virt.libvirt.driver [None req-103f7c14-f009-43fd-a4d6-94ba0f8d20aa 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m Feb 20 04:51:14 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:51:14.734 263745 INFO neutron.agent.dhcp.agent [None req-2a717e53-1474-4abb-936b-544904c2dfec - - - - - -] DHCP configuration for ports {'862ef07c-c6d5-41d7-89c1-af40b39fcde5'} is completed#033[00m Feb 20 04:51:15 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v91: 177 pgs: 177 active+clean; 217 MiB data, 865 MiB used, 41 GiB / 42 GiB avail; 2.2 MiB/s rd, 3.8 MiB/s wr, 149 op/s Feb 20 04:51:15 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e93 do_prune osdmap full prune enabled Feb 20 04:51:15 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e94 e94: 6 total, 6 up, 6 in Feb 20 04:51:15 
localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e94: 6 total, 6 up, 6 in Feb 20 04:51:15 localhost systemd[1]: tmp-crun.teBYKT.mount: Deactivated successfully. Feb 20 04:51:15 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 7 addresses Feb 20 04:51:15 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:51:15 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:51:15 localhost podman[309204]: 2026-02-20 09:51:15.778125338 +0000 UTC m=+0.068342850 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127) Feb 20 04:51:16 localhost podman[241347]: time="2026-02-20T09:51:16Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 20 04:51:16 localhost podman[241347]: @ - - [20/Feb/2026:09:51:16 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 163187 "" "Go-http-client/1.1" Feb 20 04:51:16 localhost podman[241347]: @ - - [20/Feb/2026:09:51:16 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20187 "" "Go-http-client/1.1" Feb 20 04:51:16 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:51:16.233 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, 
binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:51:16Z, description=, device_id=0c6fe606-410d-41ea-af07-351a110d6c70, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=80288090-e99e-427d-994c-a9b2d1560fe1, ip_allocation=immediate, mac_address=fa:16:3e:9f:db:8e, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:51:10Z, description=, dns_domain=, id=765f9af1-1e08-4e8a-9dd3-51ccbce49ec5, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ImagesNegativeTestJSON-1949265633-network, port_security_enabled=True, project_id=b76abbcaaa0b49b3abfcdaf1607439fd, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=58786, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=494, status=ACTIVE, subnets=['84e8fb8b-9893-4faa-9270-d3888d23a984'], tags=[], tenant_id=b76abbcaaa0b49b3abfcdaf1607439fd, updated_at=2026-02-20T09:51:11Z, vlan_transparent=None, network_id=765f9af1-1e08-4e8a-9dd3-51ccbce49ec5, port_security_enabled=False, project_id=b76abbcaaa0b49b3abfcdaf1607439fd, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=524, status=DOWN, tags=[], tenant_id=b76abbcaaa0b49b3abfcdaf1607439fd, updated_at=2026-02-20T09:51:16Z on network 765f9af1-1e08-4e8a-9dd3-51ccbce49ec5#033[00m Feb 20 04:51:16 localhost neutron_sriov_agent[256551]: 2026-02-20 09:51:16.266 2 INFO neutron.agent.securitygroups_rpc [None req-dd3e0c14-3c22-4790-87f5-ba03a5ef1aea ba15d0e9919d4594a2e6e9d6b3414a5e e704aae5b1ba49d59262f9aa0c366fb4 - - default default] Security group member updated ['6a912071-fd9c-4d5f-8453-7f993db3506d']#033[00m Feb 20 04:51:16 localhost dnsmasq[309147]: read 
/var/lib/neutron/dhcp/765f9af1-1e08-4e8a-9dd3-51ccbce49ec5/addn_hosts - 1 addresses Feb 20 04:51:16 localhost dnsmasq-dhcp[309147]: read /var/lib/neutron/dhcp/765f9af1-1e08-4e8a-9dd3-51ccbce49ec5/host Feb 20 04:51:16 localhost dnsmasq-dhcp[309147]: read /var/lib/neutron/dhcp/765f9af1-1e08-4e8a-9dd3-51ccbce49ec5/opts Feb 20 04:51:16 localhost podman[309240]: 2026-02-20 09:51:16.46488052 +0000 UTC m=+0.059338698 container kill f671d58bc6bd8824a346a90f3b9f8dc8d1508dd847c6fbb467f7e1ea16f2842a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-765f9af1-1e08-4e8a-9dd3-51ccbce49ec5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true) Feb 20 04:51:16 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:51:16.723 263745 INFO neutron.agent.dhcp.agent [None req-475e7cad-ff7f-4c4c-8ee6-6b51e8fca117 - - - - - -] DHCP configuration for ports {'80288090-e99e-427d-994c-a9b2d1560fe1'} is completed#033[00m Feb 20 04:51:17 localhost systemd[1]: tmp-crun.7ZiFnZ.mount: Deactivated successfully. 
Feb 20 04:51:17 localhost podman[309279]: 2026-02-20 09:51:17.143737659 +0000 UTC m=+0.066518539 container kill 4afced22f2964dc28f76598144d788c251e22d9df2849730be9fadc9c78ec6bd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fbb168cc-bb2d-404a-a8d2-bf0aaa99fcf4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Feb 20 04:51:17 localhost dnsmasq[308235]: exiting on receipt of SIGTERM Feb 20 04:51:17 localhost systemd[1]: libpod-4afced22f2964dc28f76598144d788c251e22d9df2849730be9fadc9c78ec6bd.scope: Deactivated successfully. Feb 20 04:51:17 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:51:17.188 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:51:16Z, description=, device_id=0c6fe606-410d-41ea-af07-351a110d6c70, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=80288090-e99e-427d-994c-a9b2d1560fe1, ip_allocation=immediate, mac_address=fa:16:3e:9f:db:8e, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:51:10Z, description=, dns_domain=, id=765f9af1-1e08-4e8a-9dd3-51ccbce49ec5, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ImagesNegativeTestJSON-1949265633-network, port_security_enabled=True, project_id=b76abbcaaa0b49b3abfcdaf1607439fd, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=58786, 
qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=494, status=ACTIVE, subnets=['84e8fb8b-9893-4faa-9270-d3888d23a984'], tags=[], tenant_id=b76abbcaaa0b49b3abfcdaf1607439fd, updated_at=2026-02-20T09:51:11Z, vlan_transparent=None, network_id=765f9af1-1e08-4e8a-9dd3-51ccbce49ec5, port_security_enabled=False, project_id=b76abbcaaa0b49b3abfcdaf1607439fd, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=524, status=DOWN, tags=[], tenant_id=b76abbcaaa0b49b3abfcdaf1607439fd, updated_at=2026-02-20T09:51:16Z on network 765f9af1-1e08-4e8a-9dd3-51ccbce49ec5#033[00m Feb 20 04:51:17 localhost podman[309294]: 2026-02-20 09:51:17.228005494 +0000 UTC m=+0.060902650 container died 4afced22f2964dc28f76598144d788c251e22d9df2849730be9fadc9c78ec6bd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fbb168cc-bb2d-404a-a8d2-bf0aaa99fcf4, tcib_managed=true, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:51:17 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4afced22f2964dc28f76598144d788c251e22d9df2849730be9fadc9c78ec6bd-userdata-shm.mount: Deactivated successfully. 
Feb 20 04:51:17 localhost podman[309294]: 2026-02-20 09:51:17.270698126 +0000 UTC m=+0.103595222 container remove 4afced22f2964dc28f76598144d788c251e22d9df2849730be9fadc9c78ec6bd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fbb168cc-bb2d-404a-a8d2-bf0aaa99fcf4, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2) Feb 20 04:51:17 localhost systemd[1]: libpod-conmon-4afced22f2964dc28f76598144d788c251e22d9df2849730be9fadc9c78ec6bd.scope: Deactivated successfully. Feb 20 04:51:17 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:51:17.319 263745 INFO neutron.agent.dhcp.agent [None req-314ed5a3-15c2-44d4-8b09-959e2c127684 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:51:17 localhost systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000006.scope: Deactivated successfully. Feb 20 04:51:17 localhost systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000006.scope: Consumed 11.446s CPU time. Feb 20 04:51:17 localhost systemd-machined[205856]: Machine qemu-1-instance-00000006 terminated. 
Feb 20 04:51:17 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v93: 177 pgs: 177 active+clean; 217 MiB data, 865 MiB used, 41 GiB / 42 GiB avail; 1.0 MiB/s rd, 2.4 MiB/s wr, 80 op/s Feb 20 04:51:17 localhost neutron_sriov_agent[256551]: 2026-02-20 09:51:17.447 2 INFO neutron.agent.securitygroups_rpc [None req-c36d1673-2dec-447b-a8b3-50030e0a0823 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Security group member updated ['07d2fe18-fbbf-4547-931e-bb55f378bade']#033[00m Feb 20 04:51:17 localhost dnsmasq[309147]: read /var/lib/neutron/dhcp/765f9af1-1e08-4e8a-9dd3-51ccbce49ec5/addn_hosts - 1 addresses Feb 20 04:51:17 localhost dnsmasq-dhcp[309147]: read /var/lib/neutron/dhcp/765f9af1-1e08-4e8a-9dd3-51ccbce49ec5/host Feb 20 04:51:17 localhost dnsmasq-dhcp[309147]: read /var/lib/neutron/dhcp/765f9af1-1e08-4e8a-9dd3-51ccbce49ec5/opts Feb 20 04:51:17 localhost podman[309339]: 2026-02-20 09:51:17.459710982 +0000 UTC m=+0.049193976 container kill f671d58bc6bd8824a346a90f3b9f8dc8d1508dd847c6fbb467f7e1ea16f2842a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-765f9af1-1e08-4e8a-9dd3-51ccbce49ec5, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Feb 20 04:51:17 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:51:17 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:51:17.594 263745 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:51:17 localhost ceilometer_agent_compute[236653]: 
2026-02-20 09:51:17.660 12 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET http://nova-internal.openstack.svc:8774/v2.1/flavors?is_public=None -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}16c2595f949a870667b19c32cd2c278214ec0ae9cab06f334917fcc2421da3e1" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:519 Feb 20 04:51:17 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:51:17.666 263745 INFO neutron.agent.dhcp.agent [None req-4258d2b6-3220-4453-a1e0-1de000c111af - - - - - -] DHCP configuration for ports {'80288090-e99e-427d-994c-a9b2d1560fe1'} is completed#033[00m Feb 20 04:51:17 localhost nova_compute[280804]: 2026-02-20 09:51:17.722 280808 INFO nova.virt.libvirt.driver [None req-103f7c14-f009-43fd-a4d6-94ba0f8d20aa 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Instance shutdown successfully after 13 seconds.#033[00m Feb 20 04:51:17 localhost nova_compute[280804]: 2026-02-20 09:51:17.729 280808 INFO nova.virt.libvirt.driver [-] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Instance destroyed successfully.#033[00m Feb 20 04:51:17 localhost nova_compute[280804]: 2026-02-20 09:51:17.730 280808 DEBUG nova.objects.instance [None req-103f7c14-f009-43fd-a4d6-94ba0f8d20aa 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Lazy-loading 'numa_topology' on Instance uuid 43720f70-168d-461a-8b52-ba71de6033a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Feb 20 04:51:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:17.744 12 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 954 Content-Type: application/json Date: Fri, 20 Feb 2026 09:51:17 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: 
OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-7c6884d6-b761-47c4-95f9-73442950c560 x-openstack-request-id: req-7c6884d6-b761-47c4-95f9-73442950c560 _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:550 Feb 20 04:51:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:17.745 12 DEBUG novaclient.v2.client [-] RESP BODY: {"flavors": [{"id": "40a6f41a-8891-4900-942e-688a656af142", "name": "m1.nano", "links": [{"rel": "self", "href": "http://nova-internal.openstack.svc:8774/v2.1/flavors/40a6f41a-8891-4900-942e-688a656af142"}, {"rel": "bookmark", "href": "http://nova-internal.openstack.svc:8774/flavors/40a6f41a-8891-4900-942e-688a656af142"}]}, {"id": "4c5a6e9c-e533-4713-8576-2ed147b60f8c", "name": "m1.micro", "links": [{"rel": "self", "href": "http://nova-internal.openstack.svc:8774/v2.1/flavors/4c5a6e9c-e533-4713-8576-2ed147b60f8c"}, {"rel": "bookmark", "href": "http://nova-internal.openstack.svc:8774/flavors/4c5a6e9c-e533-4713-8576-2ed147b60f8c"}]}, {"id": "739ef37c-e459-414b-b65a-355581d54c7c", "name": "m1.small", "links": [{"rel": "self", "href": "http://nova-internal.openstack.svc:8774/v2.1/flavors/739ef37c-e459-414b-b65a-355581d54c7c"}, {"rel": "bookmark", "href": "http://nova-internal.openstack.svc:8774/flavors/739ef37c-e459-414b-b65a-355581d54c7c"}]}]} _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:582 Feb 20 04:51:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:17.745 12 DEBUG novaclient.v2.client [-] GET call to compute for http://nova-internal.openstack.svc:8774/v2.1/flavors?is_public=None used request id req-7c6884d6-b761-47c4-95f9-73442950c560 request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:954 Feb 20 04:51:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:17.748 12 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET 
http://nova-internal.openstack.svc:8774/v2.1/flavors/40a6f41a-8891-4900-942e-688a656af142 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}16c2595f949a870667b19c32cd2c278214ec0ae9cab06f334917fcc2421da3e1" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:519 Feb 20 04:51:17 localhost nova_compute[280804]: 2026-02-20 09:51:17.798 280808 INFO nova.virt.libvirt.driver [None req-103f7c14-f009-43fd-a4d6-94ba0f8d20aa 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Beginning cold snapshot process#033[00m Feb 20 04:51:17 localhost nova_compute[280804]: 2026-02-20 09:51:17.979 280808 DEBUG nova.virt.libvirt.imagebackend [None req-103f7c14-f009-43fd-a4d6-94ba0f8d20aa 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] No parent info for 06bd71fd-c415-45d9-b669-46209b7ca2f4; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m Feb 20 04:51:18 localhost nova_compute[280804]: 2026-02-20 09:51:18.041 280808 DEBUG nova.storage.rbd_utils [None req-103f7c14-f009-43fd-a4d6-94ba0f8d20aa 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] creating snapshot(83a287ee08b248ca8ab2cf3d49234fe0) on rbd image(43720f70-168d-461a-8b52-ba71de6033a0_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.068 12 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 493 Content-Type: application/json Date: Fri, 20 Feb 2026 09:51:17 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 
x-compute-request-id: req-136624b4-48cc-4f13-bdca-bd7372d95167 x-openstack-request-id: req-136624b4-48cc-4f13-bdca-bd7372d95167 _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:550 Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.068 12 DEBUG novaclient.v2.client [-] RESP BODY: {"flavor": {"id": "40a6f41a-8891-4900-942e-688a656af142", "name": "m1.nano", "ram": 128, "disk": 1, "swap": "", "OS-FLV-EXT-DATA:ephemeral": 0, "OS-FLV-DISABLED:disabled": false, "vcpus": 1, "os-flavor-access:is_public": true, "rxtx_factor": 1.0, "links": [{"rel": "self", "href": "http://nova-internal.openstack.svc:8774/v2.1/flavors/40a6f41a-8891-4900-942e-688a656af142"}, {"rel": "bookmark", "href": "http://nova-internal.openstack.svc:8774/flavors/40a6f41a-8891-4900-942e-688a656af142"}]}} _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:582 Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.069 12 DEBUG novaclient.v2.client [-] GET call to compute for http://nova-internal.openstack.svc:8774/v2.1/flavors/40a6f41a-8891-4900-942e-688a656af142 used request id req-136624b4-48cc-4f13-bdca-bd7372d95167 request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:954 Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.071 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '43720f70-168d-461a-8b52-ba71de6033a0', 'name': 'tempest-UnshelveToHostMultiNodesTest-server-1846377785', 'flavor': {'id': '40a6f41a-8891-4900-942e-688a656af142', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '06bd71fd-c415-45d9-b669-46209b7ca2f4'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000006', 'OS-EXT-SRV-ATTR:host': 'np0005625202.localdomain', 'OS-EXT-STS:vm_state': 'shutdown', 'tenant_id': 'ff4cacca21b64031adfd6cb25f7e62fc', 'user_id': 
'65489f8d7cbf42a2960f2d764c16b3f2', 'hostId': '690f9cbd440bb0a8fee4604c96cc99866bb2395d57c6c4134a3068f6', 'status': 'stopped', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.071 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.073 12 DEBUG ceilometer.compute.pollsters [-] Instance 43720f70-168d-461a-8b52-ba71de6033a0 was shut off while getting sample of disk.device.read.bytes: Failed to inspect data of instance , domain state is SHUTOFF. get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.074 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.075 12 DEBUG ceilometer.compute.pollsters [-] Instance 43720f70-168d-461a-8b52-ba71de6033a0 was shut off while getting sample of cpu: Failed to inspect data of instance , domain state is SHUTOFF. 
get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.076 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.iops in the context of pollsters Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.076 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for PerDeviceDiskIOPSPollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163 Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.077 12 ERROR ceilometer.polling.manager [-] Prevent pollster disk.device.iops from polling [] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [] Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.078 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.079 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163 Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.080 12 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [] Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.081 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.088 12 DEBUG ceilometer.compute.pollsters [-] Instance 43720f70-168d-461a-8b52-ba71de6033a0 was shut off while getting sample of disk.device.allocation: 
Failed to inspect data of instance , domain state is SHUTOFF. get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.088 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.089 12 DEBUG ceilometer.compute.pollsters [-] Instance 43720f70-168d-461a-8b52-ba71de6033a0 was shut off while getting sample of network.incoming.bytes.delta: Failed to inspect data of instance , domain state is SHUTOFF. get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.089 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.090 12 DEBUG ceilometer.compute.pollsters [-] Instance 43720f70-168d-461a-8b52-ba71de6033a0 was shut off while getting sample of disk.device.read.requests: Failed to inspect data of instance , domain state is SHUTOFF. get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.090 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.091 12 DEBUG ceilometer.compute.pollsters [-] Instance 43720f70-168d-461a-8b52-ba71de6033a0 was shut off while getting sample of network.outgoing.packets.error: Failed to inspect data of instance , domain state is SHUTOFF. 
get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.091 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.093 12 DEBUG ceilometer.compute.pollsters [-] Instance 43720f70-168d-461a-8b52-ba71de6033a0 was shut off while getting sample of network.outgoing.bytes: Failed to inspect data of instance , domain state is SHUTOFF. get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.094 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.096 12 DEBUG ceilometer.compute.pollsters [-] Instance 43720f70-168d-461a-8b52-ba71de6033a0 was shut off while getting sample of network.outgoing.bytes.delta: Failed to inspect data of instance , domain state is SHUTOFF. 
get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.096 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.097 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163 Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.097 12 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [] Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.099 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.101 12 DEBUG ceilometer.compute.pollsters [-] Instance 43720f70-168d-461a-8b52-ba71de6033a0 was shut off while getting sample of disk.device.write.requests: Failed to inspect data of instance , domain state is SHUTOFF. get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.101 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.102 12 DEBUG ceilometer.compute.pollsters [-] Instance 43720f70-168d-461a-8b52-ba71de6033a0 was shut off while getting sample of network.incoming.packets: Failed to inspect data of instance , domain state is SHUTOFF. 
get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.103 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.104 12 DEBUG ceilometer.compute.pollsters [-] Instance 43720f70-168d-461a-8b52-ba71de6033a0 was shut off while getting sample of disk.device.write.bytes: Failed to inspect data of instance , domain state is SHUTOFF. get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.105 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.107 12 DEBUG ceilometer.compute.pollsters [-] Instance 43720f70-168d-461a-8b52-ba71de6033a0 was shut off while getting sample of disk.device.write.latency: Failed to inspect data of instance , domain state is SHUTOFF. get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.107 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.108 12 DEBUG ceilometer.compute.pollsters [-] Instance 43720f70-168d-461a-8b52-ba71de6033a0 was shut off while getting sample of network.incoming.bytes: Failed to inspect data of instance , domain state is SHUTOFF. 
get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.109 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.110 12 DEBUG ceilometer.compute.pollsters [-] Instance 43720f70-168d-461a-8b52-ba71de6033a0 was shut off while getting sample of disk.device.capacity: Failed to inspect data of instance , domain state is SHUTOFF. get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.111 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.113 12 DEBUG ceilometer.compute.pollsters [-] Instance 43720f70-168d-461a-8b52-ba71de6033a0 was shut off while getting sample of network.incoming.packets.error: Failed to inspect data of instance , domain state is SHUTOFF. get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.113 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.115 12 DEBUG ceilometer.compute.pollsters [-] Instance 43720f70-168d-461a-8b52-ba71de6033a0 was shut off while getting sample of network.outgoing.packets.drop: Failed to inspect data of instance , domain state is SHUTOFF. 
get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.115 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.116 12 DEBUG ceilometer.compute.pollsters [-] Instance 43720f70-168d-461a-8b52-ba71de6033a0 was shut off while getting sample of disk.device.usage: Failed to inspect data of instance , domain state is SHUTOFF. get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.116 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.117 12 DEBUG ceilometer.compute.pollsters [-] Instance 43720f70-168d-461a-8b52-ba71de6033a0 was shut off while getting sample of disk.device.read.latency: Failed to inspect data of instance , domain state is SHUTOFF. get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.118 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.119 12 DEBUG ceilometer.compute.pollsters [-] Instance 43720f70-168d-461a-8b52-ba71de6033a0 was shut off while getting sample of memory.usage: Failed to inspect data of instance , domain state is SHUTOFF. 
get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.119 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.latency in the context of pollsters Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.119 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for PerDeviceDiskLatencyPollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163 Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.119 12 ERROR ceilometer.polling.manager [-] Prevent pollster disk.device.latency from polling [] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [] Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.120 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.121 12 DEBUG ceilometer.compute.pollsters [-] Instance 43720f70-168d-461a-8b52-ba71de6033a0 was shut off while getting sample of network.outgoing.packets: Failed to inspect data of instance , domain state is SHUTOFF. get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.121 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Feb 20 04:51:18 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:51:18.122 12 DEBUG ceilometer.compute.pollsters [-] Instance 43720f70-168d-461a-8b52-ba71de6033a0 was shut off while getting sample of network.incoming.packets.drop: Failed to inspect data of instance , domain state is SHUTOFF. 
get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Feb 20 04:51:18 localhost systemd[1]: var-lib-containers-storage-overlay-1aa41850f94ac33bd3503c44890f7c58c23d31663bce144580d72343fc7fc57d-merged.mount: Deactivated successfully. Feb 20 04:51:18 localhost systemd[1]: run-netns-qdhcp\x2dfbb168cc\x2dbb2d\x2d404a\x2da8d2\x2dbf0aaa99fcf4.mount: Deactivated successfully. Feb 20 04:51:18 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e94 do_prune osdmap full prune enabled Feb 20 04:51:18 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e95 e95: 6 total, 6 up, 6 in Feb 20 04:51:18 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e95: 6 total, 6 up, 6 in Feb 20 04:51:18 localhost nova_compute[280804]: 2026-02-20 09:51:18.708 280808 DEBUG nova.storage.rbd_utils [None req-103f7c14-f009-43fd-a4d6-94ba0f8d20aa 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] cloning vms/43720f70-168d-461a-8b52-ba71de6033a0_disk@83a287ee08b248ca8ab2cf3d49234fe0 to images/dada1057-f48f-427f-9e19-8ae0a27ed3e8 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m Feb 20 04:51:18 localhost neutron_sriov_agent[256551]: 2026-02-20 09:51:18.773 2 INFO neutron.agent.securitygroups_rpc [req-3c77ea9c-030b-4c3f-a6b2-e9f761f0d591 req-ae9f50d3-4bb2-48d4-a279-bccc17ebbc38 19c6a0af0d664b5d92fdce6a6ecdbcc4 5ce7589beebc4b9187ac7a68f3264776 - - default default] Security group rule updated ['ddf49fd2-9d36-4d8c-9b90-f70fbafa6560']#033[00m Feb 20 04:51:18 localhost nova_compute[280804]: 2026-02-20 09:51:18.886 280808 DEBUG nova.storage.rbd_utils [None req-103f7c14-f009-43fd-a4d6-94ba0f8d20aa 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] flattening images/dada1057-f48f-427f-9e19-8ae0a27ed3e8 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m Feb 20 04:51:19 localhost neutron_sriov_agent[256551]: 2026-02-20 
09:51:19.365 2 INFO neutron.agent.securitygroups_rpc [req-cf23cb9e-603b-4426-8e72-b88eccda31be req-4c57a8df-f22b-4186-8db6-2e7fa9aa1e7d 19c6a0af0d664b5d92fdce6a6ecdbcc4 5ce7589beebc4b9187ac7a68f3264776 - - default default] Security group rule updated ['ddf49fd2-9d36-4d8c-9b90-f70fbafa6560']#033[00m Feb 20 04:51:19 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v95: 177 pgs: 177 active+clean; 236 MiB data, 872 MiB used, 41 GiB / 42 GiB avail; 5.3 MiB/s rd, 4.1 MiB/s wr, 168 op/s Feb 20 04:51:19 localhost nova_compute[280804]: 2026-02-20 09:51:19.893 280808 DEBUG nova.storage.rbd_utils [None req-103f7c14-f009-43fd-a4d6-94ba0f8d20aa 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] removing snapshot(83a287ee08b248ca8ab2cf3d49234fe0) on rbd image(43720f70-168d-461a-8b52-ba71de6033a0_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m Feb 20 04:51:19 localhost neutron_sriov_agent[256551]: 2026-02-20 09:51:19.909 2 INFO neutron.agent.securitygroups_rpc [None req-b04d749d-19a2-4f89-bafc-552dc6778fc9 ba15d0e9919d4594a2e6e9d6b3414a5e e704aae5b1ba49d59262f9aa0c366fb4 - - default default] Security group member updated ['6a912071-fd9c-4d5f-8453-7f993db3506d']#033[00m Feb 20 04:51:20 localhost sshd[309486]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:51:20 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e95 do_prune osdmap full prune enabled Feb 20 04:51:20 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e96 e96: 6 total, 6 up, 6 in Feb 20 04:51:20 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e96: 6 total, 6 up, 6 in Feb 20 04:51:20 localhost nova_compute[280804]: 2026-02-20 09:51:20.800 280808 DEBUG nova.storage.rbd_utils [None req-103f7c14-f009-43fd-a4d6-94ba0f8d20aa 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] creating snapshot(snap) on rbd image(dada1057-f48f-427f-9e19-8ae0a27ed3e8) 
create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m Feb 20 04:51:21 localhost dnsmasq[309147]: read /var/lib/neutron/dhcp/765f9af1-1e08-4e8a-9dd3-51ccbce49ec5/addn_hosts - 0 addresses Feb 20 04:51:21 localhost dnsmasq-dhcp[309147]: read /var/lib/neutron/dhcp/765f9af1-1e08-4e8a-9dd3-51ccbce49ec5/host Feb 20 04:51:21 localhost dnsmasq-dhcp[309147]: read /var/lib/neutron/dhcp/765f9af1-1e08-4e8a-9dd3-51ccbce49ec5/opts Feb 20 04:51:21 localhost podman[309522]: 2026-02-20 09:51:21.347555196 +0000 UTC m=+0.062910024 container kill f671d58bc6bd8824a346a90f3b9f8dc8d1508dd847c6fbb467f7e1ea16f2842a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-765f9af1-1e08-4e8a-9dd3-51ccbce49ec5, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS) Feb 20 04:51:21 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v97: 177 pgs: 177 active+clean; 236 MiB data, 872 MiB used, 41 GiB / 42 GiB avail; 6.5 MiB/s rd, 1.4 MiB/s wr, 126 op/s Feb 20 04:51:21 localhost kernel: device tapce80821b-ff left promiscuous mode Feb 20 04:51:21 localhost ovn_controller[155916]: 2026-02-20T09:51:21Z|00057|binding|INFO|Releasing lport ce80821b-ff48-45be-bf0a-b5c50acb4f30 from this chassis (sb_readonly=0) Feb 20 04:51:21 localhost ovn_controller[155916]: 2026-02-20T09:51:21Z|00058|binding|INFO|Setting lport ce80821b-ff48-45be-bf0a-b5c50acb4f30 down in Southbound Feb 20 04:51:21 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:21.582 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to 
row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp77c820b4-9646-5dae-aabe-b13a62b01d06-765f9af1-1e08-4e8a-9dd3-51ccbce49ec5', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-765f9af1-1e08-4e8a-9dd3-51ccbce49ec5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b76abbcaaa0b49b3abfcdaf1607439fd', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625202.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=543d5e20-d525-4044-b1a1-7b9f1b7dfe15, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=ce80821b-ff48-45be-bf0a-b5c50acb4f30) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:51:21 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:21.584 161766 INFO neutron.agent.ovn.metadata.agent [-] Port ce80821b-ff48-45be-bf0a-b5c50acb4f30 in datapath 765f9af1-1e08-4e8a-9dd3-51ccbce49ec5 unbound from our chassis#033[00m Feb 20 04:51:21 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:21.586 161766 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 765f9af1-1e08-4e8a-9dd3-51ccbce49ec5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 20 04:51:21 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:21.587 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[8aa043f7-0533-4c71-b4a6-b57e2e4c29ef]: (4, 
False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e. Feb 20 04:51:21 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e96 do_prune osdmap full prune enabled Feb 20 04:51:21 localhost systemd[1]: tmp-crun.nbddSX.mount: Deactivated successfully. Feb 20 04:51:21 localhost podman[309545]: 2026-02-20 09:51:21.722334422 +0000 UTC m=+0.084347947 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', 
'/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 20 04:51:21 localhost podman[309545]: 2026-02-20 09:51:21.734776814 +0000 UTC m=+0.096790289 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 20 04:51:21 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e97 e97: 6 total, 6 up, 6 in Feb 20 04:51:21 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e97: 6 total, 6 up, 6 in Feb 20 04:51:21 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully. 
Feb 20 04:51:22 localhost nova_compute[280804]: 2026-02-20 09:51:22.484 280808 INFO nova.virt.libvirt.driver [None req-103f7c14-f009-43fd-a4d6-94ba0f8d20aa 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Snapshot image upload complete#033[00m Feb 20 04:51:22 localhost nova_compute[280804]: 2026-02-20 09:51:22.485 280808 DEBUG nova.compute.manager [None req-103f7c14-f009-43fd-a4d6-94ba0f8d20aa 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 20 04:51:22 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:51:22 localhost nova_compute[280804]: 2026-02-20 09:51:22.542 280808 INFO nova.compute.manager [None req-103f7c14-f009-43fd-a4d6-94ba0f8d20aa 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Shelve offloading#033[00m Feb 20 04:51:22 localhost nova_compute[280804]: 2026-02-20 09:51:22.551 280808 INFO nova.virt.libvirt.driver [-] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Instance destroyed successfully.#033[00m Feb 20 04:51:22 localhost nova_compute[280804]: 2026-02-20 09:51:22.552 280808 DEBUG nova.compute.manager [None req-103f7c14-f009-43fd-a4d6-94ba0f8d20aa 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 20 04:51:22 localhost nova_compute[280804]: 2026-02-20 09:51:22.554 280808 DEBUG oslo_concurrency.lockutils [None req-103f7c14-f009-43fd-a4d6-94ba0f8d20aa 
65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Acquiring lock "refresh_cache-43720f70-168d-461a-8b52-ba71de6033a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Feb 20 04:51:22 localhost nova_compute[280804]: 2026-02-20 09:51:22.555 280808 DEBUG oslo_concurrency.lockutils [None req-103f7c14-f009-43fd-a4d6-94ba0f8d20aa 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Acquired lock "refresh_cache-43720f70-168d-461a-8b52-ba71de6033a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Feb 20 04:51:22 localhost nova_compute[280804]: 2026-02-20 09:51:22.555 280808 DEBUG nova.network.neutron [None req-103f7c14-f009-43fd-a4d6-94ba0f8d20aa 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Feb 20 04:51:22 localhost nova_compute[280804]: 2026-02-20 09:51:22.595 280808 DEBUG nova.network.neutron [None req-103f7c14-f009-43fd-a4d6-94ba0f8d20aa 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Instance cache missing network info. 
_get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m Feb 20 04:51:22 localhost nova_compute[280804]: 2026-02-20 09:51:22.687 280808 DEBUG nova.network.neutron [None req-103f7c14-f009-43fd-a4d6-94ba0f8d20aa 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Feb 20 04:51:22 localhost nova_compute[280804]: 2026-02-20 09:51:22.708 280808 DEBUG oslo_concurrency.lockutils [None req-103f7c14-f009-43fd-a4d6-94ba0f8d20aa 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Releasing lock "refresh_cache-43720f70-168d-461a-8b52-ba71de6033a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Feb 20 04:51:22 localhost nova_compute[280804]: 2026-02-20 09:51:22.716 280808 INFO nova.virt.libvirt.driver [-] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Instance destroyed successfully.#033[00m Feb 20 04:51:22 localhost nova_compute[280804]: 2026-02-20 09:51:22.717 280808 DEBUG nova.objects.instance [None req-103f7c14-f009-43fd-a4d6-94ba0f8d20aa 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Lazy-loading 'resources' on Instance uuid 43720f70-168d-461a-8b52-ba71de6033a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Feb 20 04:51:22 localhost sshd[309586]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:51:23 localhost nova_compute[280804]: 2026-02-20 09:51:23.368 280808 INFO nova.virt.libvirt.driver [None req-103f7c14-f009-43fd-a4d6-94ba0f8d20aa 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Deleting instance files 
/var/lib/nova/instances/43720f70-168d-461a-8b52-ba71de6033a0_del#033[00m Feb 20 04:51:23 localhost ceph-mgr[286565]: [balancer INFO root] Optimize plan auto_2026-02-20_09:51:23 Feb 20 04:51:23 localhost ceph-mgr[286565]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Feb 20 04:51:23 localhost ceph-mgr[286565]: [balancer INFO root] do_upmap Feb 20 04:51:23 localhost ceph-mgr[286565]: [balancer INFO root] pools ['manila_metadata', 'volumes', 'images', 'vms', 'backups', 'manila_data', '.mgr'] Feb 20 04:51:23 localhost nova_compute[280804]: 2026-02-20 09:51:23.370 280808 INFO nova.virt.libvirt.driver [None req-103f7c14-f009-43fd-a4d6-94ba0f8d20aa 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Deletion of /var/lib/nova/instances/43720f70-168d-461a-8b52-ba71de6033a0_del complete#033[00m Feb 20 04:51:23 localhost ceph-mgr[286565]: [balancer INFO root] prepared 0/10 changes Feb 20 04:51:23 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v99: 177 pgs: 177 active+clean; 236 MiB data, 872 MiB used, 41 GiB / 42 GiB avail; 6.5 MiB/s rd, 1.4 MiB/s wr, 126 op/s Feb 20 04:51:23 localhost nova_compute[280804]: 2026-02-20 09:51:23.426 280808 DEBUG nova.virt.libvirt.host [None req-103f7c14-f009-43fd-a4d6-94ba0f8d20aa 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m Feb 20 04:51:23 localhost nova_compute[280804]: 2026-02-20 09:51:23.427 280808 INFO nova.virt.libvirt.host [None req-103f7c14-f009-43fd-a4d6-94ba0f8d20aa 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] UEFI support detected#033[00m Feb 20 04:51:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 20 04:51:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:51:23 localhost nova_compute[280804]: 2026-02-20 09:51:23.465 280808 INFO nova.scheduler.client.report [None req-103f7c14-f009-43fd-a4d6-94ba0f8d20aa 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Deleted allocations for instance 43720f70-168d-461a-8b52-ba71de6033a0#033[00m Feb 20 04:51:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] _maybe_adjust Feb 20 04:51:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:51:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Feb 20 04:51:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:51:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006580482708682301 of space, bias 1.0, pg target 1.31609654173646 quantized to 32 (current 32) Feb 20 04:51:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:51:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 20 04:51:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:51:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.005628366078075563 of space, bias 1.0, pg target 1.120044849537037 quantized to 32 (current 32) Feb 20 04:51:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:51:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 
(current 32) Feb 20 04:51:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:51:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 20 04:51:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:51:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.0019433103015075376 quantized to 16 (current 16) Feb 20 04:51:23 localhost ceph-mgr[286565]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Feb 20 04:51:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 04:51:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:51:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 20 04:51:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:51:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 20 04:51:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 20 04:51:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: images, start_after= Feb 20 04:51:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 20 04:51:23 localhost nova_compute[280804]: 2026-02-20 09:51:23.529 280808 DEBUG oslo_concurrency.lockutils [None req-103f7c14-f009-43fd-a4d6-94ba0f8d20aa 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:51:23 localhost nova_compute[280804]: 2026-02-20 09:51:23.530 280808 DEBUG oslo_concurrency.lockutils [None req-103f7c14-f009-43fd-a4d6-94ba0f8d20aa 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:51:23 localhost ceph-mgr[286565]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Feb 20 04:51:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 20 04:51:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 20 04:51:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: images, start_after= Feb 20 04:51:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 20 04:51:23 localhost nova_compute[280804]: 2026-02-20 09:51:23.551 280808 DEBUG 
oslo_concurrency.processutils [None req-103f7c14-f009-43fd-a4d6-94ba0f8d20aa 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:51:24 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 20 04:51:24 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/3777396642' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 20 04:51:24 localhost nova_compute[280804]: 2026-02-20 09:51:24.013 280808 DEBUG oslo_concurrency.processutils [None req-103f7c14-f009-43fd-a4d6-94ba0f8d20aa 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:51:24 localhost nova_compute[280804]: 2026-02-20 09:51:24.021 280808 DEBUG nova.compute.provider_tree [None req-103f7c14-f009-43fd-a4d6-94ba0f8d20aa 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Updating inventory in ProviderTree for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 with inventory: {'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 41, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Feb 20 04:51:24 localhost nova_compute[280804]: 2026-02-20 09:51:24.070 280808 DEBUG 
nova.scheduler.client.report [None req-103f7c14-f009-43fd-a4d6-94ba0f8d20aa 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Updated inventory for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 with generation 4 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 41, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m Feb 20 04:51:24 localhost nova_compute[280804]: 2026-02-20 09:51:24.071 280808 DEBUG nova.compute.provider_tree [None req-103f7c14-f009-43fd-a4d6-94ba0f8d20aa 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Updating resource provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 generation from 4 to 5 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m Feb 20 04:51:24 localhost nova_compute[280804]: 2026-02-20 09:51:24.071 280808 DEBUG nova.compute.provider_tree [None req-103f7c14-f009-43fd-a4d6-94ba0f8d20aa 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Updating inventory in ProviderTree for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 with inventory: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory 
/usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Feb 20 04:51:24 localhost nova_compute[280804]: 2026-02-20 09:51:24.100 280808 DEBUG oslo_concurrency.lockutils [None req-103f7c14-f009-43fd-a4d6-94ba0f8d20aa 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.571s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:51:24 localhost nova_compute[280804]: 2026-02-20 09:51:24.157 280808 DEBUG oslo_concurrency.lockutils [None req-103f7c14-f009-43fd-a4d6-94ba0f8d20aa 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Lock "43720f70-168d-461a-8b52-ba71de6033a0" "released" by "nova.compute.manager.ComputeManager.shelve_instance..do_shelve_instance" :: held 19.511s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:51:24 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 6 addresses Feb 20 04:51:24 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:51:24 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:51:24 localhost systemd[1]: tmp-crun.nNECG1.mount: Deactivated successfully. 
Feb 20 04:51:24 localhost podman[309626]: 2026-02-20 09:51:24.758481923 +0000 UTC m=+0.072571823 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Feb 20 04:51:25 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v100: 177 pgs: 177 active+clean; 224 MiB data, 868 MiB used, 41 GiB / 42 GiB avail; 10 MiB/s rd, 7.1 MiB/s wr, 301 op/s Feb 20 04:51:25 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e97 do_prune osdmap full prune enabled Feb 20 04:51:25 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e98 e98: 6 total, 6 up, 6 in Feb 20 04:51:25 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e98: 6 total, 6 up, 6 in Feb 20 04:51:25 localhost dnsmasq[309147]: exiting on receipt of SIGTERM Feb 20 04:51:25 localhost podman[309664]: 2026-02-20 09:51:25.957520729 +0000 UTC m=+0.069664855 container kill f671d58bc6bd8824a346a90f3b9f8dc8d1508dd847c6fbb467f7e1ea16f2842a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-765f9af1-1e08-4e8a-9dd3-51ccbce49ec5, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:51:25 localhost systemd[1]: 
libpod-f671d58bc6bd8824a346a90f3b9f8dc8d1508dd847c6fbb467f7e1ea16f2842a.scope: Deactivated successfully. Feb 20 04:51:26 localhost podman[309678]: 2026-02-20 09:51:26.03608255 +0000 UTC m=+0.061025343 container died f671d58bc6bd8824a346a90f3b9f8dc8d1508dd847c6fbb467f7e1ea16f2842a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-765f9af1-1e08-4e8a-9dd3-51ccbce49ec5, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:51:26 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f671d58bc6bd8824a346a90f3b9f8dc8d1508dd847c6fbb467f7e1ea16f2842a-userdata-shm.mount: Deactivated successfully. Feb 20 04:51:26 localhost systemd[1]: var-lib-containers-storage-overlay-439ae5d9e6f896254e6ddcbcc8f5cdbe007b8c5e98e6c8514f066e5915d7adc7-merged.mount: Deactivated successfully. Feb 20 04:51:26 localhost podman[309678]: 2026-02-20 09:51:26.072098673 +0000 UTC m=+0.097041406 container cleanup f671d58bc6bd8824a346a90f3b9f8dc8d1508dd847c6fbb467f7e1ea16f2842a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-765f9af1-1e08-4e8a-9dd3-51ccbce49ec5, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Feb 20 04:51:26 localhost systemd[1]: libpod-conmon-f671d58bc6bd8824a346a90f3b9f8dc8d1508dd847c6fbb467f7e1ea16f2842a.scope: Deactivated successfully. 
Feb 20 04:51:26 localhost podman[309680]: 2026-02-20 09:51:26.110293645 +0000 UTC m=+0.130287036 container remove f671d58bc6bd8824a346a90f3b9f8dc8d1508dd847c6fbb467f7e1ea16f2842a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-765f9af1-1e08-4e8a-9dd3-51ccbce49ec5, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Feb 20 04:51:26 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:51:26.386 263745 INFO neutron.agent.dhcp.agent [None req-1150d6de-6cde-4ef6-886e-fd23ed64547f - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:51:26 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:51:26.496 263745 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:51:26 localhost nova_compute[280804]: 2026-02-20 09:51:26.738 280808 DEBUG oslo_concurrency.lockutils [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Acquiring lock "43720f70-168d-461a-8b52-ba71de6033a0" by "nova.compute.manager.ComputeManager.unshelve_instance..do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:51:26 localhost nova_compute[280804]: 2026-02-20 09:51:26.739 280808 DEBUG oslo_concurrency.lockutils [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Lock "43720f70-168d-461a-8b52-ba71de6033a0" acquired by "nova.compute.manager.ComputeManager.unshelve_instance..do_unshelve_instance" :: waited 0.001s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:51:26 localhost nova_compute[280804]: 2026-02-20 09:51:26.740 280808 INFO nova.compute.manager [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Unshelving#033[00m Feb 20 04:51:26 localhost nova_compute[280804]: 2026-02-20 09:51:26.819 280808 DEBUG oslo_concurrency.lockutils [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:51:26 localhost nova_compute[280804]: 2026-02-20 09:51:26.820 280808 DEBUG oslo_concurrency.lockutils [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:51:26 localhost nova_compute[280804]: 2026-02-20 09:51:26.822 280808 DEBUG nova.objects.instance [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Lazy-loading 'pci_requests' on Instance uuid 43720f70-168d-461a-8b52-ba71de6033a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Feb 20 04:51:26 localhost nova_compute[280804]: 2026-02-20 09:51:26.840 280808 DEBUG nova.objects.instance [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Lazy-loading 'numa_topology' on Instance uuid 43720f70-168d-461a-8b52-ba71de6033a0 obj_load_attr 
/usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Feb 20 04:51:26 localhost nova_compute[280804]: 2026-02-20 09:51:26.853 280808 DEBUG nova.virt.hardware [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m Feb 20 04:51:26 localhost nova_compute[280804]: 2026-02-20 09:51:26.854 280808 INFO nova.compute.claims [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Claim successful on node np0005625202.localdomain#033[00m Feb 20 04:51:26 localhost systemd[1]: run-netns-qdhcp\x2d765f9af1\x2d1e08\x2d4e8a\x2d9dd3\x2d51ccbce49ec5.mount: Deactivated successfully. Feb 20 04:51:26 localhost nova_compute[280804]: 2026-02-20 09:51:26.976 280808 DEBUG oslo_concurrency.processutils [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:51:27 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:51:27.084 263745 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:51:27 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 5 addresses Feb 20 04:51:27 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:51:27 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:51:27 localhost systemd[1]: tmp-crun.HTcXi3.mount: 
Deactivated successfully. Feb 20 04:51:27 localhost podman[309726]: 2026-02-20 09:51:27.194225651 +0000 UTC m=+0.081201933 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2) Feb 20 04:51:27 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v102: 177 pgs: 177 active+clean; 224 MiB data, 868 MiB used, 41 GiB / 42 GiB avail; 4.4 MiB/s rd, 5.9 MiB/s wr, 192 op/s Feb 20 04:51:27 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 20 04:51:27 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/3066824957' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 20 04:51:27 localhost nova_compute[280804]: 2026-02-20 09:51:27.432 280808 DEBUG oslo_concurrency.processutils [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:51:27 localhost nova_compute[280804]: 2026-02-20 09:51:27.438 280808 DEBUG nova.compute.provider_tree [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Inventory has not changed in ProviderTree for provider: 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 20 04:51:27 localhost nova_compute[280804]: 2026-02-20 09:51:27.459 280808 DEBUG nova.scheduler.client.report [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Inventory has not changed for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 20 04:51:27 localhost nova_compute[280804]: 2026-02-20 09:51:27.482 280808 DEBUG oslo_concurrency.lockutils [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default 
default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:51:27 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:51:27 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e98 do_prune osdmap full prune enabled Feb 20 04:51:27 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e99 e99: 6 total, 6 up, 6 in Feb 20 04:51:27 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e99: 6 total, 6 up, 6 in Feb 20 04:51:27 localhost nova_compute[280804]: 2026-02-20 09:51:27.563 280808 DEBUG oslo_concurrency.lockutils [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Acquiring lock "refresh_cache-43720f70-168d-461a-8b52-ba71de6033a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Feb 20 04:51:27 localhost nova_compute[280804]: 2026-02-20 09:51:27.564 280808 DEBUG oslo_concurrency.lockutils [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Acquired lock "refresh_cache-43720f70-168d-461a-8b52-ba71de6033a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Feb 20 04:51:27 localhost nova_compute[280804]: 2026-02-20 09:51:27.564 280808 DEBUG nova.network.neutron [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Feb 20 04:51:27 localhost nova_compute[280804]: 2026-02-20 09:51:27.799 
280808 DEBUG nova.network.neutron [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m Feb 20 04:51:27 localhost nova_compute[280804]: 2026-02-20 09:51:27.915 280808 DEBUG nova.network.neutron [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Feb 20 04:51:27 localhost nova_compute[280804]: 2026-02-20 09:51:27.931 280808 DEBUG oslo_concurrency.lockutils [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Releasing lock "refresh_cache-43720f70-168d-461a-8b52-ba71de6033a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Feb 20 04:51:27 localhost nova_compute[280804]: 2026-02-20 09:51:27.934 280808 DEBUG nova.virt.libvirt.driver [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m Feb 20 04:51:27 localhost nova_compute[280804]: 2026-02-20 09:51:27.934 280808 INFO nova.virt.libvirt.driver [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Creating image(s)#033[00m Feb 20 04:51:27 localhost nova_compute[280804]: 2026-02-20 09:51:27.972 280808 DEBUG 
nova.storage.rbd_utils [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] rbd image 43720f70-168d-461a-8b52-ba71de6033a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 20 04:51:27 localhost nova_compute[280804]: 2026-02-20 09:51:27.976 280808 DEBUG nova.objects.instance [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Lazy-loading 'trusted_certs' on Instance uuid 43720f70-168d-461a-8b52-ba71de6033a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Feb 20 04:51:28 localhost nova_compute[280804]: 2026-02-20 09:51:28.031 280808 DEBUG nova.storage.rbd_utils [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] rbd image 43720f70-168d-461a-8b52-ba71de6033a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 20 04:51:28 localhost nova_compute[280804]: 2026-02-20 09:51:28.070 280808 DEBUG nova.storage.rbd_utils [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] rbd image 43720f70-168d-461a-8b52-ba71de6033a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 20 04:51:28 localhost nova_compute[280804]: 2026-02-20 09:51:28.077 280808 DEBUG oslo_concurrency.lockutils [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Acquiring lock "0c71b7d261c9eca3c175985d05cd0eb8fbd706d4" by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:51:28 localhost nova_compute[280804]: 2026-02-20 
09:51:28.078 280808 DEBUG oslo_concurrency.lockutils [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Lock "0c71b7d261c9eca3c175985d05cd0eb8fbd706d4" acquired by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:51:28 localhost nova_compute[280804]: 2026-02-20 09:51:28.132 280808 DEBUG nova.virt.libvirt.imagebackend [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Image locations are: [{'url': 'rbd://a8557ee9-b55d-5519-942c-cf8f6172f1d8/images/dada1057-f48f-427f-9e19-8ae0a27ed3e8/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://a8557ee9-b55d-5519-942c-cf8f6172f1d8/images/dada1057-f48f-427f-9e19-8ae0a27ed3e8/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m Feb 20 04:51:28 localhost openstack_network_exporter[243776]: ERROR 09:51:28 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 20 04:51:28 localhost openstack_network_exporter[243776]: Feb 20 04:51:28 localhost openstack_network_exporter[243776]: ERROR 09:51:28 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 20 04:51:28 localhost openstack_network_exporter[243776]: Feb 20 04:51:28 localhost nova_compute[280804]: 2026-02-20 09:51:28.237 280808 DEBUG nova.virt.libvirt.imagebackend [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Selected location: {'url': 'rbd://a8557ee9-b55d-5519-942c-cf8f6172f1d8/images/dada1057-f48f-427f-9e19-8ae0a27ed3e8/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m Feb 20 04:51:28 
localhost nova_compute[280804]: 2026-02-20 09:51:28.237 280808 DEBUG nova.storage.rbd_utils [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] cloning images/dada1057-f48f-427f-9e19-8ae0a27ed3e8@snap to None/43720f70-168d-461a-8b52-ba71de6033a0_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m Feb 20 04:51:28 localhost nova_compute[280804]: 2026-02-20 09:51:28.416 280808 DEBUG oslo_concurrency.lockutils [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Lock "0c71b7d261c9eca3c175985d05cd0eb8fbd706d4" "released" by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" :: held 0.338s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:51:28 localhost nova_compute[280804]: 2026-02-20 09:51:28.649 280808 DEBUG nova.objects.instance [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Lazy-loading 'migration_context' on Instance uuid 43720f70-168d-461a-8b52-ba71de6033a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Feb 20 04:51:28 localhost nova_compute[280804]: 2026-02-20 09:51:28.757 280808 DEBUG nova.storage.rbd_utils [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] flattening vms/43720f70-168d-461a-8b52-ba71de6033a0_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m Feb 20 04:51:28 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e99 do_prune osdmap full prune enabled Feb 20 04:51:28 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e100 e100: 6 total, 6 up, 6 in Feb 20 04:51:28 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e100: 6 total, 6 up, 6 in 
Feb 20 04:51:29 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v105: 177 pgs: 177 active+clean; 356 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 16 MiB/s rd, 18 MiB/s wr, 555 op/s Feb 20 04:51:29 localhost nova_compute[280804]: 2026-02-20 09:51:29.659 280808 DEBUG nova.virt.libvirt.driver [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Image rbd:vms/43720f70-168d-461a-8b52-ba71de6033a0_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m Feb 20 04:51:29 localhost nova_compute[280804]: 2026-02-20 09:51:29.660 280808 DEBUG nova.virt.libvirt.driver [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m Feb 20 04:51:29 localhost nova_compute[280804]: 2026-02-20 09:51:29.660 280808 DEBUG nova.virt.libvirt.driver [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Ensure instance console log exists: /var/lib/nova/instances/43720f70-168d-461a-8b52-ba71de6033a0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m Feb 20 04:51:29 localhost nova_compute[280804]: 2026-02-20 09:51:29.661 280808 DEBUG oslo_concurrency.lockutils [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:51:29 localhost nova_compute[280804]: 2026-02-20 09:51:29.661 280808 DEBUG oslo_concurrency.lockutils [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:51:29 localhost nova_compute[280804]: 2026-02-20 09:51:29.662 280808 DEBUG oslo_concurrency.lockutils [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:51:29 localhost nova_compute[280804]: 2026-02-20 09:51:29.664 280808 DEBUG nova.virt.libvirt.driver [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-02-20T09:51:04Z,direct_url=,disk_format='raw',id=dada1057-f48f-427f-9e19-8ae0a27ed3e8,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-1846377785-shelved',owner='ff4cacca21b64031adfd6cb25f7e62fc',properties=ImageMetaProps,protected=,size=1073741824,status='active',tags=,updated_at=2026-02-20T09:51:22Z,virtual_size=,visibility=) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '06bd71fd-c415-45d9-b669-46209b7ca2f4'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m Feb 20 04:51:29 localhost nova_compute[280804]: 2026-02-20 09:51:29.668 280808 WARNING nova.virt.libvirt.driver [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 20 04:51:29 localhost nova_compute[280804]: 2026-02-20 09:51:29.670 280808 DEBUG nova.virt.libvirt.host [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Searching host: 'np0005625202.localdomain' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m Feb 20 04:51:29 localhost nova_compute[280804]: 2026-02-20 09:51:29.671 280808 DEBUG nova.virt.libvirt.host [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m Feb 20 04:51:29 localhost nova_compute[280804]: 2026-02-20 09:51:29.673 280808 DEBUG nova.virt.libvirt.host [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Searching host: 'np0005625202.localdomain' for CPU controller through CGroups V2... 
_has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m Feb 20 04:51:29 localhost nova_compute[280804]: 2026-02-20 09:51:29.673 280808 DEBUG nova.virt.libvirt.host [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m Feb 20 04:51:29 localhost nova_compute[280804]: 2026-02-20 09:51:29.674 280808 DEBUG nova.virt.libvirt.driver [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m Feb 20 04:51:29 localhost nova_compute[280804]: 2026-02-20 09:51:29.674 280808 DEBUG nova.virt.hardware [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-20T09:49:56Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='40a6f41a-8891-4900-942e-688a656af142',id=5,is_public=True,memory_mb=128,name='m1.nano',projects=,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-02-20T09:51:04Z,direct_url=,disk_format='raw',id=dada1057-f48f-427f-9e19-8ae0a27ed3e8,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-1846377785-shelved',owner='ff4cacca21b64031adfd6cb25f7e62fc',properties=ImageMetaProps,protected=,size=1073741824,status='active',tags=,updated_at=2026-02-20T09:51:22Z,virtual_size=,visibility=), allow threads: True _get_desirable_cpu_topologies 
/usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m Feb 20 04:51:29 localhost nova_compute[280804]: 2026-02-20 09:51:29.675 280808 DEBUG nova.virt.hardware [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m Feb 20 04:51:29 localhost nova_compute[280804]: 2026-02-20 09:51:29.675 280808 DEBUG nova.virt.hardware [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m Feb 20 04:51:29 localhost nova_compute[280804]: 2026-02-20 09:51:29.676 280808 DEBUG nova.virt.hardware [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m Feb 20 04:51:29 localhost nova_compute[280804]: 2026-02-20 09:51:29.676 280808 DEBUG nova.virt.hardware [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m Feb 20 04:51:29 localhost nova_compute[280804]: 2026-02-20 09:51:29.677 280808 DEBUG nova.virt.hardware [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m Feb 20 04:51:29 localhost nova_compute[280804]: 2026-02-20 09:51:29.677 280808 DEBUG 
nova.virt.hardware [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m Feb 20 04:51:29 localhost nova_compute[280804]: 2026-02-20 09:51:29.677 280808 DEBUG nova.virt.hardware [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m Feb 20 04:51:29 localhost nova_compute[280804]: 2026-02-20 09:51:29.678 280808 DEBUG nova.virt.hardware [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m Feb 20 04:51:29 localhost nova_compute[280804]: 2026-02-20 09:51:29.678 280808 DEBUG nova.virt.hardware [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m Feb 20 04:51:29 localhost nova_compute[280804]: 2026-02-20 09:51:29.679 280808 DEBUG nova.virt.hardware [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m Feb 20 04:51:29 localhost nova_compute[280804]: 2026-02-20 09:51:29.679 280808 
DEBUG nova.objects.instance [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Lazy-loading 'vcpu_model' on Instance uuid 43720f70-168d-461a-8b52-ba71de6033a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Feb 20 04:51:29 localhost nova_compute[280804]: 2026-02-20 09:51:29.702 280808 DEBUG oslo_concurrency.processutils [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:51:30 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Feb 20 04:51:30 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/1957332596' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Feb 20 04:51:30 localhost nova_compute[280804]: 2026-02-20 09:51:30.161 280808 DEBUG oslo_concurrency.processutils [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:51:30 localhost nova_compute[280804]: 2026-02-20 09:51:30.197 280808 DEBUG nova.storage.rbd_utils [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] rbd image 43720f70-168d-461a-8b52-ba71de6033a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 20 04:51:30 localhost nova_compute[280804]: 2026-02-20 
09:51:30.200 280808 DEBUG oslo_concurrency.processutils [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:51:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c. Feb 20 04:51:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217. Feb 20 04:51:30 localhost podman[310040]: 2026-02-20 09:51:30.456925613 +0000 UTC m=+0.090031809 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Feb 20 04:51:30 localhost podman[310040]: 2026-02-20 09:51:30.501908286 +0000 UTC m=+0.135014442 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true) Feb 20 04:51:30 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully. Feb 20 04:51:30 localhost podman[310039]: 2026-02-20 09:51:30.504477815 +0000 UTC m=+0.141967639 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, io.buildah.version=1.33.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': 
['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9/ubi-minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., managed_by=edpm_ansible, distribution-scope=public, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1770267347, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, version=9.7, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, build-date=2026-02-05T04:57:10Z, vcs-type=git) Feb 20 04:51:30 localhost podman[310039]: 2026-02-20 09:51:30.583616492 +0000 UTC m=+0.221106336 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, vendor=Red Hat, Inc., distribution-scope=public, config_id=openstack_network_exporter, architecture=x86_64, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-05T04:57:10Z, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': 
['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, release=1770267347, org.opencontainers.image.created=2026-02-05T04:57:10Z, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9/ubi-minimal, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.7) Feb 20 04:51:30 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Feb 20 04:51:30 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/4245632626' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Feb 20 04:51:30 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. 
Feb 20 04:51:30 localhost nova_compute[280804]: 2026-02-20 09:51:30.609 280808 DEBUG oslo_concurrency.processutils [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:51:30 localhost nova_compute[280804]: 2026-02-20 09:51:30.612 280808 DEBUG nova.objects.instance [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Lazy-loading 'pci_devices' on Instance uuid 43720f70-168d-461a-8b52-ba71de6033a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Feb 20 04:51:30 localhost nova_compute[280804]: 2026-02-20 09:51:30.633 280808 DEBUG nova.virt.libvirt.driver [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] End _get_guest_xml xml=
[guest domain XML not recoverable: its markup was stripped during log capture, leaving only element text across the 04:51:30 nova_compute entries; surviving values include uuid 43720f70-168d-461a-8b52-ba71de6033a0, name instance-00000006, memory 131072, vcpu 1, nova display name tempest-UnshelveToHostMultiNodesTest-server-1846377785 (created 2026-02-20 09:51:29), flavor text values 128 / 1 / 0 / 0 / 1, owner tempest-UnshelveToHostMultiNodesTest-1217794180-project-member in project tempest-UnshelveToHostMultiNodesTest-1217794180, sysinfo RDO / OpenStack Compute / 27.5.2-0.20260127144738.eaa65f0.el9, system description Virtual Machine, os type hvm, rng backend /dev/urandom]
_get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m Feb 20 04:51:30 localhost nova_compute[280804]: 2026-02-20 09:51:30.725 280808 DEBUG nova.virt.libvirt.driver [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] No BDM found with device name vda, not building metadata. 
_build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Feb 20 04:51:30 localhost nova_compute[280804]: 2026-02-20 09:51:30.726 280808 DEBUG nova.virt.libvirt.driver [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Feb 20 04:51:30 localhost nova_compute[280804]: 2026-02-20 09:51:30.727 280808 INFO nova.virt.libvirt.driver [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Using config drive#033[00m Feb 20 04:51:30 localhost nova_compute[280804]: 2026-02-20 09:51:30.765 280808 DEBUG nova.storage.rbd_utils [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] rbd image 43720f70-168d-461a-8b52-ba71de6033a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 20 04:51:30 localhost nova_compute[280804]: 2026-02-20 09:51:30.787 280808 DEBUG nova.objects.instance [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Lazy-loading 'ec2_ids' on Instance uuid 43720f70-168d-461a-8b52-ba71de6033a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Feb 20 04:51:30 localhost nova_compute[280804]: 2026-02-20 09:51:30.829 280808 DEBUG nova.objects.instance [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Lazy-loading 'keypairs' on Instance uuid 43720f70-168d-461a-8b52-ba71de6033a0 obj_load_attr 
/usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Feb 20 04:51:30 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e100 do_prune osdmap full prune enabled Feb 20 04:51:30 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e101 e101: 6 total, 6 up, 6 in Feb 20 04:51:30 localhost nova_compute[280804]: 2026-02-20 09:51:30.910 280808 INFO nova.virt.libvirt.driver [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Creating config drive at /var/lib/nova/instances/43720f70-168d-461a-8b52-ba71de6033a0/disk.config#033[00m Feb 20 04:51:30 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e101: 6 total, 6 up, 6 in Feb 20 04:51:30 localhost nova_compute[280804]: 2026-02-20 09:51:30.920 280808 DEBUG oslo_concurrency.processutils [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/43720f70-168d-461a-8b52-ba71de6033a0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp9zfa250o execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:51:31 localhost nova_compute[280804]: 2026-02-20 09:51:31.054 280808 DEBUG oslo_concurrency.processutils [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/43720f70-168d-461a-8b52-ba71de6033a0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp9zfa250o" returned: 0 in 0.133s execute 
/usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:51:31 localhost nova_compute[280804]: 2026-02-20 09:51:31.094 280808 DEBUG nova.storage.rbd_utils [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] rbd image 43720f70-168d-461a-8b52-ba71de6033a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 20 04:51:31 localhost nova_compute[280804]: 2026-02-20 09:51:31.100 280808 DEBUG oslo_concurrency.processutils [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/43720f70-168d-461a-8b52-ba71de6033a0/disk.config 43720f70-168d-461a-8b52-ba71de6033a0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:51:31 localhost nova_compute[280804]: 2026-02-20 09:51:31.315 280808 DEBUG oslo_concurrency.processutils [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/43720f70-168d-461a-8b52-ba71de6033a0/disk.config 43720f70-168d-461a-8b52-ba71de6033a0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.215s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:51:31 localhost nova_compute[280804]: 2026-02-20 09:51:31.316 280808 INFO nova.virt.libvirt.driver [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Deleting local config drive /var/lib/nova/instances/43720f70-168d-461a-8b52-ba71de6033a0/disk.config 
because it was imported into RBD.#033[00m Feb 20 04:51:31 localhost systemd-machined[205856]: New machine qemu-2-instance-00000006. Feb 20 04:51:31 localhost systemd[1]: Started Virtual Machine qemu-2-instance-00000006. Feb 20 04:51:31 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v107: 177 pgs: 177 active+clean; 356 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 12 MiB/s rd, 13 MiB/s wr, 364 op/s Feb 20 04:51:31 localhost nova_compute[280804]: 2026-02-20 09:51:31.749 280808 DEBUG nova.virt.libvirt.host [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] Removed pending event for 43720f70-168d-461a-8b52-ba71de6033a0 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m Feb 20 04:51:31 localhost nova_compute[280804]: 2026-02-20 09:51:31.750 280808 DEBUG nova.virt.driver [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] Emitting event Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Feb 20 04:51:31 localhost nova_compute[280804]: 2026-02-20 09:51:31.750 280808 INFO nova.compute.manager [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] VM Resumed (Lifecycle Event)#033[00m Feb 20 04:51:31 localhost nova_compute[280804]: 2026-02-20 09:51:31.756 280808 DEBUG nova.compute.manager [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Instance event wait completed in 0 seconds for wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m Feb 20 04:51:31 localhost nova_compute[280804]: 2026-02-20 09:51:31.756 280808 DEBUG nova.virt.libvirt.driver [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Guest 
created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m Feb 20 04:51:31 localhost nova_compute[280804]: 2026-02-20 09:51:31.760 280808 INFO nova.virt.libvirt.driver [-] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Instance spawned successfully.#033[00m Feb 20 04:51:31 localhost nova_compute[280804]: 2026-02-20 09:51:31.782 280808 DEBUG nova.compute.manager [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 20 04:51:31 localhost nova_compute[280804]: 2026-02-20 09:51:31.786 280808 DEBUG nova.compute.manager [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Feb 20 04:51:31 localhost nova_compute[280804]: 2026-02-20 09:51:31.820 280808 INFO nova.compute.manager [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] During sync_power_state the instance has a pending task (spawning). 
Skip.#033[00m Feb 20 04:51:31 localhost nova_compute[280804]: 2026-02-20 09:51:31.820 280808 DEBUG nova.virt.driver [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] Emitting event Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Feb 20 04:51:31 localhost nova_compute[280804]: 2026-02-20 09:51:31.820 280808 INFO nova.compute.manager [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] VM Started (Lifecycle Event)#033[00m Feb 20 04:51:31 localhost nova_compute[280804]: 2026-02-20 09:51:31.836 280808 DEBUG nova.compute.manager [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 20 04:51:31 localhost nova_compute[280804]: 2026-02-20 09:51:31.839 280808 DEBUG nova.compute.manager [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Feb 20 04:51:31 localhost nova_compute[280804]: 2026-02-20 09:51:31.861 280808 INFO nova.compute.manager [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] During sync_power_state the instance has a pending task (spawning). 
Skip.#033[00m Feb 20 04:51:32 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:51:32 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e101 do_prune osdmap full prune enabled Feb 20 04:51:32 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e102 e102: 6 total, 6 up, 6 in Feb 20 04:51:32 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e102: 6 total, 6 up, 6 in Feb 20 04:51:33 localhost nova_compute[280804]: 2026-02-20 09:51:33.322 280808 DEBUG nova.compute.manager [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 20 04:51:33 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v109: 177 pgs: 177 active+clean; 356 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 12 MiB/s rd, 12 MiB/s wr, 347 op/s Feb 20 04:51:33 localhost nova_compute[280804]: 2026-02-20 09:51:33.409 280808 DEBUG oslo_concurrency.lockutils [None req-cf7ea237-e2f8-4515-bc1f-281a107b2c4c a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Lock "43720f70-168d-461a-8b52-ba71de6033a0" "released" by "nova.compute.manager.ComputeManager.unshelve_instance..do_unshelve_instance" :: held 6.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:51:34 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:51:34.285 263745 INFO neutron.agent.linux.ip_lib [None req-c3228805-1eaa-4dde-913e-3eb9701aab4d - - - - - -] Device tapfbed669d-f7 cannot be used as it has no MAC address#033[00m Feb 20 04:51:34 localhost kernel: device tapfbed669d-f7 entered promiscuous mode Feb 20 04:51:34 localhost NetworkManager[5967]: [1771581094.3576] 
manager: (tapfbed669d-f7): new Generic device (/org/freedesktop/NetworkManager/Devices/18) Feb 20 04:51:34 localhost systemd-udevd[310194]: Network interface NamePolicy= disabled on kernel command line. Feb 20 04:51:34 localhost ovn_controller[155916]: 2026-02-20T09:51:34Z|00059|binding|INFO|Claiming lport fbed669d-f70a-4531-87e3-8e1d261d93fd for this chassis. Feb 20 04:51:34 localhost ovn_controller[155916]: 2026-02-20T09:51:34Z|00060|binding|INFO|fbed669d-f70a-4531-87e3-8e1d261d93fd: Claiming unknown Feb 20 04:51:34 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:34.368 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp77c820b4-9646-5dae-aabe-b13a62b01d06-9ac533b5-90af-4cbe-be32-55de197d993c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ac533b5-90af-4cbe-be32-55de197d993c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f299da1b635f4dafbe62328983ad1fae', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3caacf22-c5be-43ca-a327-69c0016b52bc, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=fbed669d-f70a-4531-87e3-8e1d261d93fd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:51:34 localhost ovn_metadata_agent[161761]: 
2026-02-20 09:51:34.369 161766 INFO neutron.agent.ovn.metadata.agent [-] Port fbed669d-f70a-4531-87e3-8e1d261d93fd in datapath 9ac533b5-90af-4cbe-be32-55de197d993c bound to our chassis#033[00m Feb 20 04:51:34 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:34.371 161766 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9ac533b5-90af-4cbe-be32-55de197d993c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 20 04:51:34 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:34.375 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[53e69ae3-1e34-4428-9706-3f200cbdb725]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:34 localhost journal[229367]: ethtool ioctl error on tapfbed669d-f7: No such device Feb 20 04:51:34 localhost journal[229367]: ethtool ioctl error on tapfbed669d-f7: No such device Feb 20 04:51:34 localhost ovn_controller[155916]: 2026-02-20T09:51:34Z|00061|binding|INFO|Setting lport fbed669d-f70a-4531-87e3-8e1d261d93fd ovn-installed in OVS Feb 20 04:51:34 localhost ovn_controller[155916]: 2026-02-20T09:51:34Z|00062|binding|INFO|Setting lport fbed669d-f70a-4531-87e3-8e1d261d93fd up in Southbound Feb 20 04:51:34 localhost journal[229367]: ethtool ioctl error on tapfbed669d-f7: No such device Feb 20 04:51:34 localhost journal[229367]: ethtool ioctl error on tapfbed669d-f7: No such device Feb 20 04:51:34 localhost journal[229367]: ethtool ioctl error on tapfbed669d-f7: No such device Feb 20 04:51:34 localhost journal[229367]: ethtool ioctl error on tapfbed669d-f7: No such device Feb 20 04:51:34 localhost journal[229367]: ethtool ioctl error on tapfbed669d-f7: No such device Feb 20 04:51:34 localhost journal[229367]: ethtool ioctl error on tapfbed669d-f7: No such device Feb 20 04:51:34 localhost nova_compute[280804]: 2026-02-20 
09:51:34.459 280808 DEBUG nova.virt.libvirt.driver [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Creating tmpfile /var/lib/nova/instances/tmpm4ea8dj2 to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041#033[00m Feb 20 04:51:34 localhost nova_compute[280804]: 2026-02-20 09:51:34.479 280808 DEBUG nova.compute.manager [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] destination check data is LibvirtLiveMigrateData(bdms=,block_migration=,disk_available_mb=13312,disk_over_commit=,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmpm4ea8dj2',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=,is_shared_block_storage=,is_shared_instance_path=,is_volume_backed=,migration=,old_vol_attachment_ids=,serial_listen_addr=None,serial_listen_ports=,src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=,target_connect_addr=,vifs=[VIFMigrateData],wait_for_vif_plugged=) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476#033[00m Feb 20 04:51:34 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:51:34.498 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:51:34Z, description=, device_id=ae5f315b-79d2-4264-afec-ecf48cf37c1f, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], 
id=9a82aea3-5c19-4f11-9ee3-e866d7045d3f, ip_allocation=immediate, mac_address=fa:16:3e:74:ee:b9, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T08:22:50Z, description=, dns_domain=, id=84efa4de-646c-469c-b16c-6ab7c3e948cf, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=91bce661d685472eb3e7cacab17bf52a, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['38eee90c-2cdd-4118-ba55-5401f7e61e1e'], tags=[], tenant_id=91bce661d685472eb3e7cacab17bf52a, updated_at=2026-02-20T08:22:56Z, vlan_transparent=None, network_id=84efa4de-646c-469c-b16c-6ab7c3e948cf, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=617, status=DOWN, tags=[], tenant_id=, updated_at=2026-02-20T09:51:34Z on network 84efa4de-646c-469c-b16c-6ab7c3e948cf#033[00m Feb 20 04:51:34 localhost nova_compute[280804]: 2026-02-20 09:51:34.508 280808 DEBUG oslo_concurrency.lockutils [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Acquiring lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Feb 20 04:51:34 localhost nova_compute[280804]: 2026-02-20 09:51:34.508 280808 DEBUG oslo_concurrency.lockutils [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Acquired lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Feb 20 04:51:34 localhost nova_compute[280804]: 2026-02-20 09:51:34.518 280808 INFO 
nova.compute.rpcapi [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Automatically selected compute RPC version 6.2 from minimum service version 66#033[00m Feb 20 04:51:34 localhost nova_compute[280804]: 2026-02-20 09:51:34.519 280808 DEBUG oslo_concurrency.lockutils [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Releasing lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Feb 20 04:51:34 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:51:34 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 04:51:34 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 20 04:51:34 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 04:51:34 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 20 04:51:34 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:51:34 localhost ceph-mgr[286565]: [progress INFO root] update: starting ev f1814fa7-a9f9-4855-a40d-43cfd844d977 (Updating node-proxy deployment (+3 -> 3)) Feb 20 04:51:34 localhost ceph-mgr[286565]: [progress INFO root] complete: finished ev f1814fa7-a9f9-4855-a40d-43cfd844d977 (Updating node-proxy deployment (+3 -> 3)) Feb 20 
04:51:34 localhost ceph-mgr[286565]: [progress INFO root] Completed event f1814fa7-a9f9-4855-a40d-43cfd844d977 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Feb 20 04:51:34 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 20 04:51:34 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 04:51:34 localhost systemd[1]: tmp-crun.yNCxqz.mount: Deactivated successfully.
Feb 20 04:51:34 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 6 addresses
Feb 20 04:51:34 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host
Feb 20 04:51:34 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts
Feb 20 04:51:34 localhost podman[310345]: 2026-02-20 09:51:34.752535236 +0000 UTC m=+0.069678184 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 20 04:51:34 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 04:51:34 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo'
Feb 20 04:51:34 localhost ceph-mon[292786]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #40. Immutable memtables: 0.
Feb 20 04:51:34 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:51:34.978823) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 20 04:51:34 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 40
Feb 20 04:51:34 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581094978866, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 2223, "num_deletes": 254, "total_data_size": 2414621, "memory_usage": 2473584, "flush_reason": "Manual Compaction"}
Feb 20 04:51:34 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #41: started
Feb 20 04:51:34 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581094994727, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 41, "file_size": 2342375, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22771, "largest_seqno": 24992, "table_properties": {"data_size": 2333652, "index_size": 5356, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 19191, "raw_average_key_size": 20, "raw_value_size": 2315578, "raw_average_value_size": 2522, "num_data_blocks": 236, "num_entries": 918, "num_filter_entries": 918, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580911, "oldest_key_time": 1771580911, "file_creation_time": 1771581094, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a5b0c71e-1a28-4ac7-8b68-08edb74002f2", "db_session_id": "54EDA52XUT1SDV7DF7Y7", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Feb 20 04:51:34 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 16294 microseconds, and 6594 cpu microseconds.
Feb 20 04:51:34 localhost ceph-mon[292786]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 20 04:51:34 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:51:34.995116) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #41: 2342375 bytes OK
Feb 20 04:51:34 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:51:34.995253) [db/memtable_list.cc:519] [default] Level-0 commit table #41 started
Feb 20 04:51:34 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:51:34.997278) [db/memtable_list.cc:722] [default] Level-0 commit table #41: memtable #1 done
Feb 20 04:51:34 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:51:34.997299) EVENT_LOG_v1 {"time_micros": 1771581094997293, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 20 04:51:34 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:51:34.997324) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 20 04:51:34 localhost ceph-mon[292786]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 2405406, prev total WAL file size 2405406, number of live WAL files 2.
Feb 20 04:51:34 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000037.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 04:51:34 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:51:34.998731) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131323935' seq:72057594037927935, type:22 .. '7061786F73003131353437' seq:0, type:0; will stop at (end)
Feb 20 04:51:34 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 20 04:51:34 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [41(2287KB)], [39(17MB)]
Feb 20 04:51:34 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581094998774, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [41], "files_L6": [39], "score": -1, "input_data_size": 20810805, "oldest_snapshot_seqno": -1}
Feb 20 04:51:35 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:51:35.020 263745 INFO neutron.agent.dhcp.agent [None req-7ccf85dc-eea9-4da5-b168-061e67ceb002 - - - - - -] DHCP configuration for ports {'9a82aea3-5c19-4f11-9ee3-e866d7045d3f'} is completed#033[00m
Feb 20 04:51:35 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #42: 12288 keys, 18890371 bytes, temperature: kUnknown
Feb 20 04:51:35 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581095054162, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 42, "file_size": 18890371, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18818351, "index_size": 40239, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30725, "raw_key_size": 327832, "raw_average_key_size": 26, "raw_value_size": 18607131, "raw_average_value_size": 1514, "num_data_blocks": 1543, "num_entries": 12288, "num_filter_entries": 12288, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580572, "oldest_key_time": 0, "file_creation_time": 1771581094, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a5b0c71e-1a28-4ac7-8b68-08edb74002f2", "db_session_id": "54EDA52XUT1SDV7DF7Y7", "orig_file_number": 42, "seqno_to_time_mapping": "N/A"}}
Feb 20 04:51:35 localhost ceph-mon[292786]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 20 04:51:35 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:51:35.054351) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 18890371 bytes
Feb 20 04:51:35 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:51:35.055708) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 375.4 rd, 340.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 17.6 +0.0 blob) out(18.0 +0.0 blob), read-write-amplify(16.9) write-amplify(8.1) OK, records in: 12818, records dropped: 530 output_compression: NoCompression
Feb 20 04:51:35 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:51:35.055722) EVENT_LOG_v1 {"time_micros": 1771581095055716, "job": 22, "event": "compaction_finished", "compaction_time_micros": 55439, "compaction_time_cpu_micros": 25919, "output_level": 6, "num_output_files": 1, "total_output_size": 18890371, "num_input_records": 12818, "num_output_records": 12288, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 20 04:51:35 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 04:51:35 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581095055936, "job": 22, "event": "table_file_deletion", "file_number": 41}
Feb 20 04:51:35 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000039.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 04:51:35 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581095057131, "job": 22, "event": "table_file_deletion", "file_number": 39}
Feb 20 04:51:35 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:51:34.998687) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 04:51:35 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:51:35.057153) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 04:51:35 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:51:35.057157) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 04:51:35 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:51:35.057158) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 04:51:35 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:51:35.057160) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 04:51:35 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:51:35.057161) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 04:51:35 localhost podman[310399]:
Feb 20 04:51:35 localhost podman[310399]: 2026-02-20 09:51:35.230638382 +0000 UTC m=+0.090464283 container create e4091699967ae4c5cb9d4215db5146e98c25e7d41f2925027f941e8170be7589 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9ac533b5-90af-4cbe-be32-55de197d993c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 20 04:51:35 localhost nova_compute[280804]: 2026-02-20 09:51:35.263 280808 DEBUG nova.compute.manager [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=,block_migration=False,disk_available_mb=13312,disk_over_commit=,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmpm4ea8dj2',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='e6ab74b8-b495-4363-8d40-2356596c895c',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids=,serial_listen_addr=None,serial_listen_ports=,src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=,target_connect_addr=,vifs=[VIFMigrateData],wait_for_vif_plugged=) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604#033[00m
Feb 20 04:51:35 localhost systemd[1]: Started libpod-conmon-e4091699967ae4c5cb9d4215db5146e98c25e7d41f2925027f941e8170be7589.scope.
Feb 20 04:51:35 localhost nova_compute[280804]: 2026-02-20 09:51:35.283 280808 DEBUG oslo_concurrency.lockutils [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Acquiring lock "refresh_cache-e6ab74b8-b495-4363-8d40-2356596c895c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 20 04:51:35 localhost nova_compute[280804]: 2026-02-20 09:51:35.284 280808 DEBUG oslo_concurrency.lockutils [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Acquired lock "refresh_cache-e6ab74b8-b495-4363-8d40-2356596c895c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 20 04:51:35 localhost nova_compute[280804]: 2026-02-20 09:51:35.284 280808 DEBUG nova.network.neutron [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 20 04:51:35 localhost podman[310399]: 2026-02-20 09:51:35.186369922 +0000 UTC m=+0.046195883 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 04:51:35 localhost systemd[1]: Started libcrun container.
Feb 20 04:51:35 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/869a1298b3fee34374f704128979138a71e2b6c6e81d0ace36f301dc5362c878/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 04:51:35 localhost podman[310399]: 2026-02-20 09:51:35.31060043 +0000 UTC m=+0.170426351 container init e4091699967ae4c5cb9d4215db5146e98c25e7d41f2925027f941e8170be7589 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9ac533b5-90af-4cbe-be32-55de197d993c, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 20 04:51:35 localhost podman[310399]: 2026-02-20 09:51:35.324248466 +0000 UTC m=+0.184074417 container start e4091699967ae4c5cb9d4215db5146e98c25e7d41f2925027f941e8170be7589 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9ac533b5-90af-4cbe-be32-55de197d993c, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 20 04:51:35 localhost dnsmasq[310417]: started, version 2.85 cachesize 150
Feb 20 04:51:35 localhost dnsmasq[310417]: DNS service limited to local subnets
Feb 20 04:51:35 localhost dnsmasq[310417]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 04:51:35 localhost dnsmasq[310417]: warning: no upstream servers configured
Feb 20 04:51:35 localhost dnsmasq-dhcp[310417]: DHCP, static leases only on 10.100.0.0, lease time 1d
Feb 20 04:51:35 localhost dnsmasq[310417]: read /var/lib/neutron/dhcp/9ac533b5-90af-4cbe-be32-55de197d993c/addn_hosts - 0 addresses
Feb 20 04:51:35 localhost dnsmasq-dhcp[310417]: read /var/lib/neutron/dhcp/9ac533b5-90af-4cbe-be32-55de197d993c/host
Feb 20 04:51:35 localhost dnsmasq-dhcp[310417]: read /var/lib/neutron/dhcp/9ac533b5-90af-4cbe-be32-55de197d993c/opts
Feb 20 04:51:35 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v110: 177 pgs: 177 active+clean; 318 MiB data, 1000 MiB used, 41 GiB / 42 GiB avail; 20 MiB/s rd, 14 MiB/s wr, 805 op/s
Feb 20 04:51:35 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:51:35.470 263745 INFO neutron.agent.dhcp.agent [None req-3e1c6fb0-327f-42d2-b9d2-7eec22756e63 - - - - - -] DHCP configuration for ports {'5e11c139-d549-4c68-b7a3-f8aaa8dc6cd2'} is completed#033[00m
Feb 20 04:51:35 localhost nova_compute[280804]: 2026-02-20 09:51:35.476 280808 DEBUG oslo_concurrency.lockutils [None req-dbb7fde0-28a2-4f71-8b7f-87558b86ffba 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Acquiring lock "43720f70-168d-461a-8b52-ba71de6033a0" by "nova.compute.manager.ComputeManager.shelve_instance..do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 20 04:51:35 localhost nova_compute[280804]: 2026-02-20 09:51:35.476 280808 DEBUG oslo_concurrency.lockutils [None req-dbb7fde0-28a2-4f71-8b7f-87558b86ffba 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Lock "43720f70-168d-461a-8b52-ba71de6033a0" acquired by "nova.compute.manager.ComputeManager.shelve_instance..do_shelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 20 04:51:35 localhost nova_compute[280804]: 2026-02-20 09:51:35.477 280808 INFO nova.compute.manager [None req-dbb7fde0-28a2-4f71-8b7f-87558b86ffba 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Shelving#033[00m
Feb 20 04:51:35 localhost nova_compute[280804]: 2026-02-20 09:51:35.502 280808 DEBUG nova.virt.libvirt.driver [None req-dbb7fde0-28a2-4f71-8b7f-87558b86ffba 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Feb 20 04:51:36 localhost nova_compute[280804]: 2026-02-20 09:51:36.033 280808 DEBUG nova.virt.libvirt.driver [None req-33d9284d-1dc9-45ce-9d7a-b8cac683d1f9 fc37a88bcbe34be0bfdc7c0bd2787c78 65685fb154414740b6b5e1276111b8bb - - default default] [instance: 90eb8d1f-8d13-4395-9d15-67fdaa60632d] Creating tmpfile /var/lib/nova/instances/tmp4xew7c85 to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041#033[00m
Feb 20 04:51:36 localhost nova_compute[280804]: 2026-02-20 09:51:36.034 280808 DEBUG nova.compute.manager [None req-33d9284d-1dc9-45ce-9d7a-b8cac683d1f9 fc37a88bcbe34be0bfdc7c0bd2787c78 65685fb154414740b6b5e1276111b8bb - - default default] destination check data is LibvirtLiveMigrateData(bdms=,block_migration=False,disk_available_mb=13312,disk_over_commit=False,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmp4xew7c85',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=,is_shared_block_storage=,is_shared_instance_path=,is_volume_backed=,migration=,old_vol_attachment_ids=,serial_listen_addr=None,serial_listen_ports=,src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=,target_connect_addr=,vifs=[VIFMigrateData],wait_for_vif_plugged=) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476#033[00m
Feb 20 04:51:36 localhost nova_compute[280804]: 2026-02-20 09:51:36.155 280808 DEBUG nova.network.neutron [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Updating instance_info_cache with network_info: [{"id": "89472e1e-6ca6-404e-8ec3-7651099fb248", "address": "fa:16:3e:00:f6:87", "network": {"id": "82c5dcbb-e77d-4af1-bf3e-89ecf6e35192", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-813516866-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "a966116e4ddf4bdea0571a1bb751916e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89472e1e-6c", "ovs_interfaceid": "89472e1e-6ca6-404e-8ec3-7651099fb248", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 20 04:51:36 localhost nova_compute[280804]: 2026-02-20 09:51:36.194 280808 DEBUG oslo_concurrency.lockutils [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Releasing lock "refresh_cache-e6ab74b8-b495-4363-8d40-2356596c895c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 20 04:51:36 localhost nova_compute[280804]: 2026-02-20 09:51:36.197 280808 DEBUG nova.virt.libvirt.driver [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=,block_migration=False,disk_available_mb=13312,disk_over_commit=,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmpm4ea8dj2',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='e6ab74b8-b495-4363-8d40-2356596c895c',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=,src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=,target_connect_addr=,vifs=[VIFMigrateData],wait_for_vif_plugged=) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827#033[00m
Feb 20 04:51:36 localhost nova_compute[280804]: 2026-02-20 09:51:36.197 280808 DEBUG nova.virt.libvirt.driver [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Creating instance directory: /var/lib/nova/instances/e6ab74b8-b495-4363-8d40-2356596c895c pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840#033[00m
Feb 20 04:51:36 localhost nova_compute[280804]: 2026-02-20 09:51:36.198 280808 DEBUG nova.virt.libvirt.driver [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Ensure instance console log exists: /var/lib/nova/instances/e6ab74b8-b495-4363-8d40-2356596c895c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 20 04:51:36 localhost nova_compute[280804]: 2026-02-20 09:51:36.198 280808 DEBUG nova.virt.libvirt.driver [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794#033[00m
Feb 20 04:51:36 localhost nova_compute[280804]: 2026-02-20 09:51:36.200 280808 DEBUG nova.virt.libvirt.vif [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-20T09:51:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-1557569525',ec2_ids=,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005625203.localdomain',hostname='tempest-liveautoblockmigrationv225test-server-1557569525',id=7,image_ref='06bd71fd-c415-45d9-b669-46209b7ca2f4',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2026-02-20T09:51:31Z,launched_on='np0005625203.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=,new_flavor=None,node='np0005625203.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a966116e4ddf4bdea0571a1bb751916e',ramdisk_id='',reservation_id='r-ty4xcmp0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='06bd71fd-c415-45d9-b669-46209b7ca2f4',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-425062890',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-425062890-project-member'},tags=,task_state='migrating',terminated_at=None,trusted_certs=,updated_at=2026-02-20T09:51:31Z,user_data=None,user_id='0db48d5f6f5e44fc93154cf4b34a94e0',uuid=e6ab74b8-b495-4363-8d40-2356596c895c,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "89472e1e-6ca6-404e-8ec3-7651099fb248", "address": "fa:16:3e:00:f6:87", "network": {"id": "82c5dcbb-e77d-4af1-bf3e-89ecf6e35192", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-813516866-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "a966116e4ddf4bdea0571a1bb751916e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap89472e1e-6c", "ovs_interfaceid": "89472e1e-6ca6-404e-8ec3-7651099fb248", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 20 04:51:36 localhost nova_compute[280804]: 2026-02-20 09:51:36.200 280808 DEBUG nova.network.os_vif_util [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Converting VIF {"id": "89472e1e-6ca6-404e-8ec3-7651099fb248", "address": "fa:16:3e:00:f6:87", "network": {"id": "82c5dcbb-e77d-4af1-bf3e-89ecf6e35192", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-813516866-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "a966116e4ddf4bdea0571a1bb751916e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap89472e1e-6c", "ovs_interfaceid": "89472e1e-6ca6-404e-8ec3-7651099fb248", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 20 04:51:36 localhost nova_compute[280804]: 2026-02-20 09:51:36.201 280808 DEBUG nova.network.os_vif_util [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:f6:87,bridge_name='br-int',has_traffic_filtering=True,id=89472e1e-6ca6-404e-8ec3-7651099fb248,network=Network(82c5dcbb-e77d-4af1-bf3e-89ecf6e35192),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap89472e1e-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 20 04:51:36 localhost nova_compute[280804]: 2026-02-20 09:51:36.202 280808 DEBUG os_vif [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:f6:87,bridge_name='br-int',has_traffic_filtering=True,id=89472e1e-6ca6-404e-8ec3-7651099fb248,network=Network(82c5dcbb-e77d-4af1-bf3e-89ecf6e35192),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap89472e1e-6c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 20 04:51:36 localhost systemd[1]: tmp-crun.tyjsqg.mount: Deactivated successfully.
Feb 20 04:51:36 localhost nova_compute[280804]: 2026-02-20 09:51:36.252 280808 DEBUG ovsdbapp.backend.ovs_idl [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Feb 20 04:51:36 localhost nova_compute[280804]: 2026-02-20 09:51:36.252 280808 DEBUG ovsdbapp.backend.ovs_idl [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Feb 20 04:51:36 localhost nova_compute[280804]: 2026-02-20 09:51:36.253 280808 DEBUG ovsdbapp.backend.ovs_idl [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Feb 20 04:51:36 localhost nova_compute[280804]: 2026-02-20 09:51:36.253 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 20 04:51:36 localhost nova_compute[280804]: 2026-02-20 09:51:36.253 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] [POLLOUT] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 04:51:36 localhost nova_compute[280804]: 2026-02-20 09:51:36.254 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 20 04:51:36 localhost nova_compute[280804]: 2026-02-20 09:51:36.255 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 04:51:36 localhost nova_compute[280804]: 2026-02-20 09:51:36.256 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 04:51:36 localhost nova_compute[280804]: 2026-02-20 09:51:36.260 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 04:51:36 localhost nova_compute[280804]: 2026-02-20 09:51:36.277 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 04:51:36 localhost nova_compute[280804]: 2026-02-20 09:51:36.277 280808 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 20 04:51:36 localhost nova_compute[280804]: 2026-02-20 09:51:36.278 280808 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 20 04:51:36 localhost nova_compute[280804]: 2026-02-20 09:51:36.279 280808 INFO oslo.privsep.daemon [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpj5w1qbse/privsep.sock']#033[00m
Feb 20 04:51:36 localhost nova_compute[280804]: 2026-02-20 09:51:36.396 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 04:51:36 localhost nova_compute[280804]: 2026-02-20 09:51:36.900 280808 INFO oslo.privsep.daemon [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Feb 20 04:51:36 localhost nova_compute[280804]: 2026-02-20 09:51:36.790 310422 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Feb 20 04:51:36 localhost nova_compute[280804]: 2026-02-20 09:51:36.795 310422 INFO oslo.privsep.daemon [-] privsep process running with
uid/gid: 0/0#033[00m Feb 20 04:51:36 localhost nova_compute[280804]: 2026-02-20 09:51:36.798 310422 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m Feb 20 04:51:36 localhost nova_compute[280804]: 2026-02-20 09:51:36.798 310422 INFO oslo.privsep.daemon [-] privsep daemon running as pid 310422#033[00m Feb 20 04:51:37 localhost nova_compute[280804]: 2026-02-20 09:51:37.205 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:37 localhost nova_compute[280804]: 2026-02-20 09:51:37.205 280808 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap89472e1e-6c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 20 04:51:37 localhost nova_compute[280804]: 2026-02-20 09:51:37.205 280808 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap89472e1e-6c, col_values=(('external_ids', {'iface-id': '89472e1e-6ca6-404e-8ec3-7651099fb248', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:00:f6:87', 'vm-uuid': 'e6ab74b8-b495-4363-8d40-2356596c895c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 20 04:51:37 localhost nova_compute[280804]: 2026-02-20 09:51:37.207 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:37 localhost nova_compute[280804]: 2026-02-20 09:51:37.209 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 20 04:51:37 localhost nova_compute[280804]: 2026-02-20 09:51:37.212 280808 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:37 localhost nova_compute[280804]: 2026-02-20 09:51:37.213 280808 INFO os_vif [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:f6:87,bridge_name='br-int',has_traffic_filtering=True,id=89472e1e-6ca6-404e-8ec3-7651099fb248,network=Network(82c5dcbb-e77d-4af1-bf3e-89ecf6e35192),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap89472e1e-6c')#033[00m Feb 20 04:51:37 localhost nova_compute[280804]: 2026-02-20 09:51:37.214 280808 DEBUG nova.virt.libvirt.driver [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. 
pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954#033[00m Feb 20 04:51:37 localhost nova_compute[280804]: 2026-02-20 09:51:37.214 280808 DEBUG nova.compute.manager [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=13312,disk_over_commit=,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmpm4ea8dj2',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='e6ab74b8-b495-4363-8d40-2356596c895c',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668#033[00m Feb 20 04:51:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 04:51:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. 
Feb 20 04:51:37 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v111: 177 pgs: 177 active+clean; 318 MiB data, 1000 MiB used, 41 GiB / 42 GiB avail; 7.7 MiB/s rd, 2.4 MiB/s wr, 403 op/s Feb 20 04:51:37 localhost podman[310429]: 2026-02-20 09:51:37.43953901 +0000 UTC m=+0.081390128 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent) Feb 20 04:51:37 localhost podman[310428]: 2026-02-20 09:51:37.492093473 +0000 UTC m=+0.134248429 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:51:37 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:51:37 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e102 
do_prune osdmap full prune enabled Feb 20 04:51:37 localhost nova_compute[280804]: 2026-02-20 09:51:37.522 280808 DEBUG nova.compute.manager [None req-33d9284d-1dc9-45ce-9d7a-b8cac683d1f9 fc37a88bcbe34be0bfdc7c0bd2787c78 65685fb154414740b6b5e1276111b8bb - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=,block_migration=False,disk_available_mb=13312,disk_over_commit=False,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmp4xew7c85',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='90eb8d1f-8d13-4395-9d15-67fdaa60632d',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids=,serial_listen_addr=None,serial_listen_ports=,src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=,target_connect_addr=,vifs=[VIFMigrateData],wait_for_vif_plugged=) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604#033[00m Feb 20 04:51:37 localhost podman[310429]: 2026-02-20 09:51:37.52029662 +0000 UTC m=+0.162147778 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Feb 20 04:51:37 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e103 e103: 6 total, 6 up, 6 in Feb 20 04:51:37 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e103: 6 total, 6 up, 6 in Feb 20 04:51:37 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. 
Feb 20 04:51:37 localhost nova_compute[280804]: 2026-02-20 09:51:37.550 280808 DEBUG oslo_concurrency.lockutils [None req-33d9284d-1dc9-45ce-9d7a-b8cac683d1f9 fc37a88bcbe34be0bfdc7c0bd2787c78 65685fb154414740b6b5e1276111b8bb - - default default] Acquiring lock "refresh_cache-90eb8d1f-8d13-4395-9d15-67fdaa60632d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Feb 20 04:51:37 localhost nova_compute[280804]: 2026-02-20 09:51:37.551 280808 DEBUG oslo_concurrency.lockutils [None req-33d9284d-1dc9-45ce-9d7a-b8cac683d1f9 fc37a88bcbe34be0bfdc7c0bd2787c78 65685fb154414740b6b5e1276111b8bb - - default default] Acquired lock "refresh_cache-90eb8d1f-8d13-4395-9d15-67fdaa60632d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Feb 20 04:51:37 localhost nova_compute[280804]: 2026-02-20 09:51:37.551 280808 DEBUG nova.network.neutron [None req-33d9284d-1dc9-45ce-9d7a-b8cac683d1f9 fc37a88bcbe34be0bfdc7c0bd2787c78 65685fb154414740b6b5e1276111b8bb - - default default] [instance: 90eb8d1f-8d13-4395-9d15-67fdaa60632d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Feb 20 04:51:37 localhost podman[310428]: 2026-02-20 09:51:37.571800124 +0000 UTC m=+0.213955090 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': 
'/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:51:37 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. Feb 20 04:51:37 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:51:37.606 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:51:37Z, description=, device_id=b8b60d70-ea3f-49b3-b747-8e92d7d324e7, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=b4c55077-de9f-41d3-92f8-9146bdfbfaef, ip_allocation=immediate, mac_address=fa:16:3e:0b:46:4a, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T08:22:50Z, description=, dns_domain=, id=84efa4de-646c-469c-b16c-6ab7c3e948cf, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=91bce661d685472eb3e7cacab17bf52a, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, 
qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['38eee90c-2cdd-4118-ba55-5401f7e61e1e'], tags=[], tenant_id=91bce661d685472eb3e7cacab17bf52a, updated_at=2026-02-20T08:22:56Z, vlan_transparent=None, network_id=84efa4de-646c-469c-b16c-6ab7c3e948cf, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=634, status=DOWN, tags=[], tenant_id=, updated_at=2026-02-20T09:51:37Z on network 84efa4de-646c-469c-b16c-6ab7c3e948cf#033[00m Feb 20 04:51:38 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 7 addresses Feb 20 04:51:38 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:51:38 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:51:38 localhost podman[310486]: 2026-02-20 09:51:38.06591774 +0000 UTC m=+0.064492294 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Feb 20 04:51:38 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:51:38.247 263745 INFO neutron.agent.dhcp.agent [None req-5edd83bb-2bda-4625-a675-bf46af994caf - - - - - -] DHCP configuration for ports {'b4c55077-de9f-41d3-92f8-9146bdfbfaef'} is completed#033[00m Feb 20 04:51:38 localhost ceph-mgr[286565]: [progress INFO root] Writing back 50 completed 
events Feb 20 04:51:38 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 20 04:51:38 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:51:38 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:51:38 localhost nova_compute[280804]: 2026-02-20 09:51:38.866 280808 DEBUG nova.network.neutron [None req-33d9284d-1dc9-45ce-9d7a-b8cac683d1f9 fc37a88bcbe34be0bfdc7c0bd2787c78 65685fb154414740b6b5e1276111b8bb - - default default] [instance: 90eb8d1f-8d13-4395-9d15-67fdaa60632d] Updating instance_info_cache with network_info: [{"id": "609a0699-8716-4bf8-9f50-bfeec5f65721", "address": "fa:16:3e:c0:a3:f9", "network": {"id": "51f8ae9c-1ccc-4ec5-8a06-5c7802ad29e0", "bridge": "br-int", "label": "tempest-LiveMigrationTest-982155183-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "e704aae5b1ba49d59262f9aa0c366fb4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap609a0699-87", "ovs_interfaceid": "609a0699-8716-4bf8-9f50-bfeec5f65721", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Feb 20 04:51:38 localhost 
nova_compute[280804]: 2026-02-20 09:51:38.887 280808 DEBUG oslo_concurrency.lockutils [None req-33d9284d-1dc9-45ce-9d7a-b8cac683d1f9 fc37a88bcbe34be0bfdc7c0bd2787c78 65685fb154414740b6b5e1276111b8bb - - default default] Releasing lock "refresh_cache-90eb8d1f-8d13-4395-9d15-67fdaa60632d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Feb 20 04:51:38 localhost nova_compute[280804]: 2026-02-20 09:51:38.889 280808 DEBUG nova.virt.libvirt.driver [None req-33d9284d-1dc9-45ce-9d7a-b8cac683d1f9 fc37a88bcbe34be0bfdc7c0bd2787c78 65685fb154414740b6b5e1276111b8bb - - default default] [instance: 90eb8d1f-8d13-4395-9d15-67fdaa60632d] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=,block_migration=False,disk_available_mb=13312,disk_over_commit=False,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmp4xew7c85',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='90eb8d1f-8d13-4395-9d15-67fdaa60632d',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=,src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=,target_connect_addr=,vifs=[VIFMigrateData],wait_for_vif_plugged=) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827#033[00m Feb 20 04:51:38 localhost nova_compute[280804]: 2026-02-20 09:51:38.890 280808 DEBUG nova.virt.libvirt.driver [None req-33d9284d-1dc9-45ce-9d7a-b8cac683d1f9 fc37a88bcbe34be0bfdc7c0bd2787c78 65685fb154414740b6b5e1276111b8bb - - default default] [instance: 90eb8d1f-8d13-4395-9d15-67fdaa60632d] Creating instance directory: /var/lib/nova/instances/90eb8d1f-8d13-4395-9d15-67fdaa60632d pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840#033[00m Feb 20 04:51:38 localhost 
nova_compute[280804]: 2026-02-20 09:51:38.891 280808 DEBUG nova.virt.libvirt.driver [None req-33d9284d-1dc9-45ce-9d7a-b8cac683d1f9 fc37a88bcbe34be0bfdc7c0bd2787c78 65685fb154414740b6b5e1276111b8bb - - default default] [instance: 90eb8d1f-8d13-4395-9d15-67fdaa60632d] Ensure instance console log exists: /var/lib/nova/instances/90eb8d1f-8d13-4395-9d15-67fdaa60632d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m Feb 20 04:51:38 localhost nova_compute[280804]: 2026-02-20 09:51:38.891 280808 DEBUG nova.virt.libvirt.driver [None req-33d9284d-1dc9-45ce-9d7a-b8cac683d1f9 fc37a88bcbe34be0bfdc7c0bd2787c78 65685fb154414740b6b5e1276111b8bb - - default default] [instance: 90eb8d1f-8d13-4395-9d15-67fdaa60632d] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794#033[00m Feb 20 04:51:38 localhost nova_compute[280804]: 2026-02-20 09:51:38.892 280808 DEBUG nova.virt.libvirt.vif [None req-33d9284d-1dc9-45ce-9d7a-b8cac683d1f9 fc37a88bcbe34be0bfdc7c0bd2787c78 65685fb154414740b6b5e1276111b8bb - - default default] vif_type=ovs 
instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-20T09:51:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-721665546',display_name='tempest-LiveMigrationTest-server-721665546',ec2_ids=,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005625204.localdomain',hostname='tempest-livemigrationtest-server-721665546',id=8,image_ref='06bd71fd-c415-45d9-b669-46209b7ca2f4',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2026-02-20T09:51:32Z,launched_on='np0005625204.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=,new_flavor=None,node='np0005625204.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='e704aae5b1ba49d59262f9aa0c366fb4',ramdisk_id='',reservation_id='r-erbwo03j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='06bd71fd-c415-45d9-b669-46209b7ca2f4',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveMigrationTest-2108133970',owner_user_name='tempest-LiveMigrationTest-2108133970-project-member'},tags=,task_state='migrating',terminated_at=None,trusted_certs=,updated_at=2026-02-20T09:51:32Z,user_data=None,user_id='ba15d0e9919d4594a
2e6e9d6b3414a5e',uuid=90eb8d1f-8d13-4395-9d15-67fdaa60632d,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "609a0699-8716-4bf8-9f50-bfeec5f65721", "address": "fa:16:3e:c0:a3:f9", "network": {"id": "51f8ae9c-1ccc-4ec5-8a06-5c7802ad29e0", "bridge": "br-int", "label": "tempest-LiveMigrationTest-982155183-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "e704aae5b1ba49d59262f9aa0c366fb4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap609a0699-87", "ovs_interfaceid": "609a0699-8716-4bf8-9f50-bfeec5f65721", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m Feb 20 04:51:38 localhost nova_compute[280804]: 2026-02-20 09:51:38.893 280808 DEBUG nova.network.os_vif_util [None req-33d9284d-1dc9-45ce-9d7a-b8cac683d1f9 fc37a88bcbe34be0bfdc7c0bd2787c78 65685fb154414740b6b5e1276111b8bb - - default default] Converting VIF {"id": "609a0699-8716-4bf8-9f50-bfeec5f65721", "address": "fa:16:3e:c0:a3:f9", "network": {"id": "51f8ae9c-1ccc-4ec5-8a06-5c7802ad29e0", "bridge": "br-int", "label": "tempest-LiveMigrationTest-982155183-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "e704aae5b1ba49d59262f9aa0c366fb4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap609a0699-87", "ovs_interfaceid": "609a0699-8716-4bf8-9f50-bfeec5f65721", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Feb 20 04:51:38 localhost nova_compute[280804]: 2026-02-20 09:51:38.894 280808 DEBUG nova.network.os_vif_util [None req-33d9284d-1dc9-45ce-9d7a-b8cac683d1f9 fc37a88bcbe34be0bfdc7c0bd2787c78 65685fb154414740b6b5e1276111b8bb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c0:a3:f9,bridge_name='br-int',has_traffic_filtering=True,id=609a0699-8716-4bf8-9f50-bfeec5f65721,network=Network(51f8ae9c-1ccc-4ec5-8a06-5c7802ad29e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap609a0699-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Feb 20 04:51:38 localhost nova_compute[280804]: 2026-02-20 09:51:38.894 280808 DEBUG os_vif [None req-33d9284d-1dc9-45ce-9d7a-b8cac683d1f9 fc37a88bcbe34be0bfdc7c0bd2787c78 65685fb154414740b6b5e1276111b8bb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c0:a3:f9,bridge_name='br-int',has_traffic_filtering=True,id=609a0699-8716-4bf8-9f50-bfeec5f65721,network=Network(51f8ae9c-1ccc-4ec5-8a06-5c7802ad29e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap609a0699-87') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m Feb 20 04:51:38 localhost nova_compute[280804]: 2026-02-20 
09:51:38.895 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:38 localhost nova_compute[280804]: 2026-02-20 09:51:38.895 280808 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 20 04:51:38 localhost nova_compute[280804]: 2026-02-20 09:51:38.896 280808 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Feb 20 04:51:38 localhost nova_compute[280804]: 2026-02-20 09:51:38.899 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:38 localhost nova_compute[280804]: 2026-02-20 09:51:38.899 280808 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap609a0699-87, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 20 04:51:38 localhost nova_compute[280804]: 2026-02-20 09:51:38.900 280808 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap609a0699-87, col_values=(('external_ids', {'iface-id': '609a0699-8716-4bf8-9f50-bfeec5f65721', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c0:a3:f9', 'vm-uuid': '90eb8d1f-8d13-4395-9d15-67fdaa60632d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 20 04:51:38 localhost nova_compute[280804]: 2026-02-20 09:51:38.901 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:38 localhost nova_compute[280804]: 2026-02-20 09:51:38.904 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 20 04:51:38 localhost nova_compute[280804]: 2026-02-20 09:51:38.908 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:38 localhost nova_compute[280804]: 2026-02-20 09:51:38.909 280808 INFO os_vif [None req-33d9284d-1dc9-45ce-9d7a-b8cac683d1f9 fc37a88bcbe34be0bfdc7c0bd2787c78 65685fb154414740b6b5e1276111b8bb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c0:a3:f9,bridge_name='br-int',has_traffic_filtering=True,id=609a0699-8716-4bf8-9f50-bfeec5f65721,network=Network(51f8ae9c-1ccc-4ec5-8a06-5c7802ad29e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap609a0699-87')#033[00m Feb 20 04:51:38 localhost nova_compute[280804]: 2026-02-20 09:51:38.910 280808 DEBUG nova.virt.libvirt.driver [None req-33d9284d-1dc9-45ce-9d7a-b8cac683d1f9 fc37a88bcbe34be0bfdc7c0bd2787c78 65685fb154414740b6b5e1276111b8bb - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. 
pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954#033[00m Feb 20 04:51:38 localhost nova_compute[280804]: 2026-02-20 09:51:38.910 280808 DEBUG nova.compute.manager [None req-33d9284d-1dc9-45ce-9d7a-b8cac683d1f9 fc37a88bcbe34be0bfdc7c0bd2787c78 65685fb154414740b6b5e1276111b8bb - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=13312,disk_over_commit=False,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmp4xew7c85',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='90eb8d1f-8d13-4395-9d15-67fdaa60632d',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668#033[00m Feb 20 04:51:38 localhost nova_compute[280804]: 2026-02-20 09:51:38.940 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:39 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v113: 177 pgs: 177 active+clean; 318 MiB data, 997 MiB used, 41 GiB / 42 GiB avail; 8.7 MiB/s rd, 2.4 MiB/s wr, 435 op/s Feb 20 04:51:39 localhost nova_compute[280804]: 2026-02-20 09:51:39.620 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:40 localhost nova_compute[280804]: 2026-02-20 09:51:40.454 280808 DEBUG nova.network.neutron [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 
5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Port 89472e1e-6ca6-404e-8ec3-7651099fb248 updated with migration profile {'migrating_to': 'np0005625202.localdomain'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354#033[00m Feb 20 04:51:40 localhost nova_compute[280804]: 2026-02-20 09:51:40.457 280808 DEBUG nova.compute.manager [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=13312,disk_over_commit=,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmpm4ea8dj2',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='e6ab74b8-b495-4363-8d40-2356596c895c',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723#033[00m Feb 20 04:51:40 localhost sshd[310511]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:51:40 localhost systemd-logind[760]: New session 73 of user nova. Feb 20 04:51:40 localhost systemd[1]: Created slice User Slice of UID 42436. Feb 20 04:51:40 localhost systemd[1]: Starting User Runtime Directory /run/user/42436... Feb 20 04:51:40 localhost systemd[1]: Finished User Runtime Directory /run/user/42436. Feb 20 04:51:40 localhost systemd[1]: Starting User Manager for UID 42436... 
Feb 20 04:51:40 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:51:40.901 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:51:40Z, description=, device_id=330257d1-c627-4905-9230-185815fc6ffb, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=7ae3e5b9-4eec-4a29-8b67-96090151ec43, ip_allocation=immediate, mac_address=fa:16:3e:71:28:12, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T08:22:50Z, description=, dns_domain=, id=84efa4de-646c-469c-b16c-6ab7c3e948cf, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=91bce661d685472eb3e7cacab17bf52a, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['38eee90c-2cdd-4118-ba55-5401f7e61e1e'], tags=[], tenant_id=91bce661d685472eb3e7cacab17bf52a, updated_at=2026-02-20T08:22:56Z, vlan_transparent=None, network_id=84efa4de-646c-469c-b16c-6ab7c3e948cf, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=647, status=DOWN, tags=[], tenant_id=, updated_at=2026-02-20T09:51:40Z on network 84efa4de-646c-469c-b16c-6ab7c3e948cf#033[00m Feb 20 04:51:40 localhost systemd[310515]: Queued start job for default target Main User Target. Feb 20 04:51:40 localhost systemd[310515]: Created slice User Application Slice. Feb 20 04:51:40 localhost systemd[310515]: Started Mark boot as successful after the user session has run 2 minutes. 
Feb 20 04:51:40 localhost systemd[310515]: Started Daily Cleanup of User's Temporary Directories. Feb 20 04:51:40 localhost systemd[310515]: Reached target Paths. Feb 20 04:51:40 localhost systemd[310515]: Reached target Timers. Feb 20 04:51:40 localhost systemd[310515]: Starting D-Bus User Message Bus Socket... Feb 20 04:51:40 localhost systemd[310515]: Starting Create User's Volatile Files and Directories... Feb 20 04:51:40 localhost systemd[310515]: Finished Create User's Volatile Files and Directories. Feb 20 04:51:40 localhost systemd[310515]: Listening on D-Bus User Message Bus Socket. Feb 20 04:51:40 localhost systemd[310515]: Reached target Sockets. Feb 20 04:51:40 localhost systemd[310515]: Reached target Basic System. Feb 20 04:51:40 localhost systemd[310515]: Reached target Main User Target. Feb 20 04:51:40 localhost systemd[310515]: Startup finished in 168ms. Feb 20 04:51:40 localhost systemd[1]: Started User Manager for UID 42436. Feb 20 04:51:41 localhost systemd[1]: Started Session 73 of User nova. 
Feb 20 04:51:41 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:51:41.047 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:51:40Z, description=, device_id=b8b60d70-ea3f-49b3-b747-8e92d7d324e7, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=03392061-da7b-4f83-bc5e-24e18d9d666e, ip_allocation=immediate, mac_address=fa:16:3e:02:01:89, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:51:31Z, description=, dns_domain=, id=9ac533b5-90af-4cbe-be32-55de197d993c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServersV294TestFqdnHostnames-596806407-network, port_security_enabled=True, project_id=f299da1b635f4dafbe62328983ad1fae, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=48730, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=606, status=ACTIVE, subnets=['12a57feb-547d-47f0-b3aa-28f3df8f6f52'], tags=[], tenant_id=f299da1b635f4dafbe62328983ad1fae, updated_at=2026-02-20T09:51:33Z, vlan_transparent=None, network_id=9ac533b5-90af-4cbe-be32-55de197d993c, port_security_enabled=False, project_id=f299da1b635f4dafbe62328983ad1fae, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=648, status=DOWN, tags=[], tenant_id=f299da1b635f4dafbe62328983ad1fae, updated_at=2026-02-20T09:51:40Z on network 9ac533b5-90af-4cbe-be32-55de197d993c#033[00m Feb 20 04:51:41 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 8 addresses Feb 20 04:51:41 localhost podman[310548]: 2026-02-20 09:51:41.139332847 
+0000 UTC m=+0.080618547 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:51:41 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:51:41 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:51:41 localhost kernel: tun: Universal TUN/TAP device driver, 1.6 Feb 20 04:51:41 localhost kernel: device tap89472e1e-6c entered promiscuous mode Feb 20 04:51:41 localhost NetworkManager[5967]: [1771581101.1762] manager: (tap89472e1e-6c): new Tun device (/org/freedesktop/NetworkManager/Devices/19) Feb 20 04:51:41 localhost systemd-udevd[310589]: Network interface NamePolicy= disabled on kernel command line. Feb 20 04:51:41 localhost nova_compute[280804]: 2026-02-20 09:51:41.182 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:41 localhost ovn_controller[155916]: 2026-02-20T09:51:41Z|00063|binding|INFO|Claiming lport 89472e1e-6ca6-404e-8ec3-7651099fb248 for this additional chassis. Feb 20 04:51:41 localhost ovn_controller[155916]: 2026-02-20T09:51:41Z|00064|binding|INFO|89472e1e-6ca6-404e-8ec3-7651099fb248: Claiming fa:16:3e:00:f6:87 10.100.0.6 Feb 20 04:51:41 localhost ovn_controller[155916]: 2026-02-20T09:51:41Z|00065|binding|INFO|Claiming lport 533acac2-f7ea-4ecb-b927-c6780a91a0a2 for this additional chassis. 
Feb 20 04:51:41 localhost ovn_controller[155916]: 2026-02-20T09:51:41Z|00066|binding|INFO|533acac2-f7ea-4ecb-b927-c6780a91a0a2: Claiming fa:16:3e:94:06:ec 19.80.0.250 Feb 20 04:51:41 localhost NetworkManager[5967]: [1771581101.2008] device (tap89472e1e-6c): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external') Feb 20 04:51:41 localhost NetworkManager[5967]: [1771581101.2032] device (tap89472e1e-6c): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'external') Feb 20 04:51:41 localhost systemd-machined[205856]: New machine qemu-3-instance-00000007. Feb 20 04:51:41 localhost systemd[1]: Started Virtual Machine qemu-3-instance-00000007. Feb 20 04:51:41 localhost nova_compute[280804]: 2026-02-20 09:51:41.232 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:41 localhost ovn_controller[155916]: 2026-02-20T09:51:41Z|00067|binding|INFO|Setting lport 89472e1e-6ca6-404e-8ec3-7651099fb248 ovn-installed in OVS Feb 20 04:51:41 localhost nova_compute[280804]: 2026-02-20 09:51:41.240 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:41 localhost nova_compute[280804]: 2026-02-20 09:51:41.241 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:41 localhost systemd[1]: tmp-crun.v7WANA.mount: Deactivated successfully. 
Feb 20 04:51:41 localhost dnsmasq[310417]: read /var/lib/neutron/dhcp/9ac533b5-90af-4cbe-be32-55de197d993c/addn_hosts - 1 addresses Feb 20 04:51:41 localhost dnsmasq-dhcp[310417]: read /var/lib/neutron/dhcp/9ac533b5-90af-4cbe-be32-55de197d993c/host Feb 20 04:51:41 localhost podman[310611]: 2026-02-20 09:51:41.303121347 +0000 UTC m=+0.057081844 container kill e4091699967ae4c5cb9d4215db5146e98c25e7d41f2925027f941e8170be7589 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9ac533b5-90af-4cbe-be32-55de197d993c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:51:41 localhost dnsmasq-dhcp[310417]: read /var/lib/neutron/dhcp/9ac533b5-90af-4cbe-be32-55de197d993c/opts Feb 20 04:51:41 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v114: 177 pgs: 177 active+clean; 318 MiB data, 997 MiB used, 41 GiB / 42 GiB avail; 7.8 MiB/s rd, 2.1 MiB/s wr, 392 op/s Feb 20 04:51:41 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:51:41.419 263745 INFO neutron.agent.dhcp.agent [None req-d260e262-be46-4b03-8329-fba66b5b3865 - - - - - -] DHCP configuration for ports {'7ae3e5b9-4eec-4a29-8b67-96090151ec43'} is completed#033[00m Feb 20 04:51:41 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:51:41.564 263745 INFO neutron.agent.dhcp.agent [None req-534e12c1-12b2-4e0e-a1d8-ea6728f86ae5 - - - - - -] DHCP configuration for ports {'03392061-da7b-4f83-bc5e-24e18d9d666e'} is completed#033[00m Feb 20 04:51:41 localhost nova_compute[280804]: 2026-02-20 09:51:41.582 280808 DEBUG nova.virt.driver [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] Emitting event Started> emit_event 
/usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Feb 20 04:51:41 localhost nova_compute[280804]: 2026-02-20 09:51:41.583 280808 INFO nova.compute.manager [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] VM Started (Lifecycle Event)#033[00m Feb 20 04:51:41 localhost nova_compute[280804]: 2026-02-20 09:51:41.597 280808 DEBUG nova.compute.manager [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 20 04:51:41 localhost nova_compute[280804]: 2026-02-20 09:51:41.666 280808 DEBUG nova.network.neutron [None req-33d9284d-1dc9-45ce-9d7a-b8cac683d1f9 fc37a88bcbe34be0bfdc7c0bd2787c78 65685fb154414740b6b5e1276111b8bb - - default default] [instance: 90eb8d1f-8d13-4395-9d15-67fdaa60632d] Port 609a0699-8716-4bf8-9f50-bfeec5f65721 updated with migration profile {'migrating_to': 'np0005625202.localdomain'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354#033[00m Feb 20 04:51:41 localhost nova_compute[280804]: 2026-02-20 09:51:41.667 280808 DEBUG nova.compute.manager [None req-33d9284d-1dc9-45ce-9d7a-b8cac683d1f9 fc37a88bcbe34be0bfdc7c0bd2787c78 65685fb154414740b6b5e1276111b8bb - - default default] pre_live_migration result data is 
LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=13312,disk_over_commit=False,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmp4xew7c85',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='90eb8d1f-8d13-4395-9d15-67fdaa60632d',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723#033[00m Feb 20 04:51:41 localhost sshd[310684]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:51:41 localhost systemd-logind[760]: New session 75 of user nova. Feb 20 04:51:41 localhost systemd[1]: Started Session 75 of User nova. Feb 20 04:51:41 localhost nova_compute[280804]: 2026-02-20 09:51:41.996 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1. Feb 20 04:51:42 localhost NetworkManager[5967]: [1771581102.1315] manager: (tap609a0699-87): new Tun device (/org/freedesktop/NetworkManager/Devices/20) Feb 20 04:51:42 localhost kernel: device tap609a0699-87 entered promiscuous mode Feb 20 04:51:42 localhost nova_compute[280804]: 2026-02-20 09:51:42.136 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:42 localhost ovn_controller[155916]: 2026-02-20T09:51:42Z|00068|binding|INFO|Claiming lport 609a0699-8716-4bf8-9f50-bfeec5f65721 for this additional chassis. 
Feb 20 04:51:42 localhost ovn_controller[155916]: 2026-02-20T09:51:42Z|00069|binding|INFO|609a0699-8716-4bf8-9f50-bfeec5f65721: Claiming fa:16:3e:c0:a3:f9 10.100.0.12 Feb 20 04:51:42 localhost ovn_controller[155916]: 2026-02-20T09:51:42Z|00070|binding|INFO|Claiming lport ce4822a0-5e7a-4c40-9856-6c8879a12ac7 for this additional chassis. Feb 20 04:51:42 localhost ovn_controller[155916]: 2026-02-20T09:51:42Z|00071|binding|INFO|ce4822a0-5e7a-4c40-9856-6c8879a12ac7: Claiming fa:16:3e:ef:22:88 19.80.0.55 Feb 20 04:51:42 localhost NetworkManager[5967]: [1771581102.1440] device (tap609a0699-87): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external') Feb 20 04:51:42 localhost NetworkManager[5967]: [1771581102.1444] device (tap609a0699-87): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'external') Feb 20 04:51:42 localhost systemd-machined[205856]: New machine qemu-4-instance-00000008. Feb 20 04:51:42 localhost nova_compute[280804]: 2026-02-20 09:51:42.174 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:42 localhost ovn_controller[155916]: 2026-02-20T09:51:42Z|00072|binding|INFO|Setting lport 609a0699-8716-4bf8-9f50-bfeec5f65721 ovn-installed in OVS Feb 20 04:51:42 localhost nova_compute[280804]: 2026-02-20 09:51:42.179 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:42 localhost systemd[1]: Started Virtual Machine qemu-4-instance-00000008. Feb 20 04:51:42 localhost systemd[1]: tmp-crun.Xpd1NG.mount: Deactivated successfully. 
Feb 20 04:51:42 localhost podman[310692]: 2026-02-20 09:51:42.202834041 +0000 UTC m=+0.095385233 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 20 04:51:42 localhost podman[310692]: 2026-02-20 09:51:42.244798729 +0000 UTC m=+0.137349911 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 
'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 20 04:51:42 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully. Feb 20 04:51:42 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:51:42 localhost nova_compute[280804]: 2026-02-20 09:51:42.547 280808 DEBUG nova.virt.driver [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] Emitting event Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Feb 20 04:51:42 localhost nova_compute[280804]: 2026-02-20 09:51:42.547 280808 INFO nova.compute.manager [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] [instance: 90eb8d1f-8d13-4395-9d15-67fdaa60632d] VM Started (Lifecycle Event)#033[00m Feb 20 04:51:42 localhost nova_compute[280804]: 2026-02-20 09:51:42.573 280808 DEBUG nova.compute.manager [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] [instance: 90eb8d1f-8d13-4395-9d15-67fdaa60632d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 20 04:51:42 localhost nova_compute[280804]: 2026-02-20 09:51:42.976 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:42 localhost sshd[310776]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:51:43 localhost nova_compute[280804]: 2026-02-20 09:51:43.009 280808 DEBUG nova.virt.driver [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] Emitting event Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Feb 20 04:51:43 localhost nova_compute[280804]: 2026-02-20 09:51:43.009 280808 INFO nova.compute.manager [None 
req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] VM Resumed (Lifecycle Event)#033[00m Feb 20 04:51:43 localhost nova_compute[280804]: 2026-02-20 09:51:43.189 280808 DEBUG nova.compute.manager [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 20 04:51:43 localhost nova_compute[280804]: 2026-02-20 09:51:43.194 280808 DEBUG nova.compute.manager [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Feb 20 04:51:43 localhost nova_compute[280804]: 2026-02-20 09:51:43.210 280808 INFO nova.compute.manager [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] During the sync_power process the instance has moved from host np0005625203.localdomain to host np0005625202.localdomain#033[00m Feb 20 04:51:43 localhost systemd[1]: session-73.scope: Deactivated successfully. Feb 20 04:51:43 localhost systemd-logind[760]: Session 73 logged out. Waiting for processes to exit. Feb 20 04:51:43 localhost systemd-logind[760]: Removed session 73. 
Feb 20 04:51:43 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v115: 177 pgs: 177 active+clean; 318 MiB data, 997 MiB used, 41 GiB / 42 GiB avail; 7.0 MiB/s rd, 1.9 MiB/s wr, 348 op/s Feb 20 04:51:43 localhost nova_compute[280804]: 2026-02-20 09:51:43.472 280808 DEBUG nova.virt.driver [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] Emitting event Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Feb 20 04:51:43 localhost nova_compute[280804]: 2026-02-20 09:51:43.473 280808 INFO nova.compute.manager [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] [instance: 90eb8d1f-8d13-4395-9d15-67fdaa60632d] VM Resumed (Lifecycle Event)#033[00m Feb 20 04:51:43 localhost nova_compute[280804]: 2026-02-20 09:51:43.499 280808 DEBUG nova.compute.manager [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] [instance: 90eb8d1f-8d13-4395-9d15-67fdaa60632d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 20 04:51:43 localhost nova_compute[280804]: 2026-02-20 09:51:43.502 280808 DEBUG nova.compute.manager [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] [instance: 90eb8d1f-8d13-4395-9d15-67fdaa60632d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Feb 20 04:51:43 localhost nova_compute[280804]: 2026-02-20 09:51:43.523 280808 INFO nova.compute.manager [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] [instance: 90eb8d1f-8d13-4395-9d15-67fdaa60632d] During the sync_power process the instance has moved from host np0005625204.localdomain to host np0005625202.localdomain#033[00m Feb 20 04:51:43 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:51:43.571 263745 INFO neutron.agent.dhcp.agent [-] Trigger 
reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:51:40Z, description=, device_id=b8b60d70-ea3f-49b3-b747-8e92d7d324e7, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=03392061-da7b-4f83-bc5e-24e18d9d666e, ip_allocation=immediate, mac_address=fa:16:3e:02:01:89, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:51:31Z, description=, dns_domain=, id=9ac533b5-90af-4cbe-be32-55de197d993c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServersV294TestFqdnHostnames-596806407-network, port_security_enabled=True, project_id=f299da1b635f4dafbe62328983ad1fae, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=48730, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=606, status=ACTIVE, subnets=['12a57feb-547d-47f0-b3aa-28f3df8f6f52'], tags=[], tenant_id=f299da1b635f4dafbe62328983ad1fae, updated_at=2026-02-20T09:51:33Z, vlan_transparent=None, network_id=9ac533b5-90af-4cbe-be32-55de197d993c, port_security_enabled=False, project_id=f299da1b635f4dafbe62328983ad1fae, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=648, status=DOWN, tags=[], tenant_id=f299da1b635f4dafbe62328983ad1fae, updated_at=2026-02-20T09:51:40Z on network 9ac533b5-90af-4cbe-be32-55de197d993c#033[00m Feb 20 04:51:43 localhost neutron_sriov_agent[256551]: 2026-02-20 09:51:43.727 2 INFO neutron.agent.securitygroups_rpc [req-bc88a03e-b48b-4063-bf3f-e91bcc37d72d req-9e63afcc-d40a-4b2c-a3aa-f230d65e4db2 107ab22f7bfc4441a4ab7a417331cdb2 0e7d3e1cfe9f4e4d8451c6f0b8be3a29 - - default default] Security group rule updated 
['d4aeef42-5959-493a-9cfc-ec0d9adb0b00']#033[00m Feb 20 04:51:43 localhost systemd[1]: session-75.scope: Deactivated successfully. Feb 20 04:51:43 localhost systemd-logind[760]: Session 75 logged out. Waiting for processes to exit. Feb 20 04:51:43 localhost systemd-logind[760]: Removed session 75. Feb 20 04:51:43 localhost dnsmasq[310417]: read /var/lib/neutron/dhcp/9ac533b5-90af-4cbe-be32-55de197d993c/addn_hosts - 1 addresses Feb 20 04:51:43 localhost dnsmasq-dhcp[310417]: read /var/lib/neutron/dhcp/9ac533b5-90af-4cbe-be32-55de197d993c/host Feb 20 04:51:43 localhost podman[310795]: 2026-02-20 09:51:43.777767336 +0000 UTC m=+0.047789175 container kill e4091699967ae4c5cb9d4215db5146e98c25e7d41f2925027f941e8170be7589 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9ac533b5-90af-4cbe-be32-55de197d993c, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Feb 20 04:51:43 localhost dnsmasq-dhcp[310417]: read /var/lib/neutron/dhcp/9ac533b5-90af-4cbe-be32-55de197d993c/opts Feb 20 04:51:43 localhost nova_compute[280804]: 2026-02-20 09:51:43.902 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:43 localhost nova_compute[280804]: 2026-02-20 09:51:43.943 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:43 localhost ovn_controller[155916]: 2026-02-20T09:51:43Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:00:f6:87 10.100.0.6 Feb 20 04:51:43 localhost ovn_controller[155916]: 
2026-02-20T09:51:43Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:00:f6:87 10.100.0.6 Feb 20 04:51:44 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:51:44.035 263745 INFO neutron.agent.dhcp.agent [None req-067a9ea0-9384-42e9-9fa5-6c1ce68bb842 - - - - - -] DHCP configuration for ports {'03392061-da7b-4f83-bc5e-24e18d9d666e'} is completed#033[00m Feb 20 04:51:44 localhost neutron_sriov_agent[256551]: 2026-02-20 09:51:44.704 2 INFO neutron.agent.securitygroups_rpc [req-d0af9ee5-c34c-498a-a79b-d6b681e80e4a req-6395f2ff-4ffd-4bf4-8fd0-e88e3c18ce7a 107ab22f7bfc4441a4ab7a417331cdb2 0e7d3e1cfe9f4e4d8451c6f0b8be3a29 - - default default] Security group rule updated ['f1d2b747-b5b9-4577-9543-577b07c94aaa']#033[00m Feb 20 04:51:44 localhost ovn_controller[155916]: 2026-02-20T09:51:44Z|00073|binding|INFO|Claiming lport 89472e1e-6ca6-404e-8ec3-7651099fb248 for this chassis. Feb 20 04:51:44 localhost ovn_controller[155916]: 2026-02-20T09:51:44Z|00074|binding|INFO|89472e1e-6ca6-404e-8ec3-7651099fb248: Claiming fa:16:3e:00:f6:87 10.100.0.6 Feb 20 04:51:44 localhost ovn_controller[155916]: 2026-02-20T09:51:44Z|00075|binding|INFO|Claiming lport 533acac2-f7ea-4ecb-b927-c6780a91a0a2 for this chassis. 
Feb 20 04:51:44 localhost ovn_controller[155916]: 2026-02-20T09:51:44Z|00076|binding|INFO|533acac2-f7ea-4ecb-b927-c6780a91a0a2: Claiming fa:16:3e:94:06:ec 19.80.0.250 Feb 20 04:51:44 localhost ovn_controller[155916]: 2026-02-20T09:51:44Z|00077|binding|INFO|Setting lport 89472e1e-6ca6-404e-8ec3-7651099fb248 up in Southbound Feb 20 04:51:44 localhost ovn_controller[155916]: 2026-02-20T09:51:44Z|00078|binding|INFO|Setting lport 533acac2-f7ea-4ecb-b927-c6780a91a0a2 up in Southbound Feb 20 04:51:44 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:44.924 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:f6:87 10.100.0.6'], port_security=['fa:16:3e:00:f6:87 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-254587356', 'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'e6ab74b8-b495-4363-8d40-2356596c895c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-254587356', 'neutron:project_id': 'a966116e4ddf4bdea0571a1bb751916e', 'neutron:revision_number': '9', 'neutron:security_group_ids': '07d2fe18-fbbf-4547-931e-bb55f378bade', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625203.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=30955323-f649-483f-8215-a2b2b9707d5e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=89472e1e-6ca6-404e-8ec3-7651099fb248) old=Port_Binding(up=[False], 
additional_chassis=[], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:51:44 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:44.928 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:94:06:ec 19.80.0.250'], port_security=['fa:16:3e:94:06:ec 19.80.0.250'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': ''}, parent_port=['89472e1e-6ca6-404e-8ec3-7651099fb248'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-1194540045', 'neutron:cidrs': '19.80.0.250/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5faf2589-b0d7-486e-a56b-df0762273b7b', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-1194540045', 'neutron:project_id': 'a966116e4ddf4bdea0571a1bb751916e', 'neutron:revision_number': '3', 'neutron:security_group_ids': '07d2fe18-fbbf-4547-931e-bb55f378bade', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=b8083fe1-977d-4fae-94f3-b03c7096c58a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=533acac2-f7ea-4ecb-b927-c6780a91a0a2) old=Port_Binding(up=[False], additional_chassis=[], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:51:44 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:44.931 161766 INFO neutron.agent.ovn.metadata.agent [-] Port 89472e1e-6ca6-404e-8ec3-7651099fb248 in datapath 82c5dcbb-e77d-4af1-bf3e-89ecf6e35192 bound to our chassis#033[00m Feb 20 04:51:44 localhost ovn_metadata_agent[161761]: 2026-02-20 
09:51:44.935 161766 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 82c5dcbb-e77d-4af1-bf3e-89ecf6e35192#033[00m Feb 20 04:51:44 localhost ovn_controller[155916]: 2026-02-20T09:51:44Z|00079|binding|INFO|Claiming lport 609a0699-8716-4bf8-9f50-bfeec5f65721 for this chassis. Feb 20 04:51:44 localhost ovn_controller[155916]: 2026-02-20T09:51:44Z|00080|binding|INFO|609a0699-8716-4bf8-9f50-bfeec5f65721: Claiming fa:16:3e:c0:a3:f9 10.100.0.12 Feb 20 04:51:44 localhost ovn_controller[155916]: 2026-02-20T09:51:44Z|00081|binding|INFO|Claiming lport ce4822a0-5e7a-4c40-9856-6c8879a12ac7 for this chassis. Feb 20 04:51:44 localhost ovn_controller[155916]: 2026-02-20T09:51:44Z|00082|binding|INFO|ce4822a0-5e7a-4c40-9856-6c8879a12ac7: Claiming fa:16:3e:ef:22:88 19.80.0.55 Feb 20 04:51:44 localhost ovn_controller[155916]: 2026-02-20T09:51:44Z|00083|binding|INFO|Setting lport 609a0699-8716-4bf8-9f50-bfeec5f65721 up in Southbound Feb 20 04:51:44 localhost ovn_controller[155916]: 2026-02-20T09:51:44Z|00084|binding|INFO|Setting lport ce4822a0-5e7a-4c40-9856-6c8879a12ac7 up in Southbound Feb 20 04:51:44 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:44.986 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c0:a3:f9 10.100.0.12'], port_security=['fa:16:3e:c0:a3:f9 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-420346976', 'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '90eb8d1f-8d13-4395-9d15-67fdaa60632d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-51f8ae9c-1ccc-4ec5-8a06-5c7802ad29e0', 'neutron:port_capabilities': '', 'neutron:port_name': 
'tempest-parent-420346976', 'neutron:project_id': 'e704aae5b1ba49d59262f9aa0c366fb4', 'neutron:revision_number': '9', 'neutron:security_group_ids': '6a912071-fd9c-4d5f-8453-7f993db3506d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625204.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ad9ac3f8-d9ff-4a1d-8092-e57f93de7b33, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=609a0699-8716-4bf8-9f50-bfeec5f65721) old=Port_Binding(up=[False], additional_chassis=[], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:51:44 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:44.989 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ef:22:88 19.80.0.55'], port_security=['fa:16:3e:ef:22:88 19.80.0.55'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': ''}, parent_port=['609a0699-8716-4bf8-9f50-bfeec5f65721'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-288633192', 'neutron:cidrs': '19.80.0.55/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9021dc49-7e01-42e7-8f32-572dec89afcc', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-288633192', 'neutron:project_id': 'e704aae5b1ba49d59262f9aa0c366fb4', 'neutron:revision_number': '3', 'neutron:security_group_ids': '6a912071-fd9c-4d5f-8453-7f993db3506d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], 
datapath=7655fb8f-4890-4990-9fdf-4d25849654f0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=ce4822a0-5e7a-4c40-9856-6c8879a12ac7) old=Port_Binding(up=[False], additional_chassis=[], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:51:45 localhost neutron_sriov_agent[256551]: 2026-02-20 09:51:45.108 2 WARNING neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [req-33d9284d-1dc9-45ce-9d7a-b8cac683d1f9 req-88e52b30-5bc3-4f66-87e9-64613da47436 9578855b599e48b3a4d4bbfe4eebca75 f6685cdb0ff24cddaeb987c63c89eafb - - default default] This port is not SRIOV, skip binding for port 609a0699-8716-4bf8-9f50-bfeec5f65721.#033[00m Feb 20 04:51:45 localhost neutron_sriov_agent[256551]: 2026-02-20 09:51:45.147 2 WARNING neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [req-15f83be2-76c1-4614-88f6-538fe2682745 req-43726e83-1e68-475f-a69b-88671cf5fcec 9578855b599e48b3a4d4bbfe4eebca75 f6685cdb0ff24cddaeb987c63c89eafb - - default default] This port is not SRIOV, skip binding for port 89472e1e-6ca6-404e-8ec3-7651099fb248.#033[00m Feb 20 04:51:45 localhost nova_compute[280804]: 2026-02-20 09:51:45.215 280808 INFO nova.compute.manager [None req-33d9284d-1dc9-45ce-9d7a-b8cac683d1f9 fc37a88bcbe34be0bfdc7c0bd2787c78 65685fb154414740b6b5e1276111b8bb - - default default] [instance: 90eb8d1f-8d13-4395-9d15-67fdaa60632d] Post operation of migration started#033[00m Feb 20 04:51:45 localhost nova_compute[280804]: 2026-02-20 09:51:45.254 280808 INFO nova.compute.manager [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Post operation of migration started#033[00m Feb 20 04:51:45 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:45.376 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[315627d7-49ca-421a-b906-a125ea279648]: (4, False) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:45 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:45.378 161766 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap82c5dcbb-e1 in ovnmeta-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m Feb 20 04:51:45 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:45.381 263903 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap82c5dcbb-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m Feb 20 04:51:45 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:45.381 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[15984980-2613-4b82-95e0-154a5c406e20]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:45 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:45.382 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[86876878-008b-4a7a-9eb4-a8b8e358adc1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:45 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:45.400 161893 DEBUG oslo.privsep.daemon [-] privsep: reply[4887061b-b0c4-4919-895a-0b158aef641f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:45 localhost nova_compute[280804]: 2026-02-20 09:51:45.407 280808 DEBUG oslo_concurrency.lockutils [None req-33d9284d-1dc9-45ce-9d7a-b8cac683d1f9 fc37a88bcbe34be0bfdc7c0bd2787c78 65685fb154414740b6b5e1276111b8bb - - default default] Acquiring lock "refresh_cache-90eb8d1f-8d13-4395-9d15-67fdaa60632d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Feb 20 04:51:45 localhost nova_compute[280804]: 2026-02-20 09:51:45.408 280808 DEBUG oslo_concurrency.lockutils [None req-33d9284d-1dc9-45ce-9d7a-b8cac683d1f9 
fc37a88bcbe34be0bfdc7c0bd2787c78 65685fb154414740b6b5e1276111b8bb - - default default] Acquired lock "refresh_cache-90eb8d1f-8d13-4395-9d15-67fdaa60632d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Feb 20 04:51:45 localhost nova_compute[280804]: 2026-02-20 09:51:45.408 280808 DEBUG nova.network.neutron [None req-33d9284d-1dc9-45ce-9d7a-b8cac683d1f9 fc37a88bcbe34be0bfdc7c0bd2787c78 65685fb154414740b6b5e1276111b8bb - - default default] [instance: 90eb8d1f-8d13-4395-9d15-67fdaa60632d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Feb 20 04:51:45 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v116: 177 pgs: 177 active+clean; 360 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 1.4 MiB/s rd, 4.8 MiB/s wr, 155 op/s Feb 20 04:51:45 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:45.426 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[bb283ba5-d9df-4de2-90ad-f3d5acc3a1f7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:45 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:45.428 161766 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmp6sijnjd9/privsep.sock']#033[00m Feb 20 04:51:45 localhost nova_compute[280804]: 2026-02-20 09:51:45.489 280808 DEBUG oslo_concurrency.lockutils [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Acquiring lock "refresh_cache-e6ab74b8-b495-4363-8d40-2356596c895c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Feb 20 04:51:45 
localhost nova_compute[280804]: 2026-02-20 09:51:45.490 280808 DEBUG oslo_concurrency.lockutils [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Acquired lock "refresh_cache-e6ab74b8-b495-4363-8d40-2356596c895c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Feb 20 04:51:45 localhost nova_compute[280804]: 2026-02-20 09:51:45.490 280808 DEBUG nova.network.neutron [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Feb 20 04:51:45 localhost nova_compute[280804]: 2026-02-20 09:51:45.547 280808 DEBUG nova.virt.libvirt.driver [None req-dbb7fde0-28a2-4f71-8b7f-87558b86ffba 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m Feb 20 04:51:46 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:46.026 161766 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m Feb 20 04:51:46 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:46.027 161766 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp6sijnjd9/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m Feb 20 04:51:46 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:45.921 310825 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Feb 20 04:51:46 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:45.924 310825 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Feb 20 04:51:46 localhost 
ovn_metadata_agent[161761]: 2026-02-20 09:51:45.926 310825 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m Feb 20 04:51:46 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:45.926 310825 INFO oslo.privsep.daemon [-] privsep daemon running as pid 310825#033[00m Feb 20 04:51:46 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:46.032 310825 DEBUG oslo.privsep.daemon [-] privsep: reply[b73633e6-892c-4c68-b2d2-559ff15934d0]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:46 localhost podman[241347]: time="2026-02-20T09:51:46Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 20 04:51:46 localhost podman[241347]: @ - - [20/Feb/2026:09:51:46 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 161364 "" "Go-http-client/1.1" Feb 20 04:51:46 localhost podman[241347]: @ - - [20/Feb/2026:09:51:46 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19727 "" "Go-http-client/1.1" Feb 20 04:51:46 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:46.485 310825 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:51:46 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:46.485 310825 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:51:46 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:46.485 310825 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s 
inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:51:46 localhost nova_compute[280804]: 2026-02-20 09:51:46.579 280808 DEBUG nova.network.neutron [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Updating instance_info_cache with network_info: [{"id": "89472e1e-6ca6-404e-8ec3-7651099fb248", "address": "fa:16:3e:00:f6:87", "network": {"id": "82c5dcbb-e77d-4af1-bf3e-89ecf6e35192", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-813516866-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "a966116e4ddf4bdea0571a1bb751916e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89472e1e-6c", "ovs_interfaceid": "89472e1e-6ca6-404e-8ec3-7651099fb248", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Feb 20 04:51:46 localhost neutron_sriov_agent[256551]: 2026-02-20 09:51:46.587 2 INFO neutron.agent.securitygroups_rpc [req-d0a4da6f-de87-4709-b506-6d507f2fa68b req-a4293e8b-cdcb-4f0f-b9af-77766c0f126a 107ab22f7bfc4441a4ab7a417331cdb2 0e7d3e1cfe9f4e4d8451c6f0b8be3a29 - - default default] Security group rule updated ['b0641abe-7ec2-4391-9e24-125339c7b7ee']#033[00m 
Feb 20 04:51:46 localhost nova_compute[280804]: 2026-02-20 09:51:46.600 280808 DEBUG oslo_concurrency.lockutils [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Releasing lock "refresh_cache-e6ab74b8-b495-4363-8d40-2356596c895c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Feb 20 04:51:46 localhost nova_compute[280804]: 2026-02-20 09:51:46.615 280808 DEBUG oslo_concurrency.lockutils [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:51:46 localhost nova_compute[280804]: 2026-02-20 09:51:46.616 280808 DEBUG oslo_concurrency.lockutils [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:51:46 localhost nova_compute[280804]: 2026-02-20 09:51:46.616 280808 DEBUG oslo_concurrency.lockutils [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:51:46 localhost nova_compute[280804]: 2026-02-20 09:51:46.620 280808 INFO nova.virt.libvirt.driver [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - 
default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Sending announce-self command to QEMU monitor. Attempt 1 of 3#033[00m Feb 20 04:51:46 localhost journal[229026]: Domain id=3 name='instance-00000007' uuid=e6ab74b8-b495-4363-8d40-2356596c895c is tainted: custom-monitor Feb 20 04:51:46 localhost ovn_controller[155916]: 2026-02-20T09:51:46Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c0:a3:f9 10.100.0.12 Feb 20 04:51:46 localhost ovn_controller[155916]: 2026-02-20T09:51:46Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c0:a3:f9 10.100.0.12 Feb 20 04:51:46 localhost nova_compute[280804]: 2026-02-20 09:51:46.833 280808 DEBUG nova.network.neutron [None req-33d9284d-1dc9-45ce-9d7a-b8cac683d1f9 fc37a88bcbe34be0bfdc7c0bd2787c78 65685fb154414740b6b5e1276111b8bb - - default default] [instance: 90eb8d1f-8d13-4395-9d15-67fdaa60632d] Updating instance_info_cache with network_info: [{"id": "609a0699-8716-4bf8-9f50-bfeec5f65721", "address": "fa:16:3e:c0:a3:f9", "network": {"id": "51f8ae9c-1ccc-4ec5-8a06-5c7802ad29e0", "bridge": "br-int", "label": "tempest-LiveMigrationTest-982155183-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "e704aae5b1ba49d59262f9aa0c366fb4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap609a0699-87", "ovs_interfaceid": "609a0699-8716-4bf8-9f50-bfeec5f65721", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, 
"meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Feb 20 04:51:46 localhost nova_compute[280804]: 2026-02-20 09:51:46.854 280808 DEBUG oslo_concurrency.lockutils [None req-33d9284d-1dc9-45ce-9d7a-b8cac683d1f9 fc37a88bcbe34be0bfdc7c0bd2787c78 65685fb154414740b6b5e1276111b8bb - - default default] Releasing lock "refresh_cache-90eb8d1f-8d13-4395-9d15-67fdaa60632d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Feb 20 04:51:46 localhost nova_compute[280804]: 2026-02-20 09:51:46.871 280808 DEBUG oslo_concurrency.lockutils [None req-33d9284d-1dc9-45ce-9d7a-b8cac683d1f9 fc37a88bcbe34be0bfdc7c0bd2787c78 65685fb154414740b6b5e1276111b8bb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:51:46 localhost nova_compute[280804]: 2026-02-20 09:51:46.871 280808 DEBUG oslo_concurrency.lockutils [None req-33d9284d-1dc9-45ce-9d7a-b8cac683d1f9 fc37a88bcbe34be0bfdc7c0bd2787c78 65685fb154414740b6b5e1276111b8bb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:51:46 localhost nova_compute[280804]: 2026-02-20 09:51:46.872 280808 DEBUG oslo_concurrency.lockutils [None req-33d9284d-1dc9-45ce-9d7a-b8cac683d1f9 fc37a88bcbe34be0bfdc7c0bd2787c78 65685fb154414740b6b5e1276111b8bb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:51:46 localhost nova_compute[280804]: 2026-02-20 09:51:46.876 280808 INFO nova.virt.libvirt.driver [None 
req-33d9284d-1dc9-45ce-9d7a-b8cac683d1f9 fc37a88bcbe34be0bfdc7c0bd2787c78 65685fb154414740b6b5e1276111b8bb - - default default] [instance: 90eb8d1f-8d13-4395-9d15-67fdaa60632d] Sending announce-self command to QEMU monitor. Attempt 1 of 3#033[00m Feb 20 04:51:46 localhost journal[229026]: Domain id=4 name='instance-00000008' uuid=90eb8d1f-8d13-4395-9d15-67fdaa60632d is tainted: custom-monitor Feb 20 04:51:46 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:46.954 310825 DEBUG oslo.privsep.daemon [-] privsep: reply[373a9eab-cdad-4d66-95cd-9de15d241ce3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:46 localhost NetworkManager[5967]: [1771581106.9754] manager: (tap82c5dcbb-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/21) Feb 20 04:51:46 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:46.974 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[54279938-f9e1-46c1-8b97-807df1b45668]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:46 localhost systemd-udevd[310835]: Network interface NamePolicy= disabled on kernel command line. 
Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:47.003 310825 DEBUG oslo.privsep.daemon [-] privsep: reply[dee47366-b396-4d66-983f-f54f6ce7fced]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:47.006 310825 DEBUG oslo.privsep.daemon [-] privsep: reply[a2a51a6d-32e7-4c59-a139-e1e87523c927]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:47 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap82c5dcbb-e1: link becomes ready Feb 20 04:51:47 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap82c5dcbb-e0: link becomes ready Feb 20 04:51:47 localhost NetworkManager[5967]: [1771581107.0234] device (tap82c5dcbb-e0): carrier: link connected Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:47.027 310825 DEBUG oslo.privsep.daemon [-] privsep: reply[62adab73-75d1-43f0-b54a-c57197650285]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:47.037 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[7d141b69-c9a9-437e-8029-9e7bf28f8453]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap82c5dcbb-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 
0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:4b:80:dd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1165320, 
'reachable_time': 41809, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310855, 'error': None, 'target': 'ovnmeta-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:47.049 263903 DEBUG 
oslo.privsep.daemon [-] privsep: reply[47d4a1d4-c1f9-4867-8610-441558067b70]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4b:80dd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1165320, 'tstamp': 1165320}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310856, 'error': None, 'target': 'ovnmeta-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:47.058 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[bdd1d1d2-37bb-449c-8cdd-a8e9d70814a8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap82c5dcbb-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:4b:80:dd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 
'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1165320, 'reachable_time': 41809, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 
'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 310857, 'error': None, 'target': 'ovnmeta-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:47.077 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[7fd44918-5f2f-4aca-9b02-8523a537d670]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:47.113 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[f140b2e2-71b6-450b-a690-7f6db5298074]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:47 localhost 
ovn_metadata_agent[161761]: 2026-02-20 09:51:47.115 161766 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap82c5dcbb-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:47.116 161766 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:47.116 161766 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap82c5dcbb-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 20 04:51:47 localhost kernel: device tap82c5dcbb-e0 entered promiscuous mode Feb 20 04:51:47 localhost nova_compute[280804]: 2026-02-20 09:51:47.120 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:47 localhost nova_compute[280804]: 2026-02-20 09:51:47.124 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:47.126 161766 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap82c5dcbb-e0, col_values=(('external_ids', {'iface-id': 'b6bbb6c0-ef13-4100-9a72-6d01c8b15be6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 20 04:51:47 localhost ovn_controller[155916]: 2026-02-20T09:51:47Z|00085|binding|INFO|Releasing lport b6bbb6c0-ef13-4100-9a72-6d01c8b15be6 from this chassis (sb_readonly=0) 
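The privsep replies above carry pyroute2-style netlink messages whose `attrs` field is a list of `[name, value]` pairs rather than a mapping. A minimal sketch of flattening that list into a dict, so fields such as the interface name, MAC and MTU are easy to pull out when reading these logs programmatically (the helper name `attrs_to_dict` is hypothetical, not from neutron or pyroute2):

```python
# Flatten the pyroute2-style 'attrs' list seen in the RTM_NEWLINK replies
# above ([['IFLA_IFNAME', 'tap...'], ...]) into a plain dict.
def attrs_to_dict(attrs):
    """Convert a list of [name, value] pairs into a dict (last wins)."""
    return {name: value for name, value in attrs}

# Example shaped like the reply logged for tap82c5dcbb-e1:
link_msg = {
    'index': 2,
    'attrs': [
        ['IFLA_IFNAME', 'tap82c5dcbb-e1'],
        ['IFLA_MTU', 1500],
        ['IFLA_ADDRESS', 'fa:16:3e:4b:80:dd'],
        ['IFLA_OPERSTATE', 'UP'],
    ],
}

info = attrs_to_dict(link_msg['attrs'])
print(info['IFLA_IFNAME'], info['IFLA_ADDRESS'], info['IFLA_MTU'])
# → tap82c5dcbb-e1 fa:16:3e:4b:80:dd 1500
```

Nested attributes (e.g. `IFLA_AF_SPEC`) contain further `{'attrs': [...]}` dicts, so the same helper can be applied recursively when needed.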
Feb 20 04:51:47 localhost nova_compute[280804]: 2026-02-20 09:51:47.128 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:47 localhost nova_compute[280804]: 2026-02-20 09:51:47.138 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:47.140 161766 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/82c5dcbb-e77d-4af1-bf3e-89ecf6e35192.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/82c5dcbb-e77d-4af1-bf3e-89ecf6e35192.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:47.141 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[5f621b80-0a7e-4eff-abb0-fcf2c962241c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:47.143 161766 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: global Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: log /dev/log local0 debug Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: log-tag haproxy-metadata-proxy-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192 Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: user root Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: group root Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: maxconn 1024 Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: pidfile /var/lib/neutron/external/pids/82c5dcbb-e77d-4af1-bf3e-89ecf6e35192.pid.haproxy Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: daemon Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: Feb 20 04:51:47 
localhost ovn_metadata_agent[161761]: defaults Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: log global Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: mode http Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: option httplog Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: option dontlognull Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: option http-server-close Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: option forwardfor Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: retries 3 Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: timeout http-request 30s Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: timeout connect 30s Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: timeout client 32s Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: timeout server 32s Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: timeout http-keep-alive 30s Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: listen listener Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: bind 169.254.169.254:80 Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: server metadata /var/lib/neutron/metadata_proxy Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: http-request add-header X-OVN-Network-ID 82c5dcbb-e77d-4af1-bf3e-89ecf6e35192 Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:47.144 161766 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192', 'env', 'PROCESS_TAG=haproxy-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/82c5dcbb-e77d-4af1-bf3e-89ecf6e35192.conf'] 
create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m Feb 20 04:51:47 localhost neutron_sriov_agent[256551]: 2026-02-20 09:51:47.323 2 INFO neutron.agent.securitygroups_rpc [req-bca898f3-80d6-4116-8407-48ccb221c91a req-faa58b52-ff7a-4794-95e2-54e55cdad610 107ab22f7bfc4441a4ab7a417331cdb2 0e7d3e1cfe9f4e4d8451c6f0b8be3a29 - - default default] Security group rule updated ['57ce2b3f-bfcc-424f-be8f-efa4d8d83e67']#033[00m Feb 20 04:51:47 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v117: 177 pgs: 177 active+clean; 360 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 1.4 MiB/s rd, 4.8 MiB/s wr, 155 op/s Feb 20 04:51:47 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:51:47 localhost snmpd[69161]: empty variable list in _query Feb 20 04:51:47 localhost snmpd[69161]: empty variable list in _query Feb 20 04:51:47 localhost podman[310890]: Feb 20 04:51:47 localhost podman[310890]: 2026-02-20 09:51:47.624200783 +0000 UTC m=+0.093249876 container create d056aef4baab96b948f2ec05121db86cf7b9c650353b8bfaadeb0a02de265161 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127) Feb 20 04:51:47 localhost nova_compute[280804]: 2026-02-20 09:51:47.638 280808 INFO nova.virt.libvirt.driver [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Sending 
announce-self command to QEMU monitor. Attempt 2 of 3#033[00m Feb 20 04:51:47 localhost systemd[1]: Started libpod-conmon-d056aef4baab96b948f2ec05121db86cf7b9c650353b8bfaadeb0a02de265161.scope. Feb 20 04:51:47 localhost podman[310890]: 2026-02-20 09:51:47.579220934 +0000 UTC m=+0.048270087 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Feb 20 04:51:47 localhost systemd[1]: Started libcrun container. Feb 20 04:51:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c19ad39752ca9a180ebdec81c121ac34bc744103f0f4b89dcea902c08c74c80/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 20 04:51:47 localhost podman[310890]: 2026-02-20 09:51:47.697250415 +0000 UTC m=+0.166299478 container init d056aef4baab96b948f2ec05121db86cf7b9c650353b8bfaadeb0a02de265161 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:51:47 localhost podman[310890]: 2026-02-20 09:51:47.706494383 +0000 UTC m=+0.175543446 container start d056aef4baab96b948f2ec05121db86cf7b9c650353b8bfaadeb0a02de265161 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:51:47 localhost neutron-haproxy-ovnmeta-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192[310904]: [NOTICE] (310908) : New worker (310910) forked Feb 20 04:51:47 localhost neutron-haproxy-ovnmeta-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192[310904]: [NOTICE] (310908) : Loading success. Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:47.758 161766 INFO neutron.agent.ovn.metadata.agent [-] Port 533acac2-f7ea-4ecb-b927-c6780a91a0a2 in datapath 5faf2589-b0d7-486e-a56b-df0762273b7b bound to our chassis#033[00m Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:47.761 161766 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5faf2589-b0d7-486e-a56b-df0762273b7b#033[00m Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:47.768 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[54198c29-e30f-42a2-8f78-0638811c9a5f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:47.770 161766 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5faf2589-b1 in ovnmeta-5faf2589-b0d7-486e-a56b-df0762273b7b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:47.771 263903 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5faf2589-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:47.771 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[05823bdf-4068-4964-b99e-54622d506871]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:47.772 263903 DEBUG 
oslo.privsep.daemon [-] privsep: reply[7a6f6c5a-5d5e-4778-8b5a-5f0fe28239e9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:47.796 161893 DEBUG oslo.privsep.daemon [-] privsep: reply[5704c83e-cdcd-4c01-b51c-dab4af2cbf54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:47.811 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[0e4d46f8-1654-446d-8732-fb2b65d147d4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:47.831 310825 DEBUG oslo.privsep.daemon [-] privsep: reply[e0f75423-9ccf-4a7b-8697-fe00d159d9ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:47 localhost NetworkManager[5967]: [1771581107.8398] manager: (tap5faf2589-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/22) Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:47.838 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[37f1f9a5-2e5d-408f-b1b4-1c6096a6f857]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:47 localhost systemd-udevd[310840]: Network interface NamePolicy= disabled on kernel command line. 
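The "Unable to access ...pid.haproxy; Error: [Errno 2]" DEBUG lines above are expected on a freshly provisioned datapath: the agent probes the haproxy pid file before spawning the proxy, and a missing file simply means no proxy is running yet. A minimal sketch of that read-or-None pattern, mirroring `neutron.agent.linux.utils.get_value_from_file` in spirit only (the implementation here is illustrative, not neutron's code):

```python
# Read a value from a file, returning None (and noting the error, as the
# agent does at DEBUG level) when the file does not exist yet.
def get_value_from_file(path, converter=None):
    try:
        with open(path) as f:
            value = f.read().strip()
        return converter(value) if converter else value
    except OSError as err:
        # The agent logs this and treats it as "no haproxy running yet".
        print(f"Unable to access {path}; Error: {err}")
        return None
```

Once haproxy has forked (see the "New worker ... forked" NOTICE below), the same call returns the daemon's pid as an int.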
Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:47.861 310825 DEBUG oslo.privsep.daemon [-] privsep: reply[9ec0058b-f3f7-4e7b-b30f-230f6e78d810]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:47.863 310825 DEBUG oslo.privsep.daemon [-] privsep: reply[21a27539-8df5-40c1-ab99-d2b3d67aa4e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:47 localhost systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000006.scope: Deactivated successfully. Feb 20 04:51:47 localhost systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000006.scope: Consumed 12.412s CPU time. Feb 20 04:51:47 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap5faf2589-b0: link becomes ready Feb 20 04:51:47 localhost NetworkManager[5967]: [1771581107.8795] device (tap5faf2589-b0): carrier: link connected Feb 20 04:51:47 localhost systemd-machined[205856]: Machine qemu-2-instance-00000006 terminated. Feb 20 04:51:47 localhost nova_compute[280804]: 2026-02-20 09:51:47.883 280808 INFO nova.virt.libvirt.driver [None req-33d9284d-1dc9-45ce-9d7a-b8cac683d1f9 fc37a88bcbe34be0bfdc7c0bd2787c78 65685fb154414740b6b5e1276111b8bb - - default default] [instance: 90eb8d1f-8d13-4395-9d15-67fdaa60632d] Sending announce-self command to QEMU monitor. 
Attempt 2 of 3#033[00m Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:47.884 310825 DEBUG oslo.privsep.daemon [-] privsep: reply[55d19ccc-2fea-4c1e-9944-d701065d54d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:47.904 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[3bb768dd-5bcd-4968-9474-75c758e20d85]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5faf2589-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:ca:19:53'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 
'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1165406, 'reachable_time': 42377, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 
'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310933, 'error': None, 'target': 'ovnmeta-5faf2589-b0d7-486e-a56b-df0762273b7b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:47.917 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[11a29206-07ce-477a-92b4-cf9cb1d1ff46]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feca:1953'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1165406, 'tstamp': 1165406}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310934, 'error': None, 'target': 'ovnmeta-5faf2589-b0d7-486e-a56b-df0762273b7b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:47.933 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[05ba4bf1-73b7-4897-9cbe-a8b6ab4466be]: (4, 
[{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5faf2589-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:ca:19:53'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', 
{'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1165406, 'reachable_time': 42377, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 
'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 310935, 'error': None, 'target': 'ovnmeta-5faf2589-b0d7-486e-a56b-df0762273b7b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:47 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:47.956 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[b46164cc-a98a-425c-8c9d-20a4d893617e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:48.000 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[9658f219-c19c-494e-9934-0cc78ac92c26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:48.001 161766 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5faf2589-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:48.002 161766 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:48.003 161766 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5faf2589-b0, may_exist=True) do_commit 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 20 04:51:48 localhost kernel: device tap5faf2589-b0 entered promiscuous mode Feb 20 04:51:48 localhost nova_compute[280804]: 2026-02-20 09:51:48.005 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:48.009 161766 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5faf2589-b0, col_values=(('external_ids', {'iface-id': '3bb75901-4106-4229-b593-83c4bfd80b13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 20 04:51:48 localhost nova_compute[280804]: 2026-02-20 09:51:48.007 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:48 localhost nova_compute[280804]: 2026-02-20 09:51:48.011 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:48 localhost ovn_controller[155916]: 2026-02-20T09:51:48Z|00086|binding|INFO|Releasing lport 3bb75901-4106-4229-b593-83c4bfd80b13 from this chassis (sb_readonly=0) Feb 20 04:51:48 localhost nova_compute[280804]: 2026-02-20 09:51:48.023 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:48.025 161766 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5faf2589-b0d7-486e-a56b-df0762273b7b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5faf2589-b0d7-486e-a56b-df0762273b7b.pid.haproxy' get_value_from_file 
/usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:48.026 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[4a203789-f22c-41fc-994d-b50b6938da56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:48.027 161766 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: global Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: log /dev/log local0 debug Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: log-tag haproxy-metadata-proxy-5faf2589-b0d7-486e-a56b-df0762273b7b Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: user root Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: group root Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: maxconn 1024 Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: pidfile /var/lib/neutron/external/pids/5faf2589-b0d7-486e-a56b-df0762273b7b.pid.haproxy Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: daemon Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: defaults Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: log global Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: mode http Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: option httplog Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: option dontlognull Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: option http-server-close Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: option forwardfor Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: retries 3 Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: timeout http-request 30s Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: timeout connect 30s Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: timeout client 32s Feb 20 04:51:48 localhost 
ovn_metadata_agent[161761]: timeout server 32s Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: timeout http-keep-alive 30s Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: listen listener Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: bind 169.254.169.254:80 Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: server metadata /var/lib/neutron/metadata_proxy Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: http-request add-header X-OVN-Network-ID 5faf2589-b0d7-486e-a56b-df0762273b7b Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:48.028 161766 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5faf2589-b0d7-486e-a56b-df0762273b7b', 'env', 'PROCESS_TAG=haproxy-5faf2589-b0d7-486e-a56b-df0762273b7b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5faf2589-b0d7-486e-a56b-df0762273b7b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m Feb 20 04:51:48 localhost neutron_sriov_agent[256551]: 2026-02-20 09:51:48.230 2 INFO neutron.agent.securitygroups_rpc [req-9dee97d4-d9a8-4ee1-93af-ecec75edb6d8 req-b55f0bec-8773-478c-b205-7d4f6dd0e50e 107ab22f7bfc4441a4ab7a417331cdb2 0e7d3e1cfe9f4e4d8451c6f0b8be3a29 - - default default] Security group rule updated ['35812fcc-3257-4b87-b011-2e502647727b']#033[00m Feb 20 04:51:48 localhost podman[310969]: Feb 20 04:51:48 localhost podman[310969]: 2026-02-20 09:51:48.408850125 +0000 UTC m=+0.087330558 container create e81427c547a001767cbb2cf2d08695c08fa9bee8449c107e557add5b67609a24 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, 
name=neutron-haproxy-ovnmeta-5faf2589-b0d7-486e-a56b-df0762273b7b, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:51:48 localhost systemd[1]: Started libpod-conmon-e81427c547a001767cbb2cf2d08695c08fa9bee8449c107e557add5b67609a24.scope. Feb 20 04:51:48 localhost systemd[1]: Started libcrun container. Feb 20 04:51:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25874cce964c3a6285cf85fa495d042d9286051c150a5c8435d3247b33003a9f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 20 04:51:48 localhost podman[310969]: 2026-02-20 09:51:48.365656373 +0000 UTC m=+0.044136836 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Feb 20 04:51:48 localhost podman[310969]: 2026-02-20 09:51:48.470319616 +0000 UTC m=+0.148800039 container init e81427c547a001767cbb2cf2d08695c08fa9bee8449c107e557add5b67609a24 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5faf2589-b0d7-486e-a56b-df0762273b7b, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Feb 20 04:51:48 localhost podman[310969]: 2026-02-20 09:51:48.475814484 +0000 UTC m=+0.154294907 container start e81427c547a001767cbb2cf2d08695c08fa9bee8449c107e557add5b67609a24 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, 
name=neutron-haproxy-ovnmeta-5faf2589-b0d7-486e-a56b-df0762273b7b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true) Feb 20 04:51:48 localhost neutron-haproxy-ovnmeta-5faf2589-b0d7-486e-a56b-df0762273b7b[310983]: [NOTICE] (310988) : New worker (310990) forked Feb 20 04:51:48 localhost neutron-haproxy-ovnmeta-5faf2589-b0d7-486e-a56b-df0762273b7b[310983]: [NOTICE] (310988) : Loading success. Feb 20 04:51:48 localhost neutron_sriov_agent[256551]: 2026-02-20 09:51:48.495 2 INFO neutron.agent.securitygroups_rpc [req-3e7d9994-1b77-4e20-ab05-14f19dff3953 req-ef6c6054-1e67-499f-b1bf-b8ae592974a9 107ab22f7bfc4441a4ab7a417331cdb2 0e7d3e1cfe9f4e4d8451c6f0b8be3a29 - - default default] Security group rule updated ['35812fcc-3257-4b87-b011-2e502647727b']#033[00m Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:48.545 161766 INFO neutron.agent.ovn.metadata.agent [-] Port 609a0699-8716-4bf8-9f50-bfeec5f65721 in datapath 51f8ae9c-1ccc-4ec5-8a06-5c7802ad29e0 unbound from our chassis#033[00m Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:48.550 161766 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 51f8ae9c-1ccc-4ec5-8a06-5c7802ad29e0#033[00m Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:48.559 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[df900b4a-329c-4e09-b50d-14460d9ba8d3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:48.560 161766 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap51f8ae9c-11 in ovnmeta-51f8ae9c-1ccc-4ec5-8a06-5c7802ad29e0 namespace provision_datapath 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:48.563 263903 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap51f8ae9c-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:48.563 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[4e7e19b3-a256-4a7c-bbc1-d3a6d65a1080]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:48 localhost nova_compute[280804]: 2026-02-20 09:51:48.564 280808 INFO nova.virt.libvirt.driver [None req-dbb7fde0-28a2-4f71-8b7f-87558b86ffba 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Instance shutdown successfully after 13 seconds.#033[00m Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:48.564 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[0fb3affc-d22d-4940-8b48-4248b2511ed0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:48 localhost nova_compute[280804]: 2026-02-20 09:51:48.569 280808 INFO nova.virt.libvirt.driver [-] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Instance destroyed successfully.#033[00m Feb 20 04:51:48 localhost nova_compute[280804]: 2026-02-20 09:51:48.569 280808 DEBUG nova.objects.instance [None req-dbb7fde0-28a2-4f71-8b7f-87558b86ffba 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Lazy-loading 'numa_topology' on Instance uuid 43720f70-168d-461a-8b52-ba71de6033a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:48.584 161893 DEBUG oslo.privsep.daemon [-] privsep: 
reply[4f89461e-2b3c-471e-a42c-3b7fda43ff24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:48.596 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[f8adc3b7-12f2-4de9-9eba-e6886af51027]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:48.618 310825 DEBUG oslo.privsep.daemon [-] privsep: reply[5a00835a-f7db-40b2-a0a3-db7833fecdc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:48 localhost NetworkManager[5967]: [1771581108.6241] manager: (tap51f8ae9c-10): new Veth device (/org/freedesktop/NetworkManager/Devices/23) Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:48.626 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[3b4c8d1c-2719-4dca-a3ba-407abe8485a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:48 localhost systemd[1]: tmp-crun.cBQ5b9.mount: Deactivated successfully. Feb 20 04:51:48 localhost nova_compute[280804]: 2026-02-20 09:51:48.645 280808 INFO nova.virt.libvirt.driver [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Sending announce-self command to QEMU monitor. 
Attempt 3 of 3#033[00m Feb 20 04:51:48 localhost nova_compute[280804]: 2026-02-20 09:51:48.647 280808 INFO nova.virt.libvirt.driver [None req-dbb7fde0-28a2-4f71-8b7f-87558b86ffba 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Beginning cold snapshot process#033[00m Feb 20 04:51:48 localhost nova_compute[280804]: 2026-02-20 09:51:48.653 280808 DEBUG nova.compute.manager [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:48.663 310825 DEBUG oslo.privsep.daemon [-] privsep: reply[85508d24-a4ec-4fe8-9cb3-b17f452e83a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:48.669 310825 DEBUG oslo.privsep.daemon [-] privsep: reply[23f017dc-68b8-4f4b-9ed2-37650de55c4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:48 localhost nova_compute[280804]: 2026-02-20 09:51:48.671 280808 DEBUG nova.objects.instance [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m Feb 20 04:51:48 localhost NetworkManager[5967]: [1771581108.6933] device (tap51f8ae9c-10): carrier: link connected Feb 20 04:51:48 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap51f8ae9c-11: link becomes ready Feb 20 04:51:48 localhost kernel: IPv6: 
ADDRCONF(NETDEV_CHANGE): tap51f8ae9c-10: link becomes ready Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:48.697 310825 DEBUG oslo.privsep.daemon [-] privsep: reply[af4dda9f-a767-4ed4-979f-1251948c6a07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:48.715 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[daaf37fd-6482-4386-be39-f22539890273]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap51f8ae9c-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:63:f7:d8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 
'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1165487, 'reachable_time': 28379, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', 
{'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311010, 'error': None, 'target': 'ovnmeta-51f8ae9c-1ccc-4ec5-8a06-5c7802ad29e0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:48.735 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[4c543b67-a38d-4dd1-a4e1-97ddb2bc21b7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe63:f7d8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1165487, 'tstamp': 1165487}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311011, 'error': None, 'target': 'ovnmeta-51f8ae9c-1ccc-4ec5-8a06-5c7802ad29e0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:48.752 263903 DEBUG oslo.privsep.daemon [-] privsep: 
reply[888e4fbf-b62e-414c-8357-e0bb36a2fa5b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap51f8ae9c-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:63:f7:d8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], 
['IFLA_LINK', 23], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1165487, 'reachable_time': 28379, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 
'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 311012, 'error': None, 'target': 'ovnmeta-51f8ae9c-1ccc-4ec5-8a06-5c7802ad29e0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:48.785 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[d906ffc9-f8c1-4e43-997f-5a0e2e94b053]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:48 localhost nova_compute[280804]: 2026-02-20 09:51:48.832 280808 DEBUG nova.virt.libvirt.imagebackend [None req-dbb7fde0-28a2-4f71-8b7f-87558b86ffba 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] No parent info for 06bd71fd-c415-45d9-b669-46209b7ca2f4; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:48.847 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[3fe63cc8-b45f-4e2c-9ba8-62a148481e10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:48.849 161766 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap51f8ae9c-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 20 04:51:48 
localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:48.850 161766 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:48.850 161766 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap51f8ae9c-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 20 04:51:48 localhost kernel: device tap51f8ae9c-10 entered promiscuous mode Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:48.857 161766 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap51f8ae9c-10, col_values=(('external_ids', {'iface-id': '2b93bbc2-5aeb-49cc-b610-6f4f7708d346'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 20 04:51:48 localhost ovn_controller[155916]: 2026-02-20T09:51:48Z|00087|binding|INFO|Releasing lport 2b93bbc2-5aeb-49cc-b610-6f4f7708d346 from this chassis (sb_readonly=0) Feb 20 04:51:48 localhost nova_compute[280804]: 2026-02-20 09:51:48.862 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:48 localhost nova_compute[280804]: 2026-02-20 09:51:48.871 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:48.872 161766 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/51f8ae9c-1ccc-4ec5-8a06-5c7802ad29e0.pid.haproxy; Error: [Errno 2] No such file or directory: 
'/var/lib/neutron/external/pids/51f8ae9c-1ccc-4ec5-8a06-5c7802ad29e0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:48.873 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[fcc249ad-b519-4758-b4da-2738acbe127a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:48.874 161766 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: global Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: log /dev/log local0 debug Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: log-tag haproxy-metadata-proxy-51f8ae9c-1ccc-4ec5-8a06-5c7802ad29e0 Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: user root Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: group root Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: maxconn 1024 Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: pidfile /var/lib/neutron/external/pids/51f8ae9c-1ccc-4ec5-8a06-5c7802ad29e0.pid.haproxy Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: daemon Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: defaults Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: log global Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: mode http Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: option httplog Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: option dontlognull Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: option http-server-close Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: option forwardfor Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: retries 3 Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: timeout http-request 30s Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: timeout connect 30s Feb 
20 04:51:48 localhost ovn_metadata_agent[161761]: timeout client 32s Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: timeout server 32s Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: timeout http-keep-alive 30s Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: listen listener Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: bind 169.254.169.254:80 Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: server metadata /var/lib/neutron/metadata_proxy Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: http-request add-header X-OVN-Network-ID 51f8ae9c-1ccc-4ec5-8a06-5c7802ad29e0 Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m Feb 20 04:51:48 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:48.875 161766 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-51f8ae9c-1ccc-4ec5-8a06-5c7802ad29e0', 'env', 'PROCESS_TAG=haproxy-51f8ae9c-1ccc-4ec5-8a06-5c7802ad29e0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/51f8ae9c-1ccc-4ec5-8a06-5c7802ad29e0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m Feb 20 04:51:48 localhost nova_compute[280804]: 2026-02-20 09:51:48.878 280808 DEBUG nova.storage.rbd_utils [None req-dbb7fde0-28a2-4f71-8b7f-87558b86ffba 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] creating snapshot(f60618948c814792a5527745cb5e98af) on rbd image(43720f70-168d-461a-8b52-ba71de6033a0_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m Feb 20 04:51:48 localhost nova_compute[280804]: 2026-02-20 09:51:48.946 280808 INFO nova.virt.libvirt.driver [None req-33d9284d-1dc9-45ce-9d7a-b8cac683d1f9 
fc37a88bcbe34be0bfdc7c0bd2787c78 65685fb154414740b6b5e1276111b8bb - - default default] [instance: 90eb8d1f-8d13-4395-9d15-67fdaa60632d] Sending announce-self command to QEMU monitor. Attempt 3 of 3#033[00m Feb 20 04:51:48 localhost nova_compute[280804]: 2026-02-20 09:51:48.948 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:48 localhost nova_compute[280804]: 2026-02-20 09:51:48.950 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 20 04:51:48 localhost nova_compute[280804]: 2026-02-20 09:51:48.958 280808 DEBUG nova.compute.manager [None req-33d9284d-1dc9-45ce-9d7a-b8cac683d1f9 fc37a88bcbe34be0bfdc7c0bd2787c78 65685fb154414740b6b5e1276111b8bb - - default default] [instance: 90eb8d1f-8d13-4395-9d15-67fdaa60632d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 20 04:51:48 localhost nova_compute[280804]: 2026-02-20 09:51:48.988 280808 DEBUG nova.objects.instance [None req-33d9284d-1dc9-45ce-9d7a-b8cac683d1f9 fc37a88bcbe34be0bfdc7c0bd2787c78 65685fb154414740b6b5e1276111b8bb - - default default] [instance: 90eb8d1f-8d13-4395-9d15-67fdaa60632d] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m Feb 20 04:51:49 localhost neutron_sriov_agent[256551]: 2026-02-20 09:51:49.170 2 INFO neutron.agent.securitygroups_rpc [req-fdb7e361-ff4f-4f47-a1a0-e5e8ae6f1fbe req-da6cfbf4-9203-474f-ad26-64056962735b 107ab22f7bfc4441a4ab7a417331cdb2 0e7d3e1cfe9f4e4d8451c6f0b8be3a29 - - default default] Security group rule updated ['35812fcc-3257-4b87-b011-2e502647727b']#033[00m Feb 20 04:51:49 localhost sshd[311089]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:51:49 localhost podman[311095]: Feb 20 04:51:49 localhost 
podman[311095]: 2026-02-20 09:51:49.320743565 +0000 UTC m=+0.090287457 container create 03f5a53996dcb276651795e6a4cd576498129387343128580b37994bd639d1a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-51f8ae9c-1ccc-4ec5-8a06-5c7802ad29e0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0) Feb 20 04:51:49 localhost systemd[1]: Started libpod-conmon-03f5a53996dcb276651795e6a4cd576498129387343128580b37994bd639d1a5.scope. Feb 20 04:51:49 localhost podman[311095]: 2026-02-20 09:51:49.277488043 +0000 UTC m=+0.047031945 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Feb 20 04:51:49 localhost systemd[1]: Started libcrun container. 
Feb 20 04:51:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8d40c29d76aed339fa3bf205820c8601934086b8926220335e01ab806b8a045/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 20 04:51:49 localhost podman[311095]: 2026-02-20 09:51:49.399000028 +0000 UTC m=+0.168543930 container init 03f5a53996dcb276651795e6a4cd576498129387343128580b37994bd639d1a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-51f8ae9c-1ccc-4ec5-8a06-5c7802ad29e0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true) Feb 20 04:51:49 localhost podman[311095]: 2026-02-20 09:51:49.407871477 +0000 UTC m=+0.177415369 container start 03f5a53996dcb276651795e6a4cd576498129387343128580b37994bd639d1a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-51f8ae9c-1ccc-4ec5-8a06-5c7802ad29e0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:51:49 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v118: 177 pgs: 177 active+clean; 380 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 1.8 MiB/s rd, 4.3 MiB/s wr, 199 op/s Feb 20 04:51:49 localhost neutron-haproxy-ovnmeta-51f8ae9c-1ccc-4ec5-8a06-5c7802ad29e0[311109]: [NOTICE] (311113) : New worker (311115) forked Feb 20 04:51:49 localhost 
neutron-haproxy-ovnmeta-51f8ae9c-1ccc-4ec5-8a06-5c7802ad29e0[311109]: [NOTICE] (311113) : Loading success. Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:49.466 161766 INFO neutron.agent.ovn.metadata.agent [-] Port ce4822a0-5e7a-4c40-9856-6c8879a12ac7 in datapath 9021dc49-7e01-42e7-8f32-572dec89afcc unbound from our chassis#033[00m Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:49.471 161766 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9021dc49-7e01-42e7-8f32-572dec89afcc#033[00m Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:49.479 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[c5a53869-e869-460f-b5ab-22c965a9d26b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:49.480 161766 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9021dc49-71 in ovnmeta-9021dc49-7e01-42e7-8f32-572dec89afcc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:49.482 263903 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9021dc49-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:49.482 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[3ed7eb12-2e29-4f68-b599-4a8792ab819e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:49.484 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[a96454ec-dd0d-4809-8914-d0e84e74ed0b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: 
2026-02-20 09:51:49.499 161893 DEBUG oslo.privsep.daemon [-] privsep: reply[86eeefa5-015a-4be6-b6f7-28df9b456e0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:49.508 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[1cddb0e1-61a7-4bb7-a47e-8fc91acb0ee0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:49.533 310825 DEBUG oslo.privsep.daemon [-] privsep: reply[aa4dc945-90b8-41a8-ac37-404abaa301d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:49.539 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[670dd39f-e264-4af3-a4f9-e596ff939494]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:49 localhost NetworkManager[5967]: [1771581109.5412] manager: (tap9021dc49-70): new Veth device (/org/freedesktop/NetworkManager/Devices/24) Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:49.574 310825 DEBUG oslo.privsep.daemon [-] privsep: reply[deb9ab88-9d4f-400f-9555-f7c1853108fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:49.577 310825 DEBUG oslo.privsep.daemon [-] privsep: reply[6fcffc91-e086-416d-90c5-5556f36d7efc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:49 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap9021dc49-70: link becomes ready Feb 20 04:51:49 localhost NetworkManager[5967]: [1771581109.5993] device (tap9021dc49-70): carrier: link connected Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:49.604 310825 
DEBUG oslo.privsep.daemon [-] privsep: reply[b9f17b29-16c3-48a1-a316-60ff60414066]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:49.621 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[e26087a9-bd92-4a6d-b353-eed2a974a2bc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9021dc49-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:cd:0a:45'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 
'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1165578, 'reachable_time': 41887, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 
'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311136, 'error': None, 'target': 'ovnmeta-9021dc49-7e01-42e7-8f32-572dec89afcc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:49.637 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[7bcdea19-026a-4b53-b13f-f6bea7dce9ff]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecd:a45'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1165578, 'tstamp': 1165578}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311137, 'error': None, 'target': 'ovnmeta-9021dc49-7e01-42e7-8f32-572dec89afcc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:49 localhost nova_compute[280804]: 2026-02-20 09:51:49.651 280808 DEBUG oslo_concurrency.lockutils [None req-d0ae9766-e207-4aaf-ac44-7d955a567e4a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Acquiring lock 
"e6ab74b8-b495-4363-8d40-2356596c895c" by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:49.651 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[6c6d6b39-d6e0-4b74-bf3e-d144d63f52de]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9021dc49-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:cd:0a:45'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 
'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1165578, 'reachable_time': 41887, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 
'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 311138, 'error': None, 'target': 'ovnmeta-9021dc49-7e01-42e7-8f32-572dec89afcc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:49 localhost nova_compute[280804]: 2026-02-20 09:51:49.651 280808 DEBUG oslo_concurrency.lockutils [None req-d0ae9766-e207-4aaf-ac44-7d955a567e4a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Lock "e6ab74b8-b495-4363-8d40-2356596c895c" acquired by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:51:49 localhost nova_compute[280804]: 2026-02-20 09:51:49.651 280808 DEBUG oslo_concurrency.lockutils [None req-d0ae9766-e207-4aaf-ac44-7d955a567e4a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Acquiring lock "e6ab74b8-b495-4363-8d40-2356596c895c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:51:49 localhost 
nova_compute[280804]: 2026-02-20 09:51:49.652 280808 DEBUG oslo_concurrency.lockutils [None req-d0ae9766-e207-4aaf-ac44-7d955a567e4a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Lock "e6ab74b8-b495-4363-8d40-2356596c895c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:51:49 localhost nova_compute[280804]: 2026-02-20 09:51:49.652 280808 DEBUG oslo_concurrency.lockutils [None req-d0ae9766-e207-4aaf-ac44-7d955a567e4a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Lock "e6ab74b8-b495-4363-8d40-2356596c895c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:51:49 localhost nova_compute[280804]: 2026-02-20 09:51:49.654 280808 INFO nova.compute.manager [None req-d0ae9766-e207-4aaf-ac44-7d955a567e4a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Terminating instance#033[00m Feb 20 04:51:49 localhost nova_compute[280804]: 2026-02-20 09:51:49.655 280808 DEBUG nova.compute.manager [None req-d0ae9766-e207-4aaf-ac44-7d955a567e4a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Start destroying the instance on the hypervisor. 
_shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:49.675 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[2bf38642-596e-4838-9ba6-258715080ffd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:49 localhost kernel: device tap89472e1e-6c left promiscuous mode Feb 20 04:51:49 localhost NetworkManager[5967]: [1771581109.7176] device (tap89472e1e-6c): state change: disconnected -> unmanaged (reason 'unmanaged', sys-iface-state: 'removed') Feb 20 04:51:49 localhost ovn_controller[155916]: 2026-02-20T09:51:49Z|00088|binding|INFO|Releasing lport 89472e1e-6ca6-404e-8ec3-7651099fb248 from this chassis (sb_readonly=0) Feb 20 04:51:49 localhost nova_compute[280804]: 2026-02-20 09:51:49.723 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:49 localhost ovn_controller[155916]: 2026-02-20T09:51:49Z|00089|binding|INFO|Setting lport 89472e1e-6ca6-404e-8ec3-7651099fb248 down in Southbound Feb 20 04:51:49 localhost ovn_controller[155916]: 2026-02-20T09:51:49Z|00090|binding|INFO|Releasing lport 533acac2-f7ea-4ecb-b927-c6780a91a0a2 from this chassis (sb_readonly=0) Feb 20 04:51:49 localhost ovn_controller[155916]: 2026-02-20T09:51:49Z|00091|binding|INFO|Setting lport 533acac2-f7ea-4ecb-b927-c6780a91a0a2 down in Southbound Feb 20 04:51:49 localhost ovn_controller[155916]: 2026-02-20T09:51:49Z|00092|binding|INFO|Removing iface tap89472e1e-6c ovn-installed in OVS Feb 20 04:51:49 localhost nova_compute[280804]: 2026-02-20 09:51:49.729 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:49.733 263903 DEBUG oslo.privsep.daemon [-] privsep: 
reply[96278fa6-c5b6-40c7-98fe-7f11c17245f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:49 localhost ovn_controller[155916]: 2026-02-20T09:51:49Z|00093|binding|INFO|Releasing lport 2b93bbc2-5aeb-49cc-b610-6f4f7708d346 from this chassis (sb_readonly=0) Feb 20 04:51:49 localhost ovn_controller[155916]: 2026-02-20T09:51:49Z|00094|binding|INFO|Releasing lport 3bb75901-4106-4229-b593-83c4bfd80b13 from this chassis (sb_readonly=0) Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:49.735 161766 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9021dc49-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 20 04:51:49 localhost ovn_controller[155916]: 2026-02-20T09:51:49Z|00095|binding|INFO|Releasing lport b6bbb6c0-ef13-4100-9a72-6d01c8b15be6 from this chassis (sb_readonly=0) Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:49.735 161766 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:49.736 161766 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9021dc49-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:49.739 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:f6:87 10.100.0.6'], port_security=['fa:16:3e:00:f6:87 10.100.0.6'], type=, nat_addresses=[], 
virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-254587356', 'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'e6ab74b8-b495-4363-8d40-2356596c895c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-254587356', 'neutron:project_id': 'a966116e4ddf4bdea0571a1bb751916e', 'neutron:revision_number': '11', 'neutron:security_group_ids': '07d2fe18-fbbf-4547-931e-bb55f378bade', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625202.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=30955323-f649-483f-8215-a2b2b9707d5e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=89472e1e-6ca6-404e-8ec3-7651099fb248) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:51:49 localhost nova_compute[280804]: 2026-02-20 09:51:49.741 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:49.742 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:94:06:ec 19.80.0.250'], port_security=['fa:16:3e:94:06:ec 19.80.0.250'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['89472e1e-6ca6-404e-8ec3-7651099fb248'], requested_additional_chassis=[], ha_chassis_group=[], 
external_ids={'name': 'tempest-subport-1194540045', 'neutron:cidrs': '19.80.0.250/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5faf2589-b0d7-486e-a56b-df0762273b7b', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-1194540045', 'neutron:project_id': 'a966116e4ddf4bdea0571a1bb751916e', 'neutron:revision_number': '3', 'neutron:security_group_ids': '07d2fe18-fbbf-4547-931e-bb55f378bade', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=b8083fe1-977d-4fae-94f3-b03c7096c58a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=533acac2-f7ea-4ecb-b927-c6780a91a0a2) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:51:49 localhost kernel: device tap9021dc49-70 entered promiscuous mode Feb 20 04:51:49 localhost systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000007.scope: Deactivated successfully. Feb 20 04:51:49 localhost systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000007.scope: Consumed 2.669s CPU time. Feb 20 04:51:49 localhost nova_compute[280804]: 2026-02-20 09:51:49.759 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:49.761 161766 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9021dc49-70, col_values=(('external_ids', {'iface-id': '8069ffae-e153-4a3e-ac83-1cd290da58a3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 20 04:51:49 localhost systemd-machined[205856]: Machine qemu-3-instance-00000007 terminated. 
Feb 20 04:51:49 localhost nova_compute[280804]: 2026-02-20 09:51:49.764 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:49 localhost nova_compute[280804]: 2026-02-20 09:51:49.770 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:49.771 161766 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9021dc49-7e01-42e7-8f32-572dec89afcc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9021dc49-7e01-42e7-8f32-572dec89afcc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:49.772 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[2802679d-ab22-46e0-8c30-7205aa5dd4c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:49.773 161766 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: global Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: log /dev/log local0 debug Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: log-tag haproxy-metadata-proxy-9021dc49-7e01-42e7-8f32-572dec89afcc Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: user root Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: group root Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: maxconn 1024 Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: pidfile /var/lib/neutron/external/pids/9021dc49-7e01-42e7-8f32-572dec89afcc.pid.haproxy Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: daemon Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: Feb 20 04:51:49 
localhost ovn_metadata_agent[161761]: defaults Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: log global Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: mode http Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: option httplog Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: option dontlognull Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: option http-server-close Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: option forwardfor Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: retries 3 Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: timeout http-request 30s Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: timeout connect 30s Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: timeout client 32s Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: timeout server 32s Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: timeout http-keep-alive 30s Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: listen listener Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: bind 169.254.169.254:80 Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: server metadata /var/lib/neutron/metadata_proxy Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: http-request add-header X-OVN-Network-ID 9021dc49-7e01-42e7-8f32-572dec89afcc Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:49.774 161766 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9021dc49-7e01-42e7-8f32-572dec89afcc', 'env', 'PROCESS_TAG=haproxy-9021dc49-7e01-42e7-8f32-572dec89afcc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9021dc49-7e01-42e7-8f32-572dec89afcc.conf'] 
create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m Feb 20 04:51:49 localhost ovn_controller[155916]: 2026-02-20T09:51:49Z|00096|binding|INFO|Releasing lport 8069ffae-e153-4a3e-ac83-1cd290da58a3 from this chassis (sb_readonly=0) Feb 20 04:51:49 localhost nova_compute[280804]: 2026-02-20 09:51:49.781 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:49 localhost nova_compute[280804]: 2026-02-20 09:51:49.801 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:49 localhost nova_compute[280804]: 2026-02-20 09:51:49.807 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:49 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e103 do_prune osdmap full prune enabled Feb 20 04:51:49 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e104 e104: 6 total, 6 up, 6 in Feb 20 04:51:49 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e104: 6 total, 6 up, 6 in Feb 20 04:51:49 localhost kernel: device tap89472e1e-6c entered promiscuous mode Feb 20 04:51:49 localhost kernel: device tap89472e1e-6c left promiscuous mode Feb 20 04:51:49 localhost NetworkManager[5967]: [1771581109.8720] manager: (tap89472e1e-6c): new Tun device (/org/freedesktop/NetworkManager/Devices/25) Feb 20 04:51:49 localhost nova_compute[280804]: 2026-02-20 09:51:49.873 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:49 localhost ovn_controller[155916]: 2026-02-20T09:51:49Z|00097|binding|INFO|Claiming lport 89472e1e-6ca6-404e-8ec3-7651099fb248 for this chassis. 
Feb 20 04:51:49 localhost ovn_controller[155916]: 2026-02-20T09:51:49Z|00098|binding|INFO|89472e1e-6ca6-404e-8ec3-7651099fb248: Claiming fa:16:3e:00:f6:87 10.100.0.6 Feb 20 04:51:49 localhost ovn_controller[155916]: 2026-02-20T09:51:49Z|00099|binding|INFO|Claiming lport 533acac2-f7ea-4ecb-b927-c6780a91a0a2 for this chassis. Feb 20 04:51:49 localhost ovn_controller[155916]: 2026-02-20T09:51:49Z|00100|binding|INFO|533acac2-f7ea-4ecb-b927-c6780a91a0a2: Claiming fa:16:3e:94:06:ec 19.80.0.250 Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:49.893 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:f6:87 10.100.0.6'], port_security=['fa:16:3e:00:f6:87 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-254587356', 'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'e6ab74b8-b495-4363-8d40-2356596c895c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-254587356', 'neutron:project_id': 'a966116e4ddf4bdea0571a1bb751916e', 'neutron:revision_number': '11', 'neutron:security_group_ids': '07d2fe18-fbbf-4547-931e-bb55f378bade', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625202.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=30955323-f649-483f-8215-a2b2b9707d5e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=89472e1e-6ca6-404e-8ec3-7651099fb248) old=Port_Binding(chassis=[]) 
matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:51:49 localhost nova_compute[280804]: 2026-02-20 09:51:49.897 280808 INFO nova.virt.libvirt.driver [-] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Instance destroyed successfully.#033[00m Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:49.898 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:94:06:ec 19.80.0.250'], port_security=['fa:16:3e:94:06:ec 19.80.0.250'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['89472e1e-6ca6-404e-8ec3-7651099fb248'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-1194540045', 'neutron:cidrs': '19.80.0.250/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5faf2589-b0d7-486e-a56b-df0762273b7b', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-1194540045', 'neutron:project_id': 'a966116e4ddf4bdea0571a1bb751916e', 'neutron:revision_number': '3', 'neutron:security_group_ids': '07d2fe18-fbbf-4547-931e-bb55f378bade', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=b8083fe1-977d-4fae-94f3-b03c7096c58a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=533acac2-f7ea-4ecb-b927-c6780a91a0a2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:51:49 localhost nova_compute[280804]: 2026-02-20 09:51:49.898 280808 DEBUG nova.objects.instance [None req-d0ae9766-e207-4aaf-ac44-7d955a567e4a 
0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Lazy-loading 'resources' on Instance uuid e6ab74b8-b495-4363-8d40-2356596c895c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Feb 20 04:51:49 localhost nova_compute[280804]: 2026-02-20 09:51:49.903 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:49 localhost ovn_controller[155916]: 2026-02-20T09:51:49Z|00101|binding|INFO|Releasing lport 89472e1e-6ca6-404e-8ec3-7651099fb248 from this chassis (sb_readonly=0) Feb 20 04:51:49 localhost ovn_controller[155916]: 2026-02-20T09:51:49Z|00102|binding|INFO|Releasing lport 533acac2-f7ea-4ecb-b927-c6780a91a0a2 from this chassis (sb_readonly=0) Feb 20 04:51:49 localhost nova_compute[280804]: 2026-02-20 09:51:49.909 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:49.918 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:f6:87 10.100.0.6'], port_security=['fa:16:3e:00:f6:87 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-254587356', 'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'e6ab74b8-b495-4363-8d40-2356596c895c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-254587356', 'neutron:project_id': 'a966116e4ddf4bdea0571a1bb751916e', 
'neutron:revision_number': '11', 'neutron:security_group_ids': '07d2fe18-fbbf-4547-931e-bb55f378bade', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625202.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=30955323-f649-483f-8215-a2b2b9707d5e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=89472e1e-6ca6-404e-8ec3-7651099fb248) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:51:49 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:49.923 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:94:06:ec 19.80.0.250'], port_security=['fa:16:3e:94:06:ec 19.80.0.250'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['89472e1e-6ca6-404e-8ec3-7651099fb248'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-1194540045', 'neutron:cidrs': '19.80.0.250/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5faf2589-b0d7-486e-a56b-df0762273b7b', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-1194540045', 'neutron:project_id': 'a966116e4ddf4bdea0571a1bb751916e', 'neutron:revision_number': '3', 'neutron:security_group_ids': '07d2fe18-fbbf-4547-931e-bb55f378bade', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=b8083fe1-977d-4fae-94f3-b03c7096c58a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], 
logical_port=533acac2-f7ea-4ecb-b927-c6780a91a0a2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:51:49 localhost nova_compute[280804]: 2026-02-20 09:51:49.924 280808 DEBUG nova.virt.libvirt.vif [None req-d0ae9766-e207-4aaf-ac44-7d955a567e4a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-20T09:51:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-1557569525',ec2_ids=,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005625202.localdomain',hostname='tempest-liveautoblockmigrationv225test-server-1557569525',id=7,image_ref='06bd71fd-c415-45d9-b669-46209b7ca2f4',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2026-02-20T09:51:31Z,launched_on='np0005625203.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=,new_flavor=None,node='np0005625202.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=,power_state=1,progress=0,project_id='a966116e4ddf4bdea0571a1bb751916e',ramdisk_id='',reservation_id='r-ty4xcmp0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='06bd71fd-c415-45d9-b669-46209b7ca2f4',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer
_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-425062890',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-425062890-project-member'},tags=,task_state='deleting',terminated_at=None,trusted_certs=,updated_at=2026-02-20T09:51:48Z,user_data=None,user_id='0db48d5f6f5e44fc93154cf4b34a94e0',uuid=e6ab74b8-b495-4363-8d40-2356596c895c,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "89472e1e-6ca6-404e-8ec3-7651099fb248", "address": "fa:16:3e:00:f6:87", "network": {"id": "82c5dcbb-e77d-4af1-bf3e-89ecf6e35192", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-813516866-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "a966116e4ddf4bdea0571a1bb751916e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89472e1e-6c", "ovs_interfaceid": "89472e1e-6ca6-404e-8ec3-7651099fb248", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m Feb 20 04:51:49 localhost nova_compute[280804]: 2026-02-20 09:51:49.925 280808 DEBUG nova.network.os_vif_util [None req-d0ae9766-e207-4aaf-ac44-7d955a567e4a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Converting VIF {"id": 
"89472e1e-6ca6-404e-8ec3-7651099fb248", "address": "fa:16:3e:00:f6:87", "network": {"id": "82c5dcbb-e77d-4af1-bf3e-89ecf6e35192", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-813516866-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "a966116e4ddf4bdea0571a1bb751916e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89472e1e-6c", "ovs_interfaceid": "89472e1e-6ca6-404e-8ec3-7651099fb248", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Feb 20 04:51:49 localhost nova_compute[280804]: 2026-02-20 09:51:49.927 280808 DEBUG nova.network.os_vif_util [None req-d0ae9766-e207-4aaf-ac44-7d955a567e4a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:f6:87,bridge_name='br-int',has_traffic_filtering=True,id=89472e1e-6ca6-404e-8ec3-7651099fb248,network=Network(82c5dcbb-e77d-4af1-bf3e-89ecf6e35192),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap89472e1e-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Feb 20 04:51:49 localhost nova_compute[280804]: 2026-02-20 09:51:49.927 280808 DEBUG os_vif [None req-d0ae9766-e207-4aaf-ac44-7d955a567e4a 
0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:f6:87,bridge_name='br-int',has_traffic_filtering=True,id=89472e1e-6ca6-404e-8ec3-7651099fb248,network=Network(82c5dcbb-e77d-4af1-bf3e-89ecf6e35192),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap89472e1e-6c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m Feb 20 04:51:49 localhost nova_compute[280804]: 2026-02-20 09:51:49.930 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:49 localhost nova_compute[280804]: 2026-02-20 09:51:49.931 280808 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap89472e1e-6c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 20 04:51:49 localhost nova_compute[280804]: 2026-02-20 09:51:49.937 280808 DEBUG nova.storage.rbd_utils [None req-dbb7fde0-28a2-4f71-8b7f-87558b86ffba 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] cloning vms/43720f70-168d-461a-8b52-ba71de6033a0_disk@f60618948c814792a5527745cb5e98af to images/2ca20fba-0573-4823-861d-917510483c1a clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m Feb 20 04:51:49 localhost neutron_sriov_agent[256551]: 2026-02-20 09:51:49.953 2 INFO neutron.agent.securitygroups_rpc [None req-3fe948ea-c8e6-429c-836a-702342b0e4ac 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Security group rule updated ['4439e19b-bf91-4420-aff1-6854f961fef4']#033[00m Feb 20 04:51:49 localhost nova_compute[280804]: 2026-02-20 09:51:49.986 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:49 localhost nova_compute[280804]: 2026-02-20 09:51:49.989 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 20 04:51:49 localhost nova_compute[280804]: 2026-02-20 09:51:49.992 280808 INFO os_vif [None req-d0ae9766-e207-4aaf-ac44-7d955a567e4a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:f6:87,bridge_name='br-int',has_traffic_filtering=True,id=89472e1e-6ca6-404e-8ec3-7651099fb248,network=Network(82c5dcbb-e77d-4af1-bf3e-89ecf6e35192),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap89472e1e-6c')#033[00m Feb 20 04:51:50 localhost nova_compute[280804]: 2026-02-20 09:51:50.103 280808 DEBUG nova.compute.manager [req-7a942659-0e34-4746-9a1d-2e1105c73d4d req-99c4277e-1960-4684-ad84-cbf9c5102172 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: 90eb8d1f-8d13-4395-9d15-67fdaa60632d] Received event network-vif-plugged-609a0699-8716-4bf8-9f50-bfeec5f65721 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Feb 20 04:51:50 localhost nova_compute[280804]: 2026-02-20 09:51:50.104 280808 DEBUG oslo_concurrency.lockutils [req-7a942659-0e34-4746-9a1d-2e1105c73d4d req-99c4277e-1960-4684-ad84-cbf9c5102172 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Acquiring lock "90eb8d1f-8d13-4395-9d15-67fdaa60632d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:51:50 localhost nova_compute[280804]: 2026-02-20 09:51:50.104 280808 DEBUG oslo_concurrency.lockutils [req-7a942659-0e34-4746-9a1d-2e1105c73d4d 
req-99c4277e-1960-4684-ad84-cbf9c5102172 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Lock "90eb8d1f-8d13-4395-9d15-67fdaa60632d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:51:50 localhost nova_compute[280804]: 2026-02-20 09:51:50.105 280808 DEBUG oslo_concurrency.lockutils [req-7a942659-0e34-4746-9a1d-2e1105c73d4d req-99c4277e-1960-4684-ad84-cbf9c5102172 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Lock "90eb8d1f-8d13-4395-9d15-67fdaa60632d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:51:50 localhost nova_compute[280804]: 2026-02-20 09:51:50.105 280808 DEBUG nova.compute.manager [req-7a942659-0e34-4746-9a1d-2e1105c73d4d req-99c4277e-1960-4684-ad84-cbf9c5102172 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: 90eb8d1f-8d13-4395-9d15-67fdaa60632d] No waiting events found dispatching network-vif-plugged-609a0699-8716-4bf8-9f50-bfeec5f65721 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Feb 20 04:51:50 localhost nova_compute[280804]: 2026-02-20 09:51:50.106 280808 WARNING nova.compute.manager [req-7a942659-0e34-4746-9a1d-2e1105c73d4d req-99c4277e-1960-4684-ad84-cbf9c5102172 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: 90eb8d1f-8d13-4395-9d15-67fdaa60632d] Received unexpected event network-vif-plugged-609a0699-8716-4bf8-9f50-bfeec5f65721 for instance with vm_state active and task_state None.#033[00m Feb 20 04:51:50 localhost nova_compute[280804]: 2026-02-20 09:51:50.106 280808 DEBUG nova.compute.manager [req-7a942659-0e34-4746-9a1d-2e1105c73d4d 
req-99c4277e-1960-4684-ad84-cbf9c5102172 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Received event network-vif-plugged-89472e1e-6ca6-404e-8ec3-7651099fb248 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Feb 20 04:51:50 localhost nova_compute[280804]: 2026-02-20 09:51:50.106 280808 DEBUG oslo_concurrency.lockutils [req-7a942659-0e34-4746-9a1d-2e1105c73d4d req-99c4277e-1960-4684-ad84-cbf9c5102172 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Acquiring lock "e6ab74b8-b495-4363-8d40-2356596c895c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:51:50 localhost nova_compute[280804]: 2026-02-20 09:51:50.107 280808 DEBUG oslo_concurrency.lockutils [req-7a942659-0e34-4746-9a1d-2e1105c73d4d req-99c4277e-1960-4684-ad84-cbf9c5102172 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Lock "e6ab74b8-b495-4363-8d40-2356596c895c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:51:50 localhost nova_compute[280804]: 2026-02-20 09:51:50.107 280808 DEBUG oslo_concurrency.lockutils [req-7a942659-0e34-4746-9a1d-2e1105c73d4d req-99c4277e-1960-4684-ad84-cbf9c5102172 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Lock "e6ab74b8-b495-4363-8d40-2356596c895c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:51:50 localhost nova_compute[280804]: 2026-02-20 09:51:50.107 280808 DEBUG nova.compute.manager 
[req-7a942659-0e34-4746-9a1d-2e1105c73d4d req-99c4277e-1960-4684-ad84-cbf9c5102172 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] No waiting events found dispatching network-vif-plugged-89472e1e-6ca6-404e-8ec3-7651099fb248 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Feb 20 04:51:50 localhost nova_compute[280804]: 2026-02-20 09:51:50.108 280808 WARNING nova.compute.manager [req-7a942659-0e34-4746-9a1d-2e1105c73d4d req-99c4277e-1960-4684-ad84-cbf9c5102172 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Received unexpected event network-vif-plugged-89472e1e-6ca6-404e-8ec3-7651099fb248 for instance with vm_state active and task_state deleting.#033[00m Feb 20 04:51:50 localhost nova_compute[280804]: 2026-02-20 09:51:50.108 280808 DEBUG nova.compute.manager [req-7a942659-0e34-4746-9a1d-2e1105c73d4d req-99c4277e-1960-4684-ad84-cbf9c5102172 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Received event network-vif-plugged-89472e1e-6ca6-404e-8ec3-7651099fb248 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Feb 20 04:51:50 localhost nova_compute[280804]: 2026-02-20 09:51:50.109 280808 DEBUG oslo_concurrency.lockutils [req-7a942659-0e34-4746-9a1d-2e1105c73d4d req-99c4277e-1960-4684-ad84-cbf9c5102172 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Acquiring lock "e6ab74b8-b495-4363-8d40-2356596c895c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:51:50 localhost nova_compute[280804]: 2026-02-20 09:51:50.109 280808 DEBUG oslo_concurrency.lockutils 
[req-7a942659-0e34-4746-9a1d-2e1105c73d4d req-99c4277e-1960-4684-ad84-cbf9c5102172 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Lock "e6ab74b8-b495-4363-8d40-2356596c895c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:51:50 localhost nova_compute[280804]: 2026-02-20 09:51:50.109 280808 DEBUG oslo_concurrency.lockutils [req-7a942659-0e34-4746-9a1d-2e1105c73d4d req-99c4277e-1960-4684-ad84-cbf9c5102172 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Lock "e6ab74b8-b495-4363-8d40-2356596c895c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:51:50 localhost nova_compute[280804]: 2026-02-20 09:51:50.110 280808 DEBUG nova.compute.manager [req-7a942659-0e34-4746-9a1d-2e1105c73d4d req-99c4277e-1960-4684-ad84-cbf9c5102172 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] No waiting events found dispatching network-vif-plugged-89472e1e-6ca6-404e-8ec3-7651099fb248 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Feb 20 04:51:50 localhost nova_compute[280804]: 2026-02-20 09:51:50.110 280808 WARNING nova.compute.manager [req-7a942659-0e34-4746-9a1d-2e1105c73d4d req-99c4277e-1960-4684-ad84-cbf9c5102172 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Received unexpected event network-vif-plugged-89472e1e-6ca6-404e-8ec3-7651099fb248 for instance with vm_state active and task_state deleting.#033[00m Feb 20 04:51:50 localhost nova_compute[280804]: 2026-02-20 09:51:50.110 280808 DEBUG 
nova.compute.manager [req-7a942659-0e34-4746-9a1d-2e1105c73d4d req-99c4277e-1960-4684-ad84-cbf9c5102172 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: 90eb8d1f-8d13-4395-9d15-67fdaa60632d] Received event network-vif-plugged-609a0699-8716-4bf8-9f50-bfeec5f65721 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Feb 20 04:51:50 localhost nova_compute[280804]: 2026-02-20 09:51:50.111 280808 DEBUG oslo_concurrency.lockutils [req-7a942659-0e34-4746-9a1d-2e1105c73d4d req-99c4277e-1960-4684-ad84-cbf9c5102172 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Acquiring lock "90eb8d1f-8d13-4395-9d15-67fdaa60632d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:51:50 localhost nova_compute[280804]: 2026-02-20 09:51:50.111 280808 DEBUG oslo_concurrency.lockutils [req-7a942659-0e34-4746-9a1d-2e1105c73d4d req-99c4277e-1960-4684-ad84-cbf9c5102172 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Lock "90eb8d1f-8d13-4395-9d15-67fdaa60632d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:51:50 localhost nova_compute[280804]: 2026-02-20 09:51:50.112 280808 DEBUG oslo_concurrency.lockutils [req-7a942659-0e34-4746-9a1d-2e1105c73d4d req-99c4277e-1960-4684-ad84-cbf9c5102172 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Lock "90eb8d1f-8d13-4395-9d15-67fdaa60632d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:51:50 localhost nova_compute[280804]: 2026-02-20 09:51:50.112 
280808 DEBUG nova.compute.manager [req-7a942659-0e34-4746-9a1d-2e1105c73d4d req-99c4277e-1960-4684-ad84-cbf9c5102172 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: 90eb8d1f-8d13-4395-9d15-67fdaa60632d] No waiting events found dispatching network-vif-plugged-609a0699-8716-4bf8-9f50-bfeec5f65721 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Feb 20 04:51:50 localhost nova_compute[280804]: 2026-02-20 09:51:50.112 280808 WARNING nova.compute.manager [req-7a942659-0e34-4746-9a1d-2e1105c73d4d req-99c4277e-1960-4684-ad84-cbf9c5102172 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: 90eb8d1f-8d13-4395-9d15-67fdaa60632d] Received unexpected event network-vif-plugged-609a0699-8716-4bf8-9f50-bfeec5f65721 for instance with vm_state active and task_state None.#033[00m Feb 20 04:51:50 localhost nova_compute[280804]: 2026-02-20 09:51:50.113 280808 DEBUG nova.compute.manager [req-7a942659-0e34-4746-9a1d-2e1105c73d4d req-99c4277e-1960-4684-ad84-cbf9c5102172 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: 90eb8d1f-8d13-4395-9d15-67fdaa60632d] Received event network-vif-plugged-609a0699-8716-4bf8-9f50-bfeec5f65721 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Feb 20 04:51:50 localhost nova_compute[280804]: 2026-02-20 09:51:50.113 280808 DEBUG oslo_concurrency.lockutils [req-7a942659-0e34-4746-9a1d-2e1105c73d4d req-99c4277e-1960-4684-ad84-cbf9c5102172 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Acquiring lock "90eb8d1f-8d13-4395-9d15-67fdaa60632d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:51:50 localhost nova_compute[280804]: 2026-02-20 09:51:50.114 280808 DEBUG 
oslo_concurrency.lockutils [req-7a942659-0e34-4746-9a1d-2e1105c73d4d req-99c4277e-1960-4684-ad84-cbf9c5102172 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Lock "90eb8d1f-8d13-4395-9d15-67fdaa60632d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:51:50 localhost nova_compute[280804]: 2026-02-20 09:51:50.114 280808 DEBUG oslo_concurrency.lockutils [req-7a942659-0e34-4746-9a1d-2e1105c73d4d req-99c4277e-1960-4684-ad84-cbf9c5102172 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Lock "90eb8d1f-8d13-4395-9d15-67fdaa60632d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:51:50 localhost nova_compute[280804]: 2026-02-20 09:51:50.114 280808 DEBUG nova.compute.manager [req-7a942659-0e34-4746-9a1d-2e1105c73d4d req-99c4277e-1960-4684-ad84-cbf9c5102172 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: 90eb8d1f-8d13-4395-9d15-67fdaa60632d] No waiting events found dispatching network-vif-plugged-609a0699-8716-4bf8-9f50-bfeec5f65721 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Feb 20 04:51:50 localhost nova_compute[280804]: 2026-02-20 09:51:50.115 280808 WARNING nova.compute.manager [req-7a942659-0e34-4746-9a1d-2e1105c73d4d req-99c4277e-1960-4684-ad84-cbf9c5102172 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: 90eb8d1f-8d13-4395-9d15-67fdaa60632d] Received unexpected event network-vif-plugged-609a0699-8716-4bf8-9f50-bfeec5f65721 for instance with vm_state active and task_state None.#033[00m Feb 20 04:51:50 localhost nova_compute[280804]: 2026-02-20 09:51:50.115 280808 
DEBUG nova.compute.manager [req-7a942659-0e34-4746-9a1d-2e1105c73d4d req-99c4277e-1960-4684-ad84-cbf9c5102172 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Received event network-vif-unplugged-89472e1e-6ca6-404e-8ec3-7651099fb248 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Feb 20 04:51:50 localhost nova_compute[280804]: 2026-02-20 09:51:50.116 280808 DEBUG oslo_concurrency.lockutils [req-7a942659-0e34-4746-9a1d-2e1105c73d4d req-99c4277e-1960-4684-ad84-cbf9c5102172 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Acquiring lock "e6ab74b8-b495-4363-8d40-2356596c895c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:51:50 localhost nova_compute[280804]: 2026-02-20 09:51:50.116 280808 DEBUG oslo_concurrency.lockutils [req-7a942659-0e34-4746-9a1d-2e1105c73d4d req-99c4277e-1960-4684-ad84-cbf9c5102172 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Lock "e6ab74b8-b495-4363-8d40-2356596c895c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:51:50 localhost nova_compute[280804]: 2026-02-20 09:51:50.117 280808 DEBUG oslo_concurrency.lockutils [req-7a942659-0e34-4746-9a1d-2e1105c73d4d req-99c4277e-1960-4684-ad84-cbf9c5102172 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Lock "e6ab74b8-b495-4363-8d40-2356596c895c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:51:50 localhost nova_compute[280804]: 2026-02-20 
09:51:50.117 280808 DEBUG nova.compute.manager [req-7a942659-0e34-4746-9a1d-2e1105c73d4d req-99c4277e-1960-4684-ad84-cbf9c5102172 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] No waiting events found dispatching network-vif-unplugged-89472e1e-6ca6-404e-8ec3-7651099fb248 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Feb 20 04:51:50 localhost nova_compute[280804]: 2026-02-20 09:51:50.118 280808 DEBUG nova.compute.manager [req-7a942659-0e34-4746-9a1d-2e1105c73d4d req-99c4277e-1960-4684-ad84-cbf9c5102172 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Received event network-vif-unplugged-89472e1e-6ca6-404e-8ec3-7651099fb248 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m Feb 20 04:51:50 localhost nova_compute[280804]: 2026-02-20 09:51:50.138 280808 DEBUG nova.storage.rbd_utils [None req-dbb7fde0-28a2-4f71-8b7f-87558b86ffba 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] flattening images/2ca20fba-0573-4823-861d-917510483c1a flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m Feb 20 04:51:50 localhost neutron_sriov_agent[256551]: 2026-02-20 09:51:50.276 2 INFO neutron.agent.securitygroups_rpc [None req-e1dc84f3-2fa9-4dac-a092-2cb427ae3321 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Security group rule updated ['4439e19b-bf91-4420-aff1-6854f961fef4']#033[00m Feb 20 04:51:50 localhost podman[311253]: Feb 20 04:51:50 localhost podman[311253]: 2026-02-20 09:51:50.31054636 +0000 UTC m=+0.101624762 container create 1671a3894eb20baa795da8c76793f9b31826bd6542eb8c72e8827172584982a5 
(image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9021dc49-7e01-42e7-8f32-572dec89afcc, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:51:50 localhost podman[311253]: 2026-02-20 09:51:50.263486155 +0000 UTC m=+0.054564587 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Feb 20 04:51:50 localhost systemd[1]: Started libpod-conmon-1671a3894eb20baa795da8c76793f9b31826bd6542eb8c72e8827172584982a5.scope. Feb 20 04:51:50 localhost systemd[1]: Started libcrun container. Feb 20 04:51:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4985a61fcecabe7dbcc3b24e122fd2acbe82e9520b00301bad0b6c727dfdd9d5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 20 04:51:50 localhost podman[311253]: 2026-02-20 09:51:50.417227966 +0000 UTC m=+0.208306388 container init 1671a3894eb20baa795da8c76793f9b31826bd6542eb8c72e8827172584982a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9021dc49-7e01-42e7-8f32-572dec89afcc, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Feb 20 04:51:50 localhost podman[311253]: 2026-02-20 09:51:50.426906566 +0000 UTC m=+0.217984978 container start 1671a3894eb20baa795da8c76793f9b31826bd6542eb8c72e8827172584982a5 
(image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9021dc49-7e01-42e7-8f32-572dec89afcc, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:51:50 localhost neutron-haproxy-ovnmeta-9021dc49-7e01-42e7-8f32-572dec89afcc[311267]: [NOTICE] (311271) : New worker (311273) forked Feb 20 04:51:50 localhost neutron-haproxy-ovnmeta-9021dc49-7e01-42e7-8f32-572dec89afcc[311267]: [NOTICE] (311271) : Loading success. Feb 20 04:51:50 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:50.484 161766 INFO neutron.agent.ovn.metadata.agent [-] Port 89472e1e-6ca6-404e-8ec3-7651099fb248 in datapath 82c5dcbb-e77d-4af1-bf3e-89ecf6e35192 unbound from our chassis#033[00m Feb 20 04:51:50 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:50.487 161766 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 82c5dcbb-e77d-4af1-bf3e-89ecf6e35192, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 20 04:51:50 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:50.488 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[d932061e-bd19-4867-97be-5a1cb355e516]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:50 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:50.488 161766 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192 namespace which is not needed anymore#033[00m Feb 20 04:51:50 localhost neutron-haproxy-ovnmeta-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192[310904]: [NOTICE] (310908) : haproxy version is 
2.8.14-c23fe91 Feb 20 04:51:50 localhost neutron-haproxy-ovnmeta-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192[310904]: [NOTICE] (310908) : path to executable is /usr/sbin/haproxy Feb 20 04:51:50 localhost neutron-haproxy-ovnmeta-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192[310904]: [WARNING] (310908) : Exiting Master process... Feb 20 04:51:50 localhost neutron-haproxy-ovnmeta-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192[310904]: [ALERT] (310908) : Current worker (310910) exited with code 143 (Terminated) Feb 20 04:51:50 localhost neutron-haproxy-ovnmeta-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192[310904]: [WARNING] (310908) : All workers exited. Exiting... (0) Feb 20 04:51:50 localhost systemd[1]: libpod-d056aef4baab96b948f2ec05121db86cf7b9c650353b8bfaadeb0a02de265161.scope: Deactivated successfully. Feb 20 04:51:50 localhost podman[311300]: 2026-02-20 09:51:50.689192103 +0000 UTC m=+0.081184412 container died d056aef4baab96b948f2ec05121db86cf7b9c650353b8bfaadeb0a02de265161 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Feb 20 04:51:50 localhost systemd[1]: tmp-crun.kDdSaD.mount: Deactivated successfully. 
Feb 20 04:51:50 localhost podman[311300]: 2026-02-20 09:51:50.736803152 +0000 UTC m=+0.128795431 container cleanup d056aef4baab96b948f2ec05121db86cf7b9c650353b8bfaadeb0a02de265161 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3) Feb 20 04:51:50 localhost podman[311314]: 2026-02-20 09:51:50.765424611 +0000 UTC m=+0.064719560 container cleanup d056aef4baab96b948f2ec05121db86cf7b9c650353b8bfaadeb0a02de265161 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127) Feb 20 04:51:50 localhost systemd[1]: libpod-conmon-d056aef4baab96b948f2ec05121db86cf7b9c650353b8bfaadeb0a02de265161.scope: Deactivated successfully. 
Feb 20 04:51:50 localhost nova_compute[280804]: 2026-02-20 09:51:50.803 280808 DEBUG oslo_concurrency.lockutils [None req-ebc175b2-f97e-4c5f-b9cb-4d6079d3f7ff ba15d0e9919d4594a2e6e9d6b3414a5e e704aae5b1ba49d59262f9aa0c366fb4 - - default default] Acquiring lock "90eb8d1f-8d13-4395-9d15-67fdaa60632d" by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:51:50 localhost nova_compute[280804]: 2026-02-20 09:51:50.804 280808 DEBUG oslo_concurrency.lockutils [None req-ebc175b2-f97e-4c5f-b9cb-4d6079d3f7ff ba15d0e9919d4594a2e6e9d6b3414a5e e704aae5b1ba49d59262f9aa0c366fb4 - - default default] Lock "90eb8d1f-8d13-4395-9d15-67fdaa60632d" acquired by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:51:50 localhost nova_compute[280804]: 2026-02-20 09:51:50.804 280808 DEBUG oslo_concurrency.lockutils [None req-ebc175b2-f97e-4c5f-b9cb-4d6079d3f7ff ba15d0e9919d4594a2e6e9d6b3414a5e e704aae5b1ba49d59262f9aa0c366fb4 - - default default] Acquiring lock "90eb8d1f-8d13-4395-9d15-67fdaa60632d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:51:50 localhost nova_compute[280804]: 2026-02-20 09:51:50.805 280808 DEBUG oslo_concurrency.lockutils [None req-ebc175b2-f97e-4c5f-b9cb-4d6079d3f7ff ba15d0e9919d4594a2e6e9d6b3414a5e e704aae5b1ba49d59262f9aa0c366fb4 - - default default] Lock "90eb8d1f-8d13-4395-9d15-67fdaa60632d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:51:50 localhost nova_compute[280804]: 2026-02-20 09:51:50.805 280808 DEBUG 
oslo_concurrency.lockutils [None req-ebc175b2-f97e-4c5f-b9cb-4d6079d3f7ff ba15d0e9919d4594a2e6e9d6b3414a5e e704aae5b1ba49d59262f9aa0c366fb4 - - default default] Lock "90eb8d1f-8d13-4395-9d15-67fdaa60632d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:51:50 localhost nova_compute[280804]: 2026-02-20 09:51:50.807 280808 INFO nova.compute.manager [None req-ebc175b2-f97e-4c5f-b9cb-4d6079d3f7ff ba15d0e9919d4594a2e6e9d6b3414a5e e704aae5b1ba49d59262f9aa0c366fb4 - - default default] [instance: 90eb8d1f-8d13-4395-9d15-67fdaa60632d] Terminating instance#033[00m Feb 20 04:51:50 localhost nova_compute[280804]: 2026-02-20 09:51:50.809 280808 DEBUG nova.compute.manager [None req-ebc175b2-f97e-4c5f-b9cb-4d6079d3f7ff ba15d0e9919d4594a2e6e9d6b3414a5e e704aae5b1ba49d59262f9aa0c366fb4 - - default default] [instance: 90eb8d1f-8d13-4395-9d15-67fdaa60632d] Start destroying the instance on the hypervisor. 
_shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m Feb 20 04:51:50 localhost podman[311330]: 2026-02-20 09:51:50.822287419 +0000 UTC m=+0.071750809 container remove d056aef4baab96b948f2ec05121db86cf7b9c650353b8bfaadeb0a02de265161 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Feb 20 04:51:50 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:50.827 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[397219fb-8a12-4aa6-a0c0-b8d6b998d182]: (4, ('Fri Feb 20 09:51:50 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192 (d056aef4baab96b948f2ec05121db86cf7b9c650353b8bfaadeb0a02de265161)\nd056aef4baab96b948f2ec05121db86cf7b9c650353b8bfaadeb0a02de265161\nFri Feb 20 09:51:50 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192 (d056aef4baab96b948f2ec05121db86cf7b9c650353b8bfaadeb0a02de265161)\nd056aef4baab96b948f2ec05121db86cf7b9c650353b8bfaadeb0a02de265161\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:50 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:50.829 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[29a9bbb0-2381-47c2-ac75-7e01c15d7bfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:50 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:50.831 161766 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap82c5dcbb-e0, 
bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 20 04:51:50 localhost kernel: device tap82c5dcbb-e0 left promiscuous mode Feb 20 04:51:50 localhost nova_compute[280804]: 2026-02-20 09:51:50.835 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:50 localhost nova_compute[280804]: 2026-02-20 09:51:50.846 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:50 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:50.849 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[2791789e-e62d-4f88-8230-c9fc23607cd7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:50 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:50.871 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[a2c0f228-70c1-4266-8c85-95acdbbec05d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:50 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:50.873 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[ce826a23-4841-4786-b27a-1a7ef73665e1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:50 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:50.899 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[629d26d6-7e42-447f-898e-cc61a0524d87]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], 
['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 
'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1165313, 'reachable_time': 16463, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311351, 'error': None, 'target': 'ovnmeta-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192', 
'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:50 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:50.909 161893 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m Feb 20 04:51:50 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:50.910 161893 DEBUG oslo.privsep.daemon [-] privsep: reply[b936acf4-7544-475c-a31d-ed7c56bdce60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:50 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:50.911 161766 INFO neutron.agent.ovn.metadata.agent [-] Port 533acac2-f7ea-4ecb-b927-c6780a91a0a2 in datapath 5faf2589-b0d7-486e-a56b-df0762273b7b unbound from our chassis#033[00m Feb 20 04:51:50 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:50.917 161766 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5faf2589-b0d7-486e-a56b-df0762273b7b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 20 04:51:50 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:50.919 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[8975053e-143d-47b6-b91e-153fa12f16a9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:50 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:50.919 161766 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5faf2589-b0d7-486e-a56b-df0762273b7b namespace which is not needed anymore#033[00m Feb 20 04:51:51 localhost kernel: device tap609a0699-87 left promiscuous mode Feb 20 04:51:51 localhost NetworkManager[5967]: [1771581111.0164] device (tap609a0699-87): state change: disconnected -> unmanaged (reason 
'unmanaged', sys-iface-state: 'removed') Feb 20 04:51:51 localhost ovn_controller[155916]: 2026-02-20T09:51:51Z|00103|binding|INFO|Releasing lport 609a0699-8716-4bf8-9f50-bfeec5f65721 from this chassis (sb_readonly=0) Feb 20 04:51:51 localhost ovn_controller[155916]: 2026-02-20T09:51:51Z|00104|binding|INFO|Setting lport 609a0699-8716-4bf8-9f50-bfeec5f65721 down in Southbound Feb 20 04:51:51 localhost ovn_controller[155916]: 2026-02-20T09:51:51Z|00105|binding|INFO|Releasing lport ce4822a0-5e7a-4c40-9856-6c8879a12ac7 from this chassis (sb_readonly=0) Feb 20 04:51:51 localhost ovn_controller[155916]: 2026-02-20T09:51:51Z|00106|binding|INFO|Setting lport ce4822a0-5e7a-4c40-9856-6c8879a12ac7 down in Southbound Feb 20 04:51:51 localhost nova_compute[280804]: 2026-02-20 09:51:51.023 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:51 localhost ovn_controller[155916]: 2026-02-20T09:51:51Z|00107|binding|INFO|Removing iface tap609a0699-87 ovn-installed in OVS Feb 20 04:51:51 localhost ovn_controller[155916]: 2026-02-20T09:51:51Z|00108|binding|INFO|Releasing lport 2b93bbc2-5aeb-49cc-b610-6f4f7708d346 from this chassis (sb_readonly=0) Feb 20 04:51:51 localhost ovn_controller[155916]: 2026-02-20T09:51:51Z|00109|binding|INFO|Releasing lport 3bb75901-4106-4229-b593-83c4bfd80b13 from this chassis (sb_readonly=0) Feb 20 04:51:51 localhost ovn_controller[155916]: 2026-02-20T09:51:51Z|00110|binding|INFO|Releasing lport 8069ffae-e153-4a3e-ac83-1cd290da58a3 from this chassis (sb_readonly=0) Feb 20 04:51:51 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:51.035 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c0:a3:f9 10.100.0.12'], port_security=['fa:16:3e:c0:a3:f9 10.100.0.12'], type=, nat_addresses=[], 
virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-420346976', 'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '90eb8d1f-8d13-4395-9d15-67fdaa60632d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-51f8ae9c-1ccc-4ec5-8a06-5c7802ad29e0', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-420346976', 'neutron:project_id': 'e704aae5b1ba49d59262f9aa0c366fb4', 'neutron:revision_number': '11', 'neutron:security_group_ids': '6a912071-fd9c-4d5f-8453-7f993db3506d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625202.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ad9ac3f8-d9ff-4a1d-8092-e57f93de7b33, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=609a0699-8716-4bf8-9f50-bfeec5f65721) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:51:51 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:51.038 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ef:22:88 19.80.0.55'], port_security=['fa:16:3e:ef:22:88 19.80.0.55'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['609a0699-8716-4bf8-9f50-bfeec5f65721'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-288633192', 'neutron:cidrs': '19.80.0.55/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 
'neutron-9021dc49-7e01-42e7-8f32-572dec89afcc', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-288633192', 'neutron:project_id': 'e704aae5b1ba49d59262f9aa0c366fb4', 'neutron:revision_number': '3', 'neutron:security_group_ids': '6a912071-fd9c-4d5f-8453-7f993db3506d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=7655fb8f-4890-4990-9fdf-4d25849654f0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=ce4822a0-5e7a-4c40-9856-6c8879a12ac7) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:51:51 localhost systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000008.scope: Deactivated successfully. Feb 20 04:51:51 localhost systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000008.scope: Consumed 1.947s CPU time. Feb 20 04:51:51 localhost systemd-machined[205856]: Machine qemu-4-instance-00000008 terminated. 
Feb 20 04:51:51 localhost nova_compute[280804]: 2026-02-20 09:51:51.102 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:51 localhost nova_compute[280804]: 2026-02-20 09:51:51.128 280808 DEBUG nova.storage.rbd_utils [None req-dbb7fde0-28a2-4f71-8b7f-87558b86ffba 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] removing snapshot(f60618948c814792a5527745cb5e98af) on rbd image(43720f70-168d-461a-8b52-ba71de6033a0_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m Feb 20 04:51:51 localhost neutron-haproxy-ovnmeta-5faf2589-b0d7-486e-a56b-df0762273b7b[310983]: [NOTICE] (310988) : haproxy version is 2.8.14-c23fe91 Feb 20 04:51:51 localhost neutron-haproxy-ovnmeta-5faf2589-b0d7-486e-a56b-df0762273b7b[310983]: [NOTICE] (310988) : path to executable is /usr/sbin/haproxy Feb 20 04:51:51 localhost neutron-haproxy-ovnmeta-5faf2589-b0d7-486e-a56b-df0762273b7b[310983]: [WARNING] (310988) : Exiting Master process... Feb 20 04:51:51 localhost neutron-haproxy-ovnmeta-5faf2589-b0d7-486e-a56b-df0762273b7b[310983]: [WARNING] (310988) : Exiting Master process... Feb 20 04:51:51 localhost neutron-haproxy-ovnmeta-5faf2589-b0d7-486e-a56b-df0762273b7b[310983]: [ALERT] (310988) : Current worker (310990) exited with code 143 (Terminated) Feb 20 04:51:51 localhost neutron-haproxy-ovnmeta-5faf2589-b0d7-486e-a56b-df0762273b7b[310983]: [WARNING] (310988) : All workers exited. Exiting... (0) Feb 20 04:51:51 localhost systemd[1]: libpod-e81427c547a001767cbb2cf2d08695c08fa9bee8449c107e557add5b67609a24.scope: Deactivated successfully. 
Feb 20 04:51:51 localhost podman[311371]: 2026-02-20 09:51:51.145632367 +0000 UTC m=+0.075375886 container died e81427c547a001767cbb2cf2d08695c08fa9bee8449c107e557add5b67609a24 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5faf2589-b0d7-486e-a56b-df0762273b7b, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:51:51 localhost podman[311371]: 2026-02-20 09:51:51.180988306 +0000 UTC m=+0.110731785 container cleanup e81427c547a001767cbb2cf2d08695c08fa9bee8449c107e557add5b67609a24 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5faf2589-b0d7-486e-a56b-df0762273b7b, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0) Feb 20 04:51:51 localhost nova_compute[280804]: 2026-02-20 09:51:51.208 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:51 localhost nova_compute[280804]: 2026-02-20 09:51:51.224 280808 INFO nova.virt.libvirt.driver [None req-d0ae9766-e207-4aaf-ac44-7d955a567e4a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Deleting instance files /var/lib/nova/instances/e6ab74b8-b495-4363-8d40-2356596c895c_del#033[00m Feb 20 04:51:51 localhost nova_compute[280804]: 
2026-02-20 09:51:51.224 280808 INFO nova.virt.libvirt.driver [None req-d0ae9766-e207-4aaf-ac44-7d955a567e4a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Deletion of /var/lib/nova/instances/e6ab74b8-b495-4363-8d40-2356596c895c_del complete#033[00m Feb 20 04:51:51 localhost podman[311405]: 2026-02-20 09:51:51.227373853 +0000 UTC m=+0.076327502 container cleanup e81427c547a001767cbb2cf2d08695c08fa9bee8449c107e557add5b67609a24 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5faf2589-b0d7-486e-a56b-df0762273b7b, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:51:51 localhost systemd[1]: libpod-conmon-e81427c547a001767cbb2cf2d08695c08fa9bee8449c107e557add5b67609a24.scope: Deactivated successfully. 
Feb 20 04:51:51 localhost nova_compute[280804]: 2026-02-20 09:51:51.246 280808 INFO nova.virt.libvirt.driver [-] [instance: 90eb8d1f-8d13-4395-9d15-67fdaa60632d] Instance destroyed successfully.#033[00m Feb 20 04:51:51 localhost nova_compute[280804]: 2026-02-20 09:51:51.247 280808 DEBUG nova.objects.instance [None req-ebc175b2-f97e-4c5f-b9cb-4d6079d3f7ff ba15d0e9919d4594a2e6e9d6b3414a5e e704aae5b1ba49d59262f9aa0c366fb4 - - default default] Lazy-loading 'resources' on Instance uuid 90eb8d1f-8d13-4395-9d15-67fdaa60632d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Feb 20 04:51:51 localhost nova_compute[280804]: 2026-02-20 09:51:51.282 280808 DEBUG nova.virt.libvirt.vif [None req-ebc175b2-f97e-4c5f-b9cb-4d6079d3f7ff ba15d0e9919d4594a2e6e9d6b3414a5e e704aae5b1ba49d59262f9aa0c366fb4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-20T09:51:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-721665546',display_name='tempest-LiveMigrationTest-server-721665546',ec2_ids=,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005625202.localdomain',hostname='tempest-livemigrationtest-server-721665546',id=8,image_ref='06bd71fd-c415-45d9-b669-46209b7ca2f4',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2026-02-20T09:51:32Z,launched_on='np0005625204.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=,new_flavor=None,node='np0005625202.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=,power_state=1,progress=0,project_id='e704aae5b1ba49d59262f9aa0c366fb4',ramdisk_id=
'',reservation_id='r-erbwo03j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='06bd71fd-c415-45d9-b669-46209b7ca2f4',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveMigrationTest-2108133970',owner_user_name='tempest-LiveMigrationTest-2108133970-project-member'},tags=,task_state='deleting',terminated_at=None,trusted_certs=,updated_at=2026-02-20T09:51:49Z,user_data=None,user_id='ba15d0e9919d4594a2e6e9d6b3414a5e',uuid=90eb8d1f-8d13-4395-9d15-67fdaa60632d,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "609a0699-8716-4bf8-9f50-bfeec5f65721", "address": "fa:16:3e:c0:a3:f9", "network": {"id": "51f8ae9c-1ccc-4ec5-8a06-5c7802ad29e0", "bridge": "br-int", "label": "tempest-LiveMigrationTest-982155183-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "e704aae5b1ba49d59262f9aa0c366fb4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap609a0699-87", "ovs_interfaceid": "609a0699-8716-4bf8-9f50-bfeec5f65721", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": 
{"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m Feb 20 04:51:51 localhost nova_compute[280804]: 2026-02-20 09:51:51.282 280808 DEBUG nova.network.os_vif_util [None req-ebc175b2-f97e-4c5f-b9cb-4d6079d3f7ff ba15d0e9919d4594a2e6e9d6b3414a5e e704aae5b1ba49d59262f9aa0c366fb4 - - default default] Converting VIF {"id": "609a0699-8716-4bf8-9f50-bfeec5f65721", "address": "fa:16:3e:c0:a3:f9", "network": {"id": "51f8ae9c-1ccc-4ec5-8a06-5c7802ad29e0", "bridge": "br-int", "label": "tempest-LiveMigrationTest-982155183-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "e704aae5b1ba49d59262f9aa0c366fb4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap609a0699-87", "ovs_interfaceid": "609a0699-8716-4bf8-9f50-bfeec5f65721", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Feb 20 04:51:51 localhost nova_compute[280804]: 2026-02-20 09:51:51.283 280808 DEBUG nova.network.os_vif_util [None req-ebc175b2-f97e-4c5f-b9cb-4d6079d3f7ff ba15d0e9919d4594a2e6e9d6b3414a5e e704aae5b1ba49d59262f9aa0c366fb4 - - default default] Converted object 
VIFOpenVSwitch(active=False,address=fa:16:3e:c0:a3:f9,bridge_name='br-int',has_traffic_filtering=True,id=609a0699-8716-4bf8-9f50-bfeec5f65721,network=Network(51f8ae9c-1ccc-4ec5-8a06-5c7802ad29e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap609a0699-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Feb 20 04:51:51 localhost nova_compute[280804]: 2026-02-20 09:51:51.284 280808 DEBUG os_vif [None req-ebc175b2-f97e-4c5f-b9cb-4d6079d3f7ff ba15d0e9919d4594a2e6e9d6b3414a5e e704aae5b1ba49d59262f9aa0c366fb4 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c0:a3:f9,bridge_name='br-int',has_traffic_filtering=True,id=609a0699-8716-4bf8-9f50-bfeec5f65721,network=Network(51f8ae9c-1ccc-4ec5-8a06-5c7802ad29e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap609a0699-87') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m Feb 20 04:51:51 localhost nova_compute[280804]: 2026-02-20 09:51:51.285 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:51 localhost nova_compute[280804]: 2026-02-20 09:51:51.286 280808 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap609a0699-87, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 20 04:51:51 localhost nova_compute[280804]: 2026-02-20 09:51:51.287 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:51 localhost nova_compute[280804]: 2026-02-20 09:51:51.289 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:51 localhost 
nova_compute[280804]: 2026-02-20 09:51:51.293 280808 INFO os_vif [None req-ebc175b2-f97e-4c5f-b9cb-4d6079d3f7ff ba15d0e9919d4594a2e6e9d6b3414a5e e704aae5b1ba49d59262f9aa0c366fb4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c0:a3:f9,bridge_name='br-int',has_traffic_filtering=True,id=609a0699-8716-4bf8-9f50-bfeec5f65721,network=Network(51f8ae9c-1ccc-4ec5-8a06-5c7802ad29e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap609a0699-87')#033[00m Feb 20 04:51:51 localhost podman[311419]: 2026-02-20 09:51:51.295335769 +0000 UTC m=+0.091420917 container remove e81427c547a001767cbb2cf2d08695c08fa9bee8449c107e557add5b67609a24 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5faf2589-b0d7-486e-a56b-df0762273b7b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0) Feb 20 04:51:51 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:51.299 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[b4fee482-9f97-46d0-98d6-2f59149a359c]: (4, ('Fri Feb 20 09:51:51 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5faf2589-b0d7-486e-a56b-df0762273b7b (e81427c547a001767cbb2cf2d08695c08fa9bee8449c107e557add5b67609a24)\ne81427c547a001767cbb2cf2d08695c08fa9bee8449c107e557add5b67609a24\nFri Feb 20 09:51:51 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5faf2589-b0d7-486e-a56b-df0762273b7b (e81427c547a001767cbb2cf2d08695c08fa9bee8449c107e557add5b67609a24)\ne81427c547a001767cbb2cf2d08695c08fa9bee8449c107e557add5b67609a24\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:51 localhost 
ovn_metadata_agent[161761]: 2026-02-20 09:51:51.300 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[65260a35-8de4-4572-acfa-4e5e6a45f939]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:51 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:51.301 161766 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5faf2589-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 20 04:51:51 localhost kernel: device tap5faf2589-b0 left promiscuous mode Feb 20 04:51:51 localhost nova_compute[280804]: 2026-02-20 09:51:51.310 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:51 localhost nova_compute[280804]: 2026-02-20 09:51:51.315 280808 INFO nova.compute.manager [None req-d0ae9766-e207-4aaf-ac44-7d955a567e4a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Took 1.66 seconds to destroy the instance on the hypervisor.#033[00m Feb 20 04:51:51 localhost nova_compute[280804]: 2026-02-20 09:51:51.315 280808 DEBUG oslo.service.loopingcall [None req-d0ae9766-e207-4aaf-ac44-7d955a567e4a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.._deallocate_network_with_retries to return. 
func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m Feb 20 04:51:51 localhost nova_compute[280804]: 2026-02-20 09:51:51.315 280808 DEBUG nova.compute.manager [-] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m Feb 20 04:51:51 localhost nova_compute[280804]: 2026-02-20 09:51:51.315 280808 DEBUG nova.network.neutron [-] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m Feb 20 04:51:51 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:51.315 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[2a143ed6-2075-4f51-9ac4-347a000c733b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:51 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:51.332 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[c7591d16-43a7-40f0-b585-fc0bdfb769ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:51 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:51.334 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[637e99a0-1d6f-42db-98bf-b890fe9ec216]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:51 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:51.344 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[f69ca061-eb1c-45de-a297-813059eed7c2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 
65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 
'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1165401, 'reachable_time': 42739, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311463, 'error': None, 'target': 'ovnmeta-5faf2589-b0d7-486e-a56b-df0762273b7b', 
'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:51 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:51.347 161893 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5faf2589-b0d7-486e-a56b-df0762273b7b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m Feb 20 04:51:51 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:51.347 161893 DEBUG oslo.privsep.daemon [-] privsep: reply[c17b6370-80ed-4c2d-86e6-eecf7f215a70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:51 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:51.348 161766 INFO neutron.agent.ovn.metadata.agent [-] Port 89472e1e-6ca6-404e-8ec3-7651099fb248 in datapath 82c5dcbb-e77d-4af1-bf3e-89ecf6e35192 unbound from our chassis#033[00m Feb 20 04:51:51 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:51.353 161766 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 82c5dcbb-e77d-4af1-bf3e-89ecf6e35192, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 20 04:51:51 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:51.353 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[1347cd77-d2f8-445f-bd94-7aeae9afb511]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:51 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:51.354 161766 INFO neutron.agent.ovn.metadata.agent [-] Port 533acac2-f7ea-4ecb-b927-c6780a91a0a2 in datapath 5faf2589-b0d7-486e-a56b-df0762273b7b unbound from our chassis#033[00m Feb 20 04:51:51 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:51.358 161766 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 
5faf2589-b0d7-486e-a56b-df0762273b7b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 20 04:51:51 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:51.359 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[f340051f-fe26-4ec3-98fe-fe1f157fd3fa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:51 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:51.359 161766 INFO neutron.agent.ovn.metadata.agent [-] Port 89472e1e-6ca6-404e-8ec3-7651099fb248 in datapath 82c5dcbb-e77d-4af1-bf3e-89ecf6e35192 unbound from our chassis#033[00m Feb 20 04:51:51 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:51.364 161766 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 82c5dcbb-e77d-4af1-bf3e-89ecf6e35192, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 20 04:51:51 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:51.364 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[1c7f8eec-6dd2-4c97-b4d3-34c29b9a1627]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:51 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:51.365 161766 INFO neutron.agent.ovn.metadata.agent [-] Port 533acac2-f7ea-4ecb-b927-c6780a91a0a2 in datapath 5faf2589-b0d7-486e-a56b-df0762273b7b unbound from our chassis#033[00m Feb 20 04:51:51 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:51.369 161766 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5faf2589-b0d7-486e-a56b-df0762273b7b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 20 04:51:51 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:51.370 263903 DEBUG 
oslo.privsep.daemon [-] privsep: reply[1fad37ba-f9d9-4b48-b3a9-4d0ce7a14185]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:51 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:51.370 161766 INFO neutron.agent.ovn.metadata.agent [-] Port 609a0699-8716-4bf8-9f50-bfeec5f65721 in datapath 51f8ae9c-1ccc-4ec5-8a06-5c7802ad29e0 unbound from our chassis#033[00m Feb 20 04:51:51 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:51.374 161766 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 51f8ae9c-1ccc-4ec5-8a06-5c7802ad29e0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 20 04:51:51 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:51.375 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[b2e1abf5-5dfd-41bb-a4bd-f2825b3f077c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:51 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:51.375 161766 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-51f8ae9c-1ccc-4ec5-8a06-5c7802ad29e0 namespace which is not needed anymore#033[00m Feb 20 04:51:51 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v120: 177 pgs: 177 active+clean; 385 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 1.4 MiB/s rd, 5.1 MiB/s wr, 228 op/s Feb 20 04:51:51 localhost neutron-haproxy-ovnmeta-51f8ae9c-1ccc-4ec5-8a06-5c7802ad29e0[311109]: [NOTICE] (311113) : haproxy version is 2.8.14-c23fe91 Feb 20 04:51:51 localhost neutron-haproxy-ovnmeta-51f8ae9c-1ccc-4ec5-8a06-5c7802ad29e0[311109]: [NOTICE] (311113) : path to executable is /usr/sbin/haproxy Feb 20 04:51:51 localhost neutron-haproxy-ovnmeta-51f8ae9c-1ccc-4ec5-8a06-5c7802ad29e0[311109]: [WARNING] (311113) : Exiting Master process... 
Feb 20 04:51:51 localhost neutron-haproxy-ovnmeta-51f8ae9c-1ccc-4ec5-8a06-5c7802ad29e0[311109]: [ALERT] (311113) : Current worker (311115) exited with code 143 (Terminated) Feb 20 04:51:51 localhost neutron-haproxy-ovnmeta-51f8ae9c-1ccc-4ec5-8a06-5c7802ad29e0[311109]: [WARNING] (311113) : All workers exited. Exiting... (0) Feb 20 04:51:51 localhost systemd[1]: libpod-03f5a53996dcb276651795e6a4cd576498129387343128580b37994bd639d1a5.scope: Deactivated successfully. Feb 20 04:51:51 localhost podman[311484]: 2026-02-20 09:51:51.558754646 +0000 UTC m=+0.082441805 container died 03f5a53996dcb276651795e6a4cd576498129387343128580b37994bd639d1a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-51f8ae9c-1ccc-4ec5-8a06-5c7802ad29e0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2) Feb 20 04:51:51 localhost podman[311484]: 2026-02-20 09:51:51.586662656 +0000 UTC m=+0.110349775 container cleanup 03f5a53996dcb276651795e6a4cd576498129387343128580b37994bd639d1a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-51f8ae9c-1ccc-4ec5-8a06-5c7802ad29e0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Feb 20 04:51:51 localhost systemd[1]: var-lib-containers-storage-overlay-e8d40c29d76aed339fa3bf205820c8601934086b8926220335e01ab806b8a045-merged.mount: Deactivated successfully. 
Feb 20 04:51:51 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-03f5a53996dcb276651795e6a4cd576498129387343128580b37994bd639d1a5-userdata-shm.mount: Deactivated successfully. Feb 20 04:51:51 localhost systemd[1]: var-lib-containers-storage-overlay-25874cce964c3a6285cf85fa495d042d9286051c150a5c8435d3247b33003a9f-merged.mount: Deactivated successfully. Feb 20 04:51:51 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e81427c547a001767cbb2cf2d08695c08fa9bee8449c107e557add5b67609a24-userdata-shm.mount: Deactivated successfully. Feb 20 04:51:51 localhost systemd[1]: run-netns-ovnmeta\x2d5faf2589\x2db0d7\x2d486e\x2da56b\x2ddf0762273b7b.mount: Deactivated successfully. Feb 20 04:51:51 localhost systemd[1]: var-lib-containers-storage-overlay-1c19ad39752ca9a180ebdec81c121ac34bc744103f0f4b89dcea902c08c74c80-merged.mount: Deactivated successfully. Feb 20 04:51:51 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d056aef4baab96b948f2ec05121db86cf7b9c650353b8bfaadeb0a02de265161-userdata-shm.mount: Deactivated successfully. Feb 20 04:51:51 localhost systemd[1]: run-netns-ovnmeta\x2d82c5dcbb\x2de77d\x2d4af1\x2dbf3e\x2d89ecf6e35192.mount: Deactivated successfully. Feb 20 04:51:51 localhost systemd[1]: tmp-crun.pc8Xvg.mount: Deactivated successfully. 
Feb 20 04:51:51 localhost podman[311497]: 2026-02-20 09:51:51.648709143 +0000 UTC m=+0.083090773 container cleanup 03f5a53996dcb276651795e6a4cd576498129387343128580b37994bd639d1a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-51f8ae9c-1ccc-4ec5-8a06-5c7802ad29e0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true) Feb 20 04:51:51 localhost systemd[1]: libpod-conmon-03f5a53996dcb276651795e6a4cd576498129387343128580b37994bd639d1a5.scope: Deactivated successfully. Feb 20 04:51:51 localhost podman[311510]: 2026-02-20 09:51:51.700895745 +0000 UTC m=+0.091156269 container remove 03f5a53996dcb276651795e6a4cd576498129387343128580b37994bd639d1a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-51f8ae9c-1ccc-4ec5-8a06-5c7802ad29e0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:51:51 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:51.705 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[7536ddcb-40d4-4a8e-8b5b-fb183e7b4f95]: (4, ('Fri Feb 20 09:51:51 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-51f8ae9c-1ccc-4ec5-8a06-5c7802ad29e0 (03f5a53996dcb276651795e6a4cd576498129387343128580b37994bd639d1a5)\n03f5a53996dcb276651795e6a4cd576498129387343128580b37994bd639d1a5\nFri Feb 20 09:51:51 AM UTC 2026 Deleting container 
neutron-haproxy-ovnmeta-51f8ae9c-1ccc-4ec5-8a06-5c7802ad29e0 (03f5a53996dcb276651795e6a4cd576498129387343128580b37994bd639d1a5)\n03f5a53996dcb276651795e6a4cd576498129387343128580b37994bd639d1a5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:51 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:51.707 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[dccee40e-fee8-4055-b2a4-de621ca1c6d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:51 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:51.708 161766 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap51f8ae9c-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 20 04:51:51 localhost nova_compute[280804]: 2026-02-20 09:51:51.748 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:51 localhost kernel: device tap51f8ae9c-10 left promiscuous mode Feb 20 04:51:51 localhost nova_compute[280804]: 2026-02-20 09:51:51.755 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:51 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:51.758 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[cf16cbc9-8dde-4b01-abdc-7df67f6fb23f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:51 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:51.772 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[8c9a00e1-c49b-462c-a337-e521dd1341d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:51 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:51.773 263903 DEBUG 
oslo.privsep.daemon [-] privsep: reply[775a9a4f-fa92-4934-a2b5-23659d3a10e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e. Feb 20 04:51:51 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:51.783 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[ac03c2e6-3df1-4bb0-be14-44701a604921]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 
'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1165479, 'reachable_time': 39320, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 
0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311532, 'error': None, 'target': 'ovnmeta-51f8ae9c-1ccc-4ec5-8a06-5c7802ad29e0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:51 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:51.785 161893 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-51f8ae9c-1ccc-4ec5-8a06-5c7802ad29e0 deleted. 
remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m Feb 20 04:51:51 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:51.785 161893 DEBUG oslo.privsep.daemon [-] privsep: reply[42803cd8-b777-47b6-9989-b4a6a5d36bd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:51 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:51.787 161766 INFO neutron.agent.ovn.metadata.agent [-] Port ce4822a0-5e7a-4c40-9856-6c8879a12ac7 in datapath 9021dc49-7e01-42e7-8f32-572dec89afcc unbound from our chassis#033[00m Feb 20 04:51:51 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:51.789 161766 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9021dc49-7e01-42e7-8f32-572dec89afcc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 20 04:51:51 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:51.789 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[93f4e80c-88f5-48b7-bb8a-d539c9f6b19d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:51 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:51.790 161766 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9021dc49-7e01-42e7-8f32-572dec89afcc namespace which is not needed anymore#033[00m Feb 20 04:51:51 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e104 do_prune osdmap full prune enabled Feb 20 04:51:51 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e105 e105: 6 total, 6 up, 6 in Feb 20 04:51:51 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e105: 6 total, 6 up, 6 in Feb 20 04:51:51 localhost podman[311531]: 2026-02-20 09:51:51.847456313 +0000 UTC m=+0.064009090 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e 
(image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 20 04:51:51 localhost nova_compute[280804]: 2026-02-20 09:51:51.930 280808 DEBUG nova.storage.rbd_utils [None req-dbb7fde0-28a2-4f71-8b7f-87558b86ffba 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] creating snapshot(snap) on rbd image(2ca20fba-0573-4823-861d-917510483c1a) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m Feb 20 04:51:51 localhost neutron-haproxy-ovnmeta-9021dc49-7e01-42e7-8f32-572dec89afcc[311267]: [NOTICE] (311271) : haproxy version is 2.8.14-c23fe91 Feb 20 04:51:51 localhost 
neutron-haproxy-ovnmeta-9021dc49-7e01-42e7-8f32-572dec89afcc[311267]: [NOTICE] (311271) : path to executable is /usr/sbin/haproxy Feb 20 04:51:51 localhost neutron-haproxy-ovnmeta-9021dc49-7e01-42e7-8f32-572dec89afcc[311267]: [WARNING] (311271) : Exiting Master process... Feb 20 04:51:51 localhost neutron-haproxy-ovnmeta-9021dc49-7e01-42e7-8f32-572dec89afcc[311267]: [WARNING] (311271) : Exiting Master process... Feb 20 04:51:51 localhost neutron-haproxy-ovnmeta-9021dc49-7e01-42e7-8f32-572dec89afcc[311267]: [ALERT] (311271) : Current worker (311273) exited with code 143 (Terminated) Feb 20 04:51:51 localhost neutron-haproxy-ovnmeta-9021dc49-7e01-42e7-8f32-572dec89afcc[311267]: [WARNING] (311271) : All workers exited. Exiting... (0) Feb 20 04:51:51 localhost systemd[1]: libpod-1671a3894eb20baa795da8c76793f9b31826bd6542eb8c72e8827172584982a5.scope: Deactivated successfully. Feb 20 04:51:51 localhost podman[311567]: 2026-02-20 09:51:51.94893586 +0000 UTC m=+0.068335607 container died 1671a3894eb20baa795da8c76793f9b31826bd6542eb8c72e8827172584982a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9021dc49-7e01-42e7-8f32-572dec89afcc, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2) Feb 20 04:51:51 localhost podman[311531]: 2026-02-20 09:51:51.981924116 +0000 UTC m=+0.198476893 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', 
'--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 20 04:51:51 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully. 
Feb 20 04:51:51 localhost nova_compute[280804]: 2026-02-20 09:51:51.998 280808 INFO nova.virt.libvirt.driver [None req-ebc175b2-f97e-4c5f-b9cb-4d6079d3f7ff ba15d0e9919d4594a2e6e9d6b3414a5e e704aae5b1ba49d59262f9aa0c366fb4 - - default default] [instance: 90eb8d1f-8d13-4395-9d15-67fdaa60632d] Deleting instance files /var/lib/nova/instances/90eb8d1f-8d13-4395-9d15-67fdaa60632d_del#033[00m Feb 20 04:51:51 localhost nova_compute[280804]: 2026-02-20 09:51:51.999 280808 INFO nova.virt.libvirt.driver [None req-ebc175b2-f97e-4c5f-b9cb-4d6079d3f7ff ba15d0e9919d4594a2e6e9d6b3414a5e e704aae5b1ba49d59262f9aa0c366fb4 - - default default] [instance: 90eb8d1f-8d13-4395-9d15-67fdaa60632d] Deletion of /var/lib/nova/instances/90eb8d1f-8d13-4395-9d15-67fdaa60632d_del complete#033[00m Feb 20 04:51:52 localhost podman[311567]: 2026-02-20 09:51:52.03937505 +0000 UTC m=+0.158774777 container cleanup 1671a3894eb20baa795da8c76793f9b31826bd6542eb8c72e8827172584982a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9021dc49-7e01-42e7-8f32-572dec89afcc, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true) Feb 20 04:51:52 localhost podman[311588]: 2026-02-20 09:51:52.048928366 +0000 UTC m=+0.101881108 container cleanup 1671a3894eb20baa795da8c76793f9b31826bd6542eb8c72e8827172584982a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9021dc49-7e01-42e7-8f32-572dec89afcc, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127) Feb 20 04:51:52 localhost systemd[1]: libpod-conmon-1671a3894eb20baa795da8c76793f9b31826bd6542eb8c72e8827172584982a5.scope: Deactivated successfully. Feb 20 04:51:52 localhost nova_compute[280804]: 2026-02-20 09:51:52.060 280808 INFO nova.compute.manager [None req-ebc175b2-f97e-4c5f-b9cb-4d6079d3f7ff ba15d0e9919d4594a2e6e9d6b3414a5e e704aae5b1ba49d59262f9aa0c366fb4 - - default default] [instance: 90eb8d1f-8d13-4395-9d15-67fdaa60632d] Took 1.25 seconds to destroy the instance on the hypervisor.#033[00m Feb 20 04:51:52 localhost nova_compute[280804]: 2026-02-20 09:51:52.061 280808 DEBUG oslo.service.loopingcall [None req-ebc175b2-f97e-4c5f-b9cb-4d6079d3f7ff ba15d0e9919d4594a2e6e9d6b3414a5e e704aae5b1ba49d59262f9aa0c366fb4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.._deallocate_network_with_retries to return. 
func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m Feb 20 04:51:52 localhost nova_compute[280804]: 2026-02-20 09:51:52.062 280808 DEBUG nova.compute.manager [-] [instance: 90eb8d1f-8d13-4395-9d15-67fdaa60632d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m Feb 20 04:51:52 localhost nova_compute[280804]: 2026-02-20 09:51:52.062 280808 DEBUG nova.network.neutron [-] [instance: 90eb8d1f-8d13-4395-9d15-67fdaa60632d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m Feb 20 04:51:52 localhost podman[311614]: 2026-02-20 09:51:52.107478699 +0000 UTC m=+0.055117141 container remove 1671a3894eb20baa795da8c76793f9b31826bd6542eb8c72e8827172584982a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9021dc49-7e01-42e7-8f32-572dec89afcc, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:51:52 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:52.113 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[64338b9d-ebec-44fd-a352-e510bb7466d9]: (4, ('Fri Feb 20 09:51:51 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-9021dc49-7e01-42e7-8f32-572dec89afcc (1671a3894eb20baa795da8c76793f9b31826bd6542eb8c72e8827172584982a5)\n1671a3894eb20baa795da8c76793f9b31826bd6542eb8c72e8827172584982a5\nFri Feb 20 09:51:52 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-9021dc49-7e01-42e7-8f32-572dec89afcc (1671a3894eb20baa795da8c76793f9b31826bd6542eb8c72e8827172584982a5)\n1671a3894eb20baa795da8c76793f9b31826bd6542eb8c72e8827172584982a5\n', '', 0)) 
_call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:52 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:52.114 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[003e8654-f27e-4915-a677-6e39b100644f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:52 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:52.115 161766 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9021dc49-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 20 04:51:52 localhost nova_compute[280804]: 2026-02-20 09:51:52.117 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:52 localhost kernel: device tap9021dc49-70 left promiscuous mode Feb 20 04:51:52 localhost nova_compute[280804]: 2026-02-20 09:51:52.125 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:52 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:52.128 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[dca5d374-88a3-42d3-a24e-26732ddd4305]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:52 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:52.143 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[d59ab9d8-8121-4330-9792-0e477a645b12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:52 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:52.144 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[500fbe19-0da7-4c3e-ba9f-ba08c071335f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:52 localhost 
ovn_metadata_agent[161761]: 2026-02-20 09:51:52.154 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[941d26ba-5890-4392-87e0-fdecc78e7aec]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], 
['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1165571, 'reachable_time': 30003, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 
'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311632, 'error': None, 'target': 'ovnmeta-9021dc49-7e01-42e7-8f32-572dec89afcc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:52 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:52.155 161893 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9021dc49-7e01-42e7-8f32-572dec89afcc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m Feb 20 04:51:52 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:52.155 161893 DEBUG oslo.privsep.daemon [-] privsep: reply[39c12689-102d-4d6b-80c2-70e4f664cb81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:51:52 localhost nova_compute[280804]: 2026-02-20 09:51:52.344 280808 DEBUG nova.network.neutron [-] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Feb 20 04:51:52 localhost nova_compute[280804]: 2026-02-20 09:51:52.363 280808 INFO nova.compute.manager [-] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Took 1.05 seconds to deallocate network for instance.#033[00m Feb 20 04:51:52 localhost nova_compute[280804]: 2026-02-20 09:51:52.371 280808 DEBUG nova.compute.manager [req-09648326-9446-4669-850f-e066da091967 req-7c45d44a-c4c0-4abc-b735-99d0f9efcf8f d4446b63864e47aebd18955a38393018 
f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Received event network-vif-plugged-89472e1e-6ca6-404e-8ec3-7651099fb248 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Feb 20 04:51:52 localhost nova_compute[280804]: 2026-02-20 09:51:52.372 280808 DEBUG oslo_concurrency.lockutils [req-09648326-9446-4669-850f-e066da091967 req-7c45d44a-c4c0-4abc-b735-99d0f9efcf8f d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Acquiring lock "e6ab74b8-b495-4363-8d40-2356596c895c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:51:52 localhost nova_compute[280804]: 2026-02-20 09:51:52.372 280808 DEBUG oslo_concurrency.lockutils [req-09648326-9446-4669-850f-e066da091967 req-7c45d44a-c4c0-4abc-b735-99d0f9efcf8f d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Lock "e6ab74b8-b495-4363-8d40-2356596c895c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:51:52 localhost nova_compute[280804]: 2026-02-20 09:51:52.373 280808 DEBUG oslo_concurrency.lockutils [req-09648326-9446-4669-850f-e066da091967 req-7c45d44a-c4c0-4abc-b735-99d0f9efcf8f d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Lock "e6ab74b8-b495-4363-8d40-2356596c895c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:51:52 localhost nova_compute[280804]: 2026-02-20 09:51:52.374 280808 DEBUG nova.compute.manager [req-09648326-9446-4669-850f-e066da091967 req-7c45d44a-c4c0-4abc-b735-99d0f9efcf8f 
d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] No waiting events found dispatching network-vif-plugged-89472e1e-6ca6-404e-8ec3-7651099fb248 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Feb 20 04:51:52 localhost nova_compute[280804]: 2026-02-20 09:51:52.374 280808 WARNING nova.compute.manager [req-09648326-9446-4669-850f-e066da091967 req-7c45d44a-c4c0-4abc-b735-99d0f9efcf8f d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Received unexpected event network-vif-plugged-89472e1e-6ca6-404e-8ec3-7651099fb248 for instance with vm_state active and task_state deleting.#033[00m Feb 20 04:51:52 localhost nova_compute[280804]: 2026-02-20 09:51:52.375 280808 DEBUG nova.compute.manager [req-09648326-9446-4669-850f-e066da091967 req-7c45d44a-c4c0-4abc-b735-99d0f9efcf8f d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: 90eb8d1f-8d13-4395-9d15-67fdaa60632d] Received event network-vif-unplugged-609a0699-8716-4bf8-9f50-bfeec5f65721 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Feb 20 04:51:52 localhost nova_compute[280804]: 2026-02-20 09:51:52.375 280808 DEBUG oslo_concurrency.lockutils [req-09648326-9446-4669-850f-e066da091967 req-7c45d44a-c4c0-4abc-b735-99d0f9efcf8f d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Acquiring lock "90eb8d1f-8d13-4395-9d15-67fdaa60632d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:51:52 localhost nova_compute[280804]: 2026-02-20 09:51:52.376 280808 DEBUG oslo_concurrency.lockutils [req-09648326-9446-4669-850f-e066da091967 req-7c45d44a-c4c0-4abc-b735-99d0f9efcf8f 
d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Lock "90eb8d1f-8d13-4395-9d15-67fdaa60632d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:51:52 localhost nova_compute[280804]: 2026-02-20 09:51:52.376 280808 DEBUG oslo_concurrency.lockutils [req-09648326-9446-4669-850f-e066da091967 req-7c45d44a-c4c0-4abc-b735-99d0f9efcf8f d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Lock "90eb8d1f-8d13-4395-9d15-67fdaa60632d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:51:52 localhost nova_compute[280804]: 2026-02-20 09:51:52.377 280808 DEBUG nova.compute.manager [req-09648326-9446-4669-850f-e066da091967 req-7c45d44a-c4c0-4abc-b735-99d0f9efcf8f d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: 90eb8d1f-8d13-4395-9d15-67fdaa60632d] No waiting events found dispatching network-vif-unplugged-609a0699-8716-4bf8-9f50-bfeec5f65721 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Feb 20 04:51:52 localhost nova_compute[280804]: 2026-02-20 09:51:52.377 280808 DEBUG nova.compute.manager [req-09648326-9446-4669-850f-e066da091967 req-7c45d44a-c4c0-4abc-b735-99d0f9efcf8f d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: 90eb8d1f-8d13-4395-9d15-67fdaa60632d] Received event network-vif-unplugged-609a0699-8716-4bf8-9f50-bfeec5f65721 for instance with task_state deleting. 
_process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m Feb 20 04:51:52 localhost nova_compute[280804]: 2026-02-20 09:51:52.378 280808 DEBUG nova.compute.manager [req-09648326-9446-4669-850f-e066da091967 req-7c45d44a-c4c0-4abc-b735-99d0f9efcf8f d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: 90eb8d1f-8d13-4395-9d15-67fdaa60632d] Received event network-vif-plugged-609a0699-8716-4bf8-9f50-bfeec5f65721 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Feb 20 04:51:52 localhost nova_compute[280804]: 2026-02-20 09:51:52.378 280808 DEBUG oslo_concurrency.lockutils [req-09648326-9446-4669-850f-e066da091967 req-7c45d44a-c4c0-4abc-b735-99d0f9efcf8f d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Acquiring lock "90eb8d1f-8d13-4395-9d15-67fdaa60632d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:51:52 localhost nova_compute[280804]: 2026-02-20 09:51:52.379 280808 DEBUG oslo_concurrency.lockutils [req-09648326-9446-4669-850f-e066da091967 req-7c45d44a-c4c0-4abc-b735-99d0f9efcf8f d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Lock "90eb8d1f-8d13-4395-9d15-67fdaa60632d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:51:52 localhost nova_compute[280804]: 2026-02-20 09:51:52.379 280808 DEBUG oslo_concurrency.lockutils [req-09648326-9446-4669-850f-e066da091967 req-7c45d44a-c4c0-4abc-b735-99d0f9efcf8f d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Lock "90eb8d1f-8d13-4395-9d15-67fdaa60632d-events" "released" by 
"nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:51:52 localhost nova_compute[280804]: 2026-02-20 09:51:52.380 280808 DEBUG nova.compute.manager [req-09648326-9446-4669-850f-e066da091967 req-7c45d44a-c4c0-4abc-b735-99d0f9efcf8f d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: 90eb8d1f-8d13-4395-9d15-67fdaa60632d] No waiting events found dispatching network-vif-plugged-609a0699-8716-4bf8-9f50-bfeec5f65721 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Feb 20 04:51:52 localhost nova_compute[280804]: 2026-02-20 09:51:52.380 280808 WARNING nova.compute.manager [req-09648326-9446-4669-850f-e066da091967 req-7c45d44a-c4c0-4abc-b735-99d0f9efcf8f d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: 90eb8d1f-8d13-4395-9d15-67fdaa60632d] Received unexpected event network-vif-plugged-609a0699-8716-4bf8-9f50-bfeec5f65721 for instance with vm_state active and task_state deleting.#033[00m Feb 20 04:51:52 localhost nova_compute[280804]: 2026-02-20 09:51:52.410 280808 DEBUG oslo_concurrency.lockutils [None req-d0ae9766-e207-4aaf-ac44-7d955a567e4a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:51:52 localhost nova_compute[280804]: 2026-02-20 09:51:52.411 280808 DEBUG oslo_concurrency.lockutils [None req-d0ae9766-e207-4aaf-ac44-7d955a567e4a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:51:52 localhost nova_compute[280804]: 2026-02-20 09:51:52.413 280808 DEBUG oslo_concurrency.lockutils [None req-d0ae9766-e207-4aaf-ac44-7d955a567e4a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:51:52 localhost nova_compute[280804]: 2026-02-20 09:51:52.449 280808 INFO nova.scheduler.client.report [None req-d0ae9766-e207-4aaf-ac44-7d955a567e4a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Deleted allocations for instance e6ab74b8-b495-4363-8d40-2356596c895c#033[00m Feb 20 04:51:52 localhost nova_compute[280804]: 2026-02-20 09:51:52.511 280808 DEBUG oslo_concurrency.lockutils [None req-d0ae9766-e207-4aaf-ac44-7d955a567e4a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Lock "e6ab74b8-b495-4363-8d40-2356596c895c" "released" by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" :: held 2.860s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:51:52 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e105 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:51:52 localhost systemd[1]: var-lib-containers-storage-overlay-4985a61fcecabe7dbcc3b24e122fd2acbe82e9520b00301bad0b6c727dfdd9d5-merged.mount: Deactivated successfully. Feb 20 04:51:52 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1671a3894eb20baa795da8c76793f9b31826bd6542eb8c72e8827172584982a5-userdata-shm.mount: Deactivated successfully. 
Feb 20 04:51:52 localhost systemd[1]: run-netns-ovnmeta\x2d9021dc49\x2d7e01\x2d42e7\x2d8f32\x2d572dec89afcc.mount: Deactivated successfully. Feb 20 04:51:52 localhost systemd[1]: run-netns-ovnmeta\x2d51f8ae9c\x2d1ccc\x2d4ec5\x2d8a06\x2d5c7802ad29e0.mount: Deactivated successfully. Feb 20 04:51:52 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e105 do_prune osdmap full prune enabled Feb 20 04:51:52 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e106 e106: 6 total, 6 up, 6 in Feb 20 04:51:52 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e106: 6 total, 6 up, 6 in Feb 20 04:51:53 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v123: 177 pgs: 177 active+clean; 385 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 1.2 MiB/s rd, 624 KiB/s wr, 163 op/s Feb 20 04:51:53 localhost nova_compute[280804]: 2026-02-20 09:51:53.434 280808 DEBUG oslo_concurrency.lockutils [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Acquiring lock "a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854" by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:51:53 localhost nova_compute[280804]: 2026-02-20 09:51:53.435 280808 DEBUG oslo_concurrency.lockutils [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Lock "a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:51:53 localhost nova_compute[280804]: 2026-02-20 09:51:53.451 280808 DEBUG nova.compute.manager [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 
f299da1b635f4dafbe62328983ad1fae - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m Feb 20 04:51:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 04:51:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:51:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 04:51:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:51:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 04:51:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:51:53 localhost nova_compute[280804]: 2026-02-20 09:51:53.513 280808 DEBUG oslo_concurrency.lockutils [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:51:53 localhost nova_compute[280804]: 2026-02-20 09:51:53.514 280808 DEBUG oslo_concurrency.lockutils [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:51:53 localhost nova_compute[280804]: 2026-02-20 09:51:53.519 280808 DEBUG nova.virt.hardware [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Require both a host and instance NUMA topology to fit instance on host. 
numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m Feb 20 04:51:53 localhost nova_compute[280804]: 2026-02-20 09:51:53.520 280808 INFO nova.compute.claims [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Claim successful on node np0005625202.localdomain#033[00m Feb 20 04:51:53 localhost nova_compute[280804]: 2026-02-20 09:51:53.640 280808 INFO nova.virt.libvirt.driver [None req-dbb7fde0-28a2-4f71-8b7f-87558b86ffba 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Snapshot image upload complete#033[00m Feb 20 04:51:53 localhost nova_compute[280804]: 2026-02-20 09:51:53.640 280808 DEBUG nova.compute.manager [None req-dbb7fde0-28a2-4f71-8b7f-87558b86ffba 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 20 04:51:53 localhost neutron_sriov_agent[256551]: 2026-02-20 09:51:53.672 2 INFO neutron.agent.securitygroups_rpc [None req-2028af40-368e-4b25-90de-8401d53be72c 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Security group member updated ['07d2fe18-fbbf-4547-931e-bb55f378bade']#033[00m Feb 20 04:51:53 localhost nova_compute[280804]: 2026-02-20 09:51:53.673 280808 DEBUG oslo_concurrency.processutils [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:51:53 localhost nova_compute[280804]: 2026-02-20 09:51:53.700 280808 
INFO nova.compute.manager [None req-dbb7fde0-28a2-4f71-8b7f-87558b86ffba 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Shelve offloading#033[00m Feb 20 04:51:53 localhost nova_compute[280804]: 2026-02-20 09:51:53.711 280808 INFO nova.virt.libvirt.driver [-] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Instance destroyed successfully.#033[00m Feb 20 04:51:53 localhost nova_compute[280804]: 2026-02-20 09:51:53.711 280808 DEBUG nova.compute.manager [None req-dbb7fde0-28a2-4f71-8b7f-87558b86ffba 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 20 04:51:53 localhost nova_compute[280804]: 2026-02-20 09:51:53.714 280808 DEBUG oslo_concurrency.lockutils [None req-dbb7fde0-28a2-4f71-8b7f-87558b86ffba 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Acquiring lock "refresh_cache-43720f70-168d-461a-8b52-ba71de6033a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Feb 20 04:51:53 localhost nova_compute[280804]: 2026-02-20 09:51:53.715 280808 DEBUG oslo_concurrency.lockutils [None req-dbb7fde0-28a2-4f71-8b7f-87558b86ffba 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Acquired lock "refresh_cache-43720f70-168d-461a-8b52-ba71de6033a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Feb 20 04:51:53 localhost nova_compute[280804]: 2026-02-20 09:51:53.715 280808 DEBUG nova.network.neutron [None req-dbb7fde0-28a2-4f71-8b7f-87558b86ffba 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Building network info cache for instance _get_instance_nw_info 
/usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Feb 20 04:51:53 localhost systemd[1]: Stopping User Manager for UID 42436... Feb 20 04:51:53 localhost systemd[310515]: Activating special unit Exit the Session... Feb 20 04:51:53 localhost systemd[310515]: Stopped target Main User Target. Feb 20 04:51:53 localhost systemd[310515]: Stopped target Basic System. Feb 20 04:51:53 localhost systemd[310515]: Stopped target Paths. Feb 20 04:51:53 localhost systemd[310515]: Stopped target Sockets. Feb 20 04:51:53 localhost systemd[310515]: Stopped target Timers. Feb 20 04:51:53 localhost systemd[310515]: Stopped Mark boot as successful after the user session has run 2 minutes. Feb 20 04:51:53 localhost systemd[310515]: Stopped Daily Cleanup of User's Temporary Directories. Feb 20 04:51:53 localhost systemd[310515]: Closed D-Bus User Message Bus Socket. Feb 20 04:51:53 localhost systemd[310515]: Stopped Create User's Volatile Files and Directories. Feb 20 04:51:53 localhost systemd[310515]: Removed slice User Application Slice. Feb 20 04:51:53 localhost systemd[310515]: Reached target Shutdown. Feb 20 04:51:53 localhost systemd[310515]: Finished Exit the Session. Feb 20 04:51:53 localhost systemd[310515]: Reached target Exit the Session. Feb 20 04:51:53 localhost systemd[1]: user@42436.service: Deactivated successfully. Feb 20 04:51:53 localhost systemd[1]: Stopped User Manager for UID 42436. Feb 20 04:51:53 localhost systemd[1]: Stopping User Runtime Directory /run/user/42436... Feb 20 04:51:53 localhost systemd[1]: run-user-42436.mount: Deactivated successfully. Feb 20 04:51:53 localhost systemd[1]: user-runtime-dir@42436.service: Deactivated successfully. Feb 20 04:51:53 localhost systemd[1]: Stopped User Runtime Directory /run/user/42436. Feb 20 04:51:53 localhost systemd[1]: Removed slice User Slice of UID 42436. 
Feb 20 04:51:53 localhost nova_compute[280804]: 2026-02-20 09:51:53.950 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:54 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 20 04:51:54 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/151132608' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 20 04:51:54 localhost nova_compute[280804]: 2026-02-20 09:51:54.200 280808 DEBUG oslo_concurrency.processutils [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:51:54 localhost nova_compute[280804]: 2026-02-20 09:51:54.208 280808 DEBUG nova.compute.provider_tree [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Inventory has not changed in ProviderTree for provider: 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 20 04:51:54 localhost nova_compute[280804]: 2026-02-20 09:51:54.227 280808 DEBUG nova.scheduler.client.report [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Inventory has not changed for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': 
{'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 20 04:51:54 localhost nova_compute[280804]: 2026-02-20 09:51:54.248 280808 DEBUG oslo_concurrency.lockutils [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:51:54 localhost nova_compute[280804]: 2026-02-20 09:51:54.249 280808 DEBUG nova.compute.manager [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m Feb 20 04:51:54 localhost nova_compute[280804]: 2026-02-20 09:51:54.297 280808 DEBUG nova.compute.manager [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Allocating IP information in the background. 
_allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m Feb 20 04:51:54 localhost nova_compute[280804]: 2026-02-20 09:51:54.298 280808 DEBUG nova.network.neutron [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m Feb 20 04:51:54 localhost nova_compute[280804]: 2026-02-20 09:51:54.315 280808 INFO nova.virt.libvirt.driver [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m Feb 20 04:51:54 localhost nova_compute[280804]: 2026-02-20 09:51:54.336 280808 DEBUG nova.compute.manager [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m Feb 20 04:51:54 localhost nova_compute[280804]: 2026-02-20 09:51:54.391 280808 DEBUG nova.network.neutron [None req-dbb7fde0-28a2-4f71-8b7f-87558b86ffba 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Instance cache missing network info. 
_get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m Feb 20 04:51:54 localhost nova_compute[280804]: 2026-02-20 09:51:54.445 280808 DEBUG nova.compute.manager [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m Feb 20 04:51:54 localhost nova_compute[280804]: 2026-02-20 09:51:54.447 280808 DEBUG nova.virt.libvirt.driver [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m Feb 20 04:51:54 localhost nova_compute[280804]: 2026-02-20 09:51:54.448 280808 INFO nova.virt.libvirt.driver [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Creating image(s)#033[00m Feb 20 04:51:54 localhost nova_compute[280804]: 2026-02-20 09:51:54.487 280808 DEBUG nova.storage.rbd_utils [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] rbd image a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 20 04:51:54 localhost nova_compute[280804]: 2026-02-20 09:51:54.529 280808 DEBUG nova.storage.rbd_utils [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] rbd image a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854_disk does not exist __init__ 
/usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 20 04:51:54 localhost nova_compute[280804]: 2026-02-20 09:51:54.569 280808 DEBUG nova.storage.rbd_utils [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] rbd image a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 20 04:51:54 localhost nova_compute[280804]: 2026-02-20 09:51:54.575 280808 DEBUG oslo_concurrency.processutils [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3692da63af034f7d594aac7c4b8eda10742f09b0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:51:54 localhost nova_compute[280804]: 2026-02-20 09:51:54.658 280808 DEBUG oslo_concurrency.processutils [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3692da63af034f7d594aac7c4b8eda10742f09b0 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:51:54 localhost nova_compute[280804]: 2026-02-20 09:51:54.659 280808 DEBUG oslo_concurrency.lockutils [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Acquiring lock "3692da63af034f7d594aac7c4b8eda10742f09b0" by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:51:54 localhost nova_compute[280804]: 2026-02-20 09:51:54.660 280808 DEBUG oslo_concurrency.lockutils [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Lock "3692da63af034f7d594aac7c4b8eda10742f09b0" acquired by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:51:54 localhost nova_compute[280804]: 2026-02-20 09:51:54.661 280808 DEBUG oslo_concurrency.lockutils [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Lock "3692da63af034f7d594aac7c4b8eda10742f09b0" "released" by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:51:54 localhost nova_compute[280804]: 2026-02-20 09:51:54.699 280808 DEBUG nova.storage.rbd_utils [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] rbd image a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 20 04:51:54 localhost nova_compute[280804]: 2026-02-20 09:51:54.704 280808 DEBUG oslo_concurrency.processutils [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/3692da63af034f7d594aac7c4b8eda10742f09b0 a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:51:54 localhost nova_compute[280804]: 
2026-02-20 09:51:54.745 280808 WARNING oslo_policy.policy [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m Feb 20 04:51:54 localhost nova_compute[280804]: 2026-02-20 09:51:54.745 280808 WARNING oslo_policy.policy [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m Feb 20 04:51:54 localhost nova_compute[280804]: 2026-02-20 09:51:54.751 280808 DEBUG nova.policy [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2188e6de9cae445dadfba1541701ebd2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f299da1b635f4dafbe62328983ad1fae', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m Feb 20 04:51:54 
localhost nova_compute[280804]: 2026-02-20 09:51:54.755 280808 DEBUG nova.network.neutron [None req-dbb7fde0-28a2-4f71-8b7f-87558b86ffba 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Feb 20 04:51:54 localhost nova_compute[280804]: 2026-02-20 09:51:54.772 280808 DEBUG oslo_concurrency.lockutils [None req-dbb7fde0-28a2-4f71-8b7f-87558b86ffba 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Releasing lock "refresh_cache-43720f70-168d-461a-8b52-ba71de6033a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Feb 20 04:51:54 localhost nova_compute[280804]: 2026-02-20 09:51:54.784 280808 INFO nova.virt.libvirt.driver [-] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Instance destroyed successfully.#033[00m Feb 20 04:51:54 localhost nova_compute[280804]: 2026-02-20 09:51:54.785 280808 DEBUG nova.objects.instance [None req-dbb7fde0-28a2-4f71-8b7f-87558b86ffba 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Lazy-loading 'resources' on Instance uuid 43720f70-168d-461a-8b52-ba71de6033a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Feb 20 04:51:55 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:55.210 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'c2:c2:14', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 
'46:d0:69:88:97:47'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:51:55 localhost nova_compute[280804]: 2026-02-20 09:51:55.211 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:55 localhost ovn_metadata_agent[161761]: 2026-02-20 09:51:55.212 161766 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Feb 20 04:51:55 localhost nova_compute[280804]: 2026-02-20 09:51:55.280 280808 DEBUG oslo_concurrency.processutils [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/3692da63af034f7d594aac7c4b8eda10742f09b0 a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:51:55 localhost nova_compute[280804]: 2026-02-20 09:51:55.389 280808 DEBUG nova.storage.rbd_utils [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] resizing rbd image a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m Feb 20 04:51:55 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v124: 177 pgs: 177 active+clean; 307 MiB data, 1021 MiB used, 41 GiB / 42 GiB avail; 8.1 MiB/s rd, 7.8 MiB/s wr, 301 op/s Feb 20 04:51:55 localhost nova_compute[280804]: 2026-02-20 09:51:55.546 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task 
ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:51:55 localhost nova_compute[280804]: 2026-02-20 09:51:55.547 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 20 04:51:55 localhost nova_compute[280804]: 2026-02-20 09:51:55.548 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 20 04:51:55 localhost nova_compute[280804]: 2026-02-20 09:51:55.557 280808 DEBUG nova.objects.instance [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Lazy-loading 'migration_context' on Instance uuid a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Feb 20 04:51:55 localhost nova_compute[280804]: 2026-02-20 09:51:55.574 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m Feb 20 04:51:55 localhost nova_compute[280804]: 2026-02-20 09:51:55.574 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] [instance: 90eb8d1f-8d13-4395-9d15-67fdaa60632d] Skipping network cache update for instance because it is being deleted. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m Feb 20 04:51:55 localhost nova_compute[280804]: 2026-02-20 09:51:55.575 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Acquiring lock "refresh_cache-43720f70-168d-461a-8b52-ba71de6033a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Feb 20 04:51:55 localhost nova_compute[280804]: 2026-02-20 09:51:55.575 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Acquired lock "refresh_cache-43720f70-168d-461a-8b52-ba71de6033a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Feb 20 04:51:55 localhost nova_compute[280804]: 2026-02-20 09:51:55.576 280808 DEBUG nova.network.neutron [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Feb 20 04:51:55 localhost nova_compute[280804]: 2026-02-20 09:51:55.576 280808 DEBUG nova.objects.instance [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 43720f70-168d-461a-8b52-ba71de6033a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Feb 20 04:51:55 localhost nova_compute[280804]: 2026-02-20 09:51:55.578 280808 DEBUG nova.virt.libvirt.driver [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m Feb 20 04:51:55 localhost nova_compute[280804]: 2026-02-20 09:51:55.578 280808 DEBUG nova.virt.libvirt.driver [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 
2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Ensure instance console log exists: /var/lib/nova/instances/a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m Feb 20 04:51:55 localhost nova_compute[280804]: 2026-02-20 09:51:55.579 280808 DEBUG oslo_concurrency.lockutils [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:51:55 localhost nova_compute[280804]: 2026-02-20 09:51:55.579 280808 DEBUG oslo_concurrency.lockutils [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:51:55 localhost nova_compute[280804]: 2026-02-20 09:51:55.580 280808 DEBUG oslo_concurrency.lockutils [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:51:55 localhost nova_compute[280804]: 2026-02-20 09:51:55.637 280808 DEBUG nova.network.neutron [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Instance cache missing network info. 
_get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m Feb 20 04:51:55 localhost nova_compute[280804]: 2026-02-20 09:51:55.699 280808 INFO nova.virt.libvirt.driver [None req-dbb7fde0-28a2-4f71-8b7f-87558b86ffba 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Deleting instance files /var/lib/nova/instances/43720f70-168d-461a-8b52-ba71de6033a0_del#033[00m Feb 20 04:51:55 localhost nova_compute[280804]: 2026-02-20 09:51:55.700 280808 INFO nova.virt.libvirt.driver [None req-dbb7fde0-28a2-4f71-8b7f-87558b86ffba 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Deletion of /var/lib/nova/instances/43720f70-168d-461a-8b52-ba71de6033a0_del complete#033[00m Feb 20 04:51:55 localhost nova_compute[280804]: 2026-02-20 09:51:55.769 280808 INFO nova.scheduler.client.report [None req-dbb7fde0-28a2-4f71-8b7f-87558b86ffba 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Deleted allocations for instance 43720f70-168d-461a-8b52-ba71de6033a0#033[00m Feb 20 04:51:55 localhost nova_compute[280804]: 2026-02-20 09:51:55.814 280808 DEBUG oslo_concurrency.lockutils [None req-dbb7fde0-28a2-4f71-8b7f-87558b86ffba 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:51:55 localhost nova_compute[280804]: 2026-02-20 09:51:55.816 280808 DEBUG oslo_concurrency.lockutils [None req-dbb7fde0-28a2-4f71-8b7f-87558b86ffba 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:51:55 localhost nova_compute[280804]: 2026-02-20 09:51:55.834 280808 DEBUG nova.network.neutron [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Feb 20 04:51:55 localhost nova_compute[280804]: 2026-02-20 09:51:55.857 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Releasing lock "refresh_cache-43720f70-168d-461a-8b52-ba71de6033a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Feb 20 04:51:55 localhost nova_compute[280804]: 2026-02-20 09:51:55.858 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Feb 20 04:51:55 localhost nova_compute[280804]: 2026-02-20 09:51:55.886 280808 DEBUG oslo_concurrency.processutils [None req-dbb7fde0-28a2-4f71-8b7f-87558b86ffba 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:51:56 localhost nova_compute[280804]: 2026-02-20 09:51:56.238 280808 DEBUG nova.network.neutron [-] [instance: 90eb8d1f-8d13-4395-9d15-67fdaa60632d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Feb 20 04:51:56 localhost nova_compute[280804]: 2026-02-20 09:51:56.259 280808 INFO nova.compute.manager [-] [instance: 
90eb8d1f-8d13-4395-9d15-67fdaa60632d] Took 4.20 seconds to deallocate network for instance.#033[00m Feb 20 04:51:56 localhost nova_compute[280804]: 2026-02-20 09:51:56.289 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:56 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 20 04:51:56 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/3016823328' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 20 04:51:56 localhost nova_compute[280804]: 2026-02-20 09:51:56.321 280808 DEBUG oslo_concurrency.lockutils [None req-ebc175b2-f97e-4c5f-b9cb-4d6079d3f7ff ba15d0e9919d4594a2e6e9d6b3414a5e e704aae5b1ba49d59262f9aa0c366fb4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:51:56 localhost nova_compute[280804]: 2026-02-20 09:51:56.334 280808 DEBUG oslo_concurrency.processutils [None req-dbb7fde0-28a2-4f71-8b7f-87558b86ffba 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:51:56 localhost nova_compute[280804]: 2026-02-20 09:51:56.340 280808 DEBUG nova.compute.provider_tree [None req-dbb7fde0-28a2-4f71-8b7f-87558b86ffba 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Inventory has not changed in ProviderTree for provider: 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 20 04:51:56 localhost nova_compute[280804]: 2026-02-20 
09:51:56.365 280808 DEBUG nova.scheduler.client.report [None req-dbb7fde0-28a2-4f71-8b7f-87558b86ffba 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Inventory has not changed for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 20 04:51:56 localhost nova_compute[280804]: 2026-02-20 09:51:56.385 280808 DEBUG oslo_concurrency.lockutils [None req-dbb7fde0-28a2-4f71-8b7f-87558b86ffba 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.570s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:51:56 localhost nova_compute[280804]: 2026-02-20 09:51:56.389 280808 DEBUG oslo_concurrency.lockutils [None req-ebc175b2-f97e-4c5f-b9cb-4d6079d3f7ff ba15d0e9919d4594a2e6e9d6b3414a5e e704aae5b1ba49d59262f9aa0c366fb4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.068s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:51:56 localhost nova_compute[280804]: 2026-02-20 09:51:56.392 280808 DEBUG oslo_concurrency.lockutils [None req-ebc175b2-f97e-4c5f-b9cb-4d6079d3f7ff ba15d0e9919d4594a2e6e9d6b3414a5e e704aae5b1ba49d59262f9aa0c366fb4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.002s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:51:56 localhost nova_compute[280804]: 2026-02-20 09:51:56.426 280808 INFO nova.scheduler.client.report [None req-ebc175b2-f97e-4c5f-b9cb-4d6079d3f7ff ba15d0e9919d4594a2e6e9d6b3414a5e e704aae5b1ba49d59262f9aa0c366fb4 - - default default] Deleted allocations for instance 90eb8d1f-8d13-4395-9d15-67fdaa60632d#033[00m Feb 20 04:51:56 localhost nova_compute[280804]: 2026-02-20 09:51:56.436 280808 DEBUG oslo_concurrency.lockutils [None req-dbb7fde0-28a2-4f71-8b7f-87558b86ffba 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Lock "43720f70-168d-461a-8b52-ba71de6033a0" "released" by "nova.compute.manager.ComputeManager.shelve_instance..do_shelve_instance" :: held 20.959s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:51:56 localhost neutron_sriov_agent[256551]: 2026-02-20 09:51:56.494 2 INFO neutron.agent.securitygroups_rpc [req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c req-73160dbe-971e-4219-ac30-c0c28777ca1e 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Security group member updated ['4439e19b-bf91-4420-aff1-6854f961fef4']#033[00m Feb 20 04:51:56 localhost nova_compute[280804]: 2026-02-20 09:51:56.534 280808 DEBUG oslo_concurrency.lockutils [None req-ebc175b2-f97e-4c5f-b9cb-4d6079d3f7ff ba15d0e9919d4594a2e6e9d6b3414a5e e704aae5b1ba49d59262f9aa0c366fb4 - - default default] Lock "90eb8d1f-8d13-4395-9d15-67fdaa60632d" "released" by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" :: held 5.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:51:56 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:51:56.599 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, 
binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:51:55Z, description=, device_id=a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=cf687b2d-16df-4973-a938-10a916e32626, ip_allocation=immediate, mac_address=fa:16:3e:ba:d1:ae, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:51:31Z, description=, dns_domain=, id=9ac533b5-90af-4cbe-be32-55de197d993c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServersV294TestFqdnHostnames-596806407-network, port_security_enabled=True, project_id=f299da1b635f4dafbe62328983ad1fae, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=48730, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=606, status=ACTIVE, subnets=['12a57feb-547d-47f0-b3aa-28f3df8f6f52'], tags=[], tenant_id=f299da1b635f4dafbe62328983ad1fae, updated_at=2026-02-20T09:51:33Z, vlan_transparent=None, network_id=9ac533b5-90af-4cbe-be32-55de197d993c, port_security_enabled=True, project_id=f299da1b635f4dafbe62328983ad1fae, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['4439e19b-bf91-4420-aff1-6854f961fef4'], standard_attr_id=722, status=DOWN, tags=[], tenant_id=f299da1b635f4dafbe62328983ad1fae, updated_at=2026-02-20T09:51:55Z on network 9ac533b5-90af-4cbe-be32-55de197d993c#033[00m Feb 20 04:51:56 localhost dnsmasq[310417]: read /var/lib/neutron/dhcp/9ac533b5-90af-4cbe-be32-55de197d993c/addn_hosts - 2 addresses Feb 20 04:51:56 localhost dnsmasq-dhcp[310417]: read /var/lib/neutron/dhcp/9ac533b5-90af-4cbe-be32-55de197d993c/host Feb 20 04:51:56 localhost dnsmasq-dhcp[310417]: read /var/lib/neutron/dhcp/9ac533b5-90af-4cbe-be32-55de197d993c/opts Feb 20 04:51:56 localhost podman[311881]: 2026-02-20 09:51:56.811330723 +0000 UTC 
m=+0.061316079 container kill e4091699967ae4c5cb9d4215db5146e98c25e7d41f2925027f941e8170be7589 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9ac533b5-90af-4cbe-be32-55de197d993c, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:51:57 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:51:57.042 263745 INFO neutron.agent.dhcp.agent [None req-3619cb84-48f1-4100-85d0-cce46864e2c6 - - - - - -] DHCP configuration for ports {'cf687b2d-16df-4973-a938-10a916e32626'} is completed#033[00m Feb 20 04:51:57 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v125: 177 pgs: 177 active+clean; 307 MiB data, 1021 MiB used, 41 GiB / 42 GiB avail; 6.3 MiB/s rd, 6.2 MiB/s wr, 216 op/s Feb 20 04:51:57 localhost nova_compute[280804]: 2026-02-20 09:51:57.419 280808 DEBUG nova.network.neutron [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Successfully created port: cf687b2d-16df-4973-a938-10a916e32626 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m Feb 20 04:51:57 localhost neutron_sriov_agent[256551]: 2026-02-20 09:51:57.450 2 INFO neutron.agent.securitygroups_rpc [None req-426d7c59-43bb-4b5f-98f0-2945e94d9430 ba15d0e9919d4594a2e6e9d6b3414a5e e704aae5b1ba49d59262f9aa0c366fb4 - - default default] Security group member updated ['6a912071-fd9c-4d5f-8453-7f993db3506d']#033[00m Feb 20 04:51:57 localhost nova_compute[280804]: 2026-02-20 09:51:57.509 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] 
Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:51:57 localhost nova_compute[280804]: 2026-02-20 09:51:57.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:51:57 localhost nova_compute[280804]: 2026-02-20 09:51:57.511 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:51:57 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:51:57 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e106 do_prune osdmap full prune enabled Feb 20 04:51:57 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e107 e107: 6 total, 6 up, 6 in Feb 20 04:51:57 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e107: 6 total, 6 up, 6 in Feb 20 04:51:57 localhost nova_compute[280804]: 2026-02-20 09:51:57.554 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:51:57 localhost nova_compute[280804]: 2026-02-20 09:51:57.555 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:51:57 localhost nova_compute[280804]: 2026-02-20 09:51:57.555 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:51:57 localhost nova_compute[280804]: 2026-02-20 09:51:57.556 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Auditing locally available compute resources for np0005625202.localdomain (node: np0005625202.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 20 04:51:57 localhost nova_compute[280804]: 2026-02-20 09:51:57.556 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:51:57 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 20 04:51:57 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/2462011471' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 20 04:51:57 localhost nova_compute[280804]: 2026-02-20 09:51:57.980 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:51:58 localhost openstack_network_exporter[243776]: ERROR 09:51:58 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 20 04:51:58 localhost openstack_network_exporter[243776]: Feb 20 04:51:58 localhost openstack_network_exporter[243776]: ERROR 09:51:58 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 20 04:51:58 localhost openstack_network_exporter[243776]: Feb 20 04:51:58 localhost nova_compute[280804]: 2026-02-20 09:51:58.245 280808 WARNING nova.virt.libvirt.driver [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 20 04:51:58 localhost nova_compute[280804]: 2026-02-20 09:51:58.248 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Hypervisor/Node resource view: name=np0005625202.localdomain free_ram=11585MB free_disk=41.70025634765625GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": 
"1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 20 04:51:58 localhost nova_compute[280804]: 2026-02-20 09:51:58.249 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:51:58 localhost nova_compute[280804]: 2026-02-20 09:51:58.249 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:51:58 localhost nova_compute[280804]: 2026-02-20 09:51:58.318 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Instance a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Feb 20 04:51:58 localhost nova_compute[280804]: 2026-02-20 09:51:58.319 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 20 04:51:58 localhost nova_compute[280804]: 2026-02-20 09:51:58.319 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Final resource view: name=np0005625202.localdomain phys_ram=15738MB used_ram=640MB phys_disk=41GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 20 04:51:58 localhost nova_compute[280804]: 2026-02-20 09:51:58.371 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:51:58 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 20 04:51:58 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/2189930680' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 20 04:51:58 localhost nova_compute[280804]: 2026-02-20 09:51:58.816 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:51:58 localhost nova_compute[280804]: 2026-02-20 09:51:58.824 280808 DEBUG nova.compute.provider_tree [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 20 04:51:58 localhost nova_compute[280804]: 2026-02-20 09:51:58.841 280808 DEBUG nova.scheduler.client.report [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Inventory has not changed for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 20 04:51:58 localhost nova_compute[280804]: 2026-02-20 09:51:58.865 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Compute_service record updated for np0005625202.localdomain:np0005625202.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 20 04:51:58 localhost nova_compute[280804]: 2026-02-20 09:51:58.865 280808 DEBUG 
oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:51:58 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:51:58.866 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=np0005625202.localdomain, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:51:55Z, description=, device_id=a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854, device_owner=compute:nova, dns_assignment=[], dns_domain=, dns_name=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-guest-test.domaintest.com, extra_dhcp_opts=[], fixed_ips=[], id=cf687b2d-16df-4973-a938-10a916e32626, ip_allocation=immediate, mac_address=fa:16:3e:ba:d1:ae, name=, network_id=9ac533b5-90af-4cbe-be32-55de197d993c, port_security_enabled=True, project_id=f299da1b635f4dafbe62328983ad1fae, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['4439e19b-bf91-4420-aff1-6854f961fef4'], standard_attr_id=722, status=DOWN, tags=[], tenant_id=f299da1b635f4dafbe62328983ad1fae, updated_at=2026-02-20T09:51:57Z on network 9ac533b5-90af-4cbe-be32-55de197d993c#033[00m Feb 20 04:51:58 localhost nova_compute[280804]: 2026-02-20 09:51:58.953 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:51:59 localhost neutron_sriov_agent[256551]: 2026-02-20 09:51:59.080 2 INFO neutron.agent.securitygroups_rpc [None req-bcd9a2b7-ab94-49ae-b942-9c3b757c3657 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Security group member updated 
['07d2fe18-fbbf-4547-931e-bb55f378bade']#033[00m Feb 20 04:51:59 localhost podman[311965]: 2026-02-20 09:51:59.084290493 +0000 UTC m=+0.064504384 container kill e4091699967ae4c5cb9d4215db5146e98c25e7d41f2925027f941e8170be7589 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9ac533b5-90af-4cbe-be32-55de197d993c, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Feb 20 04:51:59 localhost dnsmasq[310417]: read /var/lib/neutron/dhcp/9ac533b5-90af-4cbe-be32-55de197d993c/addn_hosts - 2 addresses Feb 20 04:51:59 localhost dnsmasq-dhcp[310417]: read /var/lib/neutron/dhcp/9ac533b5-90af-4cbe-be32-55de197d993c/host Feb 20 04:51:59 localhost dnsmasq-dhcp[310417]: read /var/lib/neutron/dhcp/9ac533b5-90af-4cbe-be32-55de197d993c/opts Feb 20 04:51:59 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:51:59.273 263745 INFO neutron.agent.dhcp.agent [None req-536480cc-de57-4b9f-a99c-52998ddd13b8 - - - - - -] DHCP configuration for ports {'cf687b2d-16df-4973-a938-10a916e32626'} is completed#033[00m Feb 20 04:51:59 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v127: 177 pgs: 177 active+clean; 277 MiB data, 970 MiB used, 41 GiB / 42 GiB avail; 6.3 MiB/s rd, 8.4 MiB/s wr, 283 op/s Feb 20 04:51:59 localhost neutron_sriov_agent[256551]: 2026-02-20 09:51:59.522 2 INFO neutron.agent.securitygroups_rpc [None req-da379379-3275-471e-8ade-92d9716364d1 ba15d0e9919d4594a2e6e9d6b3414a5e e704aae5b1ba49d59262f9aa0c366fb4 - - default default] Security group member updated ['6a912071-fd9c-4d5f-8453-7f993db3506d']#033[00m Feb 20 04:51:59 localhost nova_compute[280804]: 2026-02-20 09:51:59.861 280808 DEBUG oslo_service.periodic_task [None 
req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:51:59 localhost nova_compute[280804]: 2026-02-20 09:51:59.862 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:51:59 localhost nova_compute[280804]: 2026-02-20 09:51:59.951 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:51:59 localhost nova_compute[280804]: 2026-02-20 09:51:59.952 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:51:59 localhost nova_compute[280804]: 2026-02-20 09:51:59.952 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 20 04:52:00 localhost nova_compute[280804]: 2026-02-20 09:52:00.448 280808 DEBUG nova.network.neutron [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Successfully updated port: cf687b2d-16df-4973-a938-10a916e32626 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m Feb 20 04:52:00 localhost nova_compute[280804]: 2026-02-20 09:52:00.513 280808 DEBUG oslo_concurrency.lockutils [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Acquiring lock "refresh_cache-a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Feb 20 04:52:00 localhost nova_compute[280804]: 2026-02-20 09:52:00.513 280808 DEBUG oslo_concurrency.lockutils [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Acquired lock "refresh_cache-a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Feb 20 04:52:00 localhost nova_compute[280804]: 2026-02-20 09:52:00.514 280808 DEBUG nova.network.neutron [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Feb 20 04:52:00 localhost nova_compute[280804]: 2026-02-20 09:52:00.521 280808 DEBUG nova.compute.manager [req-f4e7e41c-8b55-4df3-a232-0aa8c4da2154 req-bfa35ed8-7955-4a9e-953a-5d4282765b72 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - 
default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Received event network-changed-cf687b2d-16df-4973-a938-10a916e32626 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Feb 20 04:52:00 localhost nova_compute[280804]: 2026-02-20 09:52:00.522 280808 DEBUG nova.compute.manager [req-f4e7e41c-8b55-4df3-a232-0aa8c4da2154 req-bfa35ed8-7955-4a9e-953a-5d4282765b72 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Refreshing instance network info cache due to event network-changed-cf687b2d-16df-4973-a938-10a916e32626. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m Feb 20 04:52:00 localhost nova_compute[280804]: 2026-02-20 09:52:00.522 280808 DEBUG oslo_concurrency.lockutils [req-f4e7e41c-8b55-4df3-a232-0aa8c4da2154 req-bfa35ed8-7955-4a9e-953a-5d4282765b72 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Acquiring lock "refresh_cache-a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Feb 20 04:52:00 localhost nova_compute[280804]: 2026-02-20 09:52:00.678 280808 DEBUG nova.network.neutron [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m Feb 20 04:52:01 localhost nova_compute[280804]: 2026-02-20 09:52:01.291 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c. 
Feb 20 04:52:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217. Feb 20 04:52:01 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v128: 177 pgs: 177 active+clean; 273 MiB data, 960 MiB used, 41 GiB / 42 GiB avail; 6.0 MiB/s rd, 8.0 MiB/s wr, 294 op/s Feb 20 04:52:01 localhost podman[311985]: 2026-02-20 09:52:01.440765976 +0000 UTC m=+0.084507801 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, container_name=openstack_network_exporter, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1770267347, config_id=openstack_network_exporter, vendor=Red Hat, Inc., name=ubi9/ubi-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, version=9.7, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, maintainer=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, org.opencontainers.image.created=2026-02-05T04:57:10Z, build-date=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c) Feb 20 04:52:01 localhost podman[311985]: 2026-02-20 09:52:01.478694945 +0000 UTC m=+0.122436790 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, architecture=x86_64, vcs-type=git, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1770267347, version=9.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_id=openstack_network_exporter, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-05T04:57:10Z, name=ubi9/ubi-minimal, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-05T04:57:10Z, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.buildah.version=1.33.7) Feb 20 04:52:01 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. 
Feb 20 04:52:01 localhost podman[311986]: 2026-02-20 09:52:01.498399754 +0000 UTC m=+0.139649832 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, 
managed_by=edpm_ansible) Feb 20 04:52:01 localhost podman[311986]: 2026-02-20 09:52:01.511162997 +0000 UTC m=+0.152413065 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, 
container_name=ceilometer_agent_compute) Feb 20 04:52:01 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully. Feb 20 04:52:01 localhost nova_compute[280804]: 2026-02-20 09:52:01.973 280808 DEBUG nova.network.neutron [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Updating instance_info_cache with network_info: [{"id": "cf687b2d-16df-4973-a938-10a916e32626", "address": "fa:16:3e:ba:d1:ae", "network": {"id": "9ac533b5-90af-4cbe-be32-55de197d993c", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-596806407-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "f299da1b635f4dafbe62328983ad1fae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf687b2d-16", "ovs_interfaceid": "cf687b2d-16df-4973-a938-10a916e32626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Feb 20 04:52:02 localhost nova_compute[280804]: 2026-02-20 09:52:02.022 280808 DEBUG oslo_concurrency.lockutils [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Releasing lock 
"refresh_cache-a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Feb 20 04:52:02 localhost nova_compute[280804]: 2026-02-20 09:52:02.023 280808 DEBUG nova.compute.manager [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Instance network_info: |[{"id": "cf687b2d-16df-4973-a938-10a916e32626", "address": "fa:16:3e:ba:d1:ae", "network": {"id": "9ac533b5-90af-4cbe-be32-55de197d993c", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-596806407-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "f299da1b635f4dafbe62328983ad1fae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf687b2d-16", "ovs_interfaceid": "cf687b2d-16df-4973-a938-10a916e32626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m Feb 20 04:52:02 localhost nova_compute[280804]: 2026-02-20 09:52:02.024 280808 DEBUG oslo_concurrency.lockutils [req-f4e7e41c-8b55-4df3-a232-0aa8c4da2154 req-bfa35ed8-7955-4a9e-953a-5d4282765b72 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Acquired lock "refresh_cache-a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854" lock 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Feb 20 04:52:02 localhost nova_compute[280804]: 2026-02-20 09:52:02.024 280808 DEBUG nova.network.neutron [req-f4e7e41c-8b55-4df3-a232-0aa8c4da2154 req-bfa35ed8-7955-4a9e-953a-5d4282765b72 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Refreshing network info cache for port cf687b2d-16df-4973-a938-10a916e32626 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m Feb 20 04:52:02 localhost nova_compute[280804]: 2026-02-20 09:52:02.029 280808 DEBUG nova.virt.libvirt.driver [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Start _get_guest_xml network_info=[{"id": "cf687b2d-16df-4973-a938-10a916e32626", "address": "fa:16:3e:ba:d1:ae", "network": {"id": "9ac533b5-90af-4cbe-be32-55de197d993c", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-596806407-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "f299da1b635f4dafbe62328983ad1fae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf687b2d-16", "ovs_interfaceid": "cf687b2d-16df-4973-a938-10a916e32626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] 
disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-20T09:49:57Z,direct_url=,disk_format='qcow2',id=06bd71fd-c415-45d9-b669-46209b7ca2f4,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='91bce661d685472eb3e7cacab17bf52a',properties=ImageMetaProps,protected=,size=21430272,status='active',tags=,updated_at=2026-02-20T09:49:59Z,virtual_size=,visibility=) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '06bd71fd-c415-45d9-b669-46209b7ca2f4'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m Feb 20 04:52:02 localhost nova_compute[280804]: 2026-02-20 09:52:02.035 280808 WARNING nova.virt.libvirt.driver [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 20 04:52:02 localhost nova_compute[280804]: 2026-02-20 09:52:02.039 280808 DEBUG nova.virt.libvirt.host [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Searching host: 'np0005625202.localdomain' for CPU controller through CGroups V1... 
_has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m Feb 20 04:52:02 localhost nova_compute[280804]: 2026-02-20 09:52:02.040 280808 DEBUG nova.virt.libvirt.host [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m Feb 20 04:52:02 localhost nova_compute[280804]: 2026-02-20 09:52:02.049 280808 DEBUG nova.virt.libvirt.host [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Searching host: 'np0005625202.localdomain' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m Feb 20 04:52:02 localhost nova_compute[280804]: 2026-02-20 09:52:02.050 280808 DEBUG nova.virt.libvirt.host [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] CPU controller found on host. 
_has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m Feb 20 04:52:02 localhost nova_compute[280804]: 2026-02-20 09:52:02.050 280808 DEBUG nova.virt.libvirt.driver [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m Feb 20 04:52:02 localhost nova_compute[280804]: 2026-02-20 09:52:02.051 280808 DEBUG nova.virt.hardware [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-20T09:49:56Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='40a6f41a-8891-4900-942e-688a656af142',id=5,is_public=True,memory_mb=128,name='m1.nano',projects=,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-20T09:49:57Z,direct_url=,disk_format='qcow2',id=06bd71fd-c415-45d9-b669-46209b7ca2f4,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='91bce661d685472eb3e7cacab17bf52a',properties=ImageMetaProps,protected=,size=21430272,status='active',tags=,updated_at=2026-02-20T09:49:59Z,virtual_size=,visibility=), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m Feb 20 04:52:02 localhost nova_compute[280804]: 2026-02-20 09:52:02.052 280808 DEBUG nova.virt.hardware [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints 
/usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m Feb 20 04:52:02 localhost nova_compute[280804]: 2026-02-20 09:52:02.052 280808 DEBUG nova.virt.hardware [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m Feb 20 04:52:02 localhost nova_compute[280804]: 2026-02-20 09:52:02.052 280808 DEBUG nova.virt.hardware [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m Feb 20 04:52:02 localhost nova_compute[280804]: 2026-02-20 09:52:02.053 280808 DEBUG nova.virt.hardware [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m Feb 20 04:52:02 localhost nova_compute[280804]: 2026-02-20 09:52:02.053 280808 DEBUG nova.virt.hardware [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m Feb 20 04:52:02 localhost nova_compute[280804]: 2026-02-20 09:52:02.054 280808 DEBUG nova.virt.hardware [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies 
/usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m Feb 20 04:52:02 localhost nova_compute[280804]: 2026-02-20 09:52:02.054 280808 DEBUG nova.virt.hardware [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m Feb 20 04:52:02 localhost nova_compute[280804]: 2026-02-20 09:52:02.054 280808 DEBUG nova.virt.hardware [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m Feb 20 04:52:02 localhost nova_compute[280804]: 2026-02-20 09:52:02.055 280808 DEBUG nova.virt.hardware [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m Feb 20 04:52:02 localhost nova_compute[280804]: 2026-02-20 09:52:02.055 280808 DEBUG nova.virt.hardware [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m Feb 20 04:52:02 localhost nova_compute[280804]: 2026-02-20 09:52:02.060 280808 DEBUG oslo_concurrency.processutils [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute 
/usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:52:02 localhost podman[312040]: 2026-02-20 09:52:02.076341113 +0000 UTC m=+0.066268952 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true) Feb 20 04:52:02 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 7 addresses Feb 20 04:52:02 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:52:02 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:52:02 localhost nova_compute[280804]: 2026-02-20 09:52:02.507 280808 DEBUG nova.network.neutron [req-f4e7e41c-8b55-4df3-a232-0aa8c4da2154 req-bfa35ed8-7955-4a9e-953a-5d4282765b72 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Updated VIF entry in instance network info cache for port cf687b2d-16df-4973-a938-10a916e32626. 
_build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m Feb 20 04:52:02 localhost nova_compute[280804]: 2026-02-20 09:52:02.508 280808 DEBUG nova.network.neutron [req-f4e7e41c-8b55-4df3-a232-0aa8c4da2154 req-bfa35ed8-7955-4a9e-953a-5d4282765b72 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Updating instance_info_cache with network_info: [{"id": "cf687b2d-16df-4973-a938-10a916e32626", "address": "fa:16:3e:ba:d1:ae", "network": {"id": "9ac533b5-90af-4cbe-be32-55de197d993c", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-596806407-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "f299da1b635f4dafbe62328983ad1fae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf687b2d-16", "ovs_interfaceid": "cf687b2d-16df-4973-a938-10a916e32626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Feb 20 04:52:02 localhost nova_compute[280804]: 2026-02-20 09:52:02.511 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 
04:52:02 localhost nova_compute[280804]: 2026-02-20 09:52:02.511 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:52:02 localhost nova_compute[280804]: 2026-02-20 09:52:02.525 280808 DEBUG oslo_concurrency.lockutils [req-f4e7e41c-8b55-4df3-a232-0aa8c4da2154 req-bfa35ed8-7955-4a9e-953a-5d4282765b72 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Releasing lock "refresh_cache-a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Feb 20 04:52:02 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:52:02 localhost nova_compute[280804]: 2026-02-20 09:52:02.564 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:02 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Feb 20 04:52:02 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/1586058909' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Feb 20 04:52:02 localhost nova_compute[280804]: 2026-02-20 09:52:02.622 280808 DEBUG oslo_concurrency.processutils [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:52:02 localhost nova_compute[280804]: 2026-02-20 09:52:02.657 280808 DEBUG nova.storage.rbd_utils [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] rbd image a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 20 04:52:02 localhost nova_compute[280804]: 2026-02-20 09:52:02.663 280808 DEBUG oslo_concurrency.processutils [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:52:03 localhost nova_compute[280804]: 2026-02-20 09:52:03.061 280808 DEBUG nova.virt.driver [-] Emitting event Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Feb 20 04:52:03 localhost nova_compute[280804]: 2026-02-20 09:52:03.061 280808 INFO nova.compute.manager [-] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] VM Stopped (Lifecycle Event)#033[00m Feb 20 04:52:03 localhost nova_compute[280804]: 2026-02-20 09:52:03.079 280808 DEBUG nova.compute.manager [None req-11d7cdc7-f785-4640-a9f1-059aea6a8258 - - - - - -] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Checking state 
_get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 20 04:52:03 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Feb 20 04:52:03 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/1024349497' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Feb 20 04:52:03 localhost nova_compute[280804]: 2026-02-20 09:52:03.257 280808 DEBUG oslo_concurrency.processutils [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:52:03 localhost nova_compute[280804]: 2026-02-20 09:52:03.259 280808 DEBUG nova.virt.libvirt.vif [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-20T09:51:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='guest-instance-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005625202.localdomain',hostname='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-guest-test.domaintest.com',id=9,image_ref='06bd71fd-c415-45d9-b669-46209b7ca2f4',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP4HBmWTWAwNEADbjiDJf5tHgCRRZfcLPQLXoSs7XwsKBY2v7NIhgGC8mT1mpiGgORLTbH5q51ek5apsne3pw/Cm7opBG83ikRGVRTteWTD2fM2x4Io+fOZCjbL0t9ZRSg==',key_name='tempest-keypair-1784941554',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='np0005625202.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005625202.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f299da1b635f4dafbe62328983ad1fae',ramdisk_id='',reservation_id='r-o5tv2an3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='06bd71fd-c415-45d9-b669-46209b7ca2f4',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersV294TestFqdnHostnames-1083555691',owner_user_name='tempest-ServersV294TestFqdnHostnames-1083555691-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-20T09:51:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2188e6de9cae445dadfba1541701ebd2',uuid=a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cf687b2d-16df-4973-a938-10a916e32626", "address": "fa:16:3e:ba:d1:ae", "network": {"id": "9ac533b5-90af-4cbe-be32-55de197d993c", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-596806407-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "f299da1b635f4dafbe62328983ad1fae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf687b2d-16", "ovs_interfaceid": "cf687b2d-16df-4973-a938-10a916e32626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m Feb 20 04:52:03 localhost nova_compute[280804]: 2026-02-20 09:52:03.260 280808 DEBUG nova.network.os_vif_util [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Converting VIF {"id": "cf687b2d-16df-4973-a938-10a916e32626", "address": "fa:16:3e:ba:d1:ae", "network": {"id": "9ac533b5-90af-4cbe-be32-55de197d993c", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-596806407-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "f299da1b635f4dafbe62328983ad1fae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf687b2d-16", "ovs_interfaceid": "cf687b2d-16df-4973-a938-10a916e32626", "qbh_params": null, "qbg_params": 
null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Feb 20 04:52:03 localhost nova_compute[280804]: 2026-02-20 09:52:03.261 280808 DEBUG nova.network.os_vif_util [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:d1:ae,bridge_name='br-int',has_traffic_filtering=True,id=cf687b2d-16df-4973-a938-10a916e32626,network=Network(9ac533b5-90af-4cbe-be32-55de197d993c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf687b2d-16') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Feb 20 04:52:03 localhost nova_compute[280804]: 2026-02-20 09:52:03.264 280808 DEBUG nova.objects.instance [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Lazy-loading 'pci_devices' on Instance uuid a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Feb 20 04:52:03 localhost nova_compute[280804]: 2026-02-20 09:52:03.289 280808 DEBUG nova.virt.libvirt.driver [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] End _get_guest_xml xml= Feb 20 04:52:03 localhost nova_compute[280804]: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854 Feb 20 04:52:03 localhost nova_compute[280804]: instance-00000009 Feb 20 04:52:03 localhost nova_compute[280804]: 131072 Feb 20 04:52:03 localhost nova_compute[280804]: 1 Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 
20 04:52:03 localhost nova_compute[280804]: guest-instance-1 Feb 20 04:52:03 localhost nova_compute[280804]: 2026-02-20 09:52:02 Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: 128 Feb 20 04:52:03 localhost nova_compute[280804]: 1 Feb 20 04:52:03 localhost nova_compute[280804]: 0 Feb 20 04:52:03 localhost nova_compute[280804]: 0 Feb 20 04:52:03 localhost nova_compute[280804]: 1 Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: tempest-ServersV294TestFqdnHostnames-1083555691-project-member Feb 20 04:52:03 localhost nova_compute[280804]: tempest-ServersV294TestFqdnHostnames-1083555691 Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: RDO Feb 20 04:52:03 localhost nova_compute[280804]: OpenStack Compute Feb 20 04:52:03 localhost nova_compute[280804]: 27.5.2-0.20260127144738.eaa65f0.el9 Feb 20 04:52:03 localhost nova_compute[280804]: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854 Feb 20 04:52:03 localhost nova_compute[280804]: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854 Feb 20 04:52:03 localhost nova_compute[280804]: Virtual Machine Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: hvm Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 
04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost 
nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: /dev/urandom Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost 
nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: Feb 20 04:52:03 localhost nova_compute[280804]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m Feb 20 04:52:03 localhost nova_compute[280804]: 2026-02-20 09:52:03.290 280808 DEBUG nova.compute.manager [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Preparing to wait for external event network-vif-plugged-cf687b2d-16df-4973-a938-10a916e32626 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m Feb 20 04:52:03 localhost nova_compute[280804]: 2026-02-20 09:52:03.290 280808 DEBUG oslo_concurrency.lockutils [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Acquiring lock "a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:52:03 localhost nova_compute[280804]: 2026-02-20 09:52:03.291 280808 DEBUG oslo_concurrency.lockutils [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Lock "a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:52:03 localhost nova_compute[280804]: 2026-02-20 09:52:03.292 280808 DEBUG oslo_concurrency.lockutils [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] 
Lock "a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:52:03 localhost nova_compute[280804]: 2026-02-20 09:52:03.294 280808 DEBUG nova.virt.libvirt.vif [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-20T09:51:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='guest-instance-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005625202.localdomain',hostname='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-guest-test.domaintest.com',id=9,image_ref='06bd71fd-c415-45d9-b669-46209b7ca2f4',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP4HBmWTWAwNEADbjiDJf5tHgCRRZfcLPQLXoSs7XwsKBY2v7NIhgGC8mT1mpiGgORLTbH5q51ek5apsne3pw/Cm7opBG83ikRGVRTteWTD2fM2x4Io+fOZCjbL0t9ZRSg==',key_name='tempest-keypair-1784941554',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='np0005625202.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005625202.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f299da1b635f4dafbe62328983ad1fae',ramdisk_id='',reservation_id='r-o5tv2an3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='06bd71fd-c415-45d9-b669-46209b7ca2f4',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersV294TestFqdnHostnames-1083555691',owner_user_name='tempest-ServersV294TestFqdnHostnames-1083555691-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-20T09:51:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2188e6de9cae445dadfba1541701ebd2',uuid=a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cf687b2d-16df-4973-a938-10a916e32626", "address": "fa:16:3e:ba:d1:ae", "network": {"id": "9ac533b5-90af-4cbe-be32-55de197d993c", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-596806407-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "f299da1b635f4dafbe62328983ad1fae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf687b2d-16", "ovs_interfaceid": "cf687b2d-16df-4973-a938-10a916e32626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m Feb 20 04:52:03 localhost nova_compute[280804]: 2026-02-20 09:52:03.294 280808 DEBUG nova.network.os_vif_util [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Converting VIF {"id": "cf687b2d-16df-4973-a938-10a916e32626", "address": "fa:16:3e:ba:d1:ae", "network": {"id": "9ac533b5-90af-4cbe-be32-55de197d993c", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-596806407-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "f299da1b635f4dafbe62328983ad1fae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf687b2d-16", "ovs_interfaceid": "cf687b2d-16df-4973-a938-10a916e32626", "qbh_params": null, 
"qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Feb 20 04:52:03 localhost nova_compute[280804]: 2026-02-20 09:52:03.296 280808 DEBUG nova.network.os_vif_util [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:d1:ae,bridge_name='br-int',has_traffic_filtering=True,id=cf687b2d-16df-4973-a938-10a916e32626,network=Network(9ac533b5-90af-4cbe-be32-55de197d993c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf687b2d-16') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Feb 20 04:52:03 localhost nova_compute[280804]: 2026-02-20 09:52:03.296 280808 DEBUG os_vif [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:d1:ae,bridge_name='br-int',has_traffic_filtering=True,id=cf687b2d-16df-4973-a938-10a916e32626,network=Network(9ac533b5-90af-4cbe-be32-55de197d993c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf687b2d-16') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m Feb 20 04:52:03 localhost nova_compute[280804]: 2026-02-20 09:52:03.298 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:03 localhost nova_compute[280804]: 2026-02-20 09:52:03.298 280808 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 20 04:52:03 localhost nova_compute[280804]: 2026-02-20 09:52:03.299 280808 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Feb 20 04:52:03 localhost nova_compute[280804]: 2026-02-20 09:52:03.304 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:03 localhost nova_compute[280804]: 2026-02-20 09:52:03.305 280808 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcf687b2d-16, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 20 04:52:03 localhost nova_compute[280804]: 2026-02-20 09:52:03.305 280808 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcf687b2d-16, col_values=(('external_ids', {'iface-id': 'cf687b2d-16df-4973-a938-10a916e32626', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ba:d1:ae', 'vm-uuid': 'a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 20 04:52:03 localhost nova_compute[280804]: 2026-02-20 09:52:03.310 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 20 04:52:03 localhost nova_compute[280804]: 2026-02-20 09:52:03.315 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:03 localhost nova_compute[280804]: 2026-02-20 09:52:03.317 280808 INFO os_vif [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 
f299da1b635f4dafbe62328983ad1fae - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:d1:ae,bridge_name='br-int',has_traffic_filtering=True,id=cf687b2d-16df-4973-a938-10a916e32626,network=Network(9ac533b5-90af-4cbe-be32-55de197d993c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf687b2d-16')#033[00m Feb 20 04:52:03 localhost nova_compute[280804]: 2026-02-20 09:52:03.373 280808 DEBUG nova.virt.libvirt.driver [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Feb 20 04:52:03 localhost nova_compute[280804]: 2026-02-20 09:52:03.373 280808 DEBUG nova.virt.libvirt.driver [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] No BDM found with device name sda, not building metadata. 
_build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Feb 20 04:52:03 localhost nova_compute[280804]: 2026-02-20 09:52:03.374 280808 DEBUG nova.virt.libvirt.driver [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] No VIF found with MAC fa:16:3e:ba:d1:ae, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m Feb 20 04:52:03 localhost nova_compute[280804]: 2026-02-20 09:52:03.375 280808 INFO nova.virt.libvirt.driver [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Using config drive#033[00m Feb 20 04:52:03 localhost nova_compute[280804]: 2026-02-20 09:52:03.410 280808 DEBUG nova.storage.rbd_utils [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] rbd image a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 20 04:52:03 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v129: 177 pgs: 177 active+clean; 273 MiB data, 960 MiB used, 41 GiB / 42 GiB avail; 5.1 MiB/s rd, 6.8 MiB/s wr, 250 op/s Feb 20 04:52:03 localhost nova_compute[280804]: 2026-02-20 09:52:03.555 280808 INFO nova.virt.libvirt.driver [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Creating config drive at /var/lib/nova/instances/a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854/disk.config#033[00m Feb 20 04:52:03 localhost nova_compute[280804]: 2026-02-20 09:52:03.563 280808 DEBUG oslo_concurrency.processutils [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 
2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpe7y5bwmd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:52:03 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e107 do_prune osdmap full prune enabled Feb 20 04:52:03 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e108 e108: 6 total, 6 up, 6 in Feb 20 04:52:03 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e108: 6 total, 6 up, 6 in Feb 20 04:52:03 localhost nova_compute[280804]: 2026-02-20 09:52:03.695 280808 DEBUG oslo_concurrency.processutils [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpe7y5bwmd" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:52:03 localhost nova_compute[280804]: 2026-02-20 09:52:03.739 280808 DEBUG nova.storage.rbd_utils [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] rbd image a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 20 04:52:03 localhost nova_compute[280804]: 2026-02-20 09:52:03.744 280808 DEBUG oslo_concurrency.processutils [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 
f299da1b635f4dafbe62328983ad1fae - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854/disk.config a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:52:03 localhost nova_compute[280804]: 2026-02-20 09:52:03.955 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:04 localhost podman[312200]: 2026-02-20 09:52:04.176171871 +0000 UTC m=+0.057116035 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Feb 20 04:52:04 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 6 addresses Feb 20 04:52:04 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:52:04 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:52:04 localhost sshd[312217]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:52:04 localhost nova_compute[280804]: 2026-02-20 09:52:04.778 280808 DEBUG oslo_concurrency.processutils [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] CMD "rbd import --pool vms 
/var/lib/nova/instances/a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854/disk.config a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:52:04 localhost nova_compute[280804]: 2026-02-20 09:52:04.779 280808 INFO nova.virt.libvirt.driver [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Deleting local config drive /var/lib/nova/instances/a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854/disk.config because it was imported into RBD.#033[00m Feb 20 04:52:04 localhost kernel: device tapcf687b2d-16 entered promiscuous mode Feb 20 04:52:04 localhost ovn_controller[155916]: 2026-02-20T09:52:04Z|00111|binding|INFO|Claiming lport cf687b2d-16df-4973-a938-10a916e32626 for this chassis. Feb 20 04:52:04 localhost ovn_controller[155916]: 2026-02-20T09:52:04Z|00112|binding|INFO|cf687b2d-16df-4973-a938-10a916e32626: Claiming fa:16:3e:ba:d1:ae 10.100.0.13 Feb 20 04:52:04 localhost NetworkManager[5967]: [1771581124.8351] manager: (tapcf687b2d-16): new Tun device (/org/freedesktop/NetworkManager/Devices/26) Feb 20 04:52:04 localhost nova_compute[280804]: 2026-02-20 09:52:04.833 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:04 localhost systemd-udevd[312232]: Network interface NamePolicy= disabled on kernel command line. 
Feb 20 04:52:04 localhost NetworkManager[5967]: [1771581124.8479] device (tapcf687b2d-16): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external') Feb 20 04:52:04 localhost NetworkManager[5967]: [1771581124.8541] device (tapcf687b2d-16): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'external') Feb 20 04:52:04 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:04.851 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:d1:ae 10.100.0.13'], port_security=['fa:16:3e:ba:d1:ae 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ac533b5-90af-4cbe-be32-55de197d993c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f299da1b635f4dafbe62328983ad1fae', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4439e19b-bf91-4420-aff1-6854f961fef4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3caacf22-c5be-43ca-a327-69c0016b52bc, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=cf687b2d-16df-4973-a938-10a916e32626) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:52:04 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:04.853 161766 INFO neutron.agent.ovn.metadata.agent [-] Port 
cf687b2d-16df-4973-a938-10a916e32626 in datapath 9ac533b5-90af-4cbe-be32-55de197d993c bound to our chassis#033[00m Feb 20 04:52:04 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:04.857 161766 DEBUG neutron.agent.ovn.metadata.agent [-] Port 0d374b77-62f8-4586-b079-4fa2c2f3165f IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Feb 20 04:52:04 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:04.858 161766 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9ac533b5-90af-4cbe-be32-55de197d993c#033[00m Feb 20 04:52:04 localhost systemd-machined[205856]: New machine qemu-5-instance-00000009. Feb 20 04:52:04 localhost nova_compute[280804]: 2026-02-20 09:52:04.866 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:04 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:04.867 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[5d31198a-1432-4a1d-b8d2-df2a4533165b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:52:04 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:04.867 161766 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9ac533b5-91 in ovnmeta-9ac533b5-90af-4cbe-be32-55de197d993c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m Feb 20 04:52:04 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:04.871 263903 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9ac533b5-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m Feb 20 04:52:04 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:04.871 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[b3609078-0b66-4dc4-98dc-d397f885cf95]: (4, 
False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:52:04 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:04.872 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[9d2c808f-1e6e-49d1-8855-22f797d83baa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:52:04 localhost nova_compute[280804]: 2026-02-20 09:52:04.872 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:04 localhost systemd[1]: Started Virtual Machine qemu-5-instance-00000009. Feb 20 04:52:04 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:04.883 161893 DEBUG oslo.privsep.daemon [-] privsep: reply[477f7380-7149-4be4-b3a8-0220610fadda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:52:04 localhost ovn_controller[155916]: 2026-02-20T09:52:04Z|00113|binding|INFO|Setting lport cf687b2d-16df-4973-a938-10a916e32626 ovn-installed in OVS Feb 20 04:52:04 localhost ovn_controller[155916]: 2026-02-20T09:52:04Z|00114|binding|INFO|Setting lport cf687b2d-16df-4973-a938-10a916e32626 up in Southbound Feb 20 04:52:04 localhost nova_compute[280804]: 2026-02-20 09:52:04.889 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:04 localhost nova_compute[280804]: 2026-02-20 09:52:04.893 280808 DEBUG nova.virt.driver [-] Emitting event Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Feb 20 04:52:04 localhost nova_compute[280804]: 2026-02-20 09:52:04.893 280808 INFO nova.compute.manager [-] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] VM Stopped (Lifecycle Event)#033[00m Feb 20 04:52:04 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:04.896 263903 DEBUG oslo.privsep.daemon [-] privsep: 
reply[3d2a248a-a185-4877-aa24-839862ac0755]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:52:04 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:04.920 310825 DEBUG oslo.privsep.daemon [-] privsep: reply[5d4fdd7a-49d3-40d6-a687-ddbbdf0361b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:52:04 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:04.926 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[d675c0b7-c255-464b-bcf3-e1c14a1bf316]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:52:04 localhost NetworkManager[5967]: [1771581124.9275] manager: (tap9ac533b5-90): new Veth device (/org/freedesktop/NetworkManager/Devices/27) Feb 20 04:52:04 localhost systemd-udevd[312237]: Network interface NamePolicy= disabled on kernel command line. Feb 20 04:52:04 localhost nova_compute[280804]: 2026-02-20 09:52:04.951 280808 DEBUG nova.compute.manager [None req-108e3f8a-6f50-4e78-840b-f2cc990260fe - - - - - -] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 20 04:52:04 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:04.957 310825 DEBUG oslo.privsep.daemon [-] privsep: reply[1a311d7a-66f6-4589-8360-0760895cdd71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:52:04 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:04.961 310825 DEBUG oslo.privsep.daemon [-] privsep: reply[95241b78-77c8-4b79-a386-5e257db3e164]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:52:04 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap9ac533b5-91: link becomes ready Feb 20 04:52:04 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap9ac533b5-90: link becomes ready 
Feb 20 04:52:04 localhost NetworkManager[5967]: [1771581124.9805] device (tap9ac533b5-90): carrier: link connected Feb 20 04:52:04 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:04.985 310825 DEBUG oslo.privsep.daemon [-] privsep: reply[53b5c85d-96bc-4cf7-adc7-d55e53bf02c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:52:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:05.006 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[cfe07824-9fb5-46d9-abdf-6971a48af1ac]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9ac533b5-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:cb:bd:25'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 
'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1167116, 'reachable_time': 40033, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 
1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312269, 'error': None, 'target': 'ovnmeta-9ac533b5-90af-4cbe-be32-55de197d993c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:52:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:05.022 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[1573d304-759b-44bc-8bc6-fd41a6796e9e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecb:bd25'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1167116, 'tstamp': 1167116}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312286, 'error': None, 'target': 'ovnmeta-9ac533b5-90af-4cbe-be32-55de197d993c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:52:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:05.044 263903 DEBUG 
oslo.privsep.daemon [-] privsep: reply[4752f97f-e4c1-43f8-bed8-cf29992051c3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9ac533b5-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:cb:bd:25'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], 
['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1167116, 'reachable_time': 40033, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 
'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 312290, 'error': None, 'target': 'ovnmeta-9ac533b5-90af-4cbe-be32-55de197d993c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:52:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:05.074 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[87edac2f-0801-4a2c-9e9b-df77759f5092]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:52:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:05.134 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[777402f0-99e6-44ab-bd27-c5c825c49cd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:52:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:05.137 161766 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ac533b5-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 20 04:52:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:05.137 161766 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Feb 20 04:52:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:05.138 161766 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): 
AddPortCommand(_result=None, bridge=br-int, port=tap9ac533b5-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 20 04:52:05 localhost kernel: device tap9ac533b5-90 entered promiscuous mode Feb 20 04:52:05 localhost nova_compute[280804]: 2026-02-20 09:52:05.142 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:05.145 161766 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9ac533b5-90, col_values=(('external_ids', {'iface-id': '5e11c139-d549-4c68-b7a3-f8aaa8dc6cd2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 20 04:52:05 localhost nova_compute[280804]: 2026-02-20 09:52:05.146 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:05 localhost ovn_controller[155916]: 2026-02-20T09:52:05Z|00115|binding|INFO|Releasing lport 5e11c139-d549-4c68-b7a3-f8aaa8dc6cd2 from this chassis (sb_readonly=0) Feb 20 04:52:05 localhost nova_compute[280804]: 2026-02-20 09:52:05.150 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:05.150 161766 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9ac533b5-90af-4cbe-be32-55de197d993c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9ac533b5-90af-4cbe-be32-55de197d993c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m Feb 20 04:52:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:05.151 
263903 DEBUG oslo.privsep.daemon [-] privsep: reply[fac2add7-2f33-475d-9542-18e1ae8f973d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:52:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:05.152 161766 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = Feb 20 04:52:05 localhost ovn_metadata_agent[161761]: global Feb 20 04:52:05 localhost ovn_metadata_agent[161761]: log /dev/log local0 debug Feb 20 04:52:05 localhost ovn_metadata_agent[161761]: log-tag haproxy-metadata-proxy-9ac533b5-90af-4cbe-be32-55de197d993c Feb 20 04:52:05 localhost ovn_metadata_agent[161761]: user root Feb 20 04:52:05 localhost ovn_metadata_agent[161761]: group root Feb 20 04:52:05 localhost ovn_metadata_agent[161761]: maxconn 1024 Feb 20 04:52:05 localhost ovn_metadata_agent[161761]: pidfile /var/lib/neutron/external/pids/9ac533b5-90af-4cbe-be32-55de197d993c.pid.haproxy Feb 20 04:52:05 localhost ovn_metadata_agent[161761]: daemon Feb 20 04:52:05 localhost ovn_metadata_agent[161761]: Feb 20 04:52:05 localhost ovn_metadata_agent[161761]: defaults Feb 20 04:52:05 localhost ovn_metadata_agent[161761]: log global Feb 20 04:52:05 localhost ovn_metadata_agent[161761]: mode http Feb 20 04:52:05 localhost ovn_metadata_agent[161761]: option httplog Feb 20 04:52:05 localhost ovn_metadata_agent[161761]: option dontlognull Feb 20 04:52:05 localhost ovn_metadata_agent[161761]: option http-server-close Feb 20 04:52:05 localhost ovn_metadata_agent[161761]: option forwardfor Feb 20 04:52:05 localhost ovn_metadata_agent[161761]: retries 3 Feb 20 04:52:05 localhost ovn_metadata_agent[161761]: timeout http-request 30s Feb 20 04:52:05 localhost ovn_metadata_agent[161761]: timeout connect 30s Feb 20 04:52:05 localhost ovn_metadata_agent[161761]: timeout client 32s Feb 20 04:52:05 localhost ovn_metadata_agent[161761]: timeout server 32s Feb 20 04:52:05 localhost ovn_metadata_agent[161761]: timeout http-keep-alive 30s Feb 20 04:52:05 localhost 
ovn_metadata_agent[161761]: Feb 20 04:52:05 localhost ovn_metadata_agent[161761]: Feb 20 04:52:05 localhost ovn_metadata_agent[161761]: listen listener Feb 20 04:52:05 localhost ovn_metadata_agent[161761]: bind 169.254.169.254:80 Feb 20 04:52:05 localhost ovn_metadata_agent[161761]: server metadata /var/lib/neutron/metadata_proxy Feb 20 04:52:05 localhost ovn_metadata_agent[161761]: http-request add-header X-OVN-Network-ID 9ac533b5-90af-4cbe-be32-55de197d993c Feb 20 04:52:05 localhost ovn_metadata_agent[161761]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m Feb 20 04:52:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:05.154 161766 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9ac533b5-90af-4cbe-be32-55de197d993c', 'env', 'PROCESS_TAG=haproxy-9ac533b5-90af-4cbe-be32-55de197d993c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9ac533b5-90af-4cbe-be32-55de197d993c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m Feb 20 04:52:05 localhost nova_compute[280804]: 2026-02-20 09:52:05.161 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:05 localhost nova_compute[280804]: 2026-02-20 09:52:05.190 280808 DEBUG nova.virt.driver [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] Emitting event Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Feb 20 04:52:05 localhost nova_compute[280804]: 2026-02-20 09:52:05.191 280808 INFO nova.compute.manager [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] VM Started (Lifecycle Event)#033[00m Feb 20 04:52:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:05.214 161766 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] 
Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0a83b6be-9fe2-42ef-8768-88847d97b165, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 20 04:52:05 localhost nova_compute[280804]: 2026-02-20 09:52:05.243 280808 DEBUG nova.compute.manager [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 20 04:52:05 localhost nova_compute[280804]: 2026-02-20 09:52:05.247 280808 DEBUG nova.virt.driver [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] Emitting event Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Feb 20 04:52:05 localhost nova_compute[280804]: 2026-02-20 09:52:05.247 280808 INFO nova.compute.manager [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] VM Paused (Lifecycle Event)#033[00m Feb 20 04:52:05 localhost nova_compute[280804]: 2026-02-20 09:52:05.395 280808 DEBUG nova.compute.manager [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 20 04:52:05 localhost nova_compute[280804]: 2026-02-20 09:52:05.398 280808 DEBUG nova.compute.manager [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Feb 20 04:52:05 localhost ceph-mgr[286565]: log_channel(cluster) log 
[DBG] : pgmap v131: 177 pgs: 177 active+clean; 354 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 7.2 MiB/s rd, 8.5 MiB/s wr, 282 op/s Feb 20 04:52:05 localhost nova_compute[280804]: 2026-02-20 09:52:05.443 280808 INFO nova.compute.manager [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m Feb 20 04:52:05 localhost podman[312347]: Feb 20 04:52:05 localhost podman[312347]: 2026-02-20 09:52:05.552271894 +0000 UTC m=+0.087692617 container create adb69c9adde03cddcf25438dcfd78e4d17c7e8f39a9b3e8c05673b750db46e45 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9ac533b5-90af-4cbe-be32-55de197d993c, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:52:05 localhost systemd[1]: Started libpod-conmon-adb69c9adde03cddcf25438dcfd78e4d17c7e8f39a9b3e8c05673b750db46e45.scope. Feb 20 04:52:05 localhost podman[312347]: 2026-02-20 09:52:05.509478134 +0000 UTC m=+0.044898947 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Feb 20 04:52:05 localhost systemd[1]: Started libcrun container. 
Feb 20 04:52:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46d3e322bf523db0891bc4b815f416b05efc6923f8b7f1ef8019916453cc3e7c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 20 04:52:05 localhost podman[312347]: 2026-02-20 09:52:05.631874643 +0000 UTC m=+0.167295366 container init adb69c9adde03cddcf25438dcfd78e4d17c7e8f39a9b3e8c05673b750db46e45 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9ac533b5-90af-4cbe-be32-55de197d993c, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_managed=true) Feb 20 04:52:05 localhost podman[312347]: 2026-02-20 09:52:05.643230768 +0000 UTC m=+0.178651481 container start adb69c9adde03cddcf25438dcfd78e4d17c7e8f39a9b3e8c05673b750db46e45 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9ac533b5-90af-4cbe-be32-55de197d993c, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3) Feb 20 04:52:05 localhost neutron-haproxy-ovnmeta-9ac533b5-90af-4cbe-be32-55de197d993c[312362]: [NOTICE] (312366) : New worker (312368) forked Feb 20 04:52:05 localhost neutron-haproxy-ovnmeta-9ac533b5-90af-4cbe-be32-55de197d993c[312362]: [NOTICE] (312366) : Loading success. 
Feb 20 04:52:05 localhost nova_compute[280804]: 2026-02-20 09:52:05.888 280808 DEBUG nova.compute.manager [req-c7c109da-fcba-4493-9f0b-8d801ea23358 req-e96a7bb1-b9c9-4ee9-8bf1-6b149b079093 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Received event network-vif-plugged-cf687b2d-16df-4973-a938-10a916e32626 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Feb 20 04:52:05 localhost nova_compute[280804]: 2026-02-20 09:52:05.889 280808 DEBUG oslo_concurrency.lockutils [req-c7c109da-fcba-4493-9f0b-8d801ea23358 req-e96a7bb1-b9c9-4ee9-8bf1-6b149b079093 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Acquiring lock "a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:52:05 localhost nova_compute[280804]: 2026-02-20 09:52:05.890 280808 DEBUG oslo_concurrency.lockutils [req-c7c109da-fcba-4493-9f0b-8d801ea23358 req-e96a7bb1-b9c9-4ee9-8bf1-6b149b079093 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Lock "a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:52:05 localhost nova_compute[280804]: 2026-02-20 09:52:05.890 280808 DEBUG oslo_concurrency.lockutils [req-c7c109da-fcba-4493-9f0b-8d801ea23358 req-e96a7bb1-b9c9-4ee9-8bf1-6b149b079093 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Lock "a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:52:05 localhost nova_compute[280804]: 2026-02-20 09:52:05.891 280808 DEBUG nova.compute.manager [req-c7c109da-fcba-4493-9f0b-8d801ea23358 req-e96a7bb1-b9c9-4ee9-8bf1-6b149b079093 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Processing event network-vif-plugged-cf687b2d-16df-4973-a938-10a916e32626 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m Feb 20 04:52:05 localhost nova_compute[280804]: 2026-02-20 09:52:05.892 280808 DEBUG nova.compute.manager [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m Feb 20 04:52:05 localhost nova_compute[280804]: 2026-02-20 09:52:05.896 280808 DEBUG nova.virt.driver [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] Emitting event Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Feb 20 04:52:05 localhost nova_compute[280804]: 2026-02-20 09:52:05.896 280808 INFO nova.compute.manager [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] VM Resumed (Lifecycle Event)#033[00m Feb 20 04:52:05 localhost nova_compute[280804]: 2026-02-20 09:52:05.898 280808 DEBUG nova.virt.libvirt.driver [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m Feb 20 04:52:05 localhost nova_compute[280804]: 2026-02-20 
09:52:05.902 280808 INFO nova.virt.libvirt.driver [-] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Instance spawned successfully.#033[00m Feb 20 04:52:05 localhost nova_compute[280804]: 2026-02-20 09:52:05.903 280808 DEBUG nova.virt.libvirt.driver [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m Feb 20 04:52:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:05.918 161766 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:52:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:05.918 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:52:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:05.919 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:52:05 localhost nova_compute[280804]: 2026-02-20 09:52:05.931 280808 DEBUG nova.compute.manager [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Checking state _get_power_state 
/usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 20 04:52:05 localhost nova_compute[280804]: 2026-02-20 09:52:05.938 280808 DEBUG nova.compute.manager [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Feb 20 04:52:05 localhost nova_compute[280804]: 2026-02-20 09:52:05.943 280808 DEBUG nova.virt.libvirt.driver [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Feb 20 04:52:05 localhost nova_compute[280804]: 2026-02-20 09:52:05.944 280808 DEBUG nova.virt.libvirt.driver [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Feb 20 04:52:05 localhost nova_compute[280804]: 2026-02-20 09:52:05.944 280808 DEBUG nova.virt.libvirt.driver [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Feb 20 04:52:05 localhost nova_compute[280804]: 2026-02-20 09:52:05.945 280808 DEBUG nova.virt.libvirt.driver [None 
req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Feb 20 04:52:05 localhost nova_compute[280804]: 2026-02-20 09:52:05.946 280808 DEBUG nova.virt.libvirt.driver [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Feb 20 04:52:05 localhost nova_compute[280804]: 2026-02-20 09:52:05.946 280808 DEBUG nova.virt.libvirt.driver [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Feb 20 04:52:05 localhost nova_compute[280804]: 2026-02-20 09:52:05.977 280808 INFO nova.compute.manager [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] During sync_power_state the instance has a pending task (spawning). 
Skip.#033[00m Feb 20 04:52:06 localhost nova_compute[280804]: 2026-02-20 09:52:06.022 280808 INFO nova.compute.manager [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Took 11.58 seconds to spawn the instance on the hypervisor.#033[00m Feb 20 04:52:06 localhost nova_compute[280804]: 2026-02-20 09:52:06.023 280808 DEBUG nova.compute.manager [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 20 04:52:06 localhost nova_compute[280804]: 2026-02-20 09:52:06.244 280808 DEBUG nova.virt.driver [-] Emitting event Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Feb 20 04:52:06 localhost nova_compute[280804]: 2026-02-20 09:52:06.245 280808 INFO nova.compute.manager [-] [instance: 90eb8d1f-8d13-4395-9d15-67fdaa60632d] VM Stopped (Lifecycle Event)#033[00m Feb 20 04:52:06 localhost nova_compute[280804]: 2026-02-20 09:52:06.286 280808 DEBUG nova.compute.manager [None req-244ef746-f4d3-4faf-9fdd-4205d4df4aed - - - - - -] [instance: 90eb8d1f-8d13-4395-9d15-67fdaa60632d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 20 04:52:06 localhost nova_compute[280804]: 2026-02-20 09:52:06.311 280808 INFO nova.compute.manager [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Took 12.82 seconds to build instance.#033[00m Feb 20 04:52:06 localhost nova_compute[280804]: 2026-02-20 09:52:06.330 280808 DEBUG oslo_concurrency.lockutils [None req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c 
2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Lock "a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" :: held 12.895s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:52:07 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v132: 177 pgs: 177 active+clean; 354 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 5.9 MiB/s rd, 6.9 MiB/s wr, 228 op/s Feb 20 04:52:07 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:52:07 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e108 do_prune osdmap full prune enabled Feb 20 04:52:07 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e109 e109: 6 total, 6 up, 6 in Feb 20 04:52:07 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e109: 6 total, 6 up, 6 in Feb 20 04:52:08 localhost nova_compute[280804]: 2026-02-20 09:52:08.342 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 04:52:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. 
Feb 20 04:52:08 localhost podman[312378]: 2026-02-20 09:52:08.426111509 +0000 UTC m=+0.068084530 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127) Feb 20 04:52:08 localhost 
nova_compute[280804]: 2026-02-20 09:52:08.432 280808 DEBUG nova.compute.manager [req-d1779003-6985-4875-8f20-ffe72639af7e req-6b57ad3b-667c-429c-8c56-08042440e2e0 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Received event network-vif-plugged-cf687b2d-16df-4973-a938-10a916e32626 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Feb 20 04:52:08 localhost nova_compute[280804]: 2026-02-20 09:52:08.432 280808 DEBUG oslo_concurrency.lockutils [req-d1779003-6985-4875-8f20-ffe72639af7e req-6b57ad3b-667c-429c-8c56-08042440e2e0 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Acquiring lock "a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:52:08 localhost nova_compute[280804]: 2026-02-20 09:52:08.433 280808 DEBUG oslo_concurrency.lockutils [req-d1779003-6985-4875-8f20-ffe72639af7e req-6b57ad3b-667c-429c-8c56-08042440e2e0 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Lock "a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:52:08 localhost nova_compute[280804]: 2026-02-20 09:52:08.433 280808 DEBUG oslo_concurrency.lockutils [req-d1779003-6985-4875-8f20-ffe72639af7e req-6b57ad3b-667c-429c-8c56-08042440e2e0 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Lock "a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 
04:52:08 localhost nova_compute[280804]: 2026-02-20 09:52:08.433 280808 DEBUG nova.compute.manager [req-d1779003-6985-4875-8f20-ffe72639af7e req-6b57ad3b-667c-429c-8c56-08042440e2e0 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] No waiting events found dispatching network-vif-plugged-cf687b2d-16df-4973-a938-10a916e32626 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Feb 20 04:52:08 localhost nova_compute[280804]: 2026-02-20 09:52:08.433 280808 WARNING nova.compute.manager [req-d1779003-6985-4875-8f20-ffe72639af7e req-6b57ad3b-667c-429c-8c56-08042440e2e0 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Received unexpected event network-vif-plugged-cf687b2d-16df-4973-a938-10a916e32626 for instance with vm_state active and task_state None.#033[00m Feb 20 04:52:08 localhost podman[312378]: 2026-02-20 09:52:08.462595379 +0000 UTC m=+0.104568360 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': 
True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Feb 20 04:52:08 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. Feb 20 04:52:08 localhost systemd[1]: tmp-crun.NBqru5.mount: Deactivated successfully. 
Feb 20 04:52:08 localhost podman[312377]: 2026-02-20 09:52:08.536126885 +0000 UTC m=+0.177609793 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Feb 20 04:52:08 localhost podman[312377]: 2026-02-20 09:52:08.577531087 +0000 UTC m=+0.219013985 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller) Feb 20 04:52:08 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. 
Feb 20 04:52:08 localhost nova_compute[280804]: 2026-02-20 09:52:08.958 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:09 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v134: 177 pgs: 177 active+clean; 282 MiB data, 1017 MiB used, 41 GiB / 42 GiB avail; 8.2 MiB/s rd, 5.9 MiB/s wr, 254 op/s Feb 20 04:52:09 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:52:09.645 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:52:08Z, description=, device_id=3b959844-90d2-486b-9e34-b9eff25d51c3, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=b14e272c-cf56-41fa-ad6c-94c945f22d35, ip_allocation=immediate, mac_address=fa:16:3e:3a:c8:1b, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T08:22:50Z, description=, dns_domain=, id=84efa4de-646c-469c-b16c-6ab7c3e948cf, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=91bce661d685472eb3e7cacab17bf52a, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['38eee90c-2cdd-4118-ba55-5401f7e61e1e'], tags=[], tenant_id=91bce661d685472eb3e7cacab17bf52a, updated_at=2026-02-20T08:22:56Z, vlan_transparent=None, network_id=84efa4de-646c-469c-b16c-6ab7c3e948cf, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=746, 
status=DOWN, tags=[], tenant_id=, updated_at=2026-02-20T09:52:09Z on network 84efa4de-646c-469c-b16c-6ab7c3e948cf#033[00m Feb 20 04:52:09 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 7 addresses Feb 20 04:52:09 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:52:09 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:52:09 localhost podman[312435]: 2026-02-20 09:52:09.882795047 +0000 UTC m=+0.058948815 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2) Feb 20 04:52:10 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:52:10.070 263745 INFO neutron.agent.dhcp.agent [None req-eb6803d4-8da8-44b4-bf16-b3f7cb47d4f8 - - - - - -] DHCP configuration for ports {'b14e272c-cf56-41fa-ad6c-94c945f22d35'} is completed#033[00m Feb 20 04:52:10 localhost nova_compute[280804]: 2026-02-20 09:52:10.510 280808 DEBUG nova.compute.manager [req-c5e55ee0-35a3-41af-b095-406577601818 req-94f253ae-53f3-45a2-bb0f-124765bfa9f1 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Received event network-changed-cf687b2d-16df-4973-a938-10a916e32626 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Feb 20 04:52:10 localhost nova_compute[280804]: 2026-02-20 09:52:10.510 280808 DEBUG 
nova.compute.manager [req-c5e55ee0-35a3-41af-b095-406577601818 req-94f253ae-53f3-45a2-bb0f-124765bfa9f1 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Refreshing instance network info cache due to event network-changed-cf687b2d-16df-4973-a938-10a916e32626. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m Feb 20 04:52:10 localhost nova_compute[280804]: 2026-02-20 09:52:10.511 280808 DEBUG oslo_concurrency.lockutils [req-c5e55ee0-35a3-41af-b095-406577601818 req-94f253ae-53f3-45a2-bb0f-124765bfa9f1 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Acquiring lock "refresh_cache-a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Feb 20 04:52:10 localhost nova_compute[280804]: 2026-02-20 09:52:10.511 280808 DEBUG oslo_concurrency.lockutils [req-c5e55ee0-35a3-41af-b095-406577601818 req-94f253ae-53f3-45a2-bb0f-124765bfa9f1 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Acquired lock "refresh_cache-a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Feb 20 04:52:10 localhost nova_compute[280804]: 2026-02-20 09:52:10.511 280808 DEBUG nova.network.neutron [req-c5e55ee0-35a3-41af-b095-406577601818 req-94f253ae-53f3-45a2-bb0f-124765bfa9f1 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Refreshing network info cache for port cf687b2d-16df-4973-a938-10a916e32626 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m Feb 20 04:52:11 localhost podman[312471]: 2026-02-20 09:52:11.105839478 +0000 UTC m=+0.060957159 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Feb 20 04:52:11 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 6 addresses Feb 20 04:52:11 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:52:11 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:52:11 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v135: 177 pgs: 177 active+clean; 192 MiB data, 889 MiB used, 41 GiB / 42 GiB avail; 11 MiB/s rd, 5.9 MiB/s wr, 376 op/s Feb 20 04:52:11 localhost ovn_controller[155916]: 2026-02-20T09:52:11Z|00116|binding|INFO|Releasing lport 5e11c139-d549-4c68-b7a3-f8aaa8dc6cd2 from this chassis (sb_readonly=0) Feb 20 04:52:11 localhost nova_compute[280804]: 2026-02-20 09:52:11.782 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:11 localhost nova_compute[280804]: 2026-02-20 09:52:11.917 280808 DEBUG nova.network.neutron [req-c5e55ee0-35a3-41af-b095-406577601818 req-94f253ae-53f3-45a2-bb0f-124765bfa9f1 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Updated VIF entry in instance network info cache for port cf687b2d-16df-4973-a938-10a916e32626. 
_build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m Feb 20 04:52:11 localhost nova_compute[280804]: 2026-02-20 09:52:11.918 280808 DEBUG nova.network.neutron [req-c5e55ee0-35a3-41af-b095-406577601818 req-94f253ae-53f3-45a2-bb0f-124765bfa9f1 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Updating instance_info_cache with network_info: [{"id": "cf687b2d-16df-4973-a938-10a916e32626", "address": "fa:16:3e:ba:d1:ae", "network": {"id": "9ac533b5-90af-4cbe-be32-55de197d993c", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-596806407-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "f299da1b635f4dafbe62328983ad1fae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf687b2d-16", "ovs_interfaceid": "cf687b2d-16df-4973-a938-10a916e32626", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Feb 20 04:52:11 localhost nova_compute[280804]: 2026-02-20 09:52:11.949 280808 DEBUG oslo_concurrency.lockutils [req-c5e55ee0-35a3-41af-b095-406577601818 req-94f253ae-53f3-45a2-bb0f-124765bfa9f1 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - 
default default] Releasing lock "refresh_cache-a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Feb 20 04:52:11 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 5 addresses Feb 20 04:52:11 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:52:11 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:52:11 localhost podman[312508]: 2026-02-20 09:52:11.962123785 +0000 UTC m=+0.068113912 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true) Feb 20 04:52:12 localhost ovn_controller[155916]: 2026-02-20T09:52:12Z|00117|binding|INFO|Releasing lport 5e11c139-d549-4c68-b7a3-f8aaa8dc6cd2 from this chassis (sb_readonly=0) Feb 20 04:52:12 localhost nova_compute[280804]: 2026-02-20 09:52:12.049 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1. Feb 20 04:52:12 localhost systemd[1]: tmp-crun.J4epdP.mount: Deactivated successfully. 
Feb 20 04:52:12 localhost podman[312529]: 2026-02-20 09:52:12.44689011 +0000 UTC m=+0.082312773 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 20 04:52:12 localhost podman[312529]: 2026-02-20 09:52:12.463763493 +0000 UTC m=+0.099186176 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 20 04:52:12 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully. Feb 20 04:52:12 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:52:13 localhost nova_compute[280804]: 2026-02-20 09:52:13.345 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:13 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v136: 177 pgs: 177 active+clean; 192 MiB data, 889 MiB used, 41 GiB / 42 GiB avail; 9.2 MiB/s rd, 4.8 MiB/s wr, 308 op/s Feb 20 04:52:13 localhost nova_compute[280804]: 2026-02-20 09:52:13.962 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:13 localhost systemd[1]: tmp-crun.rpvbBi.mount: Deactivated successfully. 
Feb 20 04:52:13 localhost dnsmasq[308558]: read /var/lib/neutron/dhcp/2d38d28f-6e3b-40d7-8d0c-e95c89b81845/addn_hosts - 0 addresses Feb 20 04:52:13 localhost dnsmasq-dhcp[308558]: read /var/lib/neutron/dhcp/2d38d28f-6e3b-40d7-8d0c-e95c89b81845/host Feb 20 04:52:13 localhost dnsmasq-dhcp[308558]: read /var/lib/neutron/dhcp/2d38d28f-6e3b-40d7-8d0c-e95c89b81845/opts Feb 20 04:52:13 localhost podman[312569]: 2026-02-20 09:52:13.992091086 +0000 UTC m=+0.074258676 container kill c0430f42a70aa6b9c8035a1c37becbc5d719d9da192673809f6e7a117fac24ab (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2d38d28f-6e3b-40d7-8d0c-e95c89b81845, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:52:14 localhost ovn_controller[155916]: 2026-02-20T09:52:14Z|00118|binding|INFO|Releasing lport 9b59fcac-1972-4a35-9126-1aeb8964e5f2 from this chassis (sb_readonly=0) Feb 20 04:52:14 localhost kernel: device tap9b59fcac-19 left promiscuous mode Feb 20 04:52:14 localhost ovn_controller[155916]: 2026-02-20T09:52:14Z|00119|binding|INFO|Setting lport 9b59fcac-1972-4a35-9126-1aeb8964e5f2 down in Southbound Feb 20 04:52:14 localhost nova_compute[280804]: 2026-02-20 09:52:14.154 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:14 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:14.161 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], 
virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp77c820b4-9646-5dae-aabe-b13a62b01d06-2d38d28f-6e3b-40d7-8d0c-e95c89b81845', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2d38d28f-6e3b-40d7-8d0c-e95c89b81845', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5605ba7cb0df4223b48ebf8a1894cdf1', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625202.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=09397b48-122f-4b17-963b-b7ec7e0b12d2, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=9b59fcac-1972-4a35-9126-1aeb8964e5f2) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:52:14 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:14.163 161766 INFO neutron.agent.ovn.metadata.agent [-] Port 9b59fcac-1972-4a35-9126-1aeb8964e5f2 in datapath 2d38d28f-6e3b-40d7-8d0c-e95c89b81845 unbound from our chassis#033[00m Feb 20 04:52:14 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:14.166 161766 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2d38d28f-6e3b-40d7-8d0c-e95c89b81845, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 20 04:52:14 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:14.167 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[5ddebb4a-7f27-473a-9014-51172d65433d]: (4, False) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:52:14 localhost nova_compute[280804]: 2026-02-20 09:52:14.179 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:14 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:52:14.449 263745 INFO neutron.agent.linux.ip_lib [None req-43b78610-306d-4249-947e-942688b7cc9b - - - - - -] Device tap1aca63f6-ed cannot be used as it has no MAC address#033[00m Feb 20 04:52:14 localhost nova_compute[280804]: 2026-02-20 09:52:14.472 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:14 localhost kernel: device tap1aca63f6-ed entered promiscuous mode Feb 20 04:52:14 localhost nova_compute[280804]: 2026-02-20 09:52:14.480 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:14 localhost ovn_controller[155916]: 2026-02-20T09:52:14Z|00120|binding|INFO|Claiming lport 1aca63f6-edee-4855-ab97-ed3bc1e3df9a for this chassis. Feb 20 04:52:14 localhost ovn_controller[155916]: 2026-02-20T09:52:14Z|00121|binding|INFO|1aca63f6-edee-4855-ab97-ed3bc1e3df9a: Claiming unknown Feb 20 04:52:14 localhost NetworkManager[5967]: [1771581134.4820] manager: (tap1aca63f6-ed): new Generic device (/org/freedesktop/NetworkManager/Devices/28) Feb 20 04:52:14 localhost systemd-udevd[312600]: Network interface NamePolicy= disabled on kernel command line. 
Feb 20 04:52:14 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:14.496 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp77c820b4-9646-5dae-aabe-b13a62b01d06-a5c47ade-696f-4e2c-8179-96ce73f49dec', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a5c47ade-696f-4e2c-8179-96ce73f49dec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3c87ad0e253048e48d34b168f9948627', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c9c00034-b9e4-43bd-bd73-30e95a5c178f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=1aca63f6-edee-4855-ab97-ed3bc1e3df9a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:52:14 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:14.497 161766 INFO neutron.agent.ovn.metadata.agent [-] Port 1aca63f6-edee-4855-ab97-ed3bc1e3df9a in datapath a5c47ade-696f-4e2c-8179-96ce73f49dec bound to our chassis#033[00m Feb 20 04:52:14 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:14.499 161766 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a5c47ade-696f-4e2c-8179-96ce73f49dec or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 20 04:52:14 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:14.500 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[df28ce2b-7a7c-41f5-bb82-a30fe1a4a8e2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:52:14 localhost journal[229367]: ethtool ioctl error on tap1aca63f6-ed: No such device Feb 20 04:52:14 localhost ovn_controller[155916]: 2026-02-20T09:52:14Z|00122|binding|INFO|Setting lport 1aca63f6-edee-4855-ab97-ed3bc1e3df9a ovn-installed in OVS Feb 20 04:52:14 localhost ovn_controller[155916]: 2026-02-20T09:52:14Z|00123|binding|INFO|Setting lport 1aca63f6-edee-4855-ab97-ed3bc1e3df9a up in Southbound Feb 20 04:52:14 localhost nova_compute[280804]: 2026-02-20 09:52:14.521 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:14 localhost nova_compute[280804]: 2026-02-20 09:52:14.522 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:14 localhost journal[229367]: ethtool ioctl error on tap1aca63f6-ed: No such device Feb 20 04:52:14 localhost journal[229367]: ethtool ioctl error on tap1aca63f6-ed: No such device Feb 20 04:52:14 localhost journal[229367]: ethtool ioctl error on tap1aca63f6-ed: No such device Feb 20 04:52:14 localhost journal[229367]: ethtool ioctl error on tap1aca63f6-ed: No such device Feb 20 04:52:14 localhost journal[229367]: ethtool ioctl error on tap1aca63f6-ed: No such device Feb 20 04:52:14 localhost journal[229367]: ethtool ioctl error on tap1aca63f6-ed: No such device Feb 20 04:52:14 localhost journal[229367]: ethtool ioctl error on tap1aca63f6-ed: No such device Feb 20 04:52:14 localhost nova_compute[280804]: 2026-02-20 09:52:14.590 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 
__log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:15 localhost podman[312671]: Feb 20 04:52:15 localhost podman[312671]: 2026-02-20 09:52:15.343337302 +0000 UTC m=+0.072524150 container create 0c3fb88bfeecacb2cc405bcf8d6eeab615cdc680c9380f92e4a38c72fceba231 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a5c47ade-696f-4e2c-8179-96ce73f49dec, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true) Feb 20 04:52:15 localhost systemd[1]: Started libpod-conmon-0c3fb88bfeecacb2cc405bcf8d6eeab615cdc680c9380f92e4a38c72fceba231.scope. Feb 20 04:52:15 localhost systemd[1]: Started libcrun container. Feb 20 04:52:15 localhost podman[312671]: 2026-02-20 09:52:15.299766751 +0000 UTC m=+0.028953639 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 20 04:52:15 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a33fedda68f3d39f93a355a67990afb24eac485c7744cc79ed6bf06eb3597811/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 20 04:52:15 localhost podman[312671]: 2026-02-20 09:52:15.407594689 +0000 UTC m=+0.136781527 container init 0c3fb88bfeecacb2cc405bcf8d6eeab615cdc680c9380f92e4a38c72fceba231 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a5c47ade-696f-4e2c-8179-96ce73f49dec, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127) Feb 20 04:52:15 localhost podman[312671]: 2026-02-20 09:52:15.417423702 +0000 UTC m=+0.146610540 container start 0c3fb88bfeecacb2cc405bcf8d6eeab615cdc680c9380f92e4a38c72fceba231 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a5c47ade-696f-4e2c-8179-96ce73f49dec, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Feb 20 04:52:15 localhost dnsmasq[312689]: started, version 2.85 cachesize 150 Feb 20 04:52:15 localhost dnsmasq[312689]: DNS service limited to local subnets Feb 20 04:52:15 localhost dnsmasq[312689]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 20 04:52:15 localhost dnsmasq[312689]: warning: no upstream servers configured Feb 20 04:52:15 localhost dnsmasq-dhcp[312689]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 20 04:52:15 localhost dnsmasq[312689]: read /var/lib/neutron/dhcp/a5c47ade-696f-4e2c-8179-96ce73f49dec/addn_hosts - 0 addresses Feb 20 04:52:15 localhost dnsmasq-dhcp[312689]: read /var/lib/neutron/dhcp/a5c47ade-696f-4e2c-8179-96ce73f49dec/host Feb 20 04:52:15 localhost dnsmasq-dhcp[312689]: read /var/lib/neutron/dhcp/a5c47ade-696f-4e2c-8179-96ce73f49dec/opts Feb 20 04:52:15 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v137: 177 pgs: 177 active+clean; 192 MiB data, 823 MiB used, 41 GiB / 42 GiB avail; 3.6 MiB/s rd, 19 KiB/s wr, 168 op/s Feb 20 04:52:15 localhost sshd[312690]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:52:15 localhost neutron_dhcp_agent[263741]: 2026-02-20 
09:52:15.568 263745 INFO neutron.agent.dhcp.agent [None req-0f8ce0f6-3637-4169-a685-bef7f0a6235a - - - - - -] DHCP configuration for ports {'fa968331-65aa-41d3-af7c-ab2cd82cd6f8'} is completed#033[00m Feb 20 04:52:15 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:52:15.864 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:52:15Z, description=, device_id=b138a19b-f15e-43bb-81bd-163289a5b975, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=bbf55c0b-19e7-4473-94cc-449853760761, ip_allocation=immediate, mac_address=fa:16:3e:aa:0e:a7, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T08:22:50Z, description=, dns_domain=, id=84efa4de-646c-469c-b16c-6ab7c3e948cf, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=91bce661d685472eb3e7cacab17bf52a, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['38eee90c-2cdd-4118-ba55-5401f7e61e1e'], tags=[], tenant_id=91bce661d685472eb3e7cacab17bf52a, updated_at=2026-02-20T08:22:56Z, vlan_transparent=None, network_id=84efa4de-646c-469c-b16c-6ab7c3e948cf, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=785, status=DOWN, tags=[], tenant_id=, updated_at=2026-02-20T09:52:15Z on network 84efa4de-646c-469c-b16c-6ab7c3e948cf#033[00m Feb 20 04:52:15 localhost neutron_sriov_agent[256551]: 2026-02-20 09:52:15.942 2 INFO 
neutron.agent.securitygroups_rpc [None req-343265ee-aef8-4c0b-8b69-d5c79e80995b ad3bee90b7c843958ab29e9ae5697cd5 78fdd34f107b4ec7ac81795ecc3f677c - - default default] Security group member updated ['7f2f6730-5897-423d-9b80-6a0cf94c3a8f']#033[00m Feb 20 04:52:16 localhost podman[312710]: 2026-02-20 09:52:16.076452969 +0000 UTC m=+0.054283869 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2) Feb 20 04:52:16 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 6 addresses Feb 20 04:52:16 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:52:16 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:52:16 localhost podman[241347]: time="2026-02-20T09:52:16Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 20 04:52:16 localhost podman[241347]: @ - - [20/Feb/2026:09:52:16 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 164375 "" "Go-http-client/1.1" Feb 20 04:52:16 localhost podman[241347]: @ - - [20/Feb/2026:09:52:16 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20671 "" "Go-http-client/1.1" Feb 20 04:52:16 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:52:16.371 263745 INFO neutron.agent.dhcp.agent [None 
req-731d406b-4d79-4af7-9bff-1dec5809ccdd - - - - - -] DHCP configuration for ports {'bbf55c0b-19e7-4473-94cc-449853760761'} is completed#033[00m Feb 20 04:52:17 localhost ovn_controller[155916]: 2026-02-20T09:52:17Z|00124|binding|INFO|Releasing lport 5e11c139-d549-4c68-b7a3-f8aaa8dc6cd2 from this chassis (sb_readonly=0) Feb 20 04:52:17 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 5 addresses Feb 20 04:52:17 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:52:17 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:52:17 localhost podman[312747]: 2026-02-20 09:52:17.098498629 +0000 UTC m=+0.066836946 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:52:17 localhost nova_compute[280804]: 2026-02-20 09:52:17.120 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:17 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v138: 177 pgs: 177 active+clean; 192 MiB data, 823 MiB used, 41 GiB / 42 GiB avail; 3.6 MiB/s rd, 19 KiB/s wr, 168 op/s Feb 20 04:52:17 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:52:17 localhost dnsmasq[264017]: read 
/var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 4 addresses Feb 20 04:52:17 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:52:17 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:52:17 localhost podman[312784]: 2026-02-20 09:52:17.762004697 +0000 UTC m=+0.057265299 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Feb 20 04:52:17 localhost nova_compute[280804]: 2026-02-20 09:52:17.839 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:17 localhost ovn_controller[155916]: 2026-02-20T09:52:17Z|00125|binding|INFO|Releasing lport 5e11c139-d549-4c68-b7a3-f8aaa8dc6cd2 from this chassis (sb_readonly=0) Feb 20 04:52:17 localhost nova_compute[280804]: 2026-02-20 09:52:17.949 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:18 localhost ovn_controller[155916]: 2026-02-20T09:52:18Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ba:d1:ae 10.100.0.13 Feb 20 04:52:18 localhost ovn_controller[155916]: 2026-02-20T09:52:18Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ba:d1:ae 10.100.0.13 Feb 20 04:52:18 localhost nova_compute[280804]: 2026-02-20 09:52:18.389 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 
26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:18 localhost podman[312822]: 2026-02-20 09:52:18.848632712 +0000 UTC m=+0.059908791 container kill c0430f42a70aa6b9c8035a1c37becbc5d719d9da192673809f6e7a117fac24ab (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2d38d28f-6e3b-40d7-8d0c-e95c89b81845, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Feb 20 04:52:18 localhost dnsmasq[308558]: exiting on receipt of SIGTERM Feb 20 04:52:18 localhost systemd[1]: libpod-c0430f42a70aa6b9c8035a1c37becbc5d719d9da192673809f6e7a117fac24ab.scope: Deactivated successfully. Feb 20 04:52:18 localhost podman[312834]: 2026-02-20 09:52:18.912496658 +0000 UTC m=+0.051391852 container died c0430f42a70aa6b9c8035a1c37becbc5d719d9da192673809f6e7a117fac24ab (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2d38d28f-6e3b-40d7-8d0c-e95c89b81845, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0) Feb 20 04:52:18 localhost podman[312834]: 2026-02-20 09:52:18.951525787 +0000 UTC m=+0.090420941 container cleanup c0430f42a70aa6b9c8035a1c37becbc5d719d9da192673809f6e7a117fac24ab (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2d38d28f-6e3b-40d7-8d0c-e95c89b81845, org.label-schema.vendor=CentOS, 
io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:52:18 localhost systemd[1]: libpod-conmon-c0430f42a70aa6b9c8035a1c37becbc5d719d9da192673809f6e7a117fac24ab.scope: Deactivated successfully. Feb 20 04:52:18 localhost nova_compute[280804]: 2026-02-20 09:52:18.964 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:18 localhost podman[312836]: 2026-02-20 09:52:18.995809546 +0000 UTC m=+0.125526533 container remove c0430f42a70aa6b9c8035a1c37becbc5d719d9da192673809f6e7a117fac24ab (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2d38d28f-6e3b-40d7-8d0c-e95c89b81845, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true) Feb 20 04:52:19 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:52:19.275 263745 INFO neutron.agent.dhcp.agent [None req-255a6344-2c25-4318-8c2f-fe45eda2fa20 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:52:19 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:52:19.295 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:52:18Z, description=, device_id=b138a19b-f15e-43bb-81bd-163289a5b975, 
device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=6d5b3887-5c75-4ffd-a272-6b0a0d31fa0a, ip_allocation=immediate, mac_address=fa:16:3e:ab:83:1a, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:52:12Z, description=, dns_domain=, id=a5c47ade-696f-4e2c-8179-96ce73f49dec, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-SecurityGroupsNegativeTestJSON-1614852814-network, port_security_enabled=True, project_id=3c87ad0e253048e48d34b168f9948627, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=51513, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=767, status=ACTIVE, subnets=['fe00d3ab-ad0e-45f3-b386-e85a8de3129d'], tags=[], tenant_id=3c87ad0e253048e48d34b168f9948627, updated_at=2026-02-20T09:52:13Z, vlan_transparent=None, network_id=a5c47ade-696f-4e2c-8179-96ce73f49dec, port_security_enabled=False, project_id=3c87ad0e253048e48d34b168f9948627, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=793, status=DOWN, tags=[], tenant_id=3c87ad0e253048e48d34b168f9948627, updated_at=2026-02-20T09:52:18Z on network a5c47ade-696f-4e2c-8179-96ce73f49dec#033[00m Feb 20 04:52:19 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:52:19.347 263745 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:52:19 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v139: 177 pgs: 177 active+clean; 220 MiB data, 864 MiB used, 41 GiB / 42 GiB avail; 3.3 MiB/s rd, 1.5 MiB/s wr, 183 op/s Feb 20 04:52:19 localhost neutron_sriov_agent[256551]: 2026-02-20 09:52:19.524 2 INFO neutron.agent.securitygroups_rpc [None req-e2d6b938-f6f5-4317-a8d2-0776bdf5afe2 ad3bee90b7c843958ab29e9ae5697cd5 
78fdd34f107b4ec7ac81795ecc3f677c - - default default] Security group member updated ['7f2f6730-5897-423d-9b80-6a0cf94c3a8f']#033[00m Feb 20 04:52:19 localhost dnsmasq[312689]: read /var/lib/neutron/dhcp/a5c47ade-696f-4e2c-8179-96ce73f49dec/addn_hosts - 1 addresses Feb 20 04:52:19 localhost dnsmasq-dhcp[312689]: read /var/lib/neutron/dhcp/a5c47ade-696f-4e2c-8179-96ce73f49dec/host Feb 20 04:52:19 localhost dnsmasq-dhcp[312689]: read /var/lib/neutron/dhcp/a5c47ade-696f-4e2c-8179-96ce73f49dec/opts Feb 20 04:52:19 localhost podman[312879]: 2026-02-20 09:52:19.537550742 +0000 UTC m=+0.090743169 container kill 0c3fb88bfeecacb2cc405bcf8d6eeab615cdc680c9380f92e4a38c72fceba231 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a5c47ade-696f-4e2c-8179-96ce73f49dec, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Feb 20 04:52:19 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:52:19.707 263745 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:52:19 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:52:19.730 263745 INFO neutron.agent.dhcp.agent [None req-447382c9-51ee-4b10-a873-11f8009c5c33 - - - - - -] DHCP configuration for ports {'6d5b3887-5c75-4ffd-a272-6b0a0d31fa0a'} is completed#033[00m Feb 20 04:52:19 localhost systemd[1]: var-lib-containers-storage-overlay-f0e079002679090358ee2b7d0036a683ba73bf16dd2abaa467d159d20555eedd-merged.mount: Deactivated successfully. Feb 20 04:52:19 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c0430f42a70aa6b9c8035a1c37becbc5d719d9da192673809f6e7a117fac24ab-userdata-shm.mount: Deactivated successfully. 
Feb 20 04:52:19 localhost systemd[1]: run-netns-qdhcp\x2d2d38d28f\x2d6e3b\x2d40d7\x2d8d0c\x2de95c89b81845.mount: Deactivated successfully. Feb 20 04:52:21 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:52:21.354 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:52:18Z, description=, device_id=b138a19b-f15e-43bb-81bd-163289a5b975, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=6d5b3887-5c75-4ffd-a272-6b0a0d31fa0a, ip_allocation=immediate, mac_address=fa:16:3e:ab:83:1a, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:52:12Z, description=, dns_domain=, id=a5c47ade-696f-4e2c-8179-96ce73f49dec, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-SecurityGroupsNegativeTestJSON-1614852814-network, port_security_enabled=True, project_id=3c87ad0e253048e48d34b168f9948627, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=51513, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=767, status=ACTIVE, subnets=['fe00d3ab-ad0e-45f3-b386-e85a8de3129d'], tags=[], tenant_id=3c87ad0e253048e48d34b168f9948627, updated_at=2026-02-20T09:52:13Z, vlan_transparent=None, network_id=a5c47ade-696f-4e2c-8179-96ce73f49dec, port_security_enabled=False, project_id=3c87ad0e253048e48d34b168f9948627, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=793, status=DOWN, tags=[], tenant_id=3c87ad0e253048e48d34b168f9948627, updated_at=2026-02-20T09:52:18Z on network a5c47ade-696f-4e2c-8179-96ce73f49dec#033[00m Feb 20 04:52:21 localhost ceph-mgr[286565]: 
log_channel(cluster) log [DBG] : pgmap v140: 177 pgs: 177 active+clean; 225 MiB data, 889 MiB used, 41 GiB / 42 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 151 op/s Feb 20 04:52:21 localhost dnsmasq[312689]: read /var/lib/neutron/dhcp/a5c47ade-696f-4e2c-8179-96ce73f49dec/addn_hosts - 1 addresses Feb 20 04:52:21 localhost dnsmasq-dhcp[312689]: read /var/lib/neutron/dhcp/a5c47ade-696f-4e2c-8179-96ce73f49dec/host Feb 20 04:52:21 localhost podman[312916]: 2026-02-20 09:52:21.580868842 +0000 UTC m=+0.061166265 container kill 0c3fb88bfeecacb2cc405bcf8d6eeab615cdc680c9380f92e4a38c72fceba231 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a5c47ade-696f-4e2c-8179-96ce73f49dec, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:52:21 localhost dnsmasq-dhcp[312689]: read /var/lib/neutron/dhcp/a5c47ade-696f-4e2c-8179-96ce73f49dec/opts Feb 20 04:52:21 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:52:21.789 263745 INFO neutron.agent.dhcp.agent [None req-cb17f50c-7827-48e9-bb89-f28c46b3cc02 - - - - - -] DHCP configuration for ports {'6d5b3887-5c75-4ffd-a272-6b0a0d31fa0a'} is completed#033[00m Feb 20 04:52:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e. 
Feb 20 04:52:22 localhost podman[312938]: 2026-02-20 09:52:22.41966918 +0000 UTC m=+0.063734384 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 20 04:52:22 localhost podman[312938]: 2026-02-20 09:52:22.433900502 +0000 UTC m=+0.077965726 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, 
config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 20 04:52:22 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully. 
Feb 20 04:52:22 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:52:22 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 3 addresses Feb 20 04:52:22 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:52:22 localhost podman[312979]: 2026-02-20 09:52:22.814148878 +0000 UTC m=+0.057375182 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:52:22 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:52:23 localhost ceph-mgr[286565]: [balancer INFO root] Optimize plan auto_2026-02-20_09:52:23 Feb 20 04:52:23 localhost ceph-mgr[286565]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Feb 20 04:52:23 localhost ceph-mgr[286565]: [balancer INFO root] do_upmap Feb 20 04:52:23 localhost ceph-mgr[286565]: [balancer INFO root] pools ['manila_data', '.mgr', 'manila_metadata', 'vms', 'volumes', 'backups', 'images'] Feb 20 04:52:23 localhost ceph-mgr[286565]: [balancer INFO root] prepared 0/10 changes Feb 20 04:52:23 localhost nova_compute[280804]: 2026-02-20 09:52:23.392 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:23 localhost ceph-mgr[286565]: 
log_channel(cluster) log [DBG] : pgmap v141: 177 pgs: 177 active+clean; 225 MiB data, 889 MiB used, 41 GiB / 42 GiB avail; 373 KiB/s rd, 2.1 MiB/s wr, 70 op/s Feb 20 04:52:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 04:52:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:52:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 04:52:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:52:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 04:52:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:52:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] _maybe_adjust Feb 20 04:52:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:52:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Feb 20 04:52:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:52:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006571213271217197 of space, bias 1.0, pg target 1.3142426542434393 quantized to 32 (current 32) Feb 20 04:52:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:52:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 20 04:52:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:52:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, 
pg target 0.8555772569444443 quantized to 32 (current 32) Feb 20 04:52:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:52:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 20 04:52:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:52:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 20 04:52:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:52:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.0019465818676716918 quantized to 16 (current 16) Feb 20 04:52:23 localhost ceph-mgr[286565]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Feb 20 04:52:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 20 04:52:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 20 04:52:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: images, start_after= Feb 20 04:52:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 20 04:52:23 localhost ceph-mgr[286565]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Feb 20 04:52:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 20 04:52:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 20 04:52:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: images, start_after= Feb 20 04:52:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: 
backups, start_after= Feb 20 04:52:23 localhost nova_compute[280804]: 2026-02-20 09:52:23.967 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:24 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:24.769 161888 DEBUG eventlet.wsgi.server [-] (161888) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Feb 20 04:52:24 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:24.771 161888 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /openstack/latest/meta_data.json HTTP/1.0#015 Feb 20 04:52:24 localhost ovn_metadata_agent[161761]: Accept: */*#015 Feb 20 04:52:24 localhost ovn_metadata_agent[161761]: Connection: close#015 Feb 20 04:52:24 localhost ovn_metadata_agent[161761]: Content-Type: text/plain#015 Feb 20 04:52:24 localhost ovn_metadata_agent[161761]: Host: 169.254.169.254#015 Feb 20 04:52:24 localhost ovn_metadata_agent[161761]: User-Agent: curl/7.84.0#015 Feb 20 04:52:24 localhost ovn_metadata_agent[161761]: X-Forwarded-For: 10.100.0.13#015 Feb 20 04:52:24 localhost ovn_metadata_agent[161761]: X-Ovn-Network-Id: 9ac533b5-90af-4cbe-be32-55de197d993c __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Feb 20 04:52:25 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v142: 177 pgs: 177 active+clean; 225 MiB data, 890 MiB used, 41 GiB / 42 GiB avail; 373 KiB/s rd, 2.1 MiB/s wr, 70 op/s Feb 20 04:52:25 localhost dnsmasq[312689]: read /var/lib/neutron/dhcp/a5c47ade-696f-4e2c-8179-96ce73f49dec/addn_hosts - 0 addresses Feb 20 04:52:25 localhost podman[313017]: 2026-02-20 09:52:25.687657064 +0000 UTC m=+0.052986095 container kill 0c3fb88bfeecacb2cc405bcf8d6eeab615cdc680c9380f92e4a38c72fceba231 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a5c47ade-696f-4e2c-8179-96ce73f49dec, maintainer=OpenStack 
Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Feb 20 04:52:25 localhost dnsmasq-dhcp[312689]: read /var/lib/neutron/dhcp/a5c47ade-696f-4e2c-8179-96ce73f49dec/host Feb 20 04:52:25 localhost dnsmasq-dhcp[312689]: read /var/lib/neutron/dhcp/a5c47ade-696f-4e2c-8179-96ce73f49dec/opts Feb 20 04:52:25 localhost ovn_controller[155916]: 2026-02-20T09:52:25Z|00126|binding|INFO|Releasing lport 1aca63f6-edee-4855-ab97-ed3bc1e3df9a from this chassis (sb_readonly=0) Feb 20 04:52:25 localhost kernel: device tap1aca63f6-ed left promiscuous mode Feb 20 04:52:25 localhost nova_compute[280804]: 2026-02-20 09:52:25.849 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:25 localhost ovn_controller[155916]: 2026-02-20T09:52:25Z|00127|binding|INFO|Setting lport 1aca63f6-edee-4855-ab97-ed3bc1e3df9a down in Southbound Feb 20 04:52:25 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:25.866 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp77c820b4-9646-5dae-aabe-b13a62b01d06-a5c47ade-696f-4e2c-8179-96ce73f49dec', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a5c47ade-696f-4e2c-8179-96ce73f49dec', 'neutron:port_capabilities': '', 
'neutron:port_name': '', 'neutron:project_id': '3c87ad0e253048e48d34b168f9948627', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625202.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c9c00034-b9e4-43bd-bd73-30e95a5c178f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=1aca63f6-edee-4855-ab97-ed3bc1e3df9a) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:52:25 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:25.868 161766 INFO neutron.agent.ovn.metadata.agent [-] Port 1aca63f6-edee-4855-ab97-ed3bc1e3df9a in datapath a5c47ade-696f-4e2c-8179-96ce73f49dec unbound from our chassis#033[00m Feb 20 04:52:25 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:25.871 161766 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a5c47ade-696f-4e2c-8179-96ce73f49dec, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 20 04:52:25 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:25.873 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[1175a7dd-314c-49aa-b7e9-86e7fecf063f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:52:25 localhost nova_compute[280804]: 2026-02-20 09:52:25.875 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:26 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:26.754 161888 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m Feb 20 04:52:26 localhost 
haproxy-metadata-proxy-9ac533b5-90af-4cbe-be32-55de197d993c[312368]: 10.100.0.13:41864 [20/Feb/2026:09:52:24.768] listener listener/metadata 0/0/0/1986/1986 200 1657 - - ---- 1/1/0/0/0 0/0 "GET /openstack/latest/meta_data.json HTTP/1.1" Feb 20 04:52:26 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:26.754 161888 INFO eventlet.wsgi.server [-] 10.100.0.13, "GET /openstack/latest/meta_data.json HTTP/1.1" status: 200 len: 1673 time: 1.9837666#033[00m Feb 20 04:52:26 localhost nova_compute[280804]: 2026-02-20 09:52:26.902 280808 DEBUG oslo_concurrency.lockutils [None req-b5848cd8-466b-4ef0-b46b-f454b04eeec2 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Acquiring lock "a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854" by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:52:26 localhost nova_compute[280804]: 2026-02-20 09:52:26.902 280808 DEBUG oslo_concurrency.lockutils [None req-b5848cd8-466b-4ef0-b46b-f454b04eeec2 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Lock "a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854" acquired by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:52:26 localhost nova_compute[280804]: 2026-02-20 09:52:26.903 280808 DEBUG oslo_concurrency.lockutils [None req-b5848cd8-466b-4ef0-b46b-f454b04eeec2 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Acquiring lock "a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:52:26 localhost nova_compute[280804]: 2026-02-20 09:52:26.904 280808 DEBUG oslo_concurrency.lockutils [None 
req-b5848cd8-466b-4ef0-b46b-f454b04eeec2 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Lock "a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:52:26 localhost nova_compute[280804]: 2026-02-20 09:52:26.904 280808 DEBUG oslo_concurrency.lockutils [None req-b5848cd8-466b-4ef0-b46b-f454b04eeec2 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Lock "a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:52:26 localhost nova_compute[280804]: 2026-02-20 09:52:26.906 280808 INFO nova.compute.manager [None req-b5848cd8-466b-4ef0-b46b-f454b04eeec2 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Terminating instance#033[00m Feb 20 04:52:26 localhost nova_compute[280804]: 2026-02-20 09:52:26.907 280808 DEBUG nova.compute.manager [None req-b5848cd8-466b-4ef0-b46b-f454b04eeec2 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Start destroying the instance on the hypervisor. 
_shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m Feb 20 04:52:26 localhost kernel: device tapcf687b2d-16 left promiscuous mode Feb 20 04:52:26 localhost NetworkManager[5967]: [1771581146.9746] device (tapcf687b2d-16): state change: disconnected -> unmanaged (reason 'unmanaged', sys-iface-state: 'removed') Feb 20 04:52:27 localhost ovn_controller[155916]: 2026-02-20T09:52:27Z|00128|binding|INFO|Releasing lport cf687b2d-16df-4973-a938-10a916e32626 from this chassis (sb_readonly=0) Feb 20 04:52:27 localhost ovn_controller[155916]: 2026-02-20T09:52:27Z|00129|binding|INFO|Setting lport cf687b2d-16df-4973-a938-10a916e32626 down in Southbound Feb 20 04:52:27 localhost ovn_controller[155916]: 2026-02-20T09:52:27Z|00130|binding|INFO|Removing iface tapcf687b2d-16 ovn-installed in OVS Feb 20 04:52:27 localhost nova_compute[280804]: 2026-02-20 09:52:27.024 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:27 localhost nova_compute[280804]: 2026-02-20 09:52:27.029 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:27 localhost nova_compute[280804]: 2026-02-20 09:52:27.037 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:27 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:27.046 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:d1:ae 10.100.0.13'], port_security=['fa:16:3e:ba:d1:ae 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], 
ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ac533b5-90af-4cbe-be32-55de197d993c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f299da1b635f4dafbe62328983ad1fae', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4439e19b-bf91-4420-aff1-6854f961fef4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625202.localdomain', 'neutron:port_fip': '192.168.122.178'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3caacf22-c5be-43ca-a327-69c0016b52bc, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=cf687b2d-16df-4973-a938-10a916e32626) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:52:27 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:27.048 161766 INFO neutron.agent.ovn.metadata.agent [-] Port cf687b2d-16df-4973-a938-10a916e32626 in datapath 9ac533b5-90af-4cbe-be32-55de197d993c unbound from our chassis#033[00m Feb 20 04:52:27 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:27.051 161766 DEBUG neutron.agent.ovn.metadata.agent [-] Port 0d374b77-62f8-4586-b079-4fa2c2f3165f IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Feb 20 04:52:27 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:27.051 161766 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9ac533b5-90af-4cbe-be32-55de197d993c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 20 
04:52:27 localhost systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000009.scope: Deactivated successfully. Feb 20 04:52:27 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:27.052 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[83c023e7-0e1c-44ef-b009-11c77a6c45bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:52:27 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:27.053 161766 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9ac533b5-90af-4cbe-be32-55de197d993c namespace which is not needed anymore#033[00m Feb 20 04:52:27 localhost systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000009.scope: Consumed 12.470s CPU time. Feb 20 04:52:27 localhost systemd-machined[205856]: Machine qemu-5-instance-00000009 terminated. Feb 20 04:52:27 localhost NetworkManager[5967]: [1771581147.1325] manager: (tapcf687b2d-16): new Tun device (/org/freedesktop/NetworkManager/Devices/29) Feb 20 04:52:27 localhost nova_compute[280804]: 2026-02-20 09:52:27.149 280808 INFO nova.virt.libvirt.driver [-] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Instance destroyed successfully.#033[00m Feb 20 04:52:27 localhost nova_compute[280804]: 2026-02-20 09:52:27.150 280808 DEBUG nova.objects.instance [None req-b5848cd8-466b-4ef0-b46b-f454b04eeec2 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Lazy-loading 'resources' on Instance uuid a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Feb 20 04:52:27 localhost nova_compute[280804]: 2026-02-20 09:52:27.172 280808 DEBUG nova.virt.libvirt.vif [None req-b5848cd8-466b-4ef0-b46b-f454b04eeec2 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] vif_type=ovs 
instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-20T09:51:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description=None,display_name='guest-instance-1',ec2_ids=,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005625202.localdomain',hostname='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-guest-test.domaintest.com',id=9,image_ref='06bd71fd-c415-45d9-b669-46209b7ca2f4',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP4HBmWTWAwNEADbjiDJf5tHgCRRZfcLPQLXoSs7XwsKBY2v7NIhgGC8mT1mpiGgORLTbH5q51ek5apsne3pw/Cm7opBG83ikRGVRTteWTD2fM2x4Io+fOZCjbL0t9ZRSg==',key_name='tempest-keypair-1784941554',keypairs=,launch_index=0,launched_at=2026-02-20T09:52:06Z,launched_on='np0005625202.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=,new_flavor=None,node='np0005625202.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=,power_state=1,progress=0,project_id='f299da1b635f4dafbe62328983ad1fae',ramdisk_id='',reservation_id='r-o5tv2an3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='06bd71fd-c415-45d9-b669-46209b7ca2f4',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersV294TestFqdnHostnames-1083555691',owner_user_name=
'tempest-ServersV294TestFqdnHostnames-1083555691-project-member'},tags=,task_state='deleting',terminated_at=None,trusted_certs=,updated_at=2026-02-20T09:52:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2188e6de9cae445dadfba1541701ebd2',uuid=a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cf687b2d-16df-4973-a938-10a916e32626", "address": "fa:16:3e:ba:d1:ae", "network": {"id": "9ac533b5-90af-4cbe-be32-55de197d993c", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-596806407-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "f299da1b635f4dafbe62328983ad1fae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf687b2d-16", "ovs_interfaceid": "cf687b2d-16df-4973-a938-10a916e32626", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m Feb 20 04:52:27 localhost nova_compute[280804]: 2026-02-20 09:52:27.172 280808 DEBUG nova.network.os_vif_util [None req-b5848cd8-466b-4ef0-b46b-f454b04eeec2 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Converting VIF {"id": "cf687b2d-16df-4973-a938-10a916e32626", "address": 
"fa:16:3e:ba:d1:ae", "network": {"id": "9ac533b5-90af-4cbe-be32-55de197d993c", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-596806407-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "f299da1b635f4dafbe62328983ad1fae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf687b2d-16", "ovs_interfaceid": "cf687b2d-16df-4973-a938-10a916e32626", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Feb 20 04:52:27 localhost nova_compute[280804]: 2026-02-20 09:52:27.174 280808 DEBUG nova.network.os_vif_util [None req-b5848cd8-466b-4ef0-b46b-f454b04eeec2 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ba:d1:ae,bridge_name='br-int',has_traffic_filtering=True,id=cf687b2d-16df-4973-a938-10a916e32626,network=Network(9ac533b5-90af-4cbe-be32-55de197d993c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf687b2d-16') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Feb 20 04:52:27 localhost nova_compute[280804]: 2026-02-20 09:52:27.174 280808 DEBUG os_vif [None req-b5848cd8-466b-4ef0-b46b-f454b04eeec2 
2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ba:d1:ae,bridge_name='br-int',has_traffic_filtering=True,id=cf687b2d-16df-4973-a938-10a916e32626,network=Network(9ac533b5-90af-4cbe-be32-55de197d993c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf687b2d-16') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m Feb 20 04:52:27 localhost nova_compute[280804]: 2026-02-20 09:52:27.176 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:27 localhost nova_compute[280804]: 2026-02-20 09:52:27.177 280808 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf687b2d-16, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 20 04:52:27 localhost nova_compute[280804]: 2026-02-20 09:52:27.179 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:27 localhost nova_compute[280804]: 2026-02-20 09:52:27.181 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:27 localhost nova_compute[280804]: 2026-02-20 09:52:27.185 280808 INFO os_vif [None req-b5848cd8-466b-4ef0-b46b-f454b04eeec2 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ba:d1:ae,bridge_name='br-int',has_traffic_filtering=True,id=cf687b2d-16df-4973-a938-10a916e32626,network=Network(9ac533b5-90af-4cbe-be32-55de197d993c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf687b2d-16')#033[00m 
Feb 20 04:52:27 localhost systemd[1]: tmp-crun.7eZeqM.mount: Deactivated successfully. Feb 20 04:52:27 localhost neutron-haproxy-ovnmeta-9ac533b5-90af-4cbe-be32-55de197d993c[312362]: [NOTICE] (312366) : haproxy version is 2.8.14-c23fe91 Feb 20 04:52:27 localhost neutron-haproxy-ovnmeta-9ac533b5-90af-4cbe-be32-55de197d993c[312362]: [NOTICE] (312366) : path to executable is /usr/sbin/haproxy Feb 20 04:52:27 localhost neutron-haproxy-ovnmeta-9ac533b5-90af-4cbe-be32-55de197d993c[312362]: [WARNING] (312366) : Exiting Master process... Feb 20 04:52:27 localhost neutron-haproxy-ovnmeta-9ac533b5-90af-4cbe-be32-55de197d993c[312362]: [ALERT] (312366) : Current worker (312368) exited with code 143 (Terminated) Feb 20 04:52:27 localhost neutron-haproxy-ovnmeta-9ac533b5-90af-4cbe-be32-55de197d993c[312362]: [WARNING] (312366) : All workers exited. Exiting... (0) Feb 20 04:52:27 localhost systemd[1]: libpod-adb69c9adde03cddcf25438dcfd78e4d17c7e8f39a9b3e8c05673b750db46e45.scope: Deactivated successfully. 
Feb 20 04:52:27 localhost podman[313075]: 2026-02-20 09:52:27.266000252 +0000 UTC m=+0.089941329 container died adb69c9adde03cddcf25438dcfd78e4d17c7e8f39a9b3e8c05673b750db46e45 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9ac533b5-90af-4cbe-be32-55de197d993c, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3) Feb 20 04:52:27 localhost podman[313075]: 2026-02-20 09:52:27.302063 +0000 UTC m=+0.126004037 container cleanup adb69c9adde03cddcf25438dcfd78e4d17c7e8f39a9b3e8c05673b750db46e45 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9ac533b5-90af-4cbe-be32-55de197d993c, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:52:27 localhost podman[313106]: 2026-02-20 09:52:27.346834763 +0000 UTC m=+0.073968908 container cleanup adb69c9adde03cddcf25438dcfd78e4d17c7e8f39a9b3e8c05673b750db46e45 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9ac533b5-90af-4cbe-be32-55de197d993c, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_managed=true) Feb 20 04:52:27 localhost systemd[1]: libpod-conmon-adb69c9adde03cddcf25438dcfd78e4d17c7e8f39a9b3e8c05673b750db46e45.scope: Deactivated successfully. Feb 20 04:52:27 localhost podman[313118]: 2026-02-20 09:52:27.392733127 +0000 UTC m=+0.065334467 container remove adb69c9adde03cddcf25438dcfd78e4d17c7e8f39a9b3e8c05673b750db46e45 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9ac533b5-90af-4cbe-be32-55de197d993c, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:52:27 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:27.398 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[2a0fc4d8-a66d-4dd1-b7bf-c55e451533b1]: (4, ('Fri Feb 20 09:52:27 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-9ac533b5-90af-4cbe-be32-55de197d993c (adb69c9adde03cddcf25438dcfd78e4d17c7e8f39a9b3e8c05673b750db46e45)\nadb69c9adde03cddcf25438dcfd78e4d17c7e8f39a9b3e8c05673b750db46e45\nFri Feb 20 09:52:27 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-9ac533b5-90af-4cbe-be32-55de197d993c (adb69c9adde03cddcf25438dcfd78e4d17c7e8f39a9b3e8c05673b750db46e45)\nadb69c9adde03cddcf25438dcfd78e4d17c7e8f39a9b3e8c05673b750db46e45\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:52:27 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:27.400 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[22f2c215-89c7-408a-9243-95115169230d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:52:27 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:27.401 161766 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running 
txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ac533b5-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 20 04:52:27 localhost nova_compute[280804]: 2026-02-20 09:52:27.403 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:27 localhost kernel: device tap9ac533b5-90 left promiscuous mode Feb 20 04:52:27 localhost nova_compute[280804]: 2026-02-20 09:52:27.416 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:27 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:27.420 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[7fe0d1a5-5f2e-46af-b65c-27b4cc6da6a7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:52:27 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v143: 177 pgs: 177 active+clean; 225 MiB data, 890 MiB used, 41 GiB / 42 GiB avail; 369 KiB/s rd, 2.1 MiB/s wr, 65 op/s Feb 20 04:52:27 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:27.432 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[f7115060-e415-4d64-a850-46fd958ca20d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:52:27 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:27.434 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[a491bf71-0fb1-46fa-8ea4-8bec53041adb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:52:27 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:27.449 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[b2355389-449a-4698-8b85-ddd9ae0fc2fc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], 
['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 
0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1167110, 'reachable_time': 44416, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 
'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313139, 'error': None, 'target': 'ovnmeta-9ac533b5-90af-4cbe-be32-55de197d993c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:52:27 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:27.452 161893 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9ac533b5-90af-4cbe-be32-55de197d993c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m Feb 20 04:52:27 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:27.452 161893 DEBUG oslo.privsep.daemon [-] privsep: reply[b7cb3ccb-0cc8-4728-aca0-c20ad1932a3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:52:27 localhost nova_compute[280804]: 2026-02-20 09:52:27.501 280808 DEBUG nova.compute.manager [req-48ca5986-d4d5-4b0a-b00a-5fd2d61bd2a2 req-ed95a1d6-a3d7-445b-8e0f-9a34e518f064 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Received event network-vif-unplugged-cf687b2d-16df-4973-a938-10a916e32626 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Feb 20 04:52:27 localhost nova_compute[280804]: 2026-02-20 09:52:27.502 280808 DEBUG oslo_concurrency.lockutils [req-48ca5986-d4d5-4b0a-b00a-5fd2d61bd2a2 req-ed95a1d6-a3d7-445b-8e0f-9a34e518f064 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Acquiring lock "a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:52:27 localhost nova_compute[280804]: 2026-02-20 
09:52:27.502 280808 DEBUG oslo_concurrency.lockutils [req-48ca5986-d4d5-4b0a-b00a-5fd2d61bd2a2 req-ed95a1d6-a3d7-445b-8e0f-9a34e518f064 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Lock "a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:52:27 localhost nova_compute[280804]: 2026-02-20 09:52:27.503 280808 DEBUG oslo_concurrency.lockutils [req-48ca5986-d4d5-4b0a-b00a-5fd2d61bd2a2 req-ed95a1d6-a3d7-445b-8e0f-9a34e518f064 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Lock "a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:52:27 localhost nova_compute[280804]: 2026-02-20 09:52:27.503 280808 DEBUG nova.compute.manager [req-48ca5986-d4d5-4b0a-b00a-5fd2d61bd2a2 req-ed95a1d6-a3d7-445b-8e0f-9a34e518f064 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] No waiting events found dispatching network-vif-unplugged-cf687b2d-16df-4973-a938-10a916e32626 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Feb 20 04:52:27 localhost nova_compute[280804]: 2026-02-20 09:52:27.503 280808 DEBUG nova.compute.manager [req-48ca5986-d4d5-4b0a-b00a-5fd2d61bd2a2 req-ed95a1d6-a3d7-445b-8e0f-9a34e518f064 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Received event network-vif-unplugged-cf687b2d-16df-4973-a938-10a916e32626 for instance with task_state deleting. 
_process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m Feb 20 04:52:27 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:52:27 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 2 addresses Feb 20 04:52:27 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:52:27 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:52:27 localhost podman[313158]: 2026-02-20 09:52:27.777048972 +0000 UTC m=+0.069710704 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Feb 20 04:52:27 localhost nova_compute[280804]: 2026-02-20 09:52:27.836 280808 INFO nova.virt.libvirt.driver [None req-b5848cd8-466b-4ef0-b46b-f454b04eeec2 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Deleting instance files /var/lib/nova/instances/a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854_del#033[00m Feb 20 04:52:27 localhost nova_compute[280804]: 2026-02-20 09:52:27.837 280808 INFO nova.virt.libvirt.driver [None req-b5848cd8-466b-4ef0-b46b-f454b04eeec2 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Deletion 
of /var/lib/nova/instances/a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854_del complete#033[00m Feb 20 04:52:27 localhost nova_compute[280804]: 2026-02-20 09:52:27.903 280808 INFO nova.compute.manager [None req-b5848cd8-466b-4ef0-b46b-f454b04eeec2 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Took 1.00 seconds to destroy the instance on the hypervisor.#033[00m Feb 20 04:52:27 localhost nova_compute[280804]: 2026-02-20 09:52:27.904 280808 DEBUG oslo.service.loopingcall [None req-b5848cd8-466b-4ef0-b46b-f454b04eeec2 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m Feb 20 04:52:27 localhost nova_compute[280804]: 2026-02-20 09:52:27.904 280808 DEBUG nova.compute.manager [-] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m Feb 20 04:52:27 localhost nova_compute[280804]: 2026-02-20 09:52:27.905 280808 DEBUG nova.network.neutron [-] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m Feb 20 04:52:27 localhost nova_compute[280804]: 2026-02-20 09:52:27.974 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:28 localhost openstack_network_exporter[243776]: ERROR 09:52:28 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 20 04:52:28 localhost openstack_network_exporter[243776]: Feb 20 04:52:28 localhost openstack_network_exporter[243776]: ERROR 09:52:28 appctl.go:174: 
call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 20 04:52:28 localhost openstack_network_exporter[243776]: Feb 20 04:52:28 localhost systemd[1]: var-lib-containers-storage-overlay-46d3e322bf523db0891bc4b815f416b05efc6923f8b7f1ef8019916453cc3e7c-merged.mount: Deactivated successfully. Feb 20 04:52:28 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-adb69c9adde03cddcf25438dcfd78e4d17c7e8f39a9b3e8c05673b750db46e45-userdata-shm.mount: Deactivated successfully. Feb 20 04:52:28 localhost systemd[1]: run-netns-ovnmeta\x2d9ac533b5\x2d90af\x2d4cbe\x2dbe32\x2d55de197d993c.mount: Deactivated successfully. Feb 20 04:52:28 localhost podman[313196]: 2026-02-20 09:52:28.446634503 +0000 UTC m=+0.062595933 container kill 0c3fb88bfeecacb2cc405bcf8d6eeab615cdc680c9380f92e4a38c72fceba231 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a5c47ade-696f-4e2c-8179-96ce73f49dec, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0) Feb 20 04:52:28 localhost dnsmasq[312689]: exiting on receipt of SIGTERM Feb 20 04:52:28 localhost systemd[1]: libpod-0c3fb88bfeecacb2cc405bcf8d6eeab615cdc680c9380f92e4a38c72fceba231.scope: Deactivated successfully. 
Feb 20 04:52:28 localhost podman[313210]: 2026-02-20 09:52:28.521062543 +0000 UTC m=+0.057867566 container died 0c3fb88bfeecacb2cc405bcf8d6eeab615cdc680c9380f92e4a38c72fceba231 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a5c47ade-696f-4e2c-8179-96ce73f49dec, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:52:28 localhost podman[313210]: 2026-02-20 09:52:28.547913184 +0000 UTC m=+0.084718177 container cleanup 0c3fb88bfeecacb2cc405bcf8d6eeab615cdc680c9380f92e4a38c72fceba231 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a5c47ade-696f-4e2c-8179-96ce73f49dec, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127) Feb 20 04:52:28 localhost systemd[1]: libpod-conmon-0c3fb88bfeecacb2cc405bcf8d6eeab615cdc680c9380f92e4a38c72fceba231.scope: Deactivated successfully. 
Feb 20 04:52:28 localhost podman[313211]: 2026-02-20 09:52:28.604580826 +0000 UTC m=+0.135563263 container remove 0c3fb88bfeecacb2cc405bcf8d6eeab615cdc680c9380f92e4a38c72fceba231 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a5c47ade-696f-4e2c-8179-96ce73f49dec, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127) Feb 20 04:52:28 localhost neutron_sriov_agent[256551]: 2026-02-20 09:52:28.830 2 INFO neutron.agent.securitygroups_rpc [req-b5848cd8-466b-4ef0-b46b-f454b04eeec2 req-730cd1ba-0675-45ee-8c23-360f67ec8632 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Security group member updated ['4439e19b-bf91-4420-aff1-6854f961fef4']#033[00m Feb 20 04:52:28 localhost nova_compute[280804]: 2026-02-20 09:52:28.954 280808 DEBUG nova.network.neutron [-] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Feb 20 04:52:28 localhost nova_compute[280804]: 2026-02-20 09:52:28.970 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:28 localhost nova_compute[280804]: 2026-02-20 09:52:28.977 280808 INFO nova.compute.manager [-] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Took 1.07 seconds to deallocate network for instance.#033[00m Feb 20 04:52:28 localhost nova_compute[280804]: 2026-02-20 09:52:28.989 280808 DEBUG nova.compute.manager [req-265a89a5-0f64-41fb-8fe7-d8829518699d req-3b603c39-8213-4e64-b0e8-fac18976bb95 d4446b63864e47aebd18955a38393018 
f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Received event network-vif-deleted-cf687b2d-16df-4973-a938-10a916e32626 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Feb 20 04:52:29 localhost nova_compute[280804]: 2026-02-20 09:52:29.019 280808 DEBUG oslo_concurrency.lockutils [None req-b5848cd8-466b-4ef0-b46b-f454b04eeec2 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:52:29 localhost nova_compute[280804]: 2026-02-20 09:52:29.020 280808 DEBUG oslo_concurrency.lockutils [None req-b5848cd8-466b-4ef0-b46b-f454b04eeec2 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:52:29 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:52:29.044 263745 INFO neutron.agent.dhcp.agent [None req-f0d2e2f4-51ab-47af-8d30-bce40d7308c0 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:52:29 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:52:29.062 263745 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:52:29 localhost nova_compute[280804]: 2026-02-20 09:52:29.067 280808 DEBUG oslo_concurrency.processutils [None req-b5848cd8-466b-4ef0-b46b-f454b04eeec2 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m 
Feb 20 04:52:29 localhost dnsmasq[310417]: read /var/lib/neutron/dhcp/9ac533b5-90af-4cbe-be32-55de197d993c/addn_hosts - 1 addresses Feb 20 04:52:29 localhost dnsmasq-dhcp[310417]: read /var/lib/neutron/dhcp/9ac533b5-90af-4cbe-be32-55de197d993c/host Feb 20 04:52:29 localhost podman[313256]: 2026-02-20 09:52:29.114879218 +0000 UTC m=+0.062989514 container kill e4091699967ae4c5cb9d4215db5146e98c25e7d41f2925027f941e8170be7589 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9ac533b5-90af-4cbe-be32-55de197d993c, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Feb 20 04:52:29 localhost dnsmasq-dhcp[310417]: read /var/lib/neutron/dhcp/9ac533b5-90af-4cbe-be32-55de197d993c/opts Feb 20 04:52:29 localhost systemd[1]: var-lib-containers-storage-overlay-a33fedda68f3d39f93a355a67990afb24eac485c7744cc79ed6bf06eb3597811-merged.mount: Deactivated successfully. Feb 20 04:52:29 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0c3fb88bfeecacb2cc405bcf8d6eeab615cdc680c9380f92e4a38c72fceba231-userdata-shm.mount: Deactivated successfully. Feb 20 04:52:29 localhost systemd[1]: run-netns-qdhcp\x2da5c47ade\x2d696f\x2d4e2c\x2d8179\x2d96ce73f49dec.mount: Deactivated successfully. 
Feb 20 04:52:29 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:52:29.280 263745 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:52:29 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v144: 177 pgs: 177 active+clean; 176 MiB data, 802 MiB used, 41 GiB / 42 GiB avail; 380 KiB/s rd, 2.1 MiB/s wr, 80 op/s Feb 20 04:52:29 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 20 04:52:29 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/3541994609' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 20 04:52:29 localhost nova_compute[280804]: 2026-02-20 09:52:29.503 280808 DEBUG oslo_concurrency.processutils [None req-b5848cd8-466b-4ef0-b46b-f454b04eeec2 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:52:29 localhost nova_compute[280804]: 2026-02-20 09:52:29.508 280808 DEBUG nova.compute.provider_tree [None req-b5848cd8-466b-4ef0-b46b-f454b04eeec2 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Inventory has not changed in ProviderTree for provider: 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 20 04:52:29 localhost nova_compute[280804]: 2026-02-20 09:52:29.629 280808 DEBUG nova.compute.manager [req-8dad72f5-95bd-4ed3-bd5f-14aeb616d182 req-ac2ec6e5-43ab-4066-acb1-4fd77f44b652 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Received event network-vif-plugged-cf687b2d-16df-4973-a938-10a916e32626 
external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Feb 20 04:52:29 localhost nova_compute[280804]: 2026-02-20 09:52:29.630 280808 DEBUG oslo_concurrency.lockutils [req-8dad72f5-95bd-4ed3-bd5f-14aeb616d182 req-ac2ec6e5-43ab-4066-acb1-4fd77f44b652 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Acquiring lock "a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:52:29 localhost nova_compute[280804]: 2026-02-20 09:52:29.630 280808 DEBUG oslo_concurrency.lockutils [req-8dad72f5-95bd-4ed3-bd5f-14aeb616d182 req-ac2ec6e5-43ab-4066-acb1-4fd77f44b652 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Lock "a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:52:29 localhost nova_compute[280804]: 2026-02-20 09:52:29.630 280808 DEBUG oslo_concurrency.lockutils [req-8dad72f5-95bd-4ed3-bd5f-14aeb616d182 req-ac2ec6e5-43ab-4066-acb1-4fd77f44b652 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Lock "a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:52:29 localhost nova_compute[280804]: 2026-02-20 09:52:29.630 280808 DEBUG nova.compute.manager [req-8dad72f5-95bd-4ed3-bd5f-14aeb616d182 req-ac2ec6e5-43ab-4066-acb1-4fd77f44b652 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] No waiting events found dispatching 
network-vif-plugged-cf687b2d-16df-4973-a938-10a916e32626 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Feb 20 04:52:29 localhost nova_compute[280804]: 2026-02-20 09:52:29.630 280808 WARNING nova.compute.manager [req-8dad72f5-95bd-4ed3-bd5f-14aeb616d182 req-ac2ec6e5-43ab-4066-acb1-4fd77f44b652 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Received unexpected event network-vif-plugged-cf687b2d-16df-4973-a938-10a916e32626 for instance with vm_state deleted and task_state None.#033[00m Feb 20 04:52:29 localhost nova_compute[280804]: 2026-02-20 09:52:29.734 280808 DEBUG nova.scheduler.client.report [None req-b5848cd8-466b-4ef0-b46b-f454b04eeec2 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Inventory has not changed for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 20 04:52:29 localhost nova_compute[280804]: 2026-02-20 09:52:29.760 280808 DEBUG oslo_concurrency.lockutils [None req-b5848cd8-466b-4ef0-b46b-f454b04eeec2 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:52:29 localhost nova_compute[280804]: 2026-02-20 09:52:29.810 280808 INFO nova.scheduler.client.report [None 
req-b5848cd8-466b-4ef0-b46b-f454b04eeec2 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Deleted allocations for instance a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854#033[00m Feb 20 04:52:29 localhost nova_compute[280804]: 2026-02-20 09:52:29.893 280808 DEBUG oslo_concurrency.lockutils [None req-b5848cd8-466b-4ef0-b46b-f454b04eeec2 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Lock "a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854" "released" by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" :: held 2.990s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:52:30 localhost sshd[313301]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:52:31 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v145: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 177 KiB/s rd, 660 KiB/s wr, 52 op/s Feb 20 04:52:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c. Feb 20 04:52:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217. 
Feb 20 04:52:31 localhost podman[313304]: 2026-02-20 09:52:31.892906547 +0000 UTC m=+0.087449730 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true) Feb 20 04:52:31 localhost podman[313304]: 2026-02-20 09:52:31.903177863 +0000 UTC m=+0.097721026 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true) Feb 20 04:52:31 localhost podman[313303]: 2026-02-20 09:52:31.953164057 +0000 UTC m=+0.148287186 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, build-date=2026-02-05T04:57:10Z, distribution-scope=public, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, architecture=x86_64, name=ubi9/ubi-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., version=9.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, managed_by=edpm_ansible, release=1770267347, vcs-type=git) Feb 20 04:52:31 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully. Feb 20 04:52:31 localhost podman[313303]: 2026-02-20 09:52:31.998751581 +0000 UTC m=+0.193874710 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, build-date=2026-02-05T04:57:10Z, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.7, managed_by=edpm_ansible, distribution-scope=public, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, name=ubi9/ubi-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, architecture=x86_64) Feb 20 04:52:32 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. Feb 20 04:52:32 localhost podman[313357]: 2026-02-20 09:52:32.076628334 +0000 UTC m=+0.055142053 container kill e4091699967ae4c5cb9d4215db5146e98c25e7d41f2925027f941e8170be7589 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9ac533b5-90af-4cbe-be32-55de197d993c, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:52:32 localhost dnsmasq[310417]: read /var/lib/neutron/dhcp/9ac533b5-90af-4cbe-be32-55de197d993c/addn_hosts - 0 addresses Feb 20 04:52:32 localhost dnsmasq-dhcp[310417]: read /var/lib/neutron/dhcp/9ac533b5-90af-4cbe-be32-55de197d993c/host Feb 20 04:52:32 localhost dnsmasq-dhcp[310417]: read /var/lib/neutron/dhcp/9ac533b5-90af-4cbe-be32-55de197d993c/opts Feb 20 04:52:32 localhost nova_compute[280804]: 2026-02-20 09:52:32.179 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:32 localhost nova_compute[280804]: 2026-02-20 09:52:32.239 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:32 localhost ovn_controller[155916]: 2026-02-20T09:52:32Z|00131|binding|INFO|Releasing lport 
fbed669d-f70a-4531-87e3-8e1d261d93fd from this chassis (sb_readonly=0) Feb 20 04:52:32 localhost ovn_controller[155916]: 2026-02-20T09:52:32Z|00132|binding|INFO|Setting lport fbed669d-f70a-4531-87e3-8e1d261d93fd down in Southbound Feb 20 04:52:32 localhost kernel: device tapfbed669d-f7 left promiscuous mode Feb 20 04:52:32 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:32.256 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp77c820b4-9646-5dae-aabe-b13a62b01d06-9ac533b5-90af-4cbe-be32-55de197d993c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ac533b5-90af-4cbe-be32-55de197d993c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f299da1b635f4dafbe62328983ad1fae', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625202.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3caacf22-c5be-43ca-a327-69c0016b52bc, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=fbed669d-f70a-4531-87e3-8e1d261d93fd) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:52:32 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:32.258 161766 INFO neutron.agent.ovn.metadata.agent [-] Port fbed669d-f70a-4531-87e3-8e1d261d93fd in datapath 
9ac533b5-90af-4cbe-be32-55de197d993c unbound from our chassis#033[00m Feb 20 04:52:32 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:32.260 161766 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9ac533b5-90af-4cbe-be32-55de197d993c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 20 04:52:32 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:32.261 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[0697bcc6-b0b6-4c44-a67d-f0e3ae893262]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:52:32 localhost nova_compute[280804]: 2026-02-20 09:52:32.263 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:32 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:52:33 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v146: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s Feb 20 04:52:33 localhost sshd[313381]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:52:33 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 1 addresses Feb 20 04:52:33 localhost podman[313400]: 2026-02-20 09:52:33.828506773 +0000 UTC m=+0.060334222 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3) Feb 20 04:52:33 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:52:33 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:52:33 localhost nova_compute[280804]: 2026-02-20 09:52:33.893 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:33 localhost nova_compute[280804]: 2026-02-20 09:52:33.972 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:34 localhost sshd[313439]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:52:34 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:52:34.429 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:52:34Z, description=, device_id=3095f6e8-d4c1-4b47-b904-07c6a9deaaf2, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=9f328f18-72c1-4686-9f28-9acbc2d8de67, ip_allocation=immediate, mac_address=fa:16:3e:12:8f:fa, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T08:22:50Z, description=, dns_domain=, id=84efa4de-646c-469c-b16c-6ab7c3e948cf, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=91bce661d685472eb3e7cacab17bf52a, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, 
qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['38eee90c-2cdd-4118-ba55-5401f7e61e1e'], tags=[], tenant_id=91bce661d685472eb3e7cacab17bf52a, updated_at=2026-02-20T08:22:56Z, vlan_transparent=None, network_id=84efa4de-646c-469c-b16c-6ab7c3e948cf, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=850, status=DOWN, tags=[], tenant_id=, updated_at=2026-02-20T09:52:34Z on network 84efa4de-646c-469c-b16c-6ab7c3e948cf#033[00m Feb 20 04:52:34 localhost podman[313440]: 2026-02-20 09:52:34.48197555 +0000 UTC m=+0.057944548 container kill e4091699967ae4c5cb9d4215db5146e98c25e7d41f2925027f941e8170be7589 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9ac533b5-90af-4cbe-be32-55de197d993c, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127) Feb 20 04:52:34 localhost dnsmasq[310417]: exiting on receipt of SIGTERM Feb 20 04:52:34 localhost systemd[1]: tmp-crun.xgOozT.mount: Deactivated successfully. Feb 20 04:52:34 localhost systemd[1]: libpod-e4091699967ae4c5cb9d4215db5146e98c25e7d41f2925027f941e8170be7589.scope: Deactivated successfully. 
Feb 20 04:52:34 localhost podman[313466]: 2026-02-20 09:52:34.547546782 +0000 UTC m=+0.048480383 container died e4091699967ae4c5cb9d4215db5146e98c25e7d41f2925027f941e8170be7589 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9ac533b5-90af-4cbe-be32-55de197d993c, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0) Feb 20 04:52:34 localhost podman[313466]: 2026-02-20 09:52:34.658346088 +0000 UTC m=+0.159279599 container cleanup e4091699967ae4c5cb9d4215db5146e98c25e7d41f2925027f941e8170be7589 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9ac533b5-90af-4cbe-be32-55de197d993c, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true) Feb 20 04:52:34 localhost systemd[1]: libpod-conmon-e4091699967ae4c5cb9d4215db5146e98c25e7d41f2925027f941e8170be7589.scope: Deactivated successfully. 
Feb 20 04:52:34 localhost podman[313465]: 2026-02-20 09:52:34.677547044 +0000 UTC m=+0.174001496 container remove e4091699967ae4c5cb9d4215db5146e98c25e7d41f2925027f941e8170be7589 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9ac533b5-90af-4cbe-be32-55de197d993c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS) Feb 20 04:52:34 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:52:34.701 263745 INFO neutron.agent.dhcp.agent [None req-699216b0-185d-41ac-be67-a3381c78cf34 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:52:34 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 2 addresses Feb 20 04:52:34 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:52:34 localhost podman[313495]: 2026-02-20 09:52:34.745812329 +0000 UTC m=+0.117324763 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Feb 20 04:52:34 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:52:34 localhost systemd[1]: 
var-lib-containers-storage-overlay-869a1298b3fee34374f704128979138a71e2b6c6e81d0ace36f301dc5362c878-merged.mount: Deactivated successfully. Feb 20 04:52:34 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e4091699967ae4c5cb9d4215db5146e98c25e7d41f2925027f941e8170be7589-userdata-shm.mount: Deactivated successfully. Feb 20 04:52:34 localhost systemd[1]: run-netns-qdhcp\x2d9ac533b5\x2d90af\x2d4cbe\x2dbe32\x2d55de197d993c.mount: Deactivated successfully. Feb 20 04:52:34 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:52:34.832 263745 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:52:35 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:52:35.024 263745 INFO neutron.agent.dhcp.agent [None req-21950bb3-71ae-4a82-886a-86ab74a322c7 - - - - - -] DHCP configuration for ports {'9f328f18-72c1-4686-9f28-9acbc2d8de67'} is completed#033[00m Feb 20 04:52:35 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v147: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s Feb 20 04:52:35 localhost nova_compute[280804]: 2026-02-20 09:52:35.732 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:35 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:52:35 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 04:52:35 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 20 04:52:35 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' 
entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 04:52:35 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 20 04:52:35 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:52:35 localhost ceph-mgr[286565]: [progress INFO root] update: starting ev eb780364-86ad-4ce5-8690-9336fba23be3 (Updating node-proxy deployment (+3 -> 3)) Feb 20 04:52:35 localhost ceph-mgr[286565]: [progress INFO root] complete: finished ev eb780364-86ad-4ce5-8690-9336fba23be3 (Updating node-proxy deployment (+3 -> 3)) Feb 20 04:52:35 localhost ceph-mgr[286565]: [progress INFO root] Completed event eb780364-86ad-4ce5-8690-9336fba23be3 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Feb 20 04:52:35 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 20 04:52:35 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 20 04:52:35 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 04:52:35 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:52:36 localhost neutron_sriov_agent[256551]: 2026-02-20 09:52:36.830 2 INFO neutron.agent.securitygroups_rpc [None req-d63b0875-b3f1-4849-b165-16313644e666 eab28fccca6a48139a7d8b395d8f0b9a dc182b0a7cbb4e47b6b88befc2c48022 - - default default] Security group member updated ['8d0cb685-1e0f-43aa-973a-a081d9962496']#033[00m Feb 20 04:52:37 localhost 
nova_compute[280804]: 2026-02-20 09:52:37.181 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:37 localhost neutron_sriov_agent[256551]: 2026-02-20 09:52:37.391 2 INFO neutron.agent.securitygroups_rpc [None req-51ac2042-ec94-4975-95ca-42a72712c92b eab28fccca6a48139a7d8b395d8f0b9a dc182b0a7cbb4e47b6b88befc2c48022 - - default default] Security group member updated ['8d0cb685-1e0f-43aa-973a-a081d9962496']#033[00m Feb 20 04:52:37 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v148: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s Feb 20 04:52:37 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:52:38 localhost ceph-mgr[286565]: [progress INFO root] Writing back 50 completed events Feb 20 04:52:38 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 20 04:52:38 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:52:38 localhost nova_compute[280804]: 2026-02-20 09:52:38.947 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:38 localhost nova_compute[280804]: 2026-02-20 09:52:38.977 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:38 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:52:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 04:52:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. Feb 20 04:52:39 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v149: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s Feb 20 04:52:39 localhost podman[313603]: 2026-02-20 09:52:39.4495715 +0000 UTC m=+0.087386129 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Feb 20 04:52:39 
localhost podman[313603]: 2026-02-20 09:52:39.491912807 +0000 UTC m=+0.129727446 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3) Feb 20 04:52:39 localhost podman[313604]: 2026-02-20 09:52:39.499928043 +0000 UTC m=+0.134717851 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:52:39 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. 
Feb 20 04:52:39 localhost podman[313604]: 2026-02-20 09:52:39.509067768 +0000 UTC m=+0.143857566 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true) Feb 20 04:52:39 localhost systemd[1]: 
eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. Feb 20 04:52:41 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v150: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 8.8 KiB/s rd, 341 B/s wr, 12 op/s Feb 20 04:52:41 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:52:41.877 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:52:41Z, description=, device_id=d472f0b4-01df-4346-9239-5246395c8051, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=7458fe4f-98fb-4838-b02c-666d226eaa7e, ip_allocation=immediate, mac_address=fa:16:3e:d1:e2:2c, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T08:22:50Z, description=, dns_domain=, id=84efa4de-646c-469c-b16c-6ab7c3e948cf, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=91bce661d685472eb3e7cacab17bf52a, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['38eee90c-2cdd-4118-ba55-5401f7e61e1e'], tags=[], tenant_id=91bce661d685472eb3e7cacab17bf52a, updated_at=2026-02-20T08:22:56Z, vlan_transparent=None, network_id=84efa4de-646c-469c-b16c-6ab7c3e948cf, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=926, status=DOWN, tags=[], tenant_id=, updated_at=2026-02-20T09:52:41Z on network 
84efa4de-646c-469c-b16c-6ab7c3e948cf#033[00m Feb 20 04:52:42 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 3 addresses Feb 20 04:52:42 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:52:42 localhost podman[313662]: 2026-02-20 09:52:42.086219262 +0000 UTC m=+0.058149074 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3) Feb 20 04:52:42 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:52:42 localhost nova_compute[280804]: 2026-02-20 09:52:42.179 280808 DEBUG nova.virt.driver [-] Emitting event Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Feb 20 04:52:42 localhost nova_compute[280804]: 2026-02-20 09:52:42.179 280808 INFO nova.compute.manager [-] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] VM Stopped (Lifecycle Event)#033[00m Feb 20 04:52:42 localhost nova_compute[280804]: 2026-02-20 09:52:42.182 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:42 localhost nova_compute[280804]: 2026-02-20 09:52:42.198 280808 DEBUG nova.compute.manager [None req-9460beec-5540-4210-91ed-89d5be9c9451 - - - - - -] [instance: a89c7dc3-af8d-43eb-9b8b-1ab6b90c8854] Checking state _get_power_state 
/usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 20 04:52:42 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:52:42.306 263745 INFO neutron.agent.dhcp.agent [None req-819505ac-eb58-4516-be8e-17aadecc4aec - - - - - -] DHCP configuration for ports {'7458fe4f-98fb-4838-b02c-666d226eaa7e'} is completed#033[00m Feb 20 04:52:42 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:52:42 localhost ceph-mon[292786]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #43. Immutable memtables: 0. Feb 20 04:52:42 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:52:42.591131) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Feb 20 04:52:42 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 43 Feb 20 04:52:42 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581162591179, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1030, "num_deletes": 253, "total_data_size": 912062, "memory_usage": 931000, "flush_reason": "Manual Compaction"} Feb 20 04:52:42 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #44: started Feb 20 04:52:42 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581162597374, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 44, "file_size": 646394, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24993, "largest_seqno": 26022, "table_properties": {"data_size": 642460, "index_size": 1597, "index_partitions": 0, "top_level_index_size": 0, 
"index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10892, "raw_average_key_size": 21, "raw_value_size": 633788, "raw_average_value_size": 1245, "num_data_blocks": 71, "num_entries": 509, "num_filter_entries": 509, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771581095, "oldest_key_time": 1771581095, "file_creation_time": 1771581162, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a5b0c71e-1a28-4ac7-8b68-08edb74002f2", "db_session_id": "54EDA52XUT1SDV7DF7Y7", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}} Feb 20 04:52:42 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 6309 microseconds, and 3389 cpu microseconds. Feb 20 04:52:42 localhost ceph-mon[292786]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Feb 20 04:52:42 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:52:42.597445) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #44: 646394 bytes OK Feb 20 04:52:42 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:52:42.597466) [db/memtable_list.cc:519] [default] Level-0 commit table #44 started Feb 20 04:52:42 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:52:42.601339) [db/memtable_list.cc:722] [default] Level-0 commit table #44: memtable #1 done Feb 20 04:52:42 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:52:42.601361) EVENT_LOG_v1 {"time_micros": 1771581162601355, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Feb 20 04:52:42 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:52:42.601411) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Feb 20 04:52:42 localhost ceph-mon[292786]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 907155, prev total WAL file size 907479, number of live WAL files 2. Feb 20 04:52:42 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000040.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 20 04:52:42 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:52:42.601994) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033373536' seq:72057594037927935, type:22 .. 
'6D6772737461740034303037' seq:0, type:0; will stop at (end) Feb 20 04:52:42 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00 Feb 20 04:52:42 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [44(631KB)], [42(18MB)] Feb 20 04:52:42 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581162602046, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [44], "files_L6": [42], "score": -1, "input_data_size": 19536765, "oldest_snapshot_seqno": -1} Feb 20 04:52:42 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #45: 12298 keys, 17598526 bytes, temperature: kUnknown Feb 20 04:52:42 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581162689547, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 45, "file_size": 17598526, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17530103, "index_size": 36578, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30789, "raw_key_size": 328417, "raw_average_key_size": 26, "raw_value_size": 17322420, "raw_average_value_size": 1408, "num_data_blocks": 1390, "num_entries": 12298, "num_filter_entries": 12298, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; 
strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580572, "oldest_key_time": 0, "file_creation_time": 1771581162, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a5b0c71e-1a28-4ac7-8b68-08edb74002f2", "db_session_id": "54EDA52XUT1SDV7DF7Y7", "orig_file_number": 45, "seqno_to_time_mapping": "N/A"}} Feb 20 04:52:42 localhost ceph-mon[292786]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Feb 20 04:52:42 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:52:42.689901) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 17598526 bytes Feb 20 04:52:42 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:52:42.691565) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 223.0 rd, 200.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 18.0 +0.0 blob) out(16.8 +0.0 blob), read-write-amplify(57.4) write-amplify(27.2) OK, records in: 12797, records dropped: 499 output_compression: NoCompression Feb 20 04:52:42 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:52:42.691600) EVENT_LOG_v1 {"time_micros": 1771581162691585, "job": 24, "event": "compaction_finished", "compaction_time_micros": 87613, "compaction_time_cpu_micros": 50896, "output_level": 6, "num_output_files": 1, "total_output_size": 17598526, "num_input_records": 12797, "num_output_records": 12298, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Feb 20 04:52:42 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file 
/var/lib/ceph/mon/ceph-np0005625202/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 20 04:52:42 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581162691823, "job": 24, "event": "table_file_deletion", "file_number": 44} Feb 20 04:52:42 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000042.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 20 04:52:42 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581162694317, "job": 24, "event": "table_file_deletion", "file_number": 42} Feb 20 04:52:42 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:52:42.601944) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 04:52:42 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:52:42.694436) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 04:52:42 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:52:42.694443) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 04:52:42 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:52:42.694447) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 04:52:42 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:52:42.694450) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 04:52:42 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:52:42.694453) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 04:52:43 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:52:43.257 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], 
binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:52:42Z, description=, device_id=b4b9739b-06f1-49e7-bd64-0538f2a164aa, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=772621ce-0f53-406a-9305-57265302eedb, ip_allocation=immediate, mac_address=fa:16:3e:e7:0c:30, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T08:22:50Z, description=, dns_domain=, id=84efa4de-646c-469c-b16c-6ab7c3e948cf, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=91bce661d685472eb3e7cacab17bf52a, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['38eee90c-2cdd-4118-ba55-5401f7e61e1e'], tags=[], tenant_id=91bce661d685472eb3e7cacab17bf52a, updated_at=2026-02-20T08:22:56Z, vlan_transparent=None, network_id=84efa4de-646c-469c-b16c-6ab7c3e948cf, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=946, status=DOWN, tags=[], tenant_id=, updated_at=2026-02-20T09:52:42Z on network 84efa4de-646c-469c-b16c-6ab7c3e948cf#033[00m Feb 20 04:52:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1. Feb 20 04:52:43 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v151: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Feb 20 04:52:43 localhost systemd[1]: tmp-crun.oEAb4F.mount: Deactivated successfully. 
Feb 20 04:52:43 localhost podman[313696]: 2026-02-20 09:52:43.456554291 +0000 UTC m=+0.088297033 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 20 04:52:43 localhost podman[313696]: 2026-02-20 09:52:43.464227067 +0000 UTC m=+0.095969649 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 20 04:52:43 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully. Feb 20 04:52:43 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 4 addresses Feb 20 04:52:43 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:52:43 localhost podman[313710]: 2026-02-20 09:52:43.528241648 +0000 UTC m=+0.118660810 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Feb 20 04:52:43 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:52:43 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:52:43.659 263745 INFO neutron.agent.dhcp.agent [None req-b4139ba3-6888-40b2-b77a-1f59a3ab3abf - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:52:43Z, description=, device_id=e9eaf7e2-a744-4430-90a3-738e36e65f14, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=3f8ae037-bc46-4f67-9c8c-d0acf5297e2c, ip_allocation=immediate, 
mac_address=fa:16:3e:30:8f:d3, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T08:22:50Z, description=, dns_domain=, id=84efa4de-646c-469c-b16c-6ab7c3e948cf, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=91bce661d685472eb3e7cacab17bf52a, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['38eee90c-2cdd-4118-ba55-5401f7e61e1e'], tags=[], tenant_id=91bce661d685472eb3e7cacab17bf52a, updated_at=2026-02-20T08:22:56Z, vlan_transparent=None, network_id=84efa4de-646c-469c-b16c-6ab7c3e948cf, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=947, status=DOWN, tags=[], tenant_id=, updated_at=2026-02-20T09:52:43Z on network 84efa4de-646c-469c-b16c-6ab7c3e948cf#033[00m Feb 20 04:52:43 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:52:43.779 263745 INFO neutron.agent.dhcp.agent [None req-8f2c1dbc-9a5b-4f9c-abd0-6fc5dda9f678 - - - - - -] DHCP configuration for ports {'772621ce-0f53-406a-9305-57265302eedb'} is completed#033[00m Feb 20 04:52:43 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 5 addresses Feb 20 04:52:43 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:52:43 localhost podman[313759]: 2026-02-20 09:52:43.875476857 +0000 UTC m=+0.058399720 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, io.buildah.version=1.41.3, 
org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:52:43 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:52:43 localhost nova_compute[280804]: 2026-02-20 09:52:43.979 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:44 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:52:44.137 263745 INFO neutron.agent.dhcp.agent [None req-839765b1-6aca-4923-95d2-2b7410d74162 - - - - - -] DHCP configuration for ports {'3f8ae037-bc46-4f67-9c8c-d0acf5297e2c'} is completed#033[00m Feb 20 04:52:44 localhost nova_compute[280804]: 2026-02-20 09:52:44.254 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:44 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 4 addresses Feb 20 04:52:44 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:52:44 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:52:44 localhost podman[313798]: 2026-02-20 09:52:44.958564057 +0000 UTC m=+0.059396236 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 
Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:52:45 localhost nova_compute[280804]: 2026-02-20 09:52:45.018 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:45 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e109 do_prune osdmap full prune enabled Feb 20 04:52:45 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e110 e110: 6 total, 6 up, 6 in Feb 20 04:52:45 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e110: 6 total, 6 up, 6 in Feb 20 04:52:45 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v153: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 3.8 KiB/s rd, 307 B/s wr, 5 op/s Feb 20 04:52:46 localhost podman[241347]: time="2026-02-20T09:52:46Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 20 04:52:46 localhost podman[241347]: @ - - [20/Feb/2026:09:52:46 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157716 "" "Go-http-client/1.1" Feb 20 04:52:46 localhost podman[241347]: @ - - [20/Feb/2026:09:52:46 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18776 "" "Go-http-client/1.1" Feb 20 04:52:46 localhost nova_compute[280804]: 2026-02-20 09:52:46.509 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:47 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e110 do_prune osdmap full prune enabled Feb 20 04:52:47 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e111 e111: 6 total, 6 up, 6 in Feb 20 04:52:47 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e111: 6 total, 6 up, 6 in Feb 
20 04:52:47 localhost nova_compute[280804]: 2026-02-20 09:52:47.211 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:47 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v155: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 4.7 KiB/s rd, 383 B/s wr, 6 op/s Feb 20 04:52:47 localhost snmpd[69161]: empty variable list in _query Feb 20 04:52:47 localhost snmpd[69161]: empty variable list in _query Feb 20 04:52:47 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:52:47 localhost snmpd[69161]: empty variable list in _query Feb 20 04:52:47 localhost snmpd[69161]: empty variable list in _query Feb 20 04:52:47 localhost snmpd[69161]: empty variable list in _query Feb 20 04:52:47 localhost snmpd[69161]: empty variable list in _query Feb 20 04:52:47 localhost snmpd[69161]: empty variable list in _query Feb 20 04:52:47 localhost snmpd[69161]: empty variable list in _query Feb 20 04:52:47 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 3 addresses Feb 20 04:52:47 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:52:47 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:52:47 localhost podman[313836]: 2026-02-20 09:52:47.641029259 +0000 UTC m=+0.054190857 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS 
Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3) Feb 20 04:52:47 localhost systemd[1]: tmp-crun.p39mvq.mount: Deactivated successfully. Feb 20 04:52:49 localhost nova_compute[280804]: 2026-02-20 09:52:49.012 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:49 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:52:49.141 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:52:48Z, description=, device_id=36357b64-9b56-40f6-8b07-7c45e664893d, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=58f08e3b-620f-4f8c-b531-14f5ca9d1bc2, ip_allocation=immediate, mac_address=fa:16:3e:8d:84:2a, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T08:22:50Z, description=, dns_domain=, id=84efa4de-646c-469c-b16c-6ab7c3e948cf, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=91bce661d685472eb3e7cacab17bf52a, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['38eee90c-2cdd-4118-ba55-5401f7e61e1e'], tags=[], tenant_id=91bce661d685472eb3e7cacab17bf52a, updated_at=2026-02-20T08:22:56Z, vlan_transparent=None, network_id=84efa4de-646c-469c-b16c-6ab7c3e948cf, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, 
revision_number=1, security_groups=[], standard_attr_id=968, status=DOWN, tags=[], tenant_id=, updated_at=2026-02-20T09:52:48Z on network 84efa4de-646c-469c-b16c-6ab7c3e948cf#033[00m Feb 20 04:52:49 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 4 addresses Feb 20 04:52:49 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:52:49 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:52:49 localhost podman[313873]: 2026-02-20 09:52:49.383502937 +0000 UTC m=+0.066353655 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Feb 20 04:52:49 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v156: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 2.9 KiB/s wr, 20 op/s Feb 20 04:52:49 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:52:49.662 263745 INFO neutron.agent.dhcp.agent [None req-0861970d-9a1e-4605-897c-5c46e3e6b7e8 - - - - - -] DHCP configuration for ports {'58f08e3b-620f-4f8c-b531-14f5ca9d1bc2'} is completed#033[00m Feb 20 04:52:49 localhost nova_compute[280804]: 2026-02-20 09:52:49.934 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:51 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v157: 177 pgs: 177 
active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 36 KiB/s rd, 4.1 KiB/s wr, 49 op/s Feb 20 04:52:51 localhost nova_compute[280804]: 2026-02-20 09:52:51.598 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:52 localhost nova_compute[280804]: 2026-02-20 09:52:52.127 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:52 localhost ceph-mon[292786]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 20 04:52:52 localhost ceph-mon[292786]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 3070 writes, 26K keys, 3070 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.08 MB/s#012Cumulative WAL: 3070 writes, 3070 syncs, 1.00 writes per sync, written: 0.05 GB, 0.08 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3070 writes, 26K keys, 3070 commit groups, 1.0 writes per commit group, ingest: 49.44 MB, 0.08 MB/s#012Interval WAL: 3070 writes, 3070 syncs, 1.00 writes per sync, written: 0.05 GB, 0.08 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 154.5 0.24 0.09 12 0.020 0 0 0.0 0.0#012 L6 1/0 16.78 MB 0.0 0.2 0.0 0.2 0.2 0.0 0.0 5.0 195.9 176.4 1.05 0.49 11 0.095 126K 5610 0.0 0.0#012 Sum 1/0 16.78 MB 0.0 0.2 0.0 0.2 0.2 0.1 0.0 6.0 159.2 172.3 1.29 0.58 
23 0.056 126K 5610 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.2 0.0 0.2 0.2 0.1 0.0 6.0 159.6 172.6 1.28 0.58 22 0.058 126K 5610 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low 0/0 0.00 KB 0.0 0.2 0.0 0.2 0.2 0.0 0.0 0.0 195.9 176.4 1.05 0.49 11 0.095 126K 5610 0.0 0.0#012High 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 156.2 0.24 0.09 11 0.022 0 0 0.0 0.0#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.7 0.00 0.00 1 0.003 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.036, interval 0.036#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.22 GB write, 0.37 MB/s write, 0.20 GB read, 0.34 MB/s read, 1.3 seconds#012Interval compaction: 0.22 GB write, 0.37 MB/s write, 0.20 GB read, 0.34 MB/s read, 1.3 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a9b723b350#2 capacity: 308.00 MB usage: 48.35 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 0.000467 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3152,47.49 MB,15.4187%) FilterBlock(23,376.55 KB,0.11939%) 
IndexBlock(23,504.89 KB,0.160084%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] ** Feb 20 04:52:52 localhost nova_compute[280804]: 2026-02-20 09:52:52.213 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:52 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:52:52 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e111 do_prune osdmap full prune enabled Feb 20 04:52:52 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e112 e112: 6 total, 6 up, 6 in Feb 20 04:52:52 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e112: 6 total, 6 up, 6 in Feb 20 04:52:52 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:52:52.811 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:52:52Z, description=, device_id=3fedb7f0-7be1-4d3a-b896-a6321e1c2497, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=c3a9e6d2-a1bd-4bf0-a8d1-4bb3b6e4a9a5, ip_allocation=immediate, mac_address=fa:16:3e:19:ba:46, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T08:22:50Z, description=, dns_domain=, id=84efa4de-646c-469c-b16c-6ab7c3e948cf, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=91bce661d685472eb3e7cacab17bf52a, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, 
shared=False, standard_attr_id=29, status=ACTIVE, subnets=['38eee90c-2cdd-4118-ba55-5401f7e61e1e'], tags=[], tenant_id=91bce661d685472eb3e7cacab17bf52a, updated_at=2026-02-20T08:22:56Z, vlan_transparent=None, network_id=84efa4de-646c-469c-b16c-6ab7c3e948cf, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=996, status=DOWN, tags=[], tenant_id=, updated_at=2026-02-20T09:52:52Z on network 84efa4de-646c-469c-b16c-6ab7c3e948cf#033[00m Feb 20 04:52:53 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 5 addresses Feb 20 04:52:53 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:52:53 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:52:53 localhost podman[313910]: 2026-02-20 09:52:53.013642493 +0000 UTC m=+0.064642808 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:52:53 localhost systemd[1]: tmp-crun.ymnJfF.mount: Deactivated successfully. Feb 20 04:52:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e. Feb 20 04:52:53 localhost systemd[1]: tmp-crun.bKJ7M9.mount: Deactivated successfully. 
Feb 20 04:52:53 localhost podman[313925]: 2026-02-20 09:52:53.119135956 +0000 UTC m=+0.077608775 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 20 04:52:53 localhost podman[313925]: 2026-02-20 09:52:53.128899649 +0000 UTC m=+0.087372488 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', 
'--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 20 04:52:53 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully. Feb 20 04:52:53 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:52:53.222 263745 INFO neutron.agent.dhcp.agent [None req-0542b714-57fb-43af-b889-21dcf090d928 - - - - - -] DHCP configuration for ports {'c3a9e6d2-a1bd-4bf0-a8d1-4bb3b6e4a9a5'} is completed#033[00m Feb 20 04:52:53 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v159: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 31 KiB/s rd, 3.7 KiB/s wr, 43 op/s Feb 20 04:52:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 20 04:52:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:52:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 04:52:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:52:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 04:52:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:52:54 localhost nova_compute[280804]: 2026-02-20 09:52:54.014 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:54 localhost nova_compute[280804]: 2026-02-20 09:52:54.175 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:55 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v160: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 30 KiB/s rd, 3.6 KiB/s wr, 41 op/s Feb 20 04:52:56 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:56.052 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'c2:c2:14', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '46:d0:69:88:97:47'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:52:56 localhost nova_compute[280804]: 2026-02-20 09:52:56.053 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:56 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:56.054 161766 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Feb 20 04:52:56 localhost ovn_metadata_agent[161761]: 2026-02-20 09:52:56.054 161766 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0a83b6be-9fe2-42ef-8768-88847d97b165, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 20 04:52:56 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 4 addresses Feb 20 04:52:56 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:52:56 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:52:56 localhost systemd[1]: tmp-crun.5yzaOy.mount: Deactivated successfully. 
Feb 20 04:52:56 localhost podman[313973]: 2026-02-20 09:52:56.285673848 +0000 UTC m=+0.060360684 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:52:56 localhost neutron_sriov_agent[256551]: 2026-02-20 09:52:56.312 2 INFO neutron.agent.securitygroups_rpc [None req-35b4089d-d96b-4223-91a7-29363be26031 9b5edcaf5d0f48eea2ef440e3b3c2f79 85741ccf160049968710bbf0d3ed7a21 - - default default] Security group member updated ['1f0747df-ad50-4106-9a56-f1b68b2201c8']#033[00m Feb 20 04:52:56 localhost nova_compute[280804]: 2026-02-20 09:52:56.422 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:56 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:52:56.431 263745 INFO neutron.agent.dhcp.agent [None req-36d1c1a7-deeb-4a10-bb10-bebfc69ea29f - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:52:55Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=25be0680-01fe-4fa5-870a-cb4ad49cb142, ip_allocation=immediate, mac_address=fa:16:3e:00:17:1f, name=tempest-RoutersAdminNegativeTest-441529191, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T08:22:50Z, description=, 
dns_domain=, id=84efa4de-646c-469c-b16c-6ab7c3e948cf, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=91bce661d685472eb3e7cacab17bf52a, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['38eee90c-2cdd-4118-ba55-5401f7e61e1e'], tags=[], tenant_id=91bce661d685472eb3e7cacab17bf52a, updated_at=2026-02-20T08:22:56Z, vlan_transparent=None, network_id=84efa4de-646c-469c-b16c-6ab7c3e948cf, port_security_enabled=True, project_id=85741ccf160049968710bbf0d3ed7a21, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['1f0747df-ad50-4106-9a56-f1b68b2201c8'], standard_attr_id=1017, status=DOWN, tags=[], tenant_id=85741ccf160049968710bbf0d3ed7a21, updated_at=2026-02-20T09:52:55Z on network 84efa4de-646c-469c-b16c-6ab7c3e948cf#033[00m Feb 20 04:52:56 localhost nova_compute[280804]: 2026-02-20 09:52:56.511 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:52:56 localhost nova_compute[280804]: 2026-02-20 09:52:56.512 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 20 04:52:56 localhost nova_compute[280804]: 2026-02-20 09:52:56.512 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 20 04:52:56 
localhost nova_compute[280804]: 2026-02-20 09:52:56.526 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 20 04:52:56 localhost systemd[1]: tmp-crun.XMxZXl.mount: Deactivated successfully. Feb 20 04:52:56 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 5 addresses Feb 20 04:52:56 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:52:56 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:52:56 localhost podman[314011]: 2026-02-20 09:52:56.685198791 +0000 UTC m=+0.073527286 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Feb 20 04:52:56 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:52:56.879 263745 INFO neutron.agent.dhcp.agent [None req-90acb037-fc8f-4a3d-be56-5041d2cfe92d - - - - - -] DHCP configuration for ports {'25be0680-01fe-4fa5-870a-cb4ad49cb142'} is completed#033[00m Feb 20 04:52:57 localhost nova_compute[280804]: 2026-02-20 09:52:57.216 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:57 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v161: 177 pgs: 177 
active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 25 KiB/s rd, 3.0 KiB/s wr, 34 op/s Feb 20 04:52:57 localhost nova_compute[280804]: 2026-02-20 09:52:57.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:52:57 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:52:57 localhost systemd[1]: tmp-crun.DaqkNI.mount: Deactivated successfully. Feb 20 04:52:57 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 4 addresses Feb 20 04:52:57 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:52:57 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:52:57 localhost podman[314050]: 2026-02-20 09:52:57.638473175 +0000 UTC m=+0.069475548 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127) Feb 20 04:52:57 localhost nova_compute[280804]: 2026-02-20 09:52:57.695 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:57 localhost dnsmasq[264017]: read 
/var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 3 addresses Feb 20 04:52:57 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:52:57 localhost podman[314088]: 2026-02-20 09:52:57.983566536 +0000 UTC m=+0.052684386 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:52:57 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:52:58 localhost nova_compute[280804]: 2026-02-20 09:52:58.079 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:58 localhost openstack_network_exporter[243776]: ERROR 09:52:58 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 20 04:52:58 localhost openstack_network_exporter[243776]: Feb 20 04:52:58 localhost openstack_network_exporter[243776]: ERROR 09:52:58 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 20 04:52:58 localhost openstack_network_exporter[243776]: Feb 20 04:52:58 localhost nova_compute[280804]: 2026-02-20 09:52:58.352 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:58 localhost neutron_sriov_agent[256551]: 2026-02-20 09:52:58.360 2 INFO neutron.agent.securitygroups_rpc [None 
req-3bf8b391-96d6-4728-ae92-83d8f7b4ba3a 9b5edcaf5d0f48eea2ef440e3b3c2f79 85741ccf160049968710bbf0d3ed7a21 - - default default] Security group member updated ['1f0747df-ad50-4106-9a56-f1b68b2201c8']#033[00m Feb 20 04:52:58 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 2 addresses Feb 20 04:52:58 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:52:58 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:52:58 localhost podman[314126]: 2026-02-20 09:52:58.382238238 +0000 UTC m=+0.066975371 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127) Feb 20 04:52:58 localhost nova_compute[280804]: 2026-02-20 09:52:58.506 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:52:58 localhost nova_compute[280804]: 2026-02-20 09:52:58.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:52:58 localhost nova_compute[280804]: 2026-02-20 09:52:58.510 280808 DEBUG 
oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:52:58 localhost systemd[1]: tmp-crun.cD0QXU.mount: Deactivated successfully. Feb 20 04:52:58 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 1 addresses Feb 20 04:52:58 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:52:58 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:52:58 localhost podman[314167]: 2026-02-20 09:52:58.747668675 +0000 UTC m=+0.064046310 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:52:59 localhost nova_compute[280804]: 2026-02-20 09:52:59.017 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:52:59 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v162: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 18 KiB/s rd, 1023 B/s wr, 23 op/s Feb 20 04:52:59 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:52:59.473 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, 
binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:52:58Z, description=, device_id=720c373e-27b3-4469-829e-dd0b5b196891, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=7cb4b0c9-511a-4d6b-a200-3c058723d018, ip_allocation=immediate, mac_address=fa:16:3e:00:04:a8, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T08:22:50Z, description=, dns_domain=, id=84efa4de-646c-469c-b16c-6ab7c3e948cf, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=91bce661d685472eb3e7cacab17bf52a, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['38eee90c-2cdd-4118-ba55-5401f7e61e1e'], tags=[], tenant_id=91bce661d685472eb3e7cacab17bf52a, updated_at=2026-02-20T08:22:56Z, vlan_transparent=None, network_id=84efa4de-646c-469c-b16c-6ab7c3e948cf, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1041, status=DOWN, tags=[], tenant_id=, updated_at=2026-02-20T09:52:58Z on network 84efa4de-646c-469c-b16c-6ab7c3e948cf#033[00m Feb 20 04:52:59 localhost nova_compute[280804]: 2026-02-20 09:52:59.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:52:59 localhost nova_compute[280804]: 2026-02-20 09:52:59.541 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Acquiring lock 
"compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:52:59 localhost nova_compute[280804]: 2026-02-20 09:52:59.542 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:52:59 localhost nova_compute[280804]: 2026-02-20 09:52:59.542 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:52:59 localhost nova_compute[280804]: 2026-02-20 09:52:59.542 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Auditing locally available compute resources for np0005625202.localdomain (node: np0005625202.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 20 04:52:59 localhost nova_compute[280804]: 2026-02-20 09:52:59.543 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:52:59 localhost podman[314207]: 2026-02-20 09:52:59.69055937 +0000 UTC m=+0.062385567 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Feb 20 04:52:59 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 2 addresses Feb 20 04:52:59 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:52:59 localhost systemd[1]: tmp-crun.gDilD2.mount: Deactivated successfully. Feb 20 04:52:59 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:52:59 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:52:59.943 263745 INFO neutron.agent.dhcp.agent [None req-ceb44683-d54e-4ac2-b221-4b4dfc7d6eab - - - - - -] DHCP configuration for ports {'7cb4b0c9-511a-4d6b-a200-3c058723d018'} is completed#033[00m Feb 20 04:52:59 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 20 04:52:59 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/4201398583' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 20 04:53:00 localhost nova_compute[280804]: 2026-02-20 09:53:00.009 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:53:00 localhost nova_compute[280804]: 2026-02-20 09:53:00.241 280808 WARNING nova.virt.libvirt.driver [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 20 04:53:00 localhost nova_compute[280804]: 2026-02-20 09:53:00.243 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Hypervisor/Node resource view: name=np0005625202.localdomain free_ram=11562MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": 
"1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 20 04:53:00 localhost nova_compute[280804]: 2026-02-20 09:53:00.243 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:53:00 localhost nova_compute[280804]: 2026-02-20 09:53:00.243 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:53:00 localhost nova_compute[280804]: 2026-02-20 09:53:00.294 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - 
-] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 20 04:53:00 localhost nova_compute[280804]: 2026-02-20 09:53:00.295 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Final resource view: name=np0005625202.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 20 04:53:00 localhost nova_compute[280804]: 2026-02-20 09:53:00.310 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:53:00 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 20 04:53:00 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/3538498167' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 20 04:53:00 localhost nova_compute[280804]: 2026-02-20 09:53:00.717 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:53:00 localhost nova_compute[280804]: 2026-02-20 09:53:00.723 280808 DEBUG nova.compute.provider_tree [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 20 04:53:00 localhost nova_compute[280804]: 2026-02-20 09:53:00.740 280808 DEBUG nova.scheduler.client.report [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Inventory has not changed for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 20 04:53:00 localhost nova_compute[280804]: 2026-02-20 09:53:00.770 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Compute_service record updated for np0005625202.localdomain:np0005625202.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 20 04:53:00 localhost nova_compute[280804]: 2026-02-20 09:53:00.771 280808 DEBUG 
oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.527s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:53:01 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:53:01.311 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:53:00Z, description=, device_id=da5bdbc2-1507-4da5-9265-efc184d2b2bc, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=03aa4864-a843-40a2-8468-5307f702d7e0, ip_allocation=immediate, mac_address=fa:16:3e:20:6b:f8, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T08:22:50Z, description=, dns_domain=, id=84efa4de-646c-469c-b16c-6ab7c3e948cf, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=91bce661d685472eb3e7cacab17bf52a, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['38eee90c-2cdd-4118-ba55-5401f7e61e1e'], tags=[], tenant_id=91bce661d685472eb3e7cacab17bf52a, updated_at=2026-02-20T08:22:56Z, vlan_transparent=None, network_id=84efa4de-646c-469c-b16c-6ab7c3e948cf, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1053, status=DOWN, tags=[], tenant_id=, updated_at=2026-02-20T09:53:00Z on network 84efa4de-646c-469c-b16c-6ab7c3e948cf#033[00m 
Feb 20 04:53:01 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v163: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Feb 20 04:53:01 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 3 addresses Feb 20 04:53:01 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:53:01 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:53:01 localhost podman[314288]: 2026-02-20 09:53:01.57943143 +0000 UTC m=+0.067986077 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Feb 20 04:53:01 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:53:01.812 263745 INFO neutron.agent.dhcp.agent [None req-e306f9c4-5a65-40c1-9887-193b6ee527f4 - - - - - -] DHCP configuration for ports {'03aa4864-a843-40a2-8468-5307f702d7e0'} is completed#033[00m Feb 20 04:53:02 localhost nova_compute[280804]: 2026-02-20 09:53:02.202 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:02 localhost nova_compute[280804]: 2026-02-20 09:53:02.217 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c. Feb 20 04:53:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217. Feb 20 04:53:02 localhost podman[314310]: 2026-02-20 09:53:02.460863243 +0000 UTC m=+0.092838416 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9/ubi-minimal, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, vcs-type=git, release=1770267347, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, version=9.7) Feb 20 04:53:02 localhost podman[314310]: 2026-02-20 09:53:02.472902096 +0000 UTC m=+0.104877269 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.7, name=ubi9/ubi-minimal, architecture=x86_64, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, org.opencontainers.image.created=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, build-date=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 
'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Feb 20 04:53:02 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. Feb 20 04:53:02 localhost systemd[1]: tmp-crun.LXeMAZ.mount: Deactivated successfully. Feb 20 04:53:02 localhost podman[314311]: 2026-02-20 09:53:02.571043123 +0000 UTC m=+0.201419223 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute) Feb 20 04:53:02 localhost podman[314311]: 2026-02-20 09:53:02.58578411 +0000 UTC m=+0.216160210 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute) Feb 20 04:53:02 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:53:02 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully. 
Feb 20 04:53:02 localhost nova_compute[280804]: 2026-02-20 09:53:02.771 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:53:02 localhost nova_compute[280804]: 2026-02-20 09:53:02.772 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:53:02 localhost nova_compute[280804]: 2026-02-20 09:53:02.772 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:53:02 localhost nova_compute[280804]: 2026-02-20 09:53:02.772 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 20 04:53:03 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v164: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Feb 20 04:53:04 localhost nova_compute[280804]: 2026-02-20 09:53:04.020 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:04 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 2 addresses Feb 20 04:53:04 localhost podman[314365]: 2026-02-20 09:53:04.173705693 +0000 UTC m=+0.059582992 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127) Feb 20 04:53:04 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:53:04 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:53:05 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v165: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Feb 20 04:53:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:53:05.919 161766 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 
04:53:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:53:05.919 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:53:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:53:05.919 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:53:06 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:53:06.110 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:53:05Z, description=, device_id=b82572b0-71d4-4115-8f21-7874b6921265, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=f02a1c60-4da8-45db-86aa-d841e6853896, ip_allocation=immediate, mac_address=fa:16:3e:f9:64:19, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T08:22:50Z, description=, dns_domain=, id=84efa4de-646c-469c-b16c-6ab7c3e948cf, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=91bce661d685472eb3e7cacab17bf52a, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['38eee90c-2cdd-4118-ba55-5401f7e61e1e'], tags=[], tenant_id=91bce661d685472eb3e7cacab17bf52a, 
updated_at=2026-02-20T08:22:56Z, vlan_transparent=None, network_id=84efa4de-646c-469c-b16c-6ab7c3e948cf, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1081, status=DOWN, tags=[], tenant_id=, updated_at=2026-02-20T09:53:05Z on network 84efa4de-646c-469c-b16c-6ab7c3e948cf#033[00m Feb 20 04:53:06 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 3 addresses Feb 20 04:53:06 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:53:06 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:53:06 localhost podman[314404]: 2026-02-20 09:53:06.326580978 +0000 UTC m=+0.062995935 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:53:06 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:53:06.571 263745 INFO neutron.agent.dhcp.agent [None req-2c164187-8ff7-4412-a4cd-edb9714d2cdf - - - - - -] DHCP configuration for ports {'f02a1c60-4da8-45db-86aa-d841e6853896'} is completed#033[00m Feb 20 04:53:07 localhost nova_compute[280804]: 2026-02-20 09:53:07.242 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:07 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v166: 177 
pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Feb 20 04:53:07 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:53:07 localhost nova_compute[280804]: 2026-02-20 09:53:07.664 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:09 localhost nova_compute[280804]: 2026-02-20 09:53:09.023 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:09 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 2 addresses Feb 20 04:53:09 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:53:09 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:53:09 localhost podman[314441]: 2026-02-20 09:53:09.133898344 +0000 UTC m=+0.060930668 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:53:09 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v167: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Feb 20 04:53:10 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:53:10.034 263745 INFO neutron.agent.dhcp.agent [-] 
Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:53:09Z, description=, device_id=80216888-3750-4b46-b56b-ef73e0057e5b, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=1664a27e-a3ac-46f8-bc68-8793c435ef4a, ip_allocation=immediate, mac_address=fa:16:3e:63:19:19, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T08:22:50Z, description=, dns_domain=, id=84efa4de-646c-469c-b16c-6ab7c3e948cf, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=91bce661d685472eb3e7cacab17bf52a, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['38eee90c-2cdd-4118-ba55-5401f7e61e1e'], tags=[], tenant_id=91bce661d685472eb3e7cacab17bf52a, updated_at=2026-02-20T08:22:56Z, vlan_transparent=None, network_id=84efa4de-646c-469c-b16c-6ab7c3e948cf, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1141, status=DOWN, tags=[], tenant_id=, updated_at=2026-02-20T09:53:09Z on network 84efa4de-646c-469c-b16c-6ab7c3e948cf#033[00m Feb 20 04:53:10 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 3 addresses Feb 20 04:53:10 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:53:10 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:53:10 localhost podman[314480]: 
2026-02-20 09:53:10.235554423 +0000 UTC m=+0.057133815 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2) Feb 20 04:53:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 04:53:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. Feb 20 04:53:10 localhost podman[314496]: 2026-02-20 09:53:10.366980155 +0000 UTC m=+0.102941327 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:53:10 localhost podman[314496]: 2026-02-20 09:53:10.400884955 +0000 UTC m=+0.136846127 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 
'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Feb 20 04:53:10 localhost podman[314494]: 2026-02-20 09:53:10.418665554 +0000 UTC m=+0.159214749 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Feb 20 04:53:10 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. Feb 20 04:53:10 localhost podman[314494]: 2026-02-20 09:53:10.487865732 +0000 UTC m=+0.228414927 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, 
container_name=ovn_controller, org.label-schema.vendor=CentOS) Feb 20 04:53:10 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. Feb 20 04:53:11 localhost systemd[1]: tmp-crun.BJfqaT.mount: Deactivated successfully. Feb 20 04:53:11 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:53:11.310 263745 INFO neutron.agent.dhcp.agent [None req-ffb7ab6e-a9d9-47a4-811a-79113a537027 - - - - - -] DHCP configuration for ports {'1664a27e-a3ac-46f8-bc68-8793c435ef4a'} is completed#033[00m Feb 20 04:53:11 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v168: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Feb 20 04:53:11 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:53:11.975 263745 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:53:12 localhost nova_compute[280804]: 2026-02-20 09:53:12.272 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:12 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:53:12 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 2 addresses Feb 20 04:53:12 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:53:12 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:53:12 localhost podman[314563]: 2026-02-20 09:53:12.992987461 +0000 UTC m=+0.067588197 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2) Feb 20 04:53:13 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v169: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Feb 20 04:53:14 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:53:14.026 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:53:13Z, description=, device_id=9b7c3dfe-fdd5-4786-8c82-569d2e57716e, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=ee7b34ae-a2d9-416e-ad0e-c136e719ba98, ip_allocation=immediate, mac_address=fa:16:3e:05:b3:34, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T08:22:50Z, description=, dns_domain=, id=84efa4de-646c-469c-b16c-6ab7c3e948cf, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=91bce661d685472eb3e7cacab17bf52a, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['38eee90c-2cdd-4118-ba55-5401f7e61e1e'], tags=[], tenant_id=91bce661d685472eb3e7cacab17bf52a, updated_at=2026-02-20T08:22:56Z, vlan_transparent=None, network_id=84efa4de-646c-469c-b16c-6ab7c3e948cf, port_security_enabled=False, 
project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1156, status=DOWN, tags=[], tenant_id=, updated_at=2026-02-20T09:53:13Z on network 84efa4de-646c-469c-b16c-6ab7c3e948cf#033[00m Feb 20 04:53:14 localhost nova_compute[280804]: 2026-02-20 09:53:14.053 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:14 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 3 addresses Feb 20 04:53:14 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:53:14 localhost podman[314602]: 2026-02-20 09:53:14.261197315 +0000 UTC m=+0.065154601 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Feb 20 04:53:14 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:53:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1. 
Feb 20 04:53:14 localhost neutron_sriov_agent[256551]: 2026-02-20 09:53:14.374 2 INFO neutron.agent.securitygroups_rpc [None req-8db2cda0-f70c-405a-8e32-bbd09e8f7101 253c13edd6b940a9b5cd64cc7bfa0ff5 bae77758d77d4d43af7ac10744892742 - - default default] Security group member updated ['f7e09389-cc97-415e-b643-e34abd10c9b7']#033[00m Feb 20 04:53:14 localhost podman[314615]: 2026-02-20 09:53:14.39461081 +0000 UTC m=+0.107850799 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 20 04:53:14 localhost podman[314615]: 2026-02-20 09:53:14.405983805 +0000 UTC m=+0.119223794 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck 
podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 20 04:53:14 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully. Feb 20 04:53:14 localhost neutron_sriov_agent[256551]: 2026-02-20 09:53:14.451 2 INFO neutron.agent.securitygroups_rpc [None req-8db2cda0-f70c-405a-8e32-bbd09e8f7101 253c13edd6b940a9b5cd64cc7bfa0ff5 bae77758d77d4d43af7ac10744892742 - - default default] Security group member updated ['f7e09389-cc97-415e-b643-e34abd10c9b7']#033[00m Feb 20 04:53:14 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:53:14.525 263745 INFO neutron.agent.dhcp.agent [None req-30e070db-a8b0-4d3c-807a-da8a1ee86502 - - - - - -] DHCP configuration for ports {'ee7b34ae-a2d9-416e-ad0e-c136e719ba98'} is completed#033[00m Feb 20 04:53:14 localhost neutron_sriov_agent[256551]: 2026-02-20 09:53:14.940 2 INFO neutron.agent.securitygroups_rpc [None req-ea37899b-0895-4039-936c-a92dc4af71cc 253c13edd6b940a9b5cd64cc7bfa0ff5 bae77758d77d4d43af7ac10744892742 - - default default] Security group member updated ['f7e09389-cc97-415e-b643-e34abd10c9b7']#033[00m Feb 20 04:53:15 localhost neutron_sriov_agent[256551]: 2026-02-20 09:53:15.283 2 INFO neutron.agent.securitygroups_rpc [None req-6181ff08-47fa-4ccb-88a6-fd4810762b1a 253c13edd6b940a9b5cd64cc7bfa0ff5 bae77758d77d4d43af7ac10744892742 - - default default] Security group member updated ['f7e09389-cc97-415e-b643-e34abd10c9b7']#033[00m Feb 20 04:53:15 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v170: 177 pgs: 177 active+clean; 
145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Feb 20 04:53:16 localhost podman[241347]: time="2026-02-20T09:53:16Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 20 04:53:16 localhost podman[241347]: @ - - [20/Feb/2026:09:53:16 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157716 "" "Go-http-client/1.1" Feb 20 04:53:16 localhost podman[241347]: @ - - [20/Feb/2026:09:53:16 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18784 "" "Go-http-client/1.1" Feb 20 04:53:17 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 2 addresses Feb 20 04:53:17 localhost systemd[1]: tmp-crun.htnKja.mount: Deactivated successfully. Feb 20 04:53:17 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:53:17 localhost podman[314661]: 2026-02-20 09:53:17.231746868 +0000 UTC m=+0.072238902 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS) Feb 20 04:53:17 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:53:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:53:17.264 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:53:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:53:17.265 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:53:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:53:17.265 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:53:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:53:17.265 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:53:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:53:17.265 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:53:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:53:17.265 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:53:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:53:17.265 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:53:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:53:17.266 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:53:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:53:17.266 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:53:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:53:17.266 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:53:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:53:17.266 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:53:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:53:17.266 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:53:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:53:17.266 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:53:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:53:17.266 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:53:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:53:17.267 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 
04:53:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:53:17.267 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:53:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:53:17.267 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:53:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:53:17.267 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:53:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:53:17.267 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:53:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:53:17.267 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:53:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:53:17.268 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:53:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:53:17.268 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:53:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 
09:53:17.268 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:53:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:53:17.268 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:53:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:53:17.268 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:53:17 localhost nova_compute[280804]: 2026-02-20 09:53:17.275 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:17 localhost sshd[314676]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:53:17 localhost nova_compute[280804]: 2026-02-20 09:53:17.449 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:17 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v171: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Feb 20 04:53:17 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:53:17 localhost systemd[1]: tmp-crun.fsxRMf.mount: Deactivated successfully. 
Feb 20 04:53:17 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 1 addresses Feb 20 04:53:17 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:53:17 localhost podman[314698]: 2026-02-20 09:53:17.651937218 +0000 UTC m=+0.080616367 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:53:17 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:53:18 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:53:18.186 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:53:17Z, description=, device_id=7ff6fbed-0a36-4fd0-9453-57f1c7995a81, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=b323ae7e-60ff-4407-a655-89e7ef119179, ip_allocation=immediate, mac_address=fa:16:3e:1d:90:6d, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T08:22:50Z, description=, dns_domain=, id=84efa4de-646c-469c-b16c-6ab7c3e948cf, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, 
project_id=91bce661d685472eb3e7cacab17bf52a, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['38eee90c-2cdd-4118-ba55-5401f7e61e1e'], tags=[], tenant_id=91bce661d685472eb3e7cacab17bf52a, updated_at=2026-02-20T08:22:56Z, vlan_transparent=None, network_id=84efa4de-646c-469c-b16c-6ab7c3e948cf, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1187, status=DOWN, tags=[], tenant_id=, updated_at=2026-02-20T09:53:17Z on network 84efa4de-646c-469c-b16c-6ab7c3e948cf#033[00m Feb 20 04:53:18 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 2 addresses Feb 20 04:53:18 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:53:18 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:53:18 localhost podman[314737]: 2026-02-20 09:53:18.408190297 +0000 UTC m=+0.068824830 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Feb 20 04:53:18 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:53:18.745 263745 INFO neutron.agent.dhcp.agent [None req-60ceb81a-3111-4216-ab6f-a26e8cdc19ce - - - - - -] DHCP configuration for ports 
{'b323ae7e-60ff-4407-a655-89e7ef119179'} is completed#033[00m Feb 20 04:53:19 localhost nova_compute[280804]: 2026-02-20 09:53:19.091 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:19 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:53:19.285 263745 INFO neutron.agent.linux.ip_lib [None req-95b738ba-801b-4d4d-854e-a911dbc1cedc - - - - - -] Device tap4332969c-80 cannot be used as it has no MAC address#033[00m Feb 20 04:53:19 localhost nova_compute[280804]: 2026-02-20 09:53:19.306 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:19 localhost kernel: device tap4332969c-80 entered promiscuous mode Feb 20 04:53:19 localhost NetworkManager[5967]: [1771581199.3178] manager: (tap4332969c-80): new Generic device (/org/freedesktop/NetworkManager/Devices/30) Feb 20 04:53:19 localhost ovn_controller[155916]: 2026-02-20T09:53:19Z|00133|binding|INFO|Claiming lport 4332969c-8082-494b-a0bc-6b9abf90f9f7 for this chassis. Feb 20 04:53:19 localhost ovn_controller[155916]: 2026-02-20T09:53:19Z|00134|binding|INFO|4332969c-8082-494b-a0bc-6b9abf90f9f7: Claiming unknown Feb 20 04:53:19 localhost nova_compute[280804]: 2026-02-20 09:53:19.318 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:19 localhost systemd-udevd[314767]: Network interface NamePolicy= disabled on kernel command line. 
Feb 20 04:53:19 localhost ovn_metadata_agent[161761]: 2026-02-20 09:53:19.330 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp77c820b4-9646-5dae-aabe-b13a62b01d06-8df36da0-4e6a-4f54-903b-63c7bf0a0ba1', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8df36da0-4e6a-4f54-903b-63c7bf0a0ba1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bae77758d77d4d43af7ac10744892742', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d293377-fbfe-4efb-ae76-b9d87f9318ec, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=4332969c-8082-494b-a0bc-6b9abf90f9f7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:53:19 localhost ovn_metadata_agent[161761]: 2026-02-20 09:53:19.333 161766 INFO neutron.agent.ovn.metadata.agent [-] Port 4332969c-8082-494b-a0bc-6b9abf90f9f7 in datapath 8df36da0-4e6a-4f54-903b-63c7bf0a0ba1 bound to our chassis#033[00m Feb 20 04:53:19 localhost ovn_metadata_agent[161761]: 2026-02-20 09:53:19.335 161766 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 8df36da0-4e6a-4f54-903b-63c7bf0a0ba1 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 20 04:53:19 localhost ovn_metadata_agent[161761]: 2026-02-20 09:53:19.336 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[9c29a9a1-f03c-4a07-a47d-d320818aa299]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:53:19 localhost journal[229367]: ethtool ioctl error on tap4332969c-80: No such device Feb 20 04:53:19 localhost journal[229367]: ethtool ioctl error on tap4332969c-80: No such device Feb 20 04:53:19 localhost ovn_controller[155916]: 2026-02-20T09:53:19Z|00135|binding|INFO|Setting lport 4332969c-8082-494b-a0bc-6b9abf90f9f7 ovn-installed in OVS Feb 20 04:53:19 localhost ovn_controller[155916]: 2026-02-20T09:53:19Z|00136|binding|INFO|Setting lport 4332969c-8082-494b-a0bc-6b9abf90f9f7 up in Southbound Feb 20 04:53:19 localhost nova_compute[280804]: 2026-02-20 09:53:19.366 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:19 localhost journal[229367]: ethtool ioctl error on tap4332969c-80: No such device Feb 20 04:53:19 localhost journal[229367]: ethtool ioctl error on tap4332969c-80: No such device Feb 20 04:53:19 localhost journal[229367]: ethtool ioctl error on tap4332969c-80: No such device Feb 20 04:53:19 localhost journal[229367]: ethtool ioctl error on tap4332969c-80: No such device Feb 20 04:53:19 localhost journal[229367]: ethtool ioctl error on tap4332969c-80: No such device Feb 20 04:53:19 localhost journal[229367]: ethtool ioctl error on tap4332969c-80: No such device Feb 20 04:53:19 localhost nova_compute[280804]: 2026-02-20 09:53:19.407 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:19 localhost nova_compute[280804]: 2026-02-20 09:53:19.439 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 
__log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:19 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v172: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Feb 20 04:53:20 localhost podman[314838]: Feb 20 04:53:20 localhost podman[314838]: 2026-02-20 09:53:20.399730686 +0000 UTC m=+0.103086191 container create 5c6a071edfb3895eaa1412e23eeac12c26d457fa359b68ab15b578af65c54881 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8df36da0-4e6a-4f54-903b-63c7bf0a0ba1, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Feb 20 04:53:20 localhost systemd[1]: Started libpod-conmon-5c6a071edfb3895eaa1412e23eeac12c26d457fa359b68ab15b578af65c54881.scope. Feb 20 04:53:20 localhost podman[314838]: 2026-02-20 09:53:20.351448548 +0000 UTC m=+0.054804113 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 20 04:53:20 localhost systemd[1]: Started libcrun container. 
Feb 20 04:53:20 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e7b5f367cefc20257c892f7719931bb8ff5e11ba635dc89605707c5c9254aa4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 20 04:53:20 localhost podman[314838]: 2026-02-20 09:53:20.492704154 +0000 UTC m=+0.196059649 container init 5c6a071edfb3895eaa1412e23eeac12c26d457fa359b68ab15b578af65c54881 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8df36da0-4e6a-4f54-903b-63c7bf0a0ba1, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127) Feb 20 04:53:20 localhost podman[314838]: 2026-02-20 09:53:20.506265318 +0000 UTC m=+0.209620823 container start 5c6a071edfb3895eaa1412e23eeac12c26d457fa359b68ab15b578af65c54881 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8df36da0-4e6a-4f54-903b-63c7bf0a0ba1, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:53:20 localhost dnsmasq[314856]: started, version 2.85 cachesize 150 Feb 20 04:53:20 localhost dnsmasq[314856]: DNS service limited to local subnets Feb 20 04:53:20 localhost dnsmasq[314856]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 20 04:53:20 localhost dnsmasq[314856]: warning: no upstream servers 
configured Feb 20 04:53:20 localhost dnsmasq-dhcp[314856]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 20 04:53:20 localhost dnsmasq[314856]: read /var/lib/neutron/dhcp/8df36da0-4e6a-4f54-903b-63c7bf0a0ba1/addn_hosts - 0 addresses Feb 20 04:53:20 localhost dnsmasq-dhcp[314856]: read /var/lib/neutron/dhcp/8df36da0-4e6a-4f54-903b-63c7bf0a0ba1/host Feb 20 04:53:20 localhost dnsmasq-dhcp[314856]: read /var/lib/neutron/dhcp/8df36da0-4e6a-4f54-903b-63c7bf0a0ba1/opts Feb 20 04:53:20 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:53:20.719 263745 INFO neutron.agent.dhcp.agent [None req-af55167c-f46d-4d6e-b952-68550c2c60b2 - - - - - -] DHCP configuration for ports {'8080e392-84d6-452d-9f80-3f69d4a2e1dd'} is completed#033[00m Feb 20 04:53:21 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 1 addresses Feb 20 04:53:21 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:53:21 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:53:21 localhost podman[314872]: 2026-02-20 09:53:21.193464992 +0000 UTC m=+0.064041802 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:53:21 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v173: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Feb 20 04:53:22 localhost nova_compute[280804]: 2026-02-20 
09:53:22.280 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:22 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:53:22 localhost ceph-mon[292786]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #46. Immutable memtables: 0. Feb 20 04:53:22 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:53:22.622333) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Feb 20 04:53:22 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 46 Feb 20 04:53:22 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581202622448, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 695, "num_deletes": 256, "total_data_size": 454867, "memory_usage": 468440, "flush_reason": "Manual Compaction"} Feb 20 04:53:22 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #47: started Feb 20 04:53:22 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581202628245, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 47, "file_size": 444548, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26023, "largest_seqno": 26717, "table_properties": {"data_size": 441222, "index_size": 1181, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7843, "raw_average_key_size": 18, "raw_value_size": 434335, "raw_average_value_size": 1049, 
"num_data_blocks": 52, "num_entries": 414, "num_filter_entries": 414, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771581162, "oldest_key_time": 1771581162, "file_creation_time": 1771581202, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a5b0c71e-1a28-4ac7-8b68-08edb74002f2", "db_session_id": "54EDA52XUT1SDV7DF7Y7", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}} Feb 20 04:53:22 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 5969 microseconds, and 2622 cpu microseconds. Feb 20 04:53:22 localhost ceph-mon[292786]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Feb 20 04:53:22 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:53:22.628306) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #47: 444548 bytes OK Feb 20 04:53:22 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:53:22.628350) [db/memtable_list.cc:519] [default] Level-0 commit table #47 started Feb 20 04:53:22 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:53:22.632645) [db/memtable_list.cc:722] [default] Level-0 commit table #47: memtable #1 done Feb 20 04:53:22 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:53:22.632659) EVENT_LOG_v1 {"time_micros": 1771581202632654, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Feb 20 04:53:22 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:53:22.632680) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Feb 20 04:53:22 localhost ceph-mon[292786]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 451246, prev total WAL file size 451570, number of live WAL files 2. Feb 20 04:53:22 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000043.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 20 04:53:22 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:53:22.633431) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033373731' seq:72057594037927935, type:22 .. 
'6C6F676D0034303232' seq:0, type:0; will stop at (end) Feb 20 04:53:22 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00 Feb 20 04:53:22 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [47(434KB)], [45(16MB)] Feb 20 04:53:22 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581202633499, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [47], "files_L6": [45], "score": -1, "input_data_size": 18043074, "oldest_snapshot_seqno": -1} Feb 20 04:53:22 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #48: 12183 keys, 17901784 bytes, temperature: kUnknown Feb 20 04:53:22 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581202707020, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 48, "file_size": 17901784, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17833136, "index_size": 37123, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30469, "raw_key_size": 327009, "raw_average_key_size": 26, "raw_value_size": 17626495, "raw_average_value_size": 1446, "num_data_blocks": 1409, "num_entries": 12183, "num_filter_entries": 12183, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580572, "oldest_key_time": 0, "file_creation_time": 1771581202, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a5b0c71e-1a28-4ac7-8b68-08edb74002f2", "db_session_id": "54EDA52XUT1SDV7DF7Y7", "orig_file_number": 48, "seqno_to_time_mapping": "N/A"}} Feb 20 04:53:22 localhost ceph-mon[292786]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Feb 20 04:53:22 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:53:22.707482) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 17901784 bytes Feb 20 04:53:22 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:53:22.709372) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 244.9 rd, 243.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 16.8 +0.0 blob) out(17.1 +0.0 blob), read-write-amplify(80.9) write-amplify(40.3) OK, records in: 12712, records dropped: 529 output_compression: NoCompression Feb 20 04:53:22 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:53:22.709423) EVENT_LOG_v1 {"time_micros": 1771581202709408, "job": 26, "event": "compaction_finished", "compaction_time_micros": 73672, "compaction_time_cpu_micros": 33029, "output_level": 6, "num_output_files": 1, "total_output_size": 17901784, "num_input_records": 12712, "num_output_records": 12183, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Feb 20 04:53:22 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file 
/var/lib/ceph/mon/ceph-np0005625202/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 20 04:53:22 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581202709708, "job": 26, "event": "table_file_deletion", "file_number": 47} Feb 20 04:53:22 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000045.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 20 04:53:22 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581202713109, "job": 26, "event": "table_file_deletion", "file_number": 45} Feb 20 04:53:22 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:53:22.633259) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 04:53:22 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:53:22.713283) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 04:53:22 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:53:22.713292) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 04:53:22 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:53:22.713295) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 04:53:22 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:53:22.713298) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 04:53:22 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:53:22.713301) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 04:53:22 localhost systemd[1]: tmp-crun.AnbaWa.mount: Deactivated successfully. 
Feb 20 04:53:22 localhost dnsmasq[314856]: exiting on receipt of SIGTERM Feb 20 04:53:22 localhost systemd[1]: libpod-5c6a071edfb3895eaa1412e23eeac12c26d457fa359b68ab15b578af65c54881.scope: Deactivated successfully. Feb 20 04:53:22 localhost podman[314910]: 2026-02-20 09:53:22.908494842 +0000 UTC m=+0.061554744 container kill 5c6a071edfb3895eaa1412e23eeac12c26d457fa359b68ab15b578af65c54881 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8df36da0-4e6a-4f54-903b-63c7bf0a0ba1, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Feb 20 04:53:22 localhost podman[314924]: 2026-02-20 09:53:22.973295333 +0000 UTC m=+0.050233720 container died 5c6a071edfb3895eaa1412e23eeac12c26d457fa359b68ab15b578af65c54881 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8df36da0-4e6a-4f54-903b-63c7bf0a0ba1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:53:23 localhost podman[314924]: 2026-02-20 09:53:23.0130052 +0000 UTC m=+0.089943517 container cleanup 5c6a071edfb3895eaa1412e23eeac12c26d457fa359b68ab15b578af65c54881 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8df36da0-4e6a-4f54-903b-63c7bf0a0ba1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Feb 20 04:53:23 localhost systemd[1]: libpod-conmon-5c6a071edfb3895eaa1412e23eeac12c26d457fa359b68ab15b578af65c54881.scope: Deactivated successfully. Feb 20 04:53:23 localhost podman[314926]: 2026-02-20 09:53:23.054653519 +0000 UTC m=+0.123601621 container remove 5c6a071edfb3895eaa1412e23eeac12c26d457fa359b68ab15b578af65c54881 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8df36da0-4e6a-4f54-903b-63c7bf0a0ba1, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:53:23 localhost nova_compute[280804]: 2026-02-20 09:53:23.116 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:23 localhost kernel: device tap4332969c-80 left promiscuous mode Feb 20 04:53:23 localhost ovn_controller[155916]: 2026-02-20T09:53:23Z|00137|binding|INFO|Releasing lport 4332969c-8082-494b-a0bc-6b9abf90f9f7 from this chassis (sb_readonly=0) Feb 20 04:53:23 localhost ovn_controller[155916]: 2026-02-20T09:53:23Z|00138|binding|INFO|Setting lport 4332969c-8082-494b-a0bc-6b9abf90f9f7 down in Southbound Feb 20 04:53:23 localhost ovn_metadata_agent[161761]: 2026-02-20 09:53:23.134 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], 
virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp77c820b4-9646-5dae-aabe-b13a62b01d06-8df36da0-4e6a-4f54-903b-63c7bf0a0ba1', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8df36da0-4e6a-4f54-903b-63c7bf0a0ba1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bae77758d77d4d43af7ac10744892742', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625202.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d293377-fbfe-4efb-ae76-b9d87f9318ec, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=4332969c-8082-494b-a0bc-6b9abf90f9f7) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:53:23 localhost ovn_metadata_agent[161761]: 2026-02-20 09:53:23.136 161766 INFO neutron.agent.ovn.metadata.agent [-] Port 4332969c-8082-494b-a0bc-6b9abf90f9f7 in datapath 8df36da0-4e6a-4f54-903b-63c7bf0a0ba1 unbound from our chassis#033[00m Feb 20 04:53:23 localhost ovn_metadata_agent[161761]: 2026-02-20 09:53:23.139 161766 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8df36da0-4e6a-4f54-903b-63c7bf0a0ba1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 20 04:53:23 localhost ovn_metadata_agent[161761]: 2026-02-20 09:53:23.140 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[d8c1a0db-b514-4ec9-8163-8e2fd5e0f744]: (4, False) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:53:23 localhost nova_compute[280804]: 2026-02-20 09:53:23.145 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:23 localhost sshd[314953]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:53:23 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:53:23.332 263745 INFO neutron.agent.dhcp.agent [None req-278e378d-82c2-4437-b5dc-78efb27cac24 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:53:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e. Feb 20 04:53:23 localhost ceph-mgr[286565]: [balancer INFO root] Optimize plan auto_2026-02-20_09:53:23 Feb 20 04:53:23 localhost ceph-mgr[286565]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Feb 20 04:53:23 localhost ceph-mgr[286565]: [balancer INFO root] do_upmap Feb 20 04:53:23 localhost ceph-mgr[286565]: [balancer INFO root] pools ['manila_data', 'vms', 'volumes', 'manila_metadata', 'images', 'backups', '.mgr'] Feb 20 04:53:23 localhost ceph-mgr[286565]: [balancer INFO root] prepared 0/10 changes Feb 20 04:53:23 localhost podman[314955]: 2026-02-20 09:53:23.445593364 +0000 UTC m=+0.077822732 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', 
'--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 20 04:53:23 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v174: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Feb 20 04:53:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 20 04:53:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:53:23 localhost podman[314955]: 2026-02-20 09:53:23.46182368 +0000 UTC m=+0.094053048 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 20 04:53:23 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully. Feb 20 04:53:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 20 04:53:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:53:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 04:53:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:53:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] _maybe_adjust Feb 20 04:53:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:53:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Feb 20 04:53:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:53:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003325274375348967 of space, bias 1.0, pg target 0.6650548750697934 quantized to 32 (current 32) Feb 20 04:53:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:53:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 20 04:53:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:53:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Feb 20 04:53:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:53:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 20 04:53:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 
Feb 20 04:53:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 20 04:53:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:53:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.001953125 quantized to 16 (current 16) Feb 20 04:53:23 localhost ceph-mgr[286565]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Feb 20 04:53:23 localhost ceph-mgr[286565]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Feb 20 04:53:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 20 04:53:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 20 04:53:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 20 04:53:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: images, start_after= Feb 20 04:53:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 20 04:53:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: images, start_after= Feb 20 04:53:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 20 04:53:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 20 04:53:23 localhost systemd[1]: tmp-crun.aILFMx.mount: Deactivated successfully. Feb 20 04:53:23 localhost systemd[1]: var-lib-containers-storage-overlay-8e7b5f367cefc20257c892f7719931bb8ff5e11ba635dc89605707c5c9254aa4-merged.mount: Deactivated successfully. 
Feb 20 04:53:23 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5c6a071edfb3895eaa1412e23eeac12c26d457fa359b68ab15b578af65c54881-userdata-shm.mount: Deactivated successfully. Feb 20 04:53:23 localhost systemd[1]: run-netns-qdhcp\x2d8df36da0\x2d4e6a\x2d4f54\x2d903b\x2d63c7bf0a0ba1.mount: Deactivated successfully. Feb 20 04:53:24 localhost nova_compute[280804]: 2026-02-20 09:53:24.093 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:24 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:53:24.204 263745 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:53:24 localhost nova_compute[280804]: 2026-02-20 09:53:24.487 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:25 localhost neutron_sriov_agent[256551]: 2026-02-20 09:53:25.081 2 INFO neutron.agent.securitygroups_rpc [None req-9d454723-199e-4c87-997c-435a75780787 945c7903a05c4b26bd0f26bad77773be 75007688d77c439d8ee3fe7c58acf581 - - default default] Security group member updated ['9ad83b0d-3fc1-4df1-be0c-b38363c67626']#033[00m Feb 20 04:53:25 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:53:25.249 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:53:24Z, description=, device_id=c60b906a-b861-42f1-98c4-c7541cbc3cf5, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=a72d772c-c3aa-45b7-af67-f5bd5fd16a4e, ip_allocation=immediate, mac_address=fa:16:3e:99:c7:8d, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], 
created_at=2026-02-20T08:22:50Z, description=, dns_domain=, id=84efa4de-646c-469c-b16c-6ab7c3e948cf, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=91bce661d685472eb3e7cacab17bf52a, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['38eee90c-2cdd-4118-ba55-5401f7e61e1e'], tags=[], tenant_id=91bce661d685472eb3e7cacab17bf52a, updated_at=2026-02-20T08:22:56Z, vlan_transparent=None, network_id=84efa4de-646c-469c-b16c-6ab7c3e948cf, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1249, status=DOWN, tags=[], tenant_id=, updated_at=2026-02-20T09:53:24Z on network 84efa4de-646c-469c-b16c-6ab7c3e948cf#033[00m Feb 20 04:53:25 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:53:25.372 263745 INFO neutron.agent.linux.ip_lib [None req-4bf28305-e103-4a7b-bddc-6ed199b3bcb2 - - - - - -] Device tapeb580d56-20 cannot be used as it has no MAC address#033[00m Feb 20 04:53:25 localhost nova_compute[280804]: 2026-02-20 09:53:25.399 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:25 localhost kernel: device tapeb580d56-20 entered promiscuous mode Feb 20 04:53:25 localhost NetworkManager[5967]: [1771581205.4098] manager: (tapeb580d56-20): new Generic device (/org/freedesktop/NetworkManager/Devices/31) Feb 20 04:53:25 localhost ovn_controller[155916]: 2026-02-20T09:53:25Z|00139|binding|INFO|Claiming lport eb580d56-20fa-4406-89e0-f0ca3158ca05 for this chassis. 
Feb 20 04:53:25 localhost nova_compute[280804]: 2026-02-20 09:53:25.411 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:25 localhost systemd-udevd[315011]: Network interface NamePolicy= disabled on kernel command line. Feb 20 04:53:25 localhost ovn_controller[155916]: 2026-02-20T09:53:25Z|00140|binding|INFO|eb580d56-20fa-4406-89e0-f0ca3158ca05: Claiming unknown Feb 20 04:53:25 localhost ovn_metadata_agent[161761]: 2026-02-20 09:53:25.426 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp77c820b4-9646-5dae-aabe-b13a62b01d06-62354c96-e21a-4a8a-a12a-fb25359f004b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62354c96-e21a-4a8a-a12a-fb25359f004b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '75007688d77c439d8ee3fe7c58acf581', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d1aa2939-7420-4dfe-a6f5-7140b9918ecb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=eb580d56-20fa-4406-89e0-f0ca3158ca05) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:53:25 localhost ovn_metadata_agent[161761]: 2026-02-20 09:53:25.429 161766 INFO 
neutron.agent.ovn.metadata.agent [-] Port eb580d56-20fa-4406-89e0-f0ca3158ca05 in datapath 62354c96-e21a-4a8a-a12a-fb25359f004b bound to our chassis#033[00m Feb 20 04:53:25 localhost ovn_metadata_agent[161761]: 2026-02-20 09:53:25.430 161766 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 62354c96-e21a-4a8a-a12a-fb25359f004b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 20 04:53:25 localhost ovn_metadata_agent[161761]: 2026-02-20 09:53:25.431 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[975e2501-7684-44d7-af5f-8c713bf673be]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:53:25 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v175: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Feb 20 04:53:25 localhost nova_compute[280804]: 2026-02-20 09:53:25.463 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:25 localhost podman[315003]: 2026-02-20 09:53:25.465459854 +0000 UTC m=+0.061838173 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:53:25 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 2 addresses Feb 20 04:53:25 localhost 
dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:53:25 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:53:25 localhost ovn_controller[155916]: 2026-02-20T09:53:25Z|00141|binding|INFO|Setting lport eb580d56-20fa-4406-89e0-f0ca3158ca05 ovn-installed in OVS Feb 20 04:53:25 localhost ovn_controller[155916]: 2026-02-20T09:53:25Z|00142|binding|INFO|Setting lport eb580d56-20fa-4406-89e0-f0ca3158ca05 up in Southbound Feb 20 04:53:25 localhost nova_compute[280804]: 2026-02-20 09:53:25.471 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:25 localhost nova_compute[280804]: 2026-02-20 09:53:25.512 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:25 localhost neutron_sriov_agent[256551]: 2026-02-20 09:53:25.550 2 INFO neutron.agent.securitygroups_rpc [None req-1234b8d9-654d-451e-95cb-316b1fc4ede0 945c7903a05c4b26bd0f26bad77773be 75007688d77c439d8ee3fe7c58acf581 - - default default] Security group member updated ['9ad83b0d-3fc1-4df1-be0c-b38363c67626']#033[00m Feb 20 04:53:25 localhost nova_compute[280804]: 2026-02-20 09:53:25.571 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:25 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:53:25.753 263745 INFO neutron.agent.dhcp.agent [None req-76ce8f7c-e27f-4b3a-94cd-b5d3a92f6bd9 - - - - - -] DHCP configuration for ports {'a72d772c-c3aa-45b7-af67-f5bd5fd16a4e'} is completed#033[00m Feb 20 04:53:26 localhost neutron_sriov_agent[256551]: 2026-02-20 09:53:26.321 2 INFO neutron.agent.securitygroups_rpc [None req-2cdd2daf-d30d-4deb-a790-e995ba310f91 945c7903a05c4b26bd0f26bad77773be 75007688d77c439d8ee3fe7c58acf581 - - 
default default] Security group member updated ['9ad83b0d-3fc1-4df1-be0c-b38363c67626']#033[00m Feb 20 04:53:26 localhost podman[315082]: Feb 20 04:53:26 localhost podman[315082]: 2026-02-20 09:53:26.374628021 +0000 UTC m=+0.081744217 container create c2a0ed99f86d243f75830e805528c52f50c28e8cc9d426371b824f1832b7c8de (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-62354c96-e21a-4a8a-a12a-fb25359f004b, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Feb 20 04:53:26 localhost systemd[1]: Started libpod-conmon-c2a0ed99f86d243f75830e805528c52f50c28e8cc9d426371b824f1832b7c8de.scope. Feb 20 04:53:26 localhost podman[315082]: 2026-02-20 09:53:26.325691527 +0000 UTC m=+0.032807783 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 20 04:53:26 localhost systemd[1]: Started libcrun container. 
Feb 20 04:53:26 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57494ec65f4b2a64ec6198da64ee415dbf63377598c5a35821e48d42fc27b025/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 20 04:53:26 localhost podman[315082]: 2026-02-20 09:53:26.455579166 +0000 UTC m=+0.162695362 container init c2a0ed99f86d243f75830e805528c52f50c28e8cc9d426371b824f1832b7c8de (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-62354c96-e21a-4a8a-a12a-fb25359f004b, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:53:26 localhost podman[315082]: 2026-02-20 09:53:26.465150614 +0000 UTC m=+0.172266820 container start c2a0ed99f86d243f75830e805528c52f50c28e8cc9d426371b824f1832b7c8de (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-62354c96-e21a-4a8a-a12a-fb25359f004b, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Feb 20 04:53:26 localhost dnsmasq[315099]: started, version 2.85 cachesize 150 Feb 20 04:53:26 localhost dnsmasq[315099]: DNS service limited to local subnets Feb 20 04:53:26 localhost dnsmasq[315099]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 20 04:53:26 localhost dnsmasq[315099]: warning: no upstream servers 
configured Feb 20 04:53:26 localhost dnsmasq-dhcp[315099]: DHCPv6, static leases only on 2001:db8::, lease time 1d Feb 20 04:53:26 localhost dnsmasq[315099]: read /var/lib/neutron/dhcp/62354c96-e21a-4a8a-a12a-fb25359f004b/addn_hosts - 0 addresses Feb 20 04:53:26 localhost dnsmasq-dhcp[315099]: read /var/lib/neutron/dhcp/62354c96-e21a-4a8a-a12a-fb25359f004b/host Feb 20 04:53:26 localhost dnsmasq-dhcp[315099]: read /var/lib/neutron/dhcp/62354c96-e21a-4a8a-a12a-fb25359f004b/opts Feb 20 04:53:26 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:53:26.533 263745 INFO neutron.agent.dhcp.agent [None req-4bf28305-e103-4a7b-bddc-6ed199b3bcb2 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:53:24Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=a591c104-dc4c-4de0-a9bb-a12f6df03f7b, ip_allocation=immediate, mac_address=fa:16:3e:18:7f:72, name=tempest-AllowedAddressPairIpV6TestJSON-1352046569, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:53:23Z, description=, dns_domain=, id=62354c96-e21a-4a8a-a12a-fb25359f004b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairIpV6TestJSON-test-network-694994379, port_security_enabled=True, project_id=75007688d77c439d8ee3fe7c58acf581, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=1776, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1229, status=ACTIVE, subnets=['436eb28f-1d15-4653-93f5-1d53b4df7231'], tags=[], tenant_id=75007688d77c439d8ee3fe7c58acf581, updated_at=2026-02-20T09:53:24Z, vlan_transparent=None, network_id=62354c96-e21a-4a8a-a12a-fb25359f004b, port_security_enabled=True, 
project_id=75007688d77c439d8ee3fe7c58acf581, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['9ad83b0d-3fc1-4df1-be0c-b38363c67626'], standard_attr_id=1250, status=DOWN, tags=[], tenant_id=75007688d77c439d8ee3fe7c58acf581, updated_at=2026-02-20T09:53:24Z on network 62354c96-e21a-4a8a-a12a-fb25359f004b#033[00m Feb 20 04:53:26 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:53:26.721 263745 INFO neutron.agent.dhcp.agent [None req-010918a9-3c89-4a6d-b8d2-3e4526726aac - - - - - -] DHCP configuration for ports {'e6a28755-9e91-4627-9eda-65d04849ab12'} is completed#033[00m Feb 20 04:53:26 localhost dnsmasq[315099]: read /var/lib/neutron/dhcp/62354c96-e21a-4a8a-a12a-fb25359f004b/addn_hosts - 1 addresses Feb 20 04:53:26 localhost dnsmasq-dhcp[315099]: read /var/lib/neutron/dhcp/62354c96-e21a-4a8a-a12a-fb25359f004b/host Feb 20 04:53:26 localhost dnsmasq-dhcp[315099]: read /var/lib/neutron/dhcp/62354c96-e21a-4a8a-a12a-fb25359f004b/opts Feb 20 04:53:26 localhost podman[315119]: 2026-02-20 09:53:26.738097467 +0000 UTC m=+0.064185636 container kill c2a0ed99f86d243f75830e805528c52f50c28e8cc9d426371b824f1832b7c8de (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-62354c96-e21a-4a8a-a12a-fb25359f004b, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Feb 20 04:53:26 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:53:26.909 263745 INFO neutron.agent.dhcp.agent [None req-4bf28305-e103-4a7b-bddc-6ed199b3bcb2 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, 
binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:53:25Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=cce79ff5-b712-4968-ae27-8d376448491f, ip_allocation=immediate, mac_address=fa:16:3e:8d:cd:67, name=tempest-AllowedAddressPairIpV6TestJSON-636051552, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:53:23Z, description=, dns_domain=, id=62354c96-e21a-4a8a-a12a-fb25359f004b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairIpV6TestJSON-test-network-694994379, port_security_enabled=True, project_id=75007688d77c439d8ee3fe7c58acf581, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=1776, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1229, status=ACTIVE, subnets=['436eb28f-1d15-4653-93f5-1d53b4df7231'], tags=[], tenant_id=75007688d77c439d8ee3fe7c58acf581, updated_at=2026-02-20T09:53:24Z, vlan_transparent=None, network_id=62354c96-e21a-4a8a-a12a-fb25359f004b, port_security_enabled=True, project_id=75007688d77c439d8ee3fe7c58acf581, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['9ad83b0d-3fc1-4df1-be0c-b38363c67626'], standard_attr_id=1251, status=DOWN, tags=[], tenant_id=75007688d77c439d8ee3fe7c58acf581, updated_at=2026-02-20T09:53:25Z on network 62354c96-e21a-4a8a-a12a-fb25359f004b#033[00m Feb 20 04:53:26 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:53:26.973 263745 INFO neutron.agent.dhcp.agent [None req-c88ff04d-32cd-44c8-a6cf-ec4890dca8b7 - - - - - -] DHCP configuration for ports {'a591c104-dc4c-4de0-a9bb-a12f6df03f7b'} is completed#033[00m Feb 20 04:53:27 localhost podman[315156]: 2026-02-20 09:53:27.116253487 +0000 UTC m=+0.058956195 container kill 
c2a0ed99f86d243f75830e805528c52f50c28e8cc9d426371b824f1832b7c8de (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-62354c96-e21a-4a8a-a12a-fb25359f004b, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:53:27 localhost dnsmasq[315099]: read /var/lib/neutron/dhcp/62354c96-e21a-4a8a-a12a-fb25359f004b/addn_hosts - 2 addresses Feb 20 04:53:27 localhost dnsmasq-dhcp[315099]: read /var/lib/neutron/dhcp/62354c96-e21a-4a8a-a12a-fb25359f004b/host Feb 20 04:53:27 localhost dnsmasq-dhcp[315099]: read /var/lib/neutron/dhcp/62354c96-e21a-4a8a-a12a-fb25359f004b/opts Feb 20 04:53:27 localhost nova_compute[280804]: 2026-02-20 09:53:27.255 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:27 localhost nova_compute[280804]: 2026-02-20 09:53:27.282 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:27 localhost neutron_sriov_agent[256551]: 2026-02-20 09:53:27.310 2 INFO neutron.agent.securitygroups_rpc [None req-170d638f-d647-4f80-a7a3-f133bd9dbf7c 945c7903a05c4b26bd0f26bad77773be 75007688d77c439d8ee3fe7c58acf581 - - default default] Security group member updated ['9ad83b0d-3fc1-4df1-be0c-b38363c67626']#033[00m Feb 20 04:53:27 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:53:27.401 263745 INFO neutron.agent.dhcp.agent [None req-78987aa8-4857-4b8b-9877-26617004ae48 - - - - - -] DHCP configuration for ports {'cce79ff5-b712-4968-ae27-8d376448491f'} is completed#033[00m Feb 20 04:53:27 localhost ceph-mgr[286565]: log_channel(cluster) 
log [DBG] : pgmap v176: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Feb 20 04:53:27 localhost podman[315194]: 2026-02-20 09:53:27.508126006 +0000 UTC m=+0.068711927 container kill c2a0ed99f86d243f75830e805528c52f50c28e8cc9d426371b824f1832b7c8de (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-62354c96-e21a-4a8a-a12a-fb25359f004b, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127) Feb 20 04:53:27 localhost dnsmasq[315099]: read /var/lib/neutron/dhcp/62354c96-e21a-4a8a-a12a-fb25359f004b/addn_hosts - 1 addresses Feb 20 04:53:27 localhost dnsmasq-dhcp[315099]: read /var/lib/neutron/dhcp/62354c96-e21a-4a8a-a12a-fb25359f004b/host Feb 20 04:53:27 localhost dnsmasq-dhcp[315099]: read /var/lib/neutron/dhcp/62354c96-e21a-4a8a-a12a-fb25359f004b/opts Feb 20 04:53:27 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:53:27 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:53:27.730 263745 INFO neutron.agent.dhcp.agent [None req-4bf28305-e103-4a7b-bddc-6ed199b3bcb2 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:53:26Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=07dc6f94-d863-419a-80ae-b6b10124d990, ip_allocation=immediate, mac_address=fa:16:3e:0a:3a:19, name=tempest-AllowedAddressPairIpV6TestJSON-204471968, 
network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:53:23Z, description=, dns_domain=, id=62354c96-e21a-4a8a-a12a-fb25359f004b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairIpV6TestJSON-test-network-694994379, port_security_enabled=True, project_id=75007688d77c439d8ee3fe7c58acf581, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=1776, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1229, status=ACTIVE, subnets=['436eb28f-1d15-4653-93f5-1d53b4df7231'], tags=[], tenant_id=75007688d77c439d8ee3fe7c58acf581, updated_at=2026-02-20T09:53:24Z, vlan_transparent=None, network_id=62354c96-e21a-4a8a-a12a-fb25359f004b, port_security_enabled=True, project_id=75007688d77c439d8ee3fe7c58acf581, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['9ad83b0d-3fc1-4df1-be0c-b38363c67626'], standard_attr_id=1257, status=DOWN, tags=[], tenant_id=75007688d77c439d8ee3fe7c58acf581, updated_at=2026-02-20T09:53:26Z on network 62354c96-e21a-4a8a-a12a-fb25359f004b#033[00m Feb 20 04:53:27 localhost dnsmasq[315099]: read /var/lib/neutron/dhcp/62354c96-e21a-4a8a-a12a-fb25359f004b/addn_hosts - 2 addresses Feb 20 04:53:27 localhost dnsmasq-dhcp[315099]: read /var/lib/neutron/dhcp/62354c96-e21a-4a8a-a12a-fb25359f004b/host Feb 20 04:53:27 localhost dnsmasq-dhcp[315099]: read /var/lib/neutron/dhcp/62354c96-e21a-4a8a-a12a-fb25359f004b/opts Feb 20 04:53:27 localhost podman[315233]: 2026-02-20 09:53:27.937557724 +0000 UTC m=+0.060403563 container kill c2a0ed99f86d243f75830e805528c52f50c28e8cc9d426371b824f1832b7c8de (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-62354c96-e21a-4a8a-a12a-fb25359f004b, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS 
Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Feb 20 04:53:28 localhost openstack_network_exporter[243776]: ERROR 09:53:28 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 20 04:53:28 localhost openstack_network_exporter[243776]: Feb 20 04:53:28 localhost openstack_network_exporter[243776]: ERROR 09:53:28 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 20 04:53:28 localhost openstack_network_exporter[243776]: Feb 20 04:53:28 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:53:28.242 263745 INFO neutron.agent.dhcp.agent [None req-651b92fb-ae85-416e-8dc8-8c168af3c0fe - - - - - -] DHCP configuration for ports {'07dc6f94-d863-419a-80ae-b6b10124d990'} is completed#033[00m Feb 20 04:53:28 localhost neutron_sriov_agent[256551]: 2026-02-20 09:53:28.916 2 INFO neutron.agent.securitygroups_rpc [None req-c8a66b7a-94e3-4469-9bdc-a709861d759e 945c7903a05c4b26bd0f26bad77773be 75007688d77c439d8ee3fe7c58acf581 - - default default] Security group member updated ['9ad83b0d-3fc1-4df1-be0c-b38363c67626']#033[00m Feb 20 04:53:29 localhost nova_compute[280804]: 2026-02-20 09:53:29.139 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:29 localhost dnsmasq[315099]: read /var/lib/neutron/dhcp/62354c96-e21a-4a8a-a12a-fb25359f004b/addn_hosts - 1 addresses Feb 20 04:53:29 localhost dnsmasq-dhcp[315099]: read /var/lib/neutron/dhcp/62354c96-e21a-4a8a-a12a-fb25359f004b/host Feb 20 04:53:29 localhost podman[315270]: 2026-02-20 09:53:29.1496185 +0000 UTC m=+0.066817307 container kill c2a0ed99f86d243f75830e805528c52f50c28e8cc9d426371b824f1832b7c8de 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-62354c96-e21a-4a8a-a12a-fb25359f004b, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3) Feb 20 04:53:29 localhost dnsmasq-dhcp[315099]: read /var/lib/neutron/dhcp/62354c96-e21a-4a8a-a12a-fb25359f004b/opts Feb 20 04:53:29 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v177: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Feb 20 04:53:29 localhost neutron_sriov_agent[256551]: 2026-02-20 09:53:29.560 2 INFO neutron.agent.securitygroups_rpc [None req-47b66a4e-d860-4539-a526-725ae67efd11 945c7903a05c4b26bd0f26bad77773be 75007688d77c439d8ee3fe7c58acf581 - - default default] Security group member updated ['9ad83b0d-3fc1-4df1-be0c-b38363c67626']#033[00m Feb 20 04:53:29 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:53:29.613 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:53:29Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=e5d46d0a-cd4c-4db3-84cf-e53a7a2ecce9, ip_allocation=immediate, mac_address=fa:16:3e:99:00:2a, name=tempest-AllowedAddressPairIpV6TestJSON-2105487789, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:53:23Z, description=, dns_domain=, id=62354c96-e21a-4a8a-a12a-fb25359f004b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, 
name=tempest-AllowedAddressPairIpV6TestJSON-test-network-694994379, port_security_enabled=True, project_id=75007688d77c439d8ee3fe7c58acf581, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=1776, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1229, status=ACTIVE, subnets=['436eb28f-1d15-4653-93f5-1d53b4df7231'], tags=[], tenant_id=75007688d77c439d8ee3fe7c58acf581, updated_at=2026-02-20T09:53:24Z, vlan_transparent=None, network_id=62354c96-e21a-4a8a-a12a-fb25359f004b, port_security_enabled=True, project_id=75007688d77c439d8ee3fe7c58acf581, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['9ad83b0d-3fc1-4df1-be0c-b38363c67626'], standard_attr_id=1264, status=DOWN, tags=[], tenant_id=75007688d77c439d8ee3fe7c58acf581, updated_at=2026-02-20T09:53:29Z on network 62354c96-e21a-4a8a-a12a-fb25359f004b#033[00m Feb 20 04:53:29 localhost dnsmasq[315099]: read /var/lib/neutron/dhcp/62354c96-e21a-4a8a-a12a-fb25359f004b/addn_hosts - 2 addresses Feb 20 04:53:29 localhost podman[315308]: 2026-02-20 09:53:29.783515581 +0000 UTC m=+0.056428707 container kill c2a0ed99f86d243f75830e805528c52f50c28e8cc9d426371b824f1832b7c8de (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-62354c96-e21a-4a8a-a12a-fb25359f004b, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Feb 20 04:53:29 localhost dnsmasq-dhcp[315099]: read /var/lib/neutron/dhcp/62354c96-e21a-4a8a-a12a-fb25359f004b/host Feb 20 04:53:29 localhost dnsmasq-dhcp[315099]: read /var/lib/neutron/dhcp/62354c96-e21a-4a8a-a12a-fb25359f004b/opts Feb 20 04:53:29 
localhost nova_compute[280804]: 2026-02-20 09:53:29.995 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:30 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:53:30.011 263745 INFO neutron.agent.dhcp.agent [None req-2c162698-9406-4a6f-a7f2-eb961f62339f - - - - - -] DHCP configuration for ports {'e5d46d0a-cd4c-4db3-84cf-e53a7a2ecce9'} is completed#033[00m Feb 20 04:53:30 localhost neutron_sriov_agent[256551]: 2026-02-20 09:53:30.464 2 INFO neutron.agent.securitygroups_rpc [None req-11ba55d2-9392-4bf8-866e-0f6b7421a111 945c7903a05c4b26bd0f26bad77773be 75007688d77c439d8ee3fe7c58acf581 - - default default] Security group member updated ['9ad83b0d-3fc1-4df1-be0c-b38363c67626']#033[00m Feb 20 04:53:30 localhost dnsmasq[315099]: read /var/lib/neutron/dhcp/62354c96-e21a-4a8a-a12a-fb25359f004b/addn_hosts - 1 addresses Feb 20 04:53:30 localhost dnsmasq-dhcp[315099]: read /var/lib/neutron/dhcp/62354c96-e21a-4a8a-a12a-fb25359f004b/host Feb 20 04:53:30 localhost podman[315347]: 2026-02-20 09:53:30.673249067 +0000 UTC m=+0.065489901 container kill c2a0ed99f86d243f75830e805528c52f50c28e8cc9d426371b824f1832b7c8de (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-62354c96-e21a-4a8a-a12a-fb25359f004b, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:53:30 localhost dnsmasq-dhcp[315099]: read /var/lib/neutron/dhcp/62354c96-e21a-4a8a-a12a-fb25359f004b/opts Feb 20 04:53:30 localhost neutron_sriov_agent[256551]: 2026-02-20 09:53:30.874 2 INFO neutron.agent.securitygroups_rpc [None req-813a578a-3c42-4faa-a8fa-18fef9f75e4f 
945c7903a05c4b26bd0f26bad77773be 75007688d77c439d8ee3fe7c58acf581 - - default default] Security group member updated ['9ad83b0d-3fc1-4df1-be0c-b38363c67626']#033[00m Feb 20 04:53:30 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:53:30.915 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:53:30Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=7dfff904-90fe-401f-81fe-458656291622, ip_allocation=immediate, mac_address=fa:16:3e:18:74:39, name=tempest-AllowedAddressPairIpV6TestJSON-1880848562, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:53:23Z, description=, dns_domain=, id=62354c96-e21a-4a8a-a12a-fb25359f004b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairIpV6TestJSON-test-network-694994379, port_security_enabled=True, project_id=75007688d77c439d8ee3fe7c58acf581, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=1776, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1229, status=ACTIVE, subnets=['436eb28f-1d15-4653-93f5-1d53b4df7231'], tags=[], tenant_id=75007688d77c439d8ee3fe7c58acf581, updated_at=2026-02-20T09:53:24Z, vlan_transparent=None, network_id=62354c96-e21a-4a8a-a12a-fb25359f004b, port_security_enabled=True, project_id=75007688d77c439d8ee3fe7c58acf581, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['9ad83b0d-3fc1-4df1-be0c-b38363c67626'], standard_attr_id=1275, status=DOWN, tags=[], tenant_id=75007688d77c439d8ee3fe7c58acf581, updated_at=2026-02-20T09:53:30Z on network 62354c96-e21a-4a8a-a12a-fb25359f004b#033[00m Feb 
20 04:53:31 localhost dnsmasq[315099]: read /var/lib/neutron/dhcp/62354c96-e21a-4a8a-a12a-fb25359f004b/addn_hosts - 2 addresses Feb 20 04:53:31 localhost dnsmasq-dhcp[315099]: read /var/lib/neutron/dhcp/62354c96-e21a-4a8a-a12a-fb25359f004b/host Feb 20 04:53:31 localhost dnsmasq-dhcp[315099]: read /var/lib/neutron/dhcp/62354c96-e21a-4a8a-a12a-fb25359f004b/opts Feb 20 04:53:31 localhost podman[315386]: 2026-02-20 09:53:31.131242462 +0000 UTC m=+0.071039530 container kill c2a0ed99f86d243f75830e805528c52f50c28e8cc9d426371b824f1832b7c8de (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-62354c96-e21a-4a8a-a12a-fb25359f004b, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:53:31 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v178: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Feb 20 04:53:31 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:53:31.467 263745 INFO neutron.agent.dhcp.agent [None req-6d08b919-8127-46f8-935b-26ff21cf1cd9 - - - - - -] DHCP configuration for ports {'7dfff904-90fe-401f-81fe-458656291622'} is completed#033[00m Feb 20 04:53:31 localhost neutron_sriov_agent[256551]: 2026-02-20 09:53:31.475 2 INFO neutron.agent.securitygroups_rpc [None req-cf8e4aa2-945c-4732-b1d9-06cd52c9d8b9 945c7903a05c4b26bd0f26bad77773be 75007688d77c439d8ee3fe7c58acf581 - - default default] Security group member updated ['9ad83b0d-3fc1-4df1-be0c-b38363c67626']#033[00m Feb 20 04:53:31 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:53:31.548 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], 
binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:53:31Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=4a0878a4-9cca-45bf-afa6-2fe5e60318ce, ip_allocation=immediate, mac_address=fa:16:3e:a1:1f:69, name=tempest-AllowedAddressPairIpV6TestJSON-1489915769, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:53:23Z, description=, dns_domain=, id=62354c96-e21a-4a8a-a12a-fb25359f004b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairIpV6TestJSON-test-network-694994379, port_security_enabled=True, project_id=75007688d77c439d8ee3fe7c58acf581, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=1776, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1229, status=ACTIVE, subnets=['436eb28f-1d15-4653-93f5-1d53b4df7231'], tags=[], tenant_id=75007688d77c439d8ee3fe7c58acf581, updated_at=2026-02-20T09:53:24Z, vlan_transparent=None, network_id=62354c96-e21a-4a8a-a12a-fb25359f004b, port_security_enabled=True, project_id=75007688d77c439d8ee3fe7c58acf581, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['9ad83b0d-3fc1-4df1-be0c-b38363c67626'], standard_attr_id=1279, status=DOWN, tags=[], tenant_id=75007688d77c439d8ee3fe7c58acf581, updated_at=2026-02-20T09:53:31Z on network 62354c96-e21a-4a8a-a12a-fb25359f004b#033[00m Feb 20 04:53:31 localhost dnsmasq[315099]: read /var/lib/neutron/dhcp/62354c96-e21a-4a8a-a12a-fb25359f004b/addn_hosts - 3 addresses Feb 20 04:53:31 localhost dnsmasq-dhcp[315099]: read /var/lib/neutron/dhcp/62354c96-e21a-4a8a-a12a-fb25359f004b/host Feb 20 04:53:31 localhost podman[315425]: 2026-02-20 09:53:31.753294955 +0000 UTC m=+0.065127501 container kill 
c2a0ed99f86d243f75830e805528c52f50c28e8cc9d426371b824f1832b7c8de (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-62354c96-e21a-4a8a-a12a-fb25359f004b, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2) Feb 20 04:53:31 localhost dnsmasq-dhcp[315099]: read /var/lib/neutron/dhcp/62354c96-e21a-4a8a-a12a-fb25359f004b/opts Feb 20 04:53:32 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:53:32.048 263745 INFO neutron.agent.dhcp.agent [None req-c7f7322a-a421-4494-9a3e-fe887195a823 - - - - - -] DHCP configuration for ports {'4a0878a4-9cca-45bf-afa6-2fe5e60318ce'} is completed#033[00m Feb 20 04:53:32 localhost nova_compute[280804]: 2026-02-20 09:53:32.308 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:32 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:53:32 localhost neutron_sriov_agent[256551]: 2026-02-20 09:53:32.890 2 INFO neutron.agent.securitygroups_rpc [None req-0ba4bd77-300a-4aef-8bf9-70b27ff0d0d5 eedc91db7da847aab912b3b8401d5b18 8d5c2f81bbf4423c8ccdbeb44081c499 - - default default] Security group member updated ['943c86ba-7264-4974-89ae-938b95d72620']#033[00m Feb 20 04:53:33 localhost neutron_sriov_agent[256551]: 2026-02-20 09:53:33.264 2 INFO neutron.agent.securitygroups_rpc [None req-e49f1726-acc6-4366-8506-477a12f2a7e4 945c7903a05c4b26bd0f26bad77773be 75007688d77c439d8ee3fe7c58acf581 - - default default] Security group member updated ['9ad83b0d-3fc1-4df1-be0c-b38363c67626']#033[00m Feb 20 
04:53:33 localhost neutron_sriov_agent[256551]: 2026-02-20 09:53:33.265 2 INFO neutron.agent.securitygroups_rpc [None req-bbdf9d9d-afcb-4396-b80a-79eb3001d8e5 eedc91db7da847aab912b3b8401d5b18 8d5c2f81bbf4423c8ccdbeb44081c499 - - default default] Security group member updated ['943c86ba-7264-4974-89ae-938b95d72620']#033[00m Feb 20 04:53:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c. Feb 20 04:53:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217. Feb 20 04:53:33 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v179: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Feb 20 04:53:33 localhost systemd[1]: tmp-crun.srpean.mount: Deactivated successfully. Feb 20 04:53:33 localhost podman[315459]: 2026-02-20 09:53:33.472497417 +0000 UTC m=+0.104828097 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-minimal-container, version=9.7, maintainer=Red Hat, Inc., managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, container_name=openstack_network_exporter, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347) Feb 20 04:53:33 localhost dnsmasq[315099]: read /var/lib/neutron/dhcp/62354c96-e21a-4a8a-a12a-fb25359f004b/addn_hosts - 2 addresses Feb 20 
04:53:33 localhost dnsmasq-dhcp[315099]: read /var/lib/neutron/dhcp/62354c96-e21a-4a8a-a12a-fb25359f004b/host Feb 20 04:53:33 localhost podman[315486]: 2026-02-20 09:53:33.488117907 +0000 UTC m=+0.060157468 container kill c2a0ed99f86d243f75830e805528c52f50c28e8cc9d426371b824f1832b7c8de (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-62354c96-e21a-4a8a-a12a-fb25359f004b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Feb 20 04:53:33 localhost dnsmasq-dhcp[315099]: read /var/lib/neutron/dhcp/62354c96-e21a-4a8a-a12a-fb25359f004b/opts Feb 20 04:53:33 localhost podman[315460]: 2026-02-20 09:53:33.572853743 +0000 UTC m=+0.199917312 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 
'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:53:33 localhost podman[315459]: 2026-02-20 09:53:33.583836108 +0000 UTC m=+0.216166848 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, container_name=openstack_network_exporter, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.created=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, build-date=2026-02-05T04:57:10Z, release=1770267347, version=9.7, name=ubi9/ubi-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) 
Feb 20 04:53:33 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. Feb 20 04:53:33 localhost podman[315460]: 2026-02-20 09:53:33.638152967 +0000 UTC m=+0.265216566 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20260127, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:53:33 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully. Feb 20 04:53:33 localhost neutron_sriov_agent[256551]: 2026-02-20 09:53:33.969 2 INFO neutron.agent.securitygroups_rpc [None req-55d03934-9517-419a-915b-7eb31a90c9a3 945c7903a05c4b26bd0f26bad77773be 75007688d77c439d8ee3fe7c58acf581 - - default default] Security group member updated ['9ad83b0d-3fc1-4df1-be0c-b38363c67626']#033[00m Feb 20 04:53:34 localhost nova_compute[280804]: 2026-02-20 09:53:34.189 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:34 localhost dnsmasq[315099]: read /var/lib/neutron/dhcp/62354c96-e21a-4a8a-a12a-fb25359f004b/addn_hosts - 1 addresses Feb 20 04:53:34 localhost dnsmasq-dhcp[315099]: read /var/lib/neutron/dhcp/62354c96-e21a-4a8a-a12a-fb25359f004b/host Feb 20 04:53:34 localhost podman[315541]: 2026-02-20 09:53:34.258634059 +0000 UTC m=+0.054724122 container kill c2a0ed99f86d243f75830e805528c52f50c28e8cc9d426371b824f1832b7c8de (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-62354c96-e21a-4a8a-a12a-fb25359f004b, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:53:34 localhost dnsmasq-dhcp[315099]: read /var/lib/neutron/dhcp/62354c96-e21a-4a8a-a12a-fb25359f004b/opts Feb 20 04:53:34 localhost neutron_sriov_agent[256551]: 2026-02-20 09:53:34.372 2 INFO neutron.agent.securitygroups_rpc [None 
req-b58a03ae-8801-426a-8262-b9afab11fa37 945c7903a05c4b26bd0f26bad77773be 75007688d77c439d8ee3fe7c58acf581 - - default default] Security group member updated ['9ad83b0d-3fc1-4df1-be0c-b38363c67626']#033[00m Feb 20 04:53:34 localhost dnsmasq[315099]: read /var/lib/neutron/dhcp/62354c96-e21a-4a8a-a12a-fb25359f004b/addn_hosts - 0 addresses Feb 20 04:53:34 localhost dnsmasq-dhcp[315099]: read /var/lib/neutron/dhcp/62354c96-e21a-4a8a-a12a-fb25359f004b/host Feb 20 04:53:34 localhost podman[315580]: 2026-02-20 09:53:34.613212786 +0000 UTC m=+0.061783221 container kill c2a0ed99f86d243f75830e805528c52f50c28e8cc9d426371b824f1832b7c8de (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-62354c96-e21a-4a8a-a12a-fb25359f004b, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:53:34 localhost dnsmasq-dhcp[315099]: read /var/lib/neutron/dhcp/62354c96-e21a-4a8a-a12a-fb25359f004b/opts Feb 20 04:53:34 localhost systemd[1]: tmp-crun.pUerwr.mount: Deactivated successfully. Feb 20 04:53:35 localhost dnsmasq[315099]: exiting on receipt of SIGTERM Feb 20 04:53:35 localhost systemd[1]: tmp-crun.tef54s.mount: Deactivated successfully. 
Feb 20 04:53:35 localhost podman[315617]: 2026-02-20 09:53:35.055137339 +0000 UTC m=+0.067991097 container kill c2a0ed99f86d243f75830e805528c52f50c28e8cc9d426371b824f1832b7c8de (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-62354c96-e21a-4a8a-a12a-fb25359f004b, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:53:35 localhost systemd[1]: libpod-c2a0ed99f86d243f75830e805528c52f50c28e8cc9d426371b824f1832b7c8de.scope: Deactivated successfully. Feb 20 04:53:35 localhost podman[315631]: 2026-02-20 09:53:35.109286964 +0000 UTC m=+0.038627428 container died c2a0ed99f86d243f75830e805528c52f50c28e8cc9d426371b824f1832b7c8de (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-62354c96-e21a-4a8a-a12a-fb25359f004b, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:53:35 localhost podman[315631]: 2026-02-20 09:53:35.135210481 +0000 UTC m=+0.064550925 container cleanup c2a0ed99f86d243f75830e805528c52f50c28e8cc9d426371b824f1832b7c8de (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-62354c96-e21a-4a8a-a12a-fb25359f004b, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:53:35 localhost systemd[1]: libpod-conmon-c2a0ed99f86d243f75830e805528c52f50c28e8cc9d426371b824f1832b7c8de.scope: Deactivated successfully. Feb 20 04:53:35 localhost podman[315633]: 2026-02-20 09:53:35.200871535 +0000 UTC m=+0.122311658 container remove c2a0ed99f86d243f75830e805528c52f50c28e8cc9d426371b824f1832b7c8de (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-62354c96-e21a-4a8a-a12a-fb25359f004b, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:53:35 localhost nova_compute[280804]: 2026-02-20 09:53:35.259 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:35 localhost ovn_controller[155916]: 2026-02-20T09:53:35Z|00143|binding|INFO|Releasing lport eb580d56-20fa-4406-89e0-f0ca3158ca05 from this chassis (sb_readonly=0) Feb 20 04:53:35 localhost ovn_controller[155916]: 2026-02-20T09:53:35Z|00144|binding|INFO|Setting lport eb580d56-20fa-4406-89e0-f0ca3158ca05 down in Southbound Feb 20 04:53:35 localhost kernel: device tapeb580d56-20 left promiscuous mode Feb 20 04:53:35 localhost ovn_metadata_agent[161761]: 2026-02-20 09:53:35.271 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, 
parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp77c820b4-9646-5dae-aabe-b13a62b01d06-62354c96-e21a-4a8a-a12a-fb25359f004b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62354c96-e21a-4a8a-a12a-fb25359f004b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '75007688d77c439d8ee3fe7c58acf581', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625202.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d1aa2939-7420-4dfe-a6f5-7140b9918ecb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=eb580d56-20fa-4406-89e0-f0ca3158ca05) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:53:35 localhost ovn_metadata_agent[161761]: 2026-02-20 09:53:35.273 161766 INFO neutron.agent.ovn.metadata.agent [-] Port eb580d56-20fa-4406-89e0-f0ca3158ca05 in datapath 62354c96-e21a-4a8a-a12a-fb25359f004b unbound from our chassis#033[00m Feb 20 04:53:35 localhost ovn_metadata_agent[161761]: 2026-02-20 09:53:35.275 161766 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 62354c96-e21a-4a8a-a12a-fb25359f004b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 20 04:53:35 localhost nova_compute[280804]: 2026-02-20 09:53:35.276 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:35 localhost ovn_metadata_agent[161761]: 2026-02-20 09:53:35.276 263903 
DEBUG oslo.privsep.daemon [-] privsep: reply[52454a32-6184-4296-9b8a-f4d45661f180]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:53:35 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:53:35.366 263745 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:53:35 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v180: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Feb 20 04:53:35 localhost systemd[1]: var-lib-containers-storage-overlay-57494ec65f4b2a64ec6198da64ee415dbf63377598c5a35821e48d42fc27b025-merged.mount: Deactivated successfully. Feb 20 04:53:35 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c2a0ed99f86d243f75830e805528c52f50c28e8cc9d426371b824f1832b7c8de-userdata-shm.mount: Deactivated successfully. Feb 20 04:53:35 localhost systemd[1]: run-netns-qdhcp\x2d62354c96\x2de21a\x2d4a8a\x2da12a\x2dfb25359f004b.mount: Deactivated successfully. 
Feb 20 04:53:36 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:53:36.375 263745 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:53:37 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:53:37 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 04:53:37 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 20 04:53:37 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 04:53:37 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 20 04:53:37 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:53:37 localhost ceph-mgr[286565]: [progress INFO root] update: starting ev dd9cdc4d-04b4-45a1-a165-93880365c2f0 (Updating node-proxy deployment (+3 -> 3)) Feb 20 04:53:37 localhost ceph-mgr[286565]: [progress INFO root] complete: finished ev dd9cdc4d-04b4-45a1-a165-93880365c2f0 (Updating node-proxy deployment (+3 -> 3)) Feb 20 04:53:37 localhost ceph-mgr[286565]: [progress INFO root] Completed event dd9cdc4d-04b4-45a1-a165-93880365c2f0 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Feb 20 04:53:37 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 20 04:53:37 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : 
from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 20 04:53:37 localhost nova_compute[280804]: 2026-02-20 09:53:37.191 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:37 localhost nova_compute[280804]: 2026-02-20 09:53:37.310 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:37 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v181: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Feb 20 04:53:37 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:53:37 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 04:53:37 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:53:38 localhost nova_compute[280804]: 2026-02-20 09:53:38.291 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:38 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 1 addresses Feb 20 04:53:38 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:53:38 localhost podman[315767]: 2026-02-20 09:53:38.313129246 +0000 UTC m=+0.065193453 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:53:38 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:53:38 localhost ceph-mgr[286565]: [progress INFO root] Writing back 50 completed events Feb 20 04:53:38 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 20 04:53:38 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:53:39 localhost sshd[315789]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:53:39 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:53:39 localhost nova_compute[280804]: 2026-02-20 09:53:39.229 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:39 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v182: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Feb 20 04:53:39 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:53:39.569 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:53:39Z, description=, device_id=3e1ef26e-7221-4b4f-b68b-3c6c5441a6ed, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, 
extra_dhcp_opts=[], fixed_ips=[], id=13f97906-104c-4174-80d8-81ef503e21e1, ip_allocation=immediate, mac_address=fa:16:3e:a4:ed:42, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T08:22:50Z, description=, dns_domain=, id=84efa4de-646c-469c-b16c-6ab7c3e948cf, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=91bce661d685472eb3e7cacab17bf52a, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['38eee90c-2cdd-4118-ba55-5401f7e61e1e'], tags=[], tenant_id=91bce661d685472eb3e7cacab17bf52a, updated_at=2026-02-20T08:22:56Z, vlan_transparent=None, network_id=84efa4de-646c-469c-b16c-6ab7c3e948cf, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1343, status=DOWN, tags=[], tenant_id=, updated_at=2026-02-20T09:53:39Z on network 84efa4de-646c-469c-b16c-6ab7c3e948cf#033[00m Feb 20 04:53:39 localhost podman[315808]: 2026-02-20 09:53:39.782920567 +0000 UTC m=+0.062260864 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:53:39 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 2 
addresses Feb 20 04:53:39 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:53:39 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:53:40 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:53:40.033 263745 INFO neutron.agent.dhcp.agent [None req-5813996c-ad42-4c88-a921-8448dbed779d - - - - - -] DHCP configuration for ports {'13f97906-104c-4174-80d8-81ef503e21e1'} is completed#033[00m Feb 20 04:53:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 04:53:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. Feb 20 04:53:41 localhost systemd[1]: tmp-crun.BRBWit.mount: Deactivated successfully. Feb 20 04:53:41 localhost podman[315828]: 2026-02-20 09:53:41.446549156 +0000 UTC m=+0.087664127 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:53:41 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v183: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Feb 20 04:53:41 localhost podman[315828]: 2026-02-20 09:53:41.531860908 +0000 UTC m=+0.172975869 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, 
org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller) Feb 20 04:53:41 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. Feb 20 04:53:41 localhost podman[315876]: 2026-02-20 09:53:41.569826118 +0000 UTC m=+0.048302410 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:53:41 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 1 addresses Feb 20 04:53:41 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:53:41 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:53:41 localhost podman[315829]: 2026-02-20 09:53:41.605918067 +0000 UTC m=+0.240207724 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Feb 20 04:53:41 localhost podman[315829]: 2026-02-20 09:53:41.639824339 +0000 UTC m=+0.274113956 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent) Feb 20 04:53:41 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. 
Feb 20 04:53:42 localhost nova_compute[280804]: 2026-02-20 09:53:42.313 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:42 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:53:42.399 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:53:42Z, description=, device_id=f10bf8a8-99a0-4751-9224-719be0dde405, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=7efc1872-a219-4bbf-9d9b-81d6f8c51366, ip_allocation=immediate, mac_address=fa:16:3e:e8:39:db, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T08:22:50Z, description=, dns_domain=, id=84efa4de-646c-469c-b16c-6ab7c3e948cf, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=91bce661d685472eb3e7cacab17bf52a, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['38eee90c-2cdd-4118-ba55-5401f7e61e1e'], tags=[], tenant_id=91bce661d685472eb3e7cacab17bf52a, updated_at=2026-02-20T08:22:56Z, vlan_transparent=None, network_id=84efa4de-646c-469c-b16c-6ab7c3e948cf, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1355, status=DOWN, tags=[], tenant_id=, updated_at=2026-02-20T09:53:42Z on network 84efa4de-646c-469c-b16c-6ab7c3e948cf#033[00m Feb 20 04:53:42 localhost systemd[1]: tmp-crun.q52Gui.mount: Deactivated successfully. 
Feb 20 04:53:42 localhost podman[315924]: 2026-02-20 09:53:42.598033674 +0000 UTC m=+0.058311428 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Feb 20 04:53:42 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 2 addresses Feb 20 04:53:42 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:53:42 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:53:42 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:53:42 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:53:42.792 263745 INFO neutron.agent.dhcp.agent [None req-abb43f25-a29b-44d8-9462-113208fa16cb - - - - - -] DHCP configuration for ports {'7efc1872-a219-4bbf-9d9b-81d6f8c51366'} is completed#033[00m Feb 20 04:53:43 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v184: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Feb 20 04:53:43 localhost nova_compute[280804]: 2026-02-20 09:53:43.503 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:43 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:53:43.662 263745 INFO neutron.agent.linux.ip_lib [None 
req-14921bba-270c-4178-8c0f-0603f8493de3 - - - - - -] Device tap704a8f4e-e5 cannot be used as it has no MAC address#033[00m Feb 20 04:53:43 localhost nova_compute[280804]: 2026-02-20 09:53:43.685 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:43 localhost kernel: device tap704a8f4e-e5 entered promiscuous mode Feb 20 04:53:43 localhost NetworkManager[5967]: [1771581223.6932] manager: (tap704a8f4e-e5): new Generic device (/org/freedesktop/NetworkManager/Devices/32) Feb 20 04:53:43 localhost ovn_controller[155916]: 2026-02-20T09:53:43Z|00145|binding|INFO|Claiming lport 704a8f4e-e5a8-4d6e-adcf-92cb3e12e561 for this chassis. Feb 20 04:53:43 localhost ovn_controller[155916]: 2026-02-20T09:53:43Z|00146|binding|INFO|704a8f4e-e5a8-4d6e-adcf-92cb3e12e561: Claiming unknown Feb 20 04:53:43 localhost nova_compute[280804]: 2026-02-20 09:53:43.694 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:43 localhost systemd-udevd[315955]: Network interface NamePolicy= disabled on kernel command line. 
Feb 20 04:53:43 localhost ovn_metadata_agent[161761]: 2026-02-20 09:53:43.706 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp77c820b4-9646-5dae-aabe-b13a62b01d06-80eb099a-71b4-4d12-b8bf-3ac8a35dcbab', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-80eb099a-71b4-4d12-b8bf-3ac8a35dcbab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7ef526d9e1e4325afc5a8eb7c2f52b5', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=650ba84b-ca25-4971-89b7-5e62694b560a, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=704a8f4e-e5a8-4d6e-adcf-92cb3e12e561) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:53:43 localhost ovn_metadata_agent[161761]: 2026-02-20 09:53:43.708 161766 INFO neutron.agent.ovn.metadata.agent [-] Port 704a8f4e-e5a8-4d6e-adcf-92cb3e12e561 in datapath 80eb099a-71b4-4d12-b8bf-3ac8a35dcbab bound to our chassis#033[00m Feb 20 04:53:43 localhost ovn_metadata_agent[161761]: 2026-02-20 09:53:43.709 161766 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 80eb099a-71b4-4d12-b8bf-3ac8a35dcbab or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 20 04:53:43 localhost ovn_metadata_agent[161761]: 2026-02-20 09:53:43.710 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[149299cd-de8b-41c6-b4fa-3babf46daf6c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:53:43 localhost journal[229367]: ethtool ioctl error on tap704a8f4e-e5: No such device Feb 20 04:53:43 localhost ovn_controller[155916]: 2026-02-20T09:53:43Z|00147|binding|INFO|Setting lport 704a8f4e-e5a8-4d6e-adcf-92cb3e12e561 ovn-installed in OVS Feb 20 04:53:43 localhost ovn_controller[155916]: 2026-02-20T09:53:43Z|00148|binding|INFO|Setting lport 704a8f4e-e5a8-4d6e-adcf-92cb3e12e561 up in Southbound Feb 20 04:53:43 localhost journal[229367]: ethtool ioctl error on tap704a8f4e-e5: No such device Feb 20 04:53:43 localhost nova_compute[280804]: 2026-02-20 09:53:43.727 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:43 localhost journal[229367]: ethtool ioctl error on tap704a8f4e-e5: No such device Feb 20 04:53:43 localhost journal[229367]: ethtool ioctl error on tap704a8f4e-e5: No such device Feb 20 04:53:43 localhost journal[229367]: ethtool ioctl error on tap704a8f4e-e5: No such device Feb 20 04:53:43 localhost journal[229367]: ethtool ioctl error on tap704a8f4e-e5: No such device Feb 20 04:53:43 localhost journal[229367]: ethtool ioctl error on tap704a8f4e-e5: No such device Feb 20 04:53:43 localhost journal[229367]: ethtool ioctl error on tap704a8f4e-e5: No such device Feb 20 04:53:43 localhost nova_compute[280804]: 2026-02-20 09:53:43.766 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:43 localhost nova_compute[280804]: 2026-02-20 09:53:43.795 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 
__log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:44 localhost nova_compute[280804]: 2026-02-20 09:53:44.232 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:44 localhost podman[316026]: Feb 20 04:53:44 localhost podman[316026]: 2026-02-20 09:53:44.582266997 +0000 UTC m=+0.089121826 container create f77cf99bb52fcc59851208e4ba45e6b077d81391c36409c07f231dc03237a964 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-80eb099a-71b4-4d12-b8bf-3ac8a35dcbab, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:53:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1. Feb 20 04:53:44 localhost systemd[1]: Started libpod-conmon-f77cf99bb52fcc59851208e4ba45e6b077d81391c36409c07f231dc03237a964.scope. 
Feb 20 04:53:44 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:53:44.620 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:53:44Z, description=, device_id=c4ebdb8c-76f0-4ea0-845c-a5a71b7e7df0, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=fb95c587-e952-4213-a1e8-fbb5e2ce7773, ip_allocation=immediate, mac_address=fa:16:3e:28:b7:28, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T08:22:50Z, description=, dns_domain=, id=84efa4de-646c-469c-b16c-6ab7c3e948cf, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=91bce661d685472eb3e7cacab17bf52a, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['38eee90c-2cdd-4118-ba55-5401f7e61e1e'], tags=[], tenant_id=91bce661d685472eb3e7cacab17bf52a, updated_at=2026-02-20T08:22:56Z, vlan_transparent=None, network_id=84efa4de-646c-469c-b16c-6ab7c3e948cf, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1376, status=DOWN, tags=[], tenant_id=, updated_at=2026-02-20T09:53:44Z on network 84efa4de-646c-469c-b16c-6ab7c3e948cf#033[00m Feb 20 04:53:44 localhost systemd[1]: Started libcrun container. 
Feb 20 04:53:44 localhost podman[316026]: 2026-02-20 09:53:44.537862824 +0000 UTC m=+0.044717713 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 20 04:53:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cfaab59943e6769fda93fbc1b2cb802e47cf20cc521e81d34e7ffc0588b718a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 20 04:53:44 localhost podman[316026]: 2026-02-20 09:53:44.651012243 +0000 UTC m=+0.157867072 container init f77cf99bb52fcc59851208e4ba45e6b077d81391c36409c07f231dc03237a964 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-80eb099a-71b4-4d12-b8bf-3ac8a35dcbab, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Feb 20 04:53:44 localhost podman[316026]: 2026-02-20 09:53:44.666169381 +0000 UTC m=+0.173024220 container start f77cf99bb52fcc59851208e4ba45e6b077d81391c36409c07f231dc03237a964 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-80eb099a-71b4-4d12-b8bf-3ac8a35dcbab, org.label-schema.build-date=20260127, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:53:44 localhost dnsmasq[316057]: started, version 2.85 cachesize 150 Feb 20 04:53:44 localhost dnsmasq[316057]: DNS service limited to local subnets Feb 20 04:53:44 localhost dnsmasq[316057]: compile time options: IPv6 GNU-getopt DBus no-UBus 
no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 20 04:53:44 localhost dnsmasq[316057]: warning: no upstream servers configured Feb 20 04:53:44 localhost dnsmasq-dhcp[316057]: DHCPv6, static leases only on 2001:db8::, lease time 1d Feb 20 04:53:44 localhost dnsmasq[316057]: read /var/lib/neutron/dhcp/80eb099a-71b4-4d12-b8bf-3ac8a35dcbab/addn_hosts - 0 addresses Feb 20 04:53:44 localhost dnsmasq-dhcp[316057]: read /var/lib/neutron/dhcp/80eb099a-71b4-4d12-b8bf-3ac8a35dcbab/host Feb 20 04:53:44 localhost dnsmasq-dhcp[316057]: read /var/lib/neutron/dhcp/80eb099a-71b4-4d12-b8bf-3ac8a35dcbab/opts Feb 20 04:53:44 localhost podman[316039]: 2026-02-20 09:53:44.755024169 +0000 UTC m=+0.134971338 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 20 04:53:44 localhost podman[316039]: 2026-02-20 09:53:44.765691685 +0000 UTC m=+0.145638844 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 
(image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 20 04:53:44 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully. 
Feb 20 04:53:44 localhost podman[316087]: 2026-02-20 09:53:44.838715507 +0000 UTC m=+0.058729469 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127) Feb 20 04:53:44 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 3 addresses Feb 20 04:53:44 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:53:44 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:53:44 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:53:44.848 263745 INFO neutron.agent.dhcp.agent [None req-7eb98e8d-513e-4ae1-a5d7-17b6ecb2ffda - - - - - -] DHCP configuration for ports {'acfa43eb-1023-412f-9bfd-8d3f7f55f8ca'} is completed#033[00m Feb 20 04:53:45 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:53:45.109 263745 INFO neutron.agent.dhcp.agent [None req-b4867150-cd99-40ef-8bd4-f9ddc723cbc0 - - - - - -] DHCP configuration for ports {'fb95c587-e952-4213-a1e8-fbb5e2ce7773'} is completed#033[00m Feb 20 04:53:45 localhost neutron_sriov_agent[256551]: 2026-02-20 09:53:45.185 2 INFO neutron.agent.securitygroups_rpc [None req-a4da6bb1-700c-4d71-a646-fe34335ad1c4 1b2d25a511a14396a23c89999045cc74 c91094550bca41b8ac81e1aef3ebc3a4 - - default default] Security group member updated ['075aba65-9a9e-4a7a-9bd7-67cbc5ab58ab']#033[00m Feb 20 04:53:45 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v185: 177 pgs: 177 
active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Feb 20 04:53:45 localhost systemd[1]: tmp-crun.G2cQhk.mount: Deactivated successfully. Feb 20 04:53:45 localhost nova_compute[280804]: 2026-02-20 09:53:45.755 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:46 localhost podman[241347]: time="2026-02-20T09:53:46Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 20 04:53:46 localhost podman[241347]: @ - - [20/Feb/2026:09:53:46 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 159533 "" "Go-http-client/1.1" Feb 20 04:53:46 localhost podman[241347]: @ - - [20/Feb/2026:09:53:46 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19247 "" "Go-http-client/1.1" Feb 20 04:53:46 localhost neutron_sriov_agent[256551]: 2026-02-20 09:53:46.468 2 INFO neutron.agent.securitygroups_rpc [None req-82d4853b-8792-42bf-a9bd-621206147606 1b2d25a511a14396a23c89999045cc74 c91094550bca41b8ac81e1aef3ebc3a4 - - default default] Security group member updated ['075aba65-9a9e-4a7a-9bd7-67cbc5ab58ab']#033[00m Feb 20 04:53:47 localhost nova_compute[280804]: 2026-02-20 09:53:47.339 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:47 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v186: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Feb 20 04:53:47 localhost neutron_sriov_agent[256551]: 2026-02-20 09:53:47.546 2 INFO neutron.agent.securitygroups_rpc [None req-2045a801-a884-4a06-b206-987ac9e8d82c 1b2d25a511a14396a23c89999045cc74 c91094550bca41b8ac81e1aef3ebc3a4 - - default default] Security group member updated ['075aba65-9a9e-4a7a-9bd7-67cbc5ab58ab']#033[00m Feb 20 04:53:47 localhost 
ceph-mon[292786]: mon.np0005625202@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:53:47 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 2 addresses Feb 20 04:53:47 localhost podman[316126]: 2026-02-20 09:53:47.751639651 +0000 UTC m=+0.051044422 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2) Feb 20 04:53:47 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:53:47 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:53:47 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:53:47.782 263745 INFO neutron.agent.linux.ip_lib [None req-fe2550a0-0807-4756-8dca-5fe6ce2e94f4 - - - - - -] Device tapef5e03c2-67 cannot be used as it has no MAC address#033[00m Feb 20 04:53:47 localhost nova_compute[280804]: 2026-02-20 09:53:47.812 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:47 localhost kernel: device tapef5e03c2-67 entered promiscuous mode Feb 20 04:53:47 localhost NetworkManager[5967]: [1771581227.8209] manager: (tapef5e03c2-67): new Generic device (/org/freedesktop/NetworkManager/Devices/33) Feb 20 04:53:47 localhost nova_compute[280804]: 2026-02-20 09:53:47.821 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog 
[-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:47 localhost ovn_controller[155916]: 2026-02-20T09:53:47Z|00149|binding|INFO|Claiming lport ef5e03c2-67e1-458e-8512-d3edab09dd6b for this chassis. Feb 20 04:53:47 localhost ovn_controller[155916]: 2026-02-20T09:53:47Z|00150|binding|INFO|ef5e03c2-67e1-458e-8512-d3edab09dd6b: Claiming unknown Feb 20 04:53:47 localhost systemd-udevd[316148]: Network interface NamePolicy= disabled on kernel command line. Feb 20 04:53:47 localhost ovn_metadata_agent[161761]: 2026-02-20 09:53:47.841 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::1/64', 'neutron:device_id': 'dhcp77c820b4-9646-5dae-aabe-b13a62b01d06-0dcc8031-e826-4e7f-94b5-a461dcf3017e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0dcc8031-e826-4e7f-94b5-a461dcf3017e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7ef526d9e1e4325afc5a8eb7c2f52b5', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3604cf69-2dd7-4dc4-bcc1-55349285fbc2, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=ef5e03c2-67e1-458e-8512-d3edab09dd6b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:53:47 localhost ovn_metadata_agent[161761]: 
2026-02-20 09:53:47.842 161766 INFO neutron.agent.ovn.metadata.agent [-] Port ef5e03c2-67e1-458e-8512-d3edab09dd6b in datapath 0dcc8031-e826-4e7f-94b5-a461dcf3017e bound to our chassis#033[00m Feb 20 04:53:47 localhost ovn_metadata_agent[161761]: 2026-02-20 09:53:47.843 161766 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 0dcc8031-e826-4e7f-94b5-a461dcf3017e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 20 04:53:47 localhost ovn_metadata_agent[161761]: 2026-02-20 09:53:47.844 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[e6b25fa6-0fa3-43fa-922c-c1bd8dd3cbb6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:53:47 localhost ovn_controller[155916]: 2026-02-20T09:53:47Z|00151|binding|INFO|Setting lport ef5e03c2-67e1-458e-8512-d3edab09dd6b ovn-installed in OVS Feb 20 04:53:47 localhost ovn_controller[155916]: 2026-02-20T09:53:47Z|00152|binding|INFO|Setting lport ef5e03c2-67e1-458e-8512-d3edab09dd6b up in Southbound Feb 20 04:53:47 localhost nova_compute[280804]: 2026-02-20 09:53:47.870 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:47 localhost nova_compute[280804]: 2026-02-20 09:53:47.920 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:47 localhost nova_compute[280804]: 2026-02-20 09:53:47.943 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:48 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e112 do_prune osdmap full prune enabled Feb 20 04:53:48 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e113 e113: 6 
total, 6 up, 6 in Feb 20 04:53:48 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e113: 6 total, 6 up, 6 in Feb 20 04:53:48 localhost neutron_sriov_agent[256551]: 2026-02-20 09:53:48.556 2 INFO neutron.agent.securitygroups_rpc [None req-34460107-5767-4790-bb51-43f170627a06 1b2d25a511a14396a23c89999045cc74 c91094550bca41b8ac81e1aef3ebc3a4 - - default default] Security group member updated ['075aba65-9a9e-4a7a-9bd7-67cbc5ab58ab']#033[00m Feb 20 04:53:48 localhost podman[316208]: Feb 20 04:53:48 localhost podman[316208]: 2026-02-20 09:53:48.755672598 +0000 UTC m=+0.066244761 container create 4727377b0d012f14b5c0b728dabdcc9762cac60b3a0fbeb1d180aedad4e3a031 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0dcc8031-e826-4e7f-94b5-a461dcf3017e, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2) Feb 20 04:53:48 localhost systemd[1]: Started libpod-conmon-4727377b0d012f14b5c0b728dabdcc9762cac60b3a0fbeb1d180aedad4e3a031.scope. Feb 20 04:53:48 localhost systemd[1]: Started libcrun container. 
Feb 20 04:53:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64cafc97e97665fa95075f2860b827c229dad7c7a254b46c59e2c32662414720/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 20 04:53:48 localhost podman[316208]: 2026-02-20 09:53:48.804977193 +0000 UTC m=+0.115549356 container init 4727377b0d012f14b5c0b728dabdcc9762cac60b3a0fbeb1d180aedad4e3a031 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0dcc8031-e826-4e7f-94b5-a461dcf3017e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Feb 20 04:53:48 localhost podman[316208]: 2026-02-20 09:53:48.814551511 +0000 UTC m=+0.125123674 container start 4727377b0d012f14b5c0b728dabdcc9762cac60b3a0fbeb1d180aedad4e3a031 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0dcc8031-e826-4e7f-94b5-a461dcf3017e, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:53:48 localhost podman[316208]: 2026-02-20 09:53:48.717341958 +0000 UTC m=+0.027914181 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 20 04:53:48 localhost dnsmasq[316226]: started, version 2.85 cachesize 150 Feb 20 04:53:48 localhost dnsmasq[316226]: DNS service limited to local subnets Feb 20 04:53:48 localhost dnsmasq[316226]: compile time options: IPv6 GNU-getopt DBus no-UBus 
no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 20 04:53:48 localhost dnsmasq[316226]: warning: no upstream servers configured Feb 20 04:53:48 localhost dnsmasq-dhcp[316226]: DHCPv6, static leases only on 2001:db8::, lease time 1d Feb 20 04:53:48 localhost dnsmasq[316226]: read /var/lib/neutron/dhcp/0dcc8031-e826-4e7f-94b5-a461dcf3017e/addn_hosts - 0 addresses Feb 20 04:53:48 localhost dnsmasq-dhcp[316226]: read /var/lib/neutron/dhcp/0dcc8031-e826-4e7f-94b5-a461dcf3017e/host Feb 20 04:53:48 localhost dnsmasq-dhcp[316226]: read /var/lib/neutron/dhcp/0dcc8031-e826-4e7f-94b5-a461dcf3017e/opts Feb 20 04:53:49 localhost nova_compute[280804]: 2026-02-20 09:53:49.035 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:49 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:53:49.035 263745 INFO neutron.agent.dhcp.agent [None req-1d66959a-4bc0-4817-a670-8a415ac9f95e - - - - - -] DHCP configuration for ports {'a9f3844f-205c-4c06-8ca4-2924ed35e2bb'} is completed#033[00m Feb 20 04:53:49 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 1 addresses Feb 20 04:53:49 localhost podman[316245]: 2026-02-20 09:53:49.051818335 +0000 UTC m=+0.097820780 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Feb 20 04:53:49 localhost dnsmasq-dhcp[264017]: read 
/var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:53:49 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:53:49 localhost nova_compute[280804]: 2026-02-20 09:53:49.233 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:49 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v188: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 1.5 KiB/s wr, 14 op/s Feb 20 04:53:49 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:53:49.814 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:53:49Z, description=, device_id=41ba4884-68d8-42b4-b3a7-74960251476b, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=7ec92be1-9dad-4cf1-b52d-e8bcf8f21c17, ip_allocation=immediate, mac_address=fa:16:3e:03:13:af, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T08:22:50Z, description=, dns_domain=, id=84efa4de-646c-469c-b16c-6ab7c3e948cf, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=91bce661d685472eb3e7cacab17bf52a, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['38eee90c-2cdd-4118-ba55-5401f7e61e1e'], tags=[], tenant_id=91bce661d685472eb3e7cacab17bf52a, updated_at=2026-02-20T08:22:56Z, vlan_transparent=None, network_id=84efa4de-646c-469c-b16c-6ab7c3e948cf, 
port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1421, status=DOWN, tags=[], tenant_id=, updated_at=2026-02-20T09:53:49Z on network 84efa4de-646c-469c-b16c-6ab7c3e948cf#033[00m Feb 20 04:53:50 localhost neutron_sriov_agent[256551]: 2026-02-20 09:53:50.002 2 INFO neutron.agent.securitygroups_rpc [None req-f7428e6a-a4fd-4f95-a528-55a12406007a 1b2d25a511a14396a23c89999045cc74 c91094550bca41b8ac81e1aef3ebc3a4 - - default default] Security group member updated ['075aba65-9a9e-4a7a-9bd7-67cbc5ab58ab']#033[00m Feb 20 04:53:50 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 2 addresses Feb 20 04:53:50 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:53:50 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:53:50 localhost podman[316281]: 2026-02-20 09:53:50.03868827 +0000 UTC m=+0.056830118 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Feb 20 04:53:50 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:53:50.369 263745 INFO neutron.agent.dhcp.agent [None req-c5aa5c19-43a1-43f0-9f96-2f3c98c4d891 - - - - - -] DHCP configuration for ports {'7ec92be1-9dad-4cf1-b52d-e8bcf8f21c17'} is completed#033[00m Feb 20 04:53:51 localhost neutron_sriov_agent[256551]: 2026-02-20 09:53:51.254 
2 INFO neutron.agent.securitygroups_rpc [None req-e18dac8b-7691-49db-a6f9-a9ef86b93bee 1b2d25a511a14396a23c89999045cc74 c91094550bca41b8ac81e1aef3ebc3a4 - - default default] Security group member updated ['075aba65-9a9e-4a7a-9bd7-67cbc5ab58ab']#033[00m Feb 20 04:53:51 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v189: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s Feb 20 04:53:52 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e113 do_prune osdmap full prune enabled Feb 20 04:53:52 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e114 e114: 6 total, 6 up, 6 in Feb 20 04:53:52 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e114: 6 total, 6 up, 6 in Feb 20 04:53:52 localhost nova_compute[280804]: 2026-02-20 09:53:52.343 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:52 localhost neutron_sriov_agent[256551]: 2026-02-20 09:53:52.521 2 INFO neutron.agent.securitygroups_rpc [None req-debccdd3-a1fb-4577-8737-97107297a2b7 1b2d25a511a14396a23c89999045cc74 c91094550bca41b8ac81e1aef3ebc3a4 - - default default] Security group member updated ['075aba65-9a9e-4a7a-9bd7-67cbc5ab58ab']#033[00m Feb 20 04:53:52 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:53:53 localhost neutron_sriov_agent[256551]: 2026-02-20 09:53:53.065 2 INFO neutron.agent.securitygroups_rpc [None req-f48de32a-1488-41ca-9ac7-21eba1b907c6 1b2d25a511a14396a23c89999045cc74 c91094550bca41b8ac81e1aef3ebc3a4 - - default default] Security group member updated ['075aba65-9a9e-4a7a-9bd7-67cbc5ab58ab']#033[00m Feb 20 04:53:53 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e114 do_prune osdmap full prune enabled Feb 20 04:53:53 localhost 
ceph-mon[292786]: mon.np0005625202@0(leader).osd e115 e115: 6 total, 6 up, 6 in Feb 20 04:53:53 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e115: 6 total, 6 up, 6 in Feb 20 04:53:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 04:53:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:53:53 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v192: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 2.7 KiB/s wr, 24 op/s Feb 20 04:53:53 localhost neutron_sriov_agent[256551]: 2026-02-20 09:53:53.498 2 INFO neutron.agent.securitygroups_rpc [None req-2540619c-1d57-4e86-a386-76258833753f 1b2d25a511a14396a23c89999045cc74 c91094550bca41b8ac81e1aef3ebc3a4 - - default default] Security group member updated ['075aba65-9a9e-4a7a-9bd7-67cbc5ab58ab']#033[00m Feb 20 04:53:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 04:53:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:53:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 04:53:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:53:53 localhost systemd[1]: tmp-crun.X4hPJU.mount: Deactivated successfully. 
Feb 20 04:53:53 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 1 addresses Feb 20 04:53:53 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:53:53 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:53:53 localhost podman[316316]: 2026-02-20 09:53:53.894056305 +0000 UTC m=+0.067343851 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Feb 20 04:53:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e. 
Feb 20 04:53:54 localhost podman[316331]: 2026-02-20 09:53:53.999543299 +0000 UTC m=+0.079563949 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 20 04:53:54 localhost podman[316331]: 2026-02-20 09:53:54.011686235 +0000 UTC m=+0.091706905 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', 
'--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 20 04:53:54 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully. 
Feb 20 04:53:54 localhost nova_compute[280804]: 2026-02-20 09:53:54.236 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:54 localhost neutron_sriov_agent[256551]: 2026-02-20 09:53:54.644 2 INFO neutron.agent.securitygroups_rpc [None req-6b4ba7bd-a2d9-4ae3-ba6a-627aca1feb8f 1b2d25a511a14396a23c89999045cc74 c91094550bca41b8ac81e1aef3ebc3a4 - - default default] Security group member updated ['075aba65-9a9e-4a7a-9bd7-67cbc5ab58ab']#033[00m Feb 20 04:53:54 localhost sshd[316361]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:53:54 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:53:54.662 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:53:54Z, description=, device_id=c427ad25-4116-4fb9-846f-941a7de77a1d, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=31822ffb-459c-4d03-8ede-4774a5c92e67, ip_allocation=immediate, mac_address=fa:16:3e:9d:c8:de, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T08:22:50Z, description=, dns_domain=, id=84efa4de-646c-469c-b16c-6ab7c3e948cf, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=91bce661d685472eb3e7cacab17bf52a, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['38eee90c-2cdd-4118-ba55-5401f7e61e1e'], tags=[], tenant_id=91bce661d685472eb3e7cacab17bf52a, updated_at=2026-02-20T08:22:56Z, vlan_transparent=None, 
network_id=84efa4de-646c-469c-b16c-6ab7c3e948cf, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1446, status=DOWN, tags=[], tenant_id=, updated_at=2026-02-20T09:53:54Z on network 84efa4de-646c-469c-b16c-6ab7c3e948cf#033[00m Feb 20 04:53:54 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:53:54.852 263745 INFO neutron.agent.linux.ip_lib [None req-0461e980-5a7a-41a3-b571-b8cca7d9995e - - - - - -] Device tap92f92a65-59 cannot be used as it has no MAC address#033[00m Feb 20 04:53:54 localhost nova_compute[280804]: 2026-02-20 09:53:54.902 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:54 localhost kernel: device tap92f92a65-59 entered promiscuous mode Feb 20 04:53:54 localhost nova_compute[280804]: 2026-02-20 09:53:54.910 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:54 localhost ovn_controller[155916]: 2026-02-20T09:53:54Z|00153|binding|INFO|Claiming lport 92f92a65-598c-4a2b-9d7c-5dc6bd70bf4d for this chassis. Feb 20 04:53:54 localhost ovn_controller[155916]: 2026-02-20T09:53:54Z|00154|binding|INFO|92f92a65-598c-4a2b-9d7c-5dc6bd70bf4d: Claiming unknown Feb 20 04:53:54 localhost NetworkManager[5967]: [1771581234.9150] manager: (tap92f92a65-59): new Generic device (/org/freedesktop/NetworkManager/Devices/34) Feb 20 04:53:54 localhost systemd-udevd[316403]: Network interface NamePolicy= disabled on kernel command line. 
Feb 20 04:53:54 localhost ovn_metadata_agent[161761]: 2026-02-20 09:53:54.923 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::1/64', 'neutron:device_id': 'dhcp77c820b4-9646-5dae-aabe-b13a62b01d06-5cb4c8de-841b-49f1-88ff-096863de5c0d', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5cb4c8de-841b-49f1-88ff-096863de5c0d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7ef526d9e1e4325afc5a8eb7c2f52b5', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8ba93760-486b-4fcf-b89a-0ad67bb159b3, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=92f92a65-598c-4a2b-9d7c-5dc6bd70bf4d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:53:54 localhost ovn_metadata_agent[161761]: 2026-02-20 09:53:54.924 161766 INFO neutron.agent.ovn.metadata.agent [-] Port 92f92a65-598c-4a2b-9d7c-5dc6bd70bf4d in datapath 5cb4c8de-841b-49f1-88ff-096863de5c0d bound to our chassis#033[00m Feb 20 04:53:54 localhost ovn_metadata_agent[161761]: 2026-02-20 09:53:54.925 161766 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 5cb4c8de-841b-49f1-88ff-096863de5c0d or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 20 04:53:54 localhost ovn_metadata_agent[161761]: 2026-02-20 09:53:54.926 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[865bc9b0-22dc-469e-a3a4-1ccaa739040f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:53:54 localhost podman[316387]: 2026-02-20 09:53:54.930843381 +0000 UTC m=+0.070069203 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127) Feb 20 04:53:54 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 2 addresses Feb 20 04:53:54 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:53:54 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:53:54 localhost ovn_controller[155916]: 2026-02-20T09:53:54Z|00155|binding|INFO|Setting lport 92f92a65-598c-4a2b-9d7c-5dc6bd70bf4d ovn-installed in OVS Feb 20 04:53:54 localhost ovn_controller[155916]: 2026-02-20T09:53:54Z|00156|binding|INFO|Setting lport 92f92a65-598c-4a2b-9d7c-5dc6bd70bf4d up in Southbound Feb 20 04:53:54 localhost nova_compute[280804]: 2026-02-20 09:53:54.959 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:55 localhost nova_compute[280804]: 2026-02-20 09:53:55.006 280808 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:55 localhost nova_compute[280804]: 2026-02-20 09:53:55.033 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:55 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:53:55.143 263745 INFO neutron.agent.dhcp.agent [None req-3d925ef6-41da-43dc-8fcb-9eeeee73fd22 - - - - - -] DHCP configuration for ports {'31822ffb-459c-4d03-8ede-4774a5c92e67'} is completed#033[00m Feb 20 04:53:55 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e115 do_prune osdmap full prune enabled Feb 20 04:53:55 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e116 e116: 6 total, 6 up, 6 in Feb 20 04:53:55 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e116: 6 total, 6 up, 6 in Feb 20 04:53:55 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v194: 177 pgs: 177 active+clean; 257 MiB data, 1014 MiB used, 41 GiB / 42 GiB avail; 32 KiB/s rd, 19 MiB/s wr, 50 op/s Feb 20 04:53:55 localhost neutron_sriov_agent[256551]: 2026-02-20 09:53:55.616 2 INFO neutron.agent.securitygroups_rpc [None req-11ea8750-a0ca-4484-988a-d6e41cea3e7c 1b2d25a511a14396a23c89999045cc74 c91094550bca41b8ac81e1aef3ebc3a4 - - default default] Security group member updated ['075aba65-9a9e-4a7a-9bd7-67cbc5ab58ab']#033[00m Feb 20 04:53:55 localhost ovn_controller[155916]: 2026-02-20T09:53:55Z|00157|binding|INFO|Removing iface tap92f92a65-59 ovn-installed in OVS Feb 20 04:53:55 localhost ovn_controller[155916]: 2026-02-20T09:53:55Z|00158|binding|INFO|Removing lport 92f92a65-598c-4a2b-9d7c-5dc6bd70bf4d ovn-installed in OVS Feb 20 04:53:55 localhost ovn_metadata_agent[161761]: 2026-02-20 09:53:55.620 161766 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 9f3aed89-0556-45db-9dac-8c9baba9bcac with type ""#033[00m Feb 20 
04:53:55 localhost nova_compute[280804]: 2026-02-20 09:53:55.620 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:55 localhost ovn_metadata_agent[161761]: 2026-02-20 09:53:55.622 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcp77c820b4-9646-5dae-aabe-b13a62b01d06-5cb4c8de-841b-49f1-88ff-096863de5c0d', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5cb4c8de-841b-49f1-88ff-096863de5c0d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7ef526d9e1e4325afc5a8eb7c2f52b5', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625202.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8ba93760-486b-4fcf-b89a-0ad67bb159b3, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=92f92a65-598c-4a2b-9d7c-5dc6bd70bf4d) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:53:55 localhost nova_compute[280804]: 2026-02-20 09:53:55.626 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:55 localhost ovn_metadata_agent[161761]: 2026-02-20 09:53:55.628 161766 INFO neutron.agent.ovn.metadata.agent [-] Port 
92f92a65-598c-4a2b-9d7c-5dc6bd70bf4d in datapath 5cb4c8de-841b-49f1-88ff-096863de5c0d unbound from our chassis#033[00m Feb 20 04:53:55 localhost ovn_metadata_agent[161761]: 2026-02-20 09:53:55.629 161766 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 5cb4c8de-841b-49f1-88ff-096863de5c0d or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 20 04:53:55 localhost ovn_metadata_agent[161761]: 2026-02-20 09:53:55.630 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[557d45d0-9338-4b35-8950-0e987164370a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:53:55 localhost podman[316465]: Feb 20 04:53:55 localhost podman[316465]: 2026-02-20 09:53:55.820813113 +0000 UTC m=+0.091078828 container create b2374b8cdb9e0648eb5b0829810709999214284d39d2815c109ddefab01b2672 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5cb4c8de-841b-49f1-88ff-096863de5c0d, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:53:55 localhost systemd[1]: Started libpod-conmon-b2374b8cdb9e0648eb5b0829810709999214284d39d2815c109ddefab01b2672.scope. Feb 20 04:53:55 localhost podman[316465]: 2026-02-20 09:53:55.77562688 +0000 UTC m=+0.045892635 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 20 04:53:55 localhost systemd[1]: Started libcrun container. 
Feb 20 04:53:55 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4fd96e3ed2ac3e24ec77718fed9ad2f2db332b9706e46530e2663e9cb70c103/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 20 04:53:55 localhost podman[316465]: 2026-02-20 09:53:55.895811618 +0000 UTC m=+0.166077343 container init b2374b8cdb9e0648eb5b0829810709999214284d39d2815c109ddefab01b2672 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5cb4c8de-841b-49f1-88ff-096863de5c0d, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127) Feb 20 04:53:55 localhost podman[316465]: 2026-02-20 09:53:55.908058017 +0000 UTC m=+0.178323742 container start b2374b8cdb9e0648eb5b0829810709999214284d39d2815c109ddefab01b2672 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5cb4c8de-841b-49f1-88ff-096863de5c0d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS) Feb 20 04:53:55 localhost dnsmasq[316483]: started, version 2.85 cachesize 150 Feb 20 04:53:55 localhost dnsmasq[316483]: DNS service limited to local subnets Feb 20 04:53:55 localhost nova_compute[280804]: 2026-02-20 09:53:55.912 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:55 localhost dnsmasq[316483]: compile time options: IPv6 
GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 20 04:53:55 localhost dnsmasq[316483]: warning: no upstream servers configured Feb 20 04:53:55 localhost dnsmasq-dhcp[316483]: DHCPv6, static leases only on 2001:db8::, lease time 1d Feb 20 04:53:55 localhost dnsmasq[316483]: read /var/lib/neutron/dhcp/5cb4c8de-841b-49f1-88ff-096863de5c0d/addn_hosts - 0 addresses Feb 20 04:53:55 localhost dnsmasq-dhcp[316483]: read /var/lib/neutron/dhcp/5cb4c8de-841b-49f1-88ff-096863de5c0d/host Feb 20 04:53:55 localhost dnsmasq-dhcp[316483]: read /var/lib/neutron/dhcp/5cb4c8de-841b-49f1-88ff-096863de5c0d/opts Feb 20 04:53:56 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:53:56.067 263745 INFO neutron.agent.dhcp.agent [None req-efef68a1-d67d-4bbe-a4a9-6bff9ea5b8fe - - - - - -] DHCP configuration for ports {'4b1e52cd-5457-4186-94e3-c3834b65a36a'} is completed#033[00m Feb 20 04:53:56 localhost dnsmasq[316483]: exiting on receipt of SIGTERM Feb 20 04:53:56 localhost podman[316501]: 2026-02-20 09:53:56.120718551 +0000 UTC m=+0.061216436 container kill b2374b8cdb9e0648eb5b0829810709999214284d39d2815c109ddefab01b2672 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5cb4c8de-841b-49f1-88ff-096863de5c0d, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Feb 20 04:53:56 localhost systemd[1]: tmp-crun.MubH2g.mount: Deactivated successfully. Feb 20 04:53:56 localhost systemd[1]: libpod-b2374b8cdb9e0648eb5b0829810709999214284d39d2815c109ddefab01b2672.scope: Deactivated successfully. 
Feb 20 04:53:56 localhost podman[316514]: 2026-02-20 09:53:56.190285451 +0000 UTC m=+0.057424975 container died b2374b8cdb9e0648eb5b0829810709999214284d39d2815c109ddefab01b2672 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5cb4c8de-841b-49f1-88ff-096863de5c0d, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Feb 20 04:53:56 localhost podman[316514]: 2026-02-20 09:53:56.218984341 +0000 UTC m=+0.086123815 container cleanup b2374b8cdb9e0648eb5b0829810709999214284d39d2815c109ddefab01b2672 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5cb4c8de-841b-49f1-88ff-096863de5c0d, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0) Feb 20 04:53:56 localhost systemd[1]: libpod-conmon-b2374b8cdb9e0648eb5b0829810709999214284d39d2815c109ddefab01b2672.scope: Deactivated successfully. 
Feb 20 04:53:56 localhost podman[316516]: 2026-02-20 09:53:56.275759407 +0000 UTC m=+0.133695074 container remove b2374b8cdb9e0648eb5b0829810709999214284d39d2815c109ddefab01b2672 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5cb4c8de-841b-49f1-88ff-096863de5c0d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Feb 20 04:53:56 localhost nova_compute[280804]: 2026-02-20 09:53:56.287 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:56 localhost kernel: device tap92f92a65-59 left promiscuous mode Feb 20 04:53:56 localhost nova_compute[280804]: 2026-02-20 09:53:56.300 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:56 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e116 do_prune osdmap full prune enabled Feb 20 04:53:56 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e117 e117: 6 total, 6 up, 6 in Feb 20 04:53:56 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e117: 6 total, 6 up, 6 in Feb 20 04:53:56 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:53:56.474 263745 INFO neutron.agent.dhcp.agent [None req-5046fbb7-53e8-4163-8ef3-0136a28f3c3d - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:53:56 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:53:56.475 263745 INFO neutron.agent.dhcp.agent [None req-5046fbb7-53e8-4163-8ef3-0136a28f3c3d - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:53:56 
localhost ovn_metadata_agent[161761]: 2026-02-20 09:53:56.516 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'c2:c2:14', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '46:d0:69:88:97:47'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:53:56 localhost nova_compute[280804]: 2026-02-20 09:53:56.516 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:56 localhost ovn_metadata_agent[161761]: 2026-02-20 09:53:56.518 161766 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Feb 20 04:53:56 localhost neutron_sriov_agent[256551]: 2026-02-20 09:53:56.523 2 INFO neutron.agent.securitygroups_rpc [None req-2195fa93-d724-491c-94cb-1a1a7da48d3e 1b2d25a511a14396a23c89999045cc74 c91094550bca41b8ac81e1aef3ebc3a4 - - default default] Security group member updated ['075aba65-9a9e-4a7a-9bd7-67cbc5ab58ab']#033[00m Feb 20 04:53:56 localhost systemd[1]: var-lib-containers-storage-overlay-e4fd96e3ed2ac3e24ec77718fed9ad2f2db332b9706e46530e2663e9cb70c103-merged.mount: Deactivated successfully. Feb 20 04:53:56 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b2374b8cdb9e0648eb5b0829810709999214284d39d2815c109ddefab01b2672-userdata-shm.mount: Deactivated successfully. 
Feb 20 04:53:56 localhost systemd[1]: run-netns-qdhcp\x2d5cb4c8de\x2d841b\x2d49f1\x2d88ff\x2d096863de5c0d.mount: Deactivated successfully. Feb 20 04:53:57 localhost nova_compute[280804]: 2026-02-20 09:53:57.385 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:57 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e117 do_prune osdmap full prune enabled Feb 20 04:53:57 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e118 e118: 6 total, 6 up, 6 in Feb 20 04:53:57 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e118: 6 total, 6 up, 6 in Feb 20 04:53:57 localhost systemd[1]: tmp-crun.CvcqNd.mount: Deactivated successfully. Feb 20 04:53:57 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 1 addresses Feb 20 04:53:57 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v197: 177 pgs: 177 active+clean; 257 MiB data, 1014 MiB used, 41 GiB / 42 GiB avail; 46 KiB/s rd, 27 MiB/s wr, 71 op/s Feb 20 04:53:57 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:53:57 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:53:57 localhost podman[316562]: 2026-02-20 09:53:57.467625341 +0000 UTC m=+0.055185904 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) 
Feb 20 04:53:57 localhost nova_compute[280804]: 2026-02-20 09:53:57.511 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:53:57 localhost ovn_metadata_agent[161761]: 2026-02-20 09:53:57.520 161766 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0a83b6be-9fe2-42ef-8768-88847d97b165, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 20 04:53:57 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:53:57 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e118 do_prune osdmap full prune enabled Feb 20 04:53:57 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e119 e119: 6 total, 6 up, 6 in Feb 20 04:53:57 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e119: 6 total, 6 up, 6 in Feb 20 04:53:58 localhost openstack_network_exporter[243776]: ERROR 09:53:58 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 20 04:53:58 localhost openstack_network_exporter[243776]: Feb 20 04:53:58 localhost openstack_network_exporter[243776]: ERROR 09:53:58 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 20 04:53:58 localhost openstack_network_exporter[243776]: Feb 20 04:53:58 localhost nova_compute[280804]: 2026-02-20 09:53:58.511 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:53:58 localhost nova_compute[280804]: 2026-02-20 09:53:58.511 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 20 04:53:58 localhost nova_compute[280804]: 2026-02-20 09:53:58.512 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 20 04:53:58 localhost nova_compute[280804]: 2026-02-20 09:53:58.529 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 20 04:53:58 localhost nova_compute[280804]: 2026-02-20 09:53:58.530 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:53:58 localhost nova_compute[280804]: 2026-02-20 09:53:58.530 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:53:58 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:53:58.779 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:53:58Z, 
description=, device_id=6dedb0bf-85e9-4a5c-a1c1-aaef1acaf801, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=8b5183fc-ac7c-4ef7-97db-ce83ab945c7a, ip_allocation=immediate, mac_address=fa:16:3e:0b:04:fa, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T08:22:50Z, description=, dns_domain=, id=84efa4de-646c-469c-b16c-6ab7c3e948cf, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=91bce661d685472eb3e7cacab17bf52a, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['38eee90c-2cdd-4118-ba55-5401f7e61e1e'], tags=[], tenant_id=91bce661d685472eb3e7cacab17bf52a, updated_at=2026-02-20T08:22:56Z, vlan_transparent=None, network_id=84efa4de-646c-469c-b16c-6ab7c3e948cf, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1464, status=DOWN, tags=[], tenant_id=, updated_at=2026-02-20T09:53:58Z on network 84efa4de-646c-469c-b16c-6ab7c3e948cf#033[00m Feb 20 04:53:58 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 2 addresses Feb 20 04:53:58 localhost podman[316597]: 2026-02-20 09:53:58.997127536 +0000 UTC m=+0.061846384 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:53:58 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:53:58 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:53:59 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:53:59.228 263745 INFO neutron.agent.dhcp.agent [None req-b20559c2-9656-4f25-a615-55c32fac6e81 - - - - - -] DHCP configuration for ports {'8b5183fc-ac7c-4ef7-97db-ce83ab945c7a'} is completed#033[00m Feb 20 04:53:59 localhost nova_compute[280804]: 2026-02-20 09:53:59.238 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:53:59 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v199: 177 pgs: 177 active+clean; 169 MiB data, 855 MiB used, 41 GiB / 42 GiB avail; 59 KiB/s rd, 7.4 KiB/s wr, 88 op/s Feb 20 04:53:59 localhost sshd[316617]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:54:00 localhost nova_compute[280804]: 2026-02-20 09:54:00.525 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:54:01 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 1 addresses Feb 20 04:54:01 localhost podman[316636]: 2026-02-20 09:54:01.324025634 +0000 UTC m=+0.051743631 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:54:01 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:54:01 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:54:01 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v200: 177 pgs: 177 active+clean; 145 MiB data, 751 MiB used, 41 GiB / 42 GiB avail; 69 KiB/s rd, 7.7 KiB/s wr, 99 op/s Feb 20 04:54:01 localhost nova_compute[280804]: 2026-02-20 09:54:01.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:54:01 localhost nova_compute[280804]: 2026-02-20 09:54:01.511 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 20 04:54:01 localhost nova_compute[280804]: 2026-02-20 09:54:01.511 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:54:01 localhost nova_compute[280804]: 2026-02-20 09:54:01.531 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:54:01 localhost nova_compute[280804]: 2026-02-20 09:54:01.532 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:54:01 localhost nova_compute[280804]: 2026-02-20 09:54:01.532 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:54:01 localhost nova_compute[280804]: 2026-02-20 09:54:01.532 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Auditing locally available compute resources for np0005625202.localdomain (node: np0005625202.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 20 04:54:01 localhost nova_compute[280804]: 2026-02-20 
09:54:01.533 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:54:01 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 20 04:54:01 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/3721321372' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 20 04:54:01 localhost nova_compute[280804]: 2026-02-20 09:54:01.971 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:54:02 localhost nova_compute[280804]: 2026-02-20 09:54:02.136 280808 WARNING nova.virt.libvirt.driver [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 20 04:54:02 localhost nova_compute[280804]: 2026-02-20 09:54:02.137 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Hypervisor/Node resource view: name=np0005625202.localdomain free_ram=11536MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", 
"product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 20 04:54:02 localhost nova_compute[280804]: 2026-02-20 09:54:02.137 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:54:02 localhost nova_compute[280804]: 2026-02-20 09:54:02.137 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:54:02 localhost nova_compute[280804]: 2026-02-20 09:54:02.436 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:54:02 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:54:02.437 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:54:02Z, description=, device_id=730ef6be-5d23-41cd-8279-605bbf79f5f0, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=9f19e929-465a-4dfc-b006-f23085dfc5cb, ip_allocation=immediate, mac_address=fa:16:3e:8c:f6:e0, name=, 
network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T08:22:50Z, description=, dns_domain=, id=84efa4de-646c-469c-b16c-6ab7c3e948cf, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=91bce661d685472eb3e7cacab17bf52a, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['38eee90c-2cdd-4118-ba55-5401f7e61e1e'], tags=[], tenant_id=91bce661d685472eb3e7cacab17bf52a, updated_at=2026-02-20T08:22:56Z, vlan_transparent=None, network_id=84efa4de-646c-469c-b16c-6ab7c3e948cf, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1486, status=DOWN, tags=[], tenant_id=, updated_at=2026-02-20T09:54:02Z on network 84efa4de-646c-469c-b16c-6ab7c3e948cf#033[00m Feb 20 04:54:02 localhost nova_compute[280804]: 2026-02-20 09:54:02.465 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 20 04:54:02 localhost nova_compute[280804]: 2026-02-20 09:54:02.465 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Final resource view: name=np0005625202.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 20 04:54:02 localhost nova_compute[280804]: 2026-02-20 09:54:02.481 280808 DEBUG oslo_concurrency.processutils [None 
req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:54:02 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 2 addresses Feb 20 04:54:02 localhost podman[316695]: 2026-02-20 09:54:02.649452336 +0000 UTC m=+0.063329182 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Feb 20 04:54:02 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:54:02 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:54:02 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:54:02 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:54:02.898 263745 INFO neutron.agent.dhcp.agent [None req-e7741a99-11fd-4e6d-a6d5-52497aad6b10 - - - - - -] DHCP configuration for ports {'9f19e929-465a-4dfc-b006-f23085dfc5cb'} is completed#033[00m Feb 20 04:54:02 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 20 04:54:02 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/1336582611' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 20 04:54:02 localhost nova_compute[280804]: 2026-02-20 09:54:02.915 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:54:02 localhost nova_compute[280804]: 2026-02-20 09:54:02.921 280808 DEBUG nova.compute.provider_tree [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 20 04:54:02 localhost nova_compute[280804]: 2026-02-20 09:54:02.939 280808 DEBUG nova.scheduler.client.report [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Inventory has not changed for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 20 04:54:02 localhost nova_compute[280804]: 2026-02-20 09:54:02.942 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Compute_service record updated for np0005625202.localdomain:np0005625202.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 20 04:54:02 localhost nova_compute[280804]: 2026-02-20 09:54:02.943 280808 DEBUG 
oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:54:03 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v201: 177 pgs: 177 active+clean; 145 MiB data, 751 MiB used, 41 GiB / 42 GiB avail; 59 KiB/s rd, 6.5 KiB/s wr, 85 op/s Feb 20 04:54:03 localhost nova_compute[280804]: 2026-02-20 09:54:03.944 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:54:03 localhost nova_compute[280804]: 2026-02-20 09:54:03.969 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:54:03 localhost nova_compute[280804]: 2026-02-20 09:54:03.970 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:54:04 localhost nova_compute[280804]: 2026-02-20 09:54:04.240 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:54:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c. Feb 20 04:54:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217. 
Feb 20 04:54:04 localhost systemd[1]: tmp-crun.9jIPo1.mount: Deactivated successfully. Feb 20 04:54:04 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 1 addresses Feb 20 04:54:04 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:54:04 localhost podman[316768]: 2026-02-20 09:54:04.490292576 +0000 UTC m=+0.076993789 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:54:04 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:54:04 localhost podman[316750]: 2026-02-20 09:54:04.514671681 +0000 UTC m=+0.144726980 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=9.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, release=1770267347, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, distribution-scope=public, name=ubi9/ubi-minimal, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.buildah.version=1.33.7, build-date=2026-02-05T04:57:10Z, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_id=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, 
summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, managed_by=edpm_ansible, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 04:54:04 localhost podman[316750]: 2026-02-20 09:54:04.555868688 +0000 UTC m=+0.185923997 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, distribution-scope=public, io.buildah.version=1.33.7, build-date=2026-02-05T04:57:10Z, vcs-type=git, release=1770267347, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., org.opencontainers.image.created=2026-02-05T04:57:10Z, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, managed_by=edpm_ansible, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9) Feb 20 04:54:04 localhost podman[316751]: 2026-02-20 09:54:04.466432875 +0000 UTC m=+0.092174107 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, 
health_status=healthy, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3) Feb 20 04:54:04 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. 
Feb 20 04:54:04 localhost podman[316751]: 2026-02-20 09:54:04.600879188 +0000 UTC m=+0.226620470 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, config_id=ceilometer_agent_compute) Feb 
20 04:54:04 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully. Feb 20 04:54:04 localhost systemd[1]: virtsecretd.service: Deactivated successfully. Feb 20 04:54:05 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v202: 177 pgs: 177 active+clean; 145 MiB data, 751 MiB used, 41 GiB / 42 GiB avail; 65 KiB/s rd, 6.0 KiB/s wr, 92 op/s Feb 20 04:54:05 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:54:05.664 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:54:05Z, description=, device_id=9734e83b-d891-4727-96b4-334586d9c3c0, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=a8a274af-7fc2-4ba9-8bcc-d7d4f86e91e9, ip_allocation=immediate, mac_address=fa:16:3e:40:a1:62, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T08:22:50Z, description=, dns_domain=, id=84efa4de-646c-469c-b16c-6ab7c3e948cf, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=91bce661d685472eb3e7cacab17bf52a, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['38eee90c-2cdd-4118-ba55-5401f7e61e1e'], tags=[], tenant_id=91bce661d685472eb3e7cacab17bf52a, updated_at=2026-02-20T08:22:56Z, vlan_transparent=None, network_id=84efa4de-646c-469c-b16c-6ab7c3e948cf, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], 
standard_attr_id=1503, status=DOWN, tags=[], tenant_id=, updated_at=2026-02-20T09:54:05Z on network 84efa4de-646c-469c-b16c-6ab7c3e948cf#033[00m Feb 20 04:54:05 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 2 addresses Feb 20 04:54:05 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:54:05 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:54:05 localhost podman[316826]: 2026-02-20 09:54:05.888761182 +0000 UTC m=+0.054152327 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127) Feb 20 04:54:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:54:05.919 161766 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:54:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:54:05.920 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:54:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:54:05.920 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by 
"neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:54:06 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:54:06.349 263745 INFO neutron.agent.dhcp.agent [None req-ae9cd8b7-7c87-4a83-b12f-c42d005dc073 - - - - - -] DHCP configuration for ports {'a8a274af-7fc2-4ba9-8bcc-d7d4f86e91e9'} is completed#033[00m Feb 20 04:54:07 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e119 do_prune osdmap full prune enabled Feb 20 04:54:07 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e120 e120: 6 total, 6 up, 6 in Feb 20 04:54:07 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e120: 6 total, 6 up, 6 in Feb 20 04:54:07 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v204: 177 pgs: 177 active+clean; 145 MiB data, 751 MiB used, 41 GiB / 42 GiB avail; 53 KiB/s rd, 4.9 KiB/s wr, 75 op/s Feb 20 04:54:07 localhost nova_compute[280804]: 2026-02-20 09:54:07.469 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:54:07 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:54:07 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e120 do_prune osdmap full prune enabled Feb 20 04:54:07 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e121 e121: 6 total, 6 up, 6 in Feb 20 04:54:07 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e121: 6 total, 6 up, 6 in Feb 20 04:54:08 localhost neutron_sriov_agent[256551]: 2026-02-20 09:54:08.501 2 INFO neutron.agent.securitygroups_rpc [None req-b9c4f92c-e0aa-4ddd-a393-48b8bc5d6b0b 124fec5084164515a4a8079d3fba8fba 0d4c2b96a324436e80a9b37c9d4a15eb - - default default] Security group rule updated 
['b7daa996-a450-47f2-a46b-44613b415203']#033[00m Feb 20 04:54:08 localhost neutron_sriov_agent[256551]: 2026-02-20 09:54:08.652 2 INFO neutron.agent.securitygroups_rpc [None req-fa8e1861-ba97-4550-97ae-0d61a37c286d 124fec5084164515a4a8079d3fba8fba 0d4c2b96a324436e80a9b37c9d4a15eb - - default default] Security group rule updated ['b7daa996-a450-47f2-a46b-44613b415203']#033[00m Feb 20 04:54:08 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e121 do_prune osdmap full prune enabled Feb 20 04:54:08 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e122 e122: 6 total, 6 up, 6 in Feb 20 04:54:08 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e122: 6 total, 6 up, 6 in Feb 20 04:54:08 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 1 addresses Feb 20 04:54:08 localhost podman[316864]: 2026-02-20 09:54:08.756076291 +0000 UTC m=+0.069703924 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Feb 20 04:54:08 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:54:08 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:54:09 localhost neutron_sriov_agent[256551]: 2026-02-20 09:54:09.149 2 INFO neutron.agent.securitygroups_rpc [None req-60615bbe-69b8-4a6d-a777-f3532d76e589 124fec5084164515a4a8079d3fba8fba 0d4c2b96a324436e80a9b37c9d4a15eb - - default default] Security 
group rule updated ['5237cd22-9777-4b5a-aa35-83bccb0fdfdc']#033[00m Feb 20 04:54:09 localhost nova_compute[280804]: 2026-02-20 09:54:09.274 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:54:09 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v207: 177 pgs: 177 active+clean; 145 MiB data, 751 MiB used, 41 GiB / 42 GiB avail; 106 KiB/s rd, 7.2 KiB/s wr, 143 op/s Feb 20 04:54:10 localhost neutron_sriov_agent[256551]: 2026-02-20 09:54:10.732 2 INFO neutron.agent.securitygroups_rpc [None req-05c5180a-791e-4a36-b283-1b3700162f32 124fec5084164515a4a8079d3fba8fba 0d4c2b96a324436e80a9b37c9d4a15eb - - default default] Security group rule updated ['5237cd22-9777-4b5a-aa35-83bccb0fdfdc']#033[00m Feb 20 04:54:11 localhost neutron_sriov_agent[256551]: 2026-02-20 09:54:11.078 2 INFO neutron.agent.securitygroups_rpc [None req-fda66745-7058-43a6-bd22-7e541948feca 124fec5084164515a4a8079d3fba8fba 0d4c2b96a324436e80a9b37c9d4a15eb - - default default] Security group rule updated ['5237cd22-9777-4b5a-aa35-83bccb0fdfdc']#033[00m Feb 20 04:54:11 localhost neutron_sriov_agent[256551]: 2026-02-20 09:54:11.382 2 INFO neutron.agent.securitygroups_rpc [None req-484ad5d4-98d6-44d1-ade8-e83a00451c3f 124fec5084164515a4a8079d3fba8fba 0d4c2b96a324436e80a9b37c9d4a15eb - - default default] Security group rule updated ['5237cd22-9777-4b5a-aa35-83bccb0fdfdc']#033[00m Feb 20 04:54:11 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v208: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 110 KiB/s rd, 6.8 KiB/s wr, 151 op/s Feb 20 04:54:11 localhost neutron_sriov_agent[256551]: 2026-02-20 09:54:11.855 2 INFO neutron.agent.securitygroups_rpc [None req-d28a91c5-19c2-44e5-9f1d-5b67d2d402bb 124fec5084164515a4a8079d3fba8fba 0d4c2b96a324436e80a9b37c9d4a15eb - - default default] Security group rule updated 
['5237cd22-9777-4b5a-aa35-83bccb0fdfdc']#033[00m Feb 20 04:54:12 localhost neutron_sriov_agent[256551]: 2026-02-20 09:54:12.055 2 INFO neutron.agent.securitygroups_rpc [None req-afec8fb8-e434-475a-b1f9-4fa41fdb543f 124fec5084164515a4a8079d3fba8fba 0d4c2b96a324436e80a9b37c9d4a15eb - - default default] Security group rule updated ['5237cd22-9777-4b5a-aa35-83bccb0fdfdc']#033[00m Feb 20 04:54:12 localhost neutron_sriov_agent[256551]: 2026-02-20 09:54:12.320 2 INFO neutron.agent.securitygroups_rpc [None req-ce85a640-3696-4e3b-b081-e77c6a0d5165 124fec5084164515a4a8079d3fba8fba 0d4c2b96a324436e80a9b37c9d4a15eb - - default default] Security group rule updated ['5237cd22-9777-4b5a-aa35-83bccb0fdfdc']#033[00m Feb 20 04:54:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 04:54:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. Feb 20 04:54:12 localhost systemd[1]: tmp-crun.8Q8jJO.mount: Deactivated successfully. 
Feb 20 04:54:12 localhost podman[316884]: 2026-02-20 09:54:12.454731397 +0000 UTC m=+0.090741609 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true) Feb 20 04:54:12 localhost nova_compute[280804]: 2026-02-20 09:54:12.471 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:54:12 localhost neutron_sriov_agent[256551]: 2026-02-20 09:54:12.482 2 INFO neutron.agent.securitygroups_rpc [None req-2e25a288-cc52-4176-8755-11e2b4f58624 124fec5084164515a4a8079d3fba8fba 
0d4c2b96a324436e80a9b37c9d4a15eb - - default default] Security group rule updated ['5237cd22-9777-4b5a-aa35-83bccb0fdfdc']#033[00m Feb 20 04:54:12 localhost podman[316885]: 2026-02-20 09:54:12.504221657 +0000 UTC m=+0.138490472 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent) Feb 20 04:54:12 localhost podman[316885]: 2026-02-20 09:54:12.509346685 +0000 UTC m=+0.143615550 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, 
maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent) Feb 20 04:54:12 localhost podman[316884]: 2026-02-20 09:54:12.519063345 +0000 UTC m=+0.155073547 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:54:12 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. Feb 20 04:54:12 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. 
Feb 20 04:54:12 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 04:54:12 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e122 do_prune osdmap full prune enabled
Feb 20 04:54:12 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e123 e123: 6 total, 6 up, 6 in
Feb 20 04:54:12 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e123: 6 total, 6 up, 6 in
Feb 20 04:54:12 localhost neutron_sriov_agent[256551]: 2026-02-20 09:54:12.779 2 INFO neutron.agent.securitygroups_rpc [None req-8a952836-35e7-4a3b-8a56-14552dc3796b 124fec5084164515a4a8079d3fba8fba 0d4c2b96a324436e80a9b37c9d4a15eb - - default default] Security group rule updated ['5237cd22-9777-4b5a-aa35-83bccb0fdfdc']#033[00m
Feb 20 04:54:13 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v210: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 110 KiB/s rd, 6.8 KiB/s wr, 151 op/s
Feb 20 04:54:13 localhost sshd[316926]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:54:13 localhost neutron_sriov_agent[256551]: 2026-02-20 09:54:13.698 2 INFO neutron.agent.securitygroups_rpc [None req-72cb9bc1-a00e-467b-ba2c-70207cf7bcb5 124fec5084164515a4a8079d3fba8fba 0d4c2b96a324436e80a9b37c9d4a15eb - - default default] Security group rule updated ['5237cd22-9777-4b5a-aa35-83bccb0fdfdc']#033[00m
Feb 20 04:54:14 localhost nova_compute[280804]: 2026-02-20 09:54:14.323 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 04:54:14 localhost neutron_sriov_agent[256551]: 2026-02-20 09:54:14.485 2 INFO neutron.agent.securitygroups_rpc [None req-0e571d4b-2938-4d22-abb1-e6b913186df6 124fec5084164515a4a8079d3fba8fba 0d4c2b96a324436e80a9b37c9d4a15eb - - default default] Security group rule updated ['6b2659dc-8adf-40b4-b971-7bc179be3dc5']#033[00m
Feb 20 04:54:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.
Feb 20 04:54:15 localhost podman[316928]: 2026-02-20 09:54:15.437449138 +0000 UTC m=+0.077666258 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Feb 20 04:54:15 localhost podman[316928]: 2026-02-20 09:54:15.450754666 +0000 UTC m=+0.090971786 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 20 04:54:15 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully.
Feb 20 04:54:15 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v211: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 85 KiB/s rd, 5.2 KiB/s wr, 116 op/s
Feb 20 04:54:16 localhost neutron_sriov_agent[256551]: 2026-02-20 09:54:16.042 2 INFO neutron.agent.securitygroups_rpc [None req-9065899c-9819-48e8-b360-946a69906bd9 124fec5084164515a4a8079d3fba8fba 0d4c2b96a324436e80a9b37c9d4a15eb - - default default] Security group rule updated ['56cc5e29-9f6f-4f35-9ade-42d618bdd35b']#033[00m
Feb 20 04:54:16 localhost podman[241347]: time="2026-02-20T09:54:16Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 04:54:16 localhost podman[241347]: @ - - [20/Feb/2026:09:54:16 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 161350 "" "Go-http-client/1.1"
Feb 20 04:54:16 localhost podman[241347]: @ - - [20/Feb/2026:09:54:16 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19720 "" "Go-http-client/1.1"
Feb 20 04:54:16 localhost neutron_sriov_agent[256551]: 2026-02-20 09:54:16.290 2 INFO neutron.agent.securitygroups_rpc [None req-00b37041-3488-4088-b5e4-b424dd1f63aa 124fec5084164515a4a8079d3fba8fba 0d4c2b96a324436e80a9b37c9d4a15eb - - default default] Security group rule updated ['56cc5e29-9f6f-4f35-9ade-42d618bdd35b']#033[00m
Feb 20 04:54:17 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v212: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 75 KiB/s rd, 4.7 KiB/s wr, 103 op/s
Feb 20 04:54:17 localhost nova_compute[280804]: 2026-02-20 09:54:17.474 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 04:54:17 localhost neutron_sriov_agent[256551]: 2026-02-20 09:54:17.520 2 INFO neutron.agent.securitygroups_rpc [None req-bee57eed-2319-4f09-8acf-ebe401de5df5 124fec5084164515a4a8079d3fba8fba 0d4c2b96a324436e80a9b37c9d4a15eb - - default default] Security group rule updated ['39c9ea95-070c-4bc4-9287-e329c91de991']#033[00m
Feb 20 04:54:17 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 04:54:18 localhost neutron_sriov_agent[256551]: 2026-02-20 09:54:18.296 2 INFO neutron.agent.securitygroups_rpc [None req-63f4c14d-d8d8-4887-9902-5c1bd910d46b 124fec5084164515a4a8079d3fba8fba 0d4c2b96a324436e80a9b37c9d4a15eb - - default default] Security group rule updated ['39c9ea95-070c-4bc4-9287-e329c91de991']#033[00m
Feb 20 04:54:19 localhost neutron_sriov_agent[256551]: 2026-02-20 09:54:19.293 2 INFO neutron.agent.securitygroups_rpc [None req-0eb3d32b-3601-4bfa-a4ec-07263fd65bfe 124fec5084164515a4a8079d3fba8fba 0d4c2b96a324436e80a9b37c9d4a15eb - - default default] Security group rule updated ['92902041-7601-4e4f-984f-f40e84e5d87a']#033[00m
Feb 20 04:54:19 localhost nova_compute[280804]: 2026-02-20 09:54:19.369 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 04:54:19 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v213: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 14 KiB/s rd, 409 B/s wr, 20 op/s
Feb 20 04:54:19 localhost neutron_sriov_agent[256551]: 2026-02-20 09:54:19.702 2 INFO neutron.agent.securitygroups_rpc [None req-c48149ca-5e2e-4ce1-9578-04651a281b9c 124fec5084164515a4a8079d3fba8fba 0d4c2b96a324436e80a9b37c9d4a15eb - - default default] Security group rule updated ['92902041-7601-4e4f-984f-f40e84e5d87a']#033[00m
Feb 20 04:54:20 localhost podman[316968]: 2026-02-20 09:54:20.011112615 +0000 UTC m=+0.057740883 container kill 4727377b0d012f14b5c0b728dabdcc9762cac60b3a0fbeb1d180aedad4e3a031 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0dcc8031-e826-4e7f-94b5-a461dcf3017e, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 04:54:20 localhost dnsmasq[316226]: exiting on receipt of SIGTERM
Feb 20 04:54:20 localhost systemd[1]: libpod-4727377b0d012f14b5c0b728dabdcc9762cac60b3a0fbeb1d180aedad4e3a031.scope: Deactivated successfully.
Feb 20 04:54:20 localhost podman[316982]: 2026-02-20 09:54:20.088285708 +0000 UTC m=+0.058449662 container died 4727377b0d012f14b5c0b728dabdcc9762cac60b3a0fbeb1d180aedad4e3a031 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0dcc8031-e826-4e7f-94b5-a461dcf3017e, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 04:54:20 localhost systemd[1]: tmp-crun.kDZVVb.mount: Deactivated successfully.
Feb 20 04:54:20 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4727377b0d012f14b5c0b728dabdcc9762cac60b3a0fbeb1d180aedad4e3a031-userdata-shm.mount: Deactivated successfully.
Feb 20 04:54:20 localhost neutron_sriov_agent[256551]: 2026-02-20 09:54:20.123 2 INFO neutron.agent.securitygroups_rpc [None req-db7ac1b3-e635-4b44-b582-fc808315e3e2 124fec5084164515a4a8079d3fba8fba 0d4c2b96a324436e80a9b37c9d4a15eb - - default default] Security group rule updated ['92902041-7601-4e4f-984f-f40e84e5d87a']#033[00m
Feb 20 04:54:20 localhost podman[316982]: 2026-02-20 09:54:20.145046453 +0000 UTC m=+0.115210367 container remove 4727377b0d012f14b5c0b728dabdcc9762cac60b3a0fbeb1d180aedad4e3a031 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0dcc8031-e826-4e7f-94b5-a461dcf3017e, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 04:54:20 localhost systemd[1]: libpod-conmon-4727377b0d012f14b5c0b728dabdcc9762cac60b3a0fbeb1d180aedad4e3a031.scope: Deactivated successfully.
Feb 20 04:54:20 localhost ovn_controller[155916]: 2026-02-20T09:54:20Z|00159|binding|INFO|Releasing lport ef5e03c2-67e1-458e-8512-d3edab09dd6b from this chassis (sb_readonly=0)
Feb 20 04:54:20 localhost kernel: device tapef5e03c2-67 left promiscuous mode
Feb 20 04:54:20 localhost nova_compute[280804]: 2026-02-20 09:54:20.166 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 04:54:20 localhost ovn_controller[155916]: 2026-02-20T09:54:20Z|00160|binding|INFO|Setting lport ef5e03c2-67e1-458e-8512-d3edab09dd6b down in Southbound
Feb 20 04:54:20 localhost ovn_metadata_agent[161761]: 2026-02-20 09:54:20.176 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::1/64', 'neutron:device_id': 'dhcp77c820b4-9646-5dae-aabe-b13a62b01d06-0dcc8031-e826-4e7f-94b5-a461dcf3017e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0dcc8031-e826-4e7f-94b5-a461dcf3017e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7ef526d9e1e4325afc5a8eb7c2f52b5', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625202.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3604cf69-2dd7-4dc4-bcc1-55349285fbc2, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=ef5e03c2-67e1-458e-8512-d3edab09dd6b) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 20 04:54:20 localhost ovn_metadata_agent[161761]: 2026-02-20 09:54:20.178 161766 INFO neutron.agent.ovn.metadata.agent [-] Port ef5e03c2-67e1-458e-8512-d3edab09dd6b in datapath 0dcc8031-e826-4e7f-94b5-a461dcf3017e unbound from our chassis#033[00m
Feb 20 04:54:20 localhost ovn_metadata_agent[161761]: 2026-02-20 09:54:20.179 161766 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 0dcc8031-e826-4e7f-94b5-a461dcf3017e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Feb 20 04:54:20 localhost ovn_metadata_agent[161761]: 2026-02-20 09:54:20.180 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[90e19eae-4d9a-48df-98a9-5d27ecb18460]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 20 04:54:20 localhost nova_compute[280804]: 2026-02-20 09:54:20.188 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 04:54:20 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:54:20.220 263745 INFO neutron.agent.dhcp.agent [None req-103b8554-cd3d-4ca4-a4e4-49a85746f4a5 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Feb 20 04:54:20 localhost neutron_sriov_agent[256551]: 2026-02-20 09:54:20.442 2 INFO neutron.agent.securitygroups_rpc [None req-59de0fbc-5d34-4b72-b4ef-ac2a7c9b1e9d 124fec5084164515a4a8079d3fba8fba 0d4c2b96a324436e80a9b37c9d4a15eb - - default default] Security group rule updated ['92902041-7601-4e4f-984f-f40e84e5d87a']#033[00m
Feb 20 04:54:20 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:54:20.452 263745 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Feb 20 04:54:20 localhost nova_compute[280804]: 2026-02-20 09:54:20.679 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 04:54:20 localhost neutron_sriov_agent[256551]: 2026-02-20 09:54:20.686 2 INFO neutron.agent.securitygroups_rpc [None req-cd9b335d-ac52-4aea-badd-19c0c11ff6a7 124fec5084164515a4a8079d3fba8fba 0d4c2b96a324436e80a9b37c9d4a15eb - - default default] Security group rule updated ['92902041-7601-4e4f-984f-f40e84e5d87a']#033[00m
Feb 20 04:54:21 localhost systemd[1]: var-lib-containers-storage-overlay-64cafc97e97665fa95075f2860b827c229dad7c7a254b46c59e2c32662414720-merged.mount: Deactivated successfully.
Feb 20 04:54:21 localhost systemd[1]: run-netns-qdhcp\x2d0dcc8031\x2de826\x2d4e7f\x2d94b5\x2da461dcf3017e.mount: Deactivated successfully.
Feb 20 04:54:21 localhost neutron_sriov_agent[256551]: 2026-02-20 09:54:21.041 2 INFO neutron.agent.securitygroups_rpc [None req-eaa2444c-9904-4bf6-912c-ce544aec0944 a7bf5bd2a08b40838819ba2189f49015 8affd803090342c7a0b4a8c10fbcda95 - - default default] Security group member updated ['863be2ee-f2c8-49ce-9c0f-4d58bca5441c']#033[00m
Feb 20 04:54:21 localhost neutron_sriov_agent[256551]: 2026-02-20 09:54:21.199 2 INFO neutron.agent.securitygroups_rpc [None req-eef0b483-52c5-455d-8e50-fdf7323c6cd9 124fec5084164515a4a8079d3fba8fba 0d4c2b96a324436e80a9b37c9d4a15eb - - default default] Security group rule updated ['92902041-7601-4e4f-984f-f40e84e5d87a']#033[00m
Feb 20 04:54:21 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v214: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 12 KiB/s rd, 716 B/s wr, 16 op/s
Feb 20 04:54:22 localhost podman[317022]: 2026-02-20 09:54:22.410090831 +0000 UTC m=+0.066419786 container kill f77cf99bb52fcc59851208e4ba45e6b077d81391c36409c07f231dc03237a964 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-80eb099a-71b4-4d12-b8bf-3ac8a35dcbab, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 20 04:54:22 localhost dnsmasq[316057]: exiting on receipt of SIGTERM
Feb 20 04:54:22 localhost systemd[1]: libpod-f77cf99bb52fcc59851208e4ba45e6b077d81391c36409c07f231dc03237a964.scope: Deactivated successfully.
Feb 20 04:54:22 localhost neutron_sriov_agent[256551]: 2026-02-20 09:54:22.469 2 INFO neutron.agent.securitygroups_rpc [None req-5a5ae7af-13a6-42c5-a0fc-7ba1957a4294 b9d64681c327441a81dfa771b4b413f6 ce97c44a73f94ada962654654798a4af - - default default] Security group member updated ['203b95e6-8f62-4037-821a-d64a45daeaf8']#033[00m
Feb 20 04:54:22 localhost podman[317036]: 2026-02-20 09:54:22.480479692 +0000 UTC m=+0.053864028 container died f77cf99bb52fcc59851208e4ba45e6b077d81391c36409c07f231dc03237a964 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-80eb099a-71b4-4d12-b8bf-3ac8a35dcbab, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 20 04:54:22 localhost nova_compute[280804]: 2026-02-20 09:54:22.522 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 04:54:22 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:54:22.524 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:54:22Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=5f2bd025-fef7-441e-a735-77a6a73fbd4d, ip_allocation=immediate, mac_address=fa:16:3e:37:56:3a, name=tempest-RoutersAdminNegativeIpV6Test-93735918, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T08:22:50Z, description=, dns_domain=, id=84efa4de-646c-469c-b16c-6ab7c3e948cf, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=91bce661d685472eb3e7cacab17bf52a, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['38eee90c-2cdd-4118-ba55-5401f7e61e1e'], tags=[], tenant_id=91bce661d685472eb3e7cacab17bf52a, updated_at=2026-02-20T08:22:56Z, vlan_transparent=None, network_id=84efa4de-646c-469c-b16c-6ab7c3e948cf, port_security_enabled=True, project_id=ce97c44a73f94ada962654654798a4af, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['203b95e6-8f62-4037-821a-d64a45daeaf8'], standard_attr_id=1671, status=DOWN, tags=[], tenant_id=ce97c44a73f94ada962654654798a4af, updated_at=2026-02-20T09:54:22Z on network 84efa4de-646c-469c-b16c-6ab7c3e948cf#033[00m
Feb 20 04:54:22 localhost systemd[1]: tmp-crun.41tN60.mount: Deactivated successfully.
Feb 20 04:54:22 localhost podman[317036]: 2026-02-20 09:54:22.549186968 +0000 UTC m=+0.122571254 container cleanup f77cf99bb52fcc59851208e4ba45e6b077d81391c36409c07f231dc03237a964 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-80eb099a-71b4-4d12-b8bf-3ac8a35dcbab, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 20 04:54:22 localhost systemd[1]: libpod-conmon-f77cf99bb52fcc59851208e4ba45e6b077d81391c36409c07f231dc03237a964.scope: Deactivated successfully.
Feb 20 04:54:22 localhost podman[317037]: 2026-02-20 09:54:22.56898173 +0000 UTC m=+0.137734931 container remove f77cf99bb52fcc59851208e4ba45e6b077d81391c36409c07f231dc03237a964 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-80eb099a-71b4-4d12-b8bf-3ac8a35dcbab, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 04:54:22 localhost ovn_controller[155916]: 2026-02-20T09:54:22Z|00161|binding|INFO|Releasing lport 704a8f4e-e5a8-4d6e-adcf-92cb3e12e561 from this chassis (sb_readonly=0)
Feb 20 04:54:22 localhost nova_compute[280804]: 2026-02-20 09:54:22.577 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 04:54:22 localhost ovn_controller[155916]: 2026-02-20T09:54:22Z|00162|binding|INFO|Setting lport 704a8f4e-e5a8-4d6e-adcf-92cb3e12e561 down in Southbound
Feb 20 04:54:22 localhost kernel: device tap704a8f4e-e5 left promiscuous mode
Feb 20 04:54:22 localhost ovn_metadata_agent[161761]: 2026-02-20 09:54:22.590 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp77c820b4-9646-5dae-aabe-b13a62b01d06-80eb099a-71b4-4d12-b8bf-3ac8a35dcbab', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-80eb099a-71b4-4d12-b8bf-3ac8a35dcbab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7ef526d9e1e4325afc5a8eb7c2f52b5', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625202.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=650ba84b-ca25-4971-89b7-5e62694b560a, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=704a8f4e-e5a8-4d6e-adcf-92cb3e12e561) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 20 04:54:22 localhost ovn_metadata_agent[161761]: 2026-02-20 09:54:22.592 161766 INFO neutron.agent.ovn.metadata.agent [-] Port 704a8f4e-e5a8-4d6e-adcf-92cb3e12e561 in datapath 80eb099a-71b4-4d12-b8bf-3ac8a35dcbab unbound from our chassis#033[00m
Feb 20 04:54:22 localhost ovn_metadata_agent[161761]: 2026-02-20 09:54:22.594 161766 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 80eb099a-71b4-4d12-b8bf-3ac8a35dcbab or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Feb 20 04:54:22 localhost ovn_metadata_agent[161761]: 2026-02-20 09:54:22.595 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[c2600ffb-9f45-4ab5-9dcd-57b6a01f8b5c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 20 04:54:22 localhost nova_compute[280804]: 2026-02-20 09:54:22.596 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 04:54:22 localhost nova_compute[280804]: 2026-02-20 09:54:22.598 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 04:54:22 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 04:54:22 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 2 addresses
Feb 20 04:54:22 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host
Feb 20 04:54:22 localhost podman[317083]: 2026-02-20 09:54:22.761078601 +0000 UTC m=+0.057394783 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 20 04:54:22 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts
Feb 20 04:54:22 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:54:22.847 263745 INFO neutron.agent.dhcp.agent [None req-0db3e614-39a4-49ca-8e97-a8c621ffad83 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Feb 20 04:54:22 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:54:22.952 263745 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Feb 20 04:54:22 localhost neutron_sriov_agent[256551]: 2026-02-20 09:54:22.967 2 INFO neutron.agent.securitygroups_rpc [None req-ec64d548-962f-41dd-a02f-4a25f3310f4f a7bf5bd2a08b40838819ba2189f49015 8affd803090342c7a0b4a8c10fbcda95 - - default default] Security group member updated ['863be2ee-f2c8-49ce-9c0f-4d58bca5441c']#033[00m
Feb 20 04:54:22 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:54:22.986 263745 INFO neutron.agent.dhcp.agent [None req-4e0cc087-fefe-4cac-926d-9bc231ee977b - - - - - -] DHCP configuration for ports {'5f2bd025-fef7-441e-a735-77a6a73fbd4d'} is completed#033[00m
Feb 20 04:54:23 localhost neutron_sriov_agent[256551]: 2026-02-20 09:54:23.163 2 INFO neutron.agent.securitygroups_rpc [None req-a78e376a-5faf-4597-9e34-68a60251f328 124fec5084164515a4a8079d3fba8fba 0d4c2b96a324436e80a9b37c9d4a15eb - - default default] Security group rule updated ['11123030-cb07-4b38-85fd-08bf79b16579']#033[00m
Feb 20 04:54:23 localhost ceph-mgr[286565]: [balancer INFO root] Optimize plan auto_2026-02-20_09:54:23
Feb 20 04:54:23 localhost ceph-mgr[286565]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 20 04:54:23 localhost ceph-mgr[286565]: [balancer INFO root] do_upmap
Feb 20 04:54:23 localhost ceph-mgr[286565]: [balancer INFO root] pools ['vms', '.mgr', 'volumes', 'backups', 'images', 'manila_data', 'manila_metadata']
Feb 20 04:54:23 localhost ceph-mgr[286565]: [balancer INFO root] prepared 0/10 changes
Feb 20 04:54:23 localhost systemd[1]: var-lib-containers-storage-overlay-1cfaab59943e6769fda93fbc1b2cb802e47cf20cc521e81d34e7ffc0588b718a-merged.mount: Deactivated successfully.
Feb 20 04:54:23 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f77cf99bb52fcc59851208e4ba45e6b077d81391c36409c07f231dc03237a964-userdata-shm.mount: Deactivated successfully.
Feb 20 04:54:23 localhost systemd[1]: run-netns-qdhcp\x2d80eb099a\x2d71b4\x2d4d12\x2db8bf\x2d3ac8a35dcbab.mount: Deactivated successfully.
Feb 20 04:54:23 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:54:23.439 263745 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Feb 20 04:54:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections..
Feb 20 04:54:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: []
Feb 20 04:54:23 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v215: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 663 B/s wr, 15 op/s
Feb 20 04:54:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections..
Feb 20 04:54:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: []
Feb 20 04:54:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections..
Feb 20 04:54:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: []
Feb 20 04:54:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] _maybe_adjust
Feb 20 04:54:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Feb 20 04:54:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Feb 20 04:54:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Feb 20 04:54:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003325274375348967 of space, bias 1.0, pg target 0.6650548750697934 quantized to 32 (current 32)
Feb 20 04:54:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Feb 20 04:54:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 3.635073515726782e-07 of space, bias 1.0, pg target 7.258030119734475e-05 quantized to 32 (current 32)
Feb 20 04:54:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Feb 20 04:54:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Feb 20 04:54:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Feb 20 04:54:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 20 04:54:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Feb 20 04:54:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 20 04:54:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Feb 20 04:54:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.001953125 quantized to 16 (current 16)
Feb 20 04:54:23 localhost ceph-mgr[286565]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 20 04:54:23 localhost ceph-mgr[286565]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 20 04:54:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 20 04:54:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 20 04:54:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 20 04:54:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 20 04:54:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 20 04:54:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 20 04:54:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 20 04:54:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 20 04:54:24 localhost nova_compute[280804]: 2026-02-20 09:54:24.073 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 04:54:24 localhost nova_compute[280804]: 2026-02-20 09:54:24.372 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 04:54:24 localhost neutron_sriov_agent[256551]: 2026-02-20 09:54:24.633 2 INFO neutron.agent.securitygroups_rpc [None req-f54c4838-5ce8-45ff-b090-12b4bbbb882f a7bf5bd2a08b40838819ba2189f49015 8affd803090342c7a0b4a8c10fbcda95 - - default default] Security group member updated ['863be2ee-f2c8-49ce-9c0f-4d58bca5441c']#033[00m
Feb 20 04:54:24 localhost neutron_sriov_agent[256551]: 2026-02-20 09:54:24.634 2 INFO neutron.agent.securitygroups_rpc [None req-5c523cd2-f346-4abd-990f-316e1d877f9a b9d64681c327441a81dfa771b4b413f6 ce97c44a73f94ada962654654798a4af - - default default] Security group member updated ['203b95e6-8f62-4037-821a-d64a45daeaf8']#033[00m
Feb 20 04:54:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.
Feb 20 04:54:24 localhost podman[317103]: 2026-02-20 09:54:24.731775141 +0000 UTC m=+0.082140758 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Feb 20 04:54:24 localhost podman[317103]: 2026-02-20 09:54:24.744690438 +0000 UTC m=+0.095055995 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors )
Feb 20 04:54:24 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully.
Feb 20 04:54:24 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 1 addresses Feb 20 04:54:24 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:54:24 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:54:24 localhost podman[317143]: 2026-02-20 09:54:24.901329036 +0000 UTC m=+0.065836730 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127) Feb 20 04:54:24 localhost systemd[1]: tmp-crun.N1Rw9B.mount: Deactivated successfully. 
Feb 20 04:54:25 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v216: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 597 B/s wr, 14 op/s Feb 20 04:54:25 localhost neutron_sriov_agent[256551]: 2026-02-20 09:54:25.536 2 INFO neutron.agent.securitygroups_rpc [None req-1b48fcb2-1507-433d-8e69-fc9c0e8a60aa a7bf5bd2a08b40838819ba2189f49015 8affd803090342c7a0b4a8c10fbcda95 - - default default] Security group member updated ['863be2ee-f2c8-49ce-9c0f-4d58bca5441c']#033[00m Feb 20 04:54:25 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e123 do_prune osdmap full prune enabled Feb 20 04:54:25 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e124 e124: 6 total, 6 up, 6 in Feb 20 04:54:25 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e124: 6 total, 6 up, 6 in Feb 20 04:54:26 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:54:26.852 263745 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:54:26 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e124 do_prune osdmap full prune enabled Feb 20 04:54:26 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e125 e125: 6 total, 6 up, 6 in Feb 20 04:54:26 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e125: 6 total, 6 up, 6 in Feb 20 04:54:27 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v219: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 14 KiB/s rd, 383 B/s wr, 18 op/s Feb 20 04:54:27 localhost nova_compute[280804]: 2026-02-20 09:54:27.568 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:54:27 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:54:27 
localhost ceph-mon[292786]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #49. Immutable memtables: 0. Feb 20 04:54:27 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:54:27.689892) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Feb 20 04:54:27 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 49 Feb 20 04:54:27 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581267689984, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1066, "num_deletes": 254, "total_data_size": 943188, "memory_usage": 962168, "flush_reason": "Manual Compaction"} Feb 20 04:54:27 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #50: started Feb 20 04:54:27 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581267695993, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 50, "file_size": 923135, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26718, "largest_seqno": 27783, "table_properties": {"data_size": 918254, "index_size": 2416, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11360, "raw_average_key_size": 20, "raw_value_size": 908237, "raw_average_value_size": 1666, "num_data_blocks": 106, "num_entries": 545, "num_filter_entries": 545, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": 
"nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771581202, "oldest_key_time": 1771581202, "file_creation_time": 1771581267, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a5b0c71e-1a28-4ac7-8b68-08edb74002f2", "db_session_id": "54EDA52XUT1SDV7DF7Y7", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}} Feb 20 04:54:27 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 6111 microseconds, and 2713 cpu microseconds. Feb 20 04:54:27 localhost ceph-mon[292786]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Feb 20 04:54:27 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:54:27.696034) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #50: 923135 bytes OK Feb 20 04:54:27 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:54:27.696051) [db/memtable_list.cc:519] [default] Level-0 commit table #50 started Feb 20 04:54:27 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:54:27.697197) [db/memtable_list.cc:722] [default] Level-0 commit table #50: memtable #1 done Feb 20 04:54:27 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:54:27.697210) EVENT_LOG_v1 {"time_micros": 1771581267697206, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Feb 20 04:54:27 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:54:27.697229) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Feb 20 04:54:27 
localhost ceph-mon[292786]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 938136, prev total WAL file size 938136, number of live WAL files 2. Feb 20 04:54:27 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000046.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 20 04:54:27 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:54:27.697889) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131353436' seq:72057594037927935, type:22 .. '7061786F73003131373938' seq:0, type:0; will stop at (end) Feb 20 04:54:27 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00 Feb 20 04:54:27 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [50(901KB)], [48(17MB)] Feb 20 04:54:27 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581267698015, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [50], "files_L6": [48], "score": -1, "input_data_size": 18824919, "oldest_snapshot_seqno": -1} Feb 20 04:54:27 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #51: 12201 keys, 15913898 bytes, temperature: kUnknown Feb 20 04:54:27 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581267767546, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 51, "file_size": 15913898, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15845875, "index_size": 36434, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 
1, "index_value_is_delta_encoded": 1, "filter_size": 30533, "raw_key_size": 328060, "raw_average_key_size": 26, "raw_value_size": 15639539, "raw_average_value_size": 1281, "num_data_blocks": 1377, "num_entries": 12201, "num_filter_entries": 12201, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580572, "oldest_key_time": 0, "file_creation_time": 1771581267, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a5b0c71e-1a28-4ac7-8b68-08edb74002f2", "db_session_id": "54EDA52XUT1SDV7DF7Y7", "orig_file_number": 51, "seqno_to_time_mapping": "N/A"}} Feb 20 04:54:27 localhost ceph-mon[292786]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Feb 20 04:54:27 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:54:27.768288) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 15913898 bytes Feb 20 04:54:27 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:54:27.770308) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 269.1 rd, 227.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 17.1 +0.0 blob) out(15.2 +0.0 blob), read-write-amplify(37.6) write-amplify(17.2) OK, records in: 12728, records dropped: 527 output_compression: NoCompression Feb 20 04:54:27 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:54:27.770341) EVENT_LOG_v1 {"time_micros": 1771581267770325, "job": 28, "event": "compaction_finished", "compaction_time_micros": 69952, "compaction_time_cpu_micros": 36076, "output_level": 6, "num_output_files": 1, "total_output_size": 15913898, "num_input_records": 12728, "num_output_records": 12201, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Feb 20 04:54:27 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 20 04:54:27 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581267770858, "job": 28, "event": "table_file_deletion", "file_number": 50} Feb 20 04:54:27 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000048.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 20 04:54:27 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581267774927, 
"job": 28, "event": "table_file_deletion", "file_number": 48} Feb 20 04:54:27 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:54:27.697712) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 04:54:27 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:54:27.775025) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 04:54:27 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:54:27.775034) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 04:54:27 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:54:27.775038) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 04:54:27 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:54:27.775042) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 04:54:27 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:54:27.775046) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 04:54:27 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e125 do_prune osdmap full prune enabled Feb 20 04:54:27 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e126 e126: 6 total, 6 up, 6 in Feb 20 04:54:27 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e126: 6 total, 6 up, 6 in Feb 20 04:54:28 localhost openstack_network_exporter[243776]: ERROR 09:54:28 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 20 04:54:28 localhost openstack_network_exporter[243776]: Feb 20 04:54:28 localhost openstack_network_exporter[243776]: ERROR 09:54:28 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 20 04:54:28 localhost openstack_network_exporter[243776]: Feb 20 04:54:28 localhost neutron_sriov_agent[256551]: 
2026-02-20 09:54:28.787 2 INFO neutron.agent.securitygroups_rpc [None req-3a0ac74b-e97f-4840-a2be-cf3db56b29ba eed45d0e6e9a4013a0e822ffa85bb5cb 13f7a9ed49974d1596cd7746bdf2e7c4 - - default default] Security group rule updated ['92258b95-63d5-4c8a-9734-555bdc627d97']#033[00m Feb 20 04:54:28 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e126 do_prune osdmap full prune enabled Feb 20 04:54:28 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e127 e127: 6 total, 6 up, 6 in Feb 20 04:54:28 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e127: 6 total, 6 up, 6 in Feb 20 04:54:29 localhost nova_compute[280804]: 2026-02-20 09:54:29.374 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:54:29 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v222: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 48 KiB/s rd, 5.7 KiB/s wr, 63 op/s Feb 20 04:54:29 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e127 do_prune osdmap full prune enabled Feb 20 04:54:29 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e128 e128: 6 total, 6 up, 6 in Feb 20 04:54:29 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e128: 6 total, 6 up, 6 in Feb 20 04:54:30 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e128 do_prune osdmap full prune enabled Feb 20 04:54:30 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e129 e129: 6 total, 6 up, 6 in Feb 20 04:54:30 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e129: 6 total, 6 up, 6 in Feb 20 04:54:31 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v225: 177 pgs: 177 active+clean; 145 MiB data, 760 MiB used, 41 GiB / 42 GiB avail; 121 KiB/s rd, 22 KiB/s wr, 175 op/s Feb 20 04:54:32 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e129 do_prune osdmap full prune enabled 
Feb 20 04:54:32 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e130 e130: 6 total, 6 up, 6 in Feb 20 04:54:32 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e130: 6 total, 6 up, 6 in Feb 20 04:54:32 localhost nova_compute[280804]: 2026-02-20 09:54:32.613 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:54:32 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:54:33 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e130 do_prune osdmap full prune enabled Feb 20 04:54:33 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e131 e131: 6 total, 6 up, 6 in Feb 20 04:54:33 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e131: 6 total, 6 up, 6 in Feb 20 04:54:33 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v228: 177 pgs: 177 active+clean; 145 MiB data, 760 MiB used, 41 GiB / 42 GiB avail; 73 KiB/s rd, 16 KiB/s wr, 111 op/s Feb 20 04:54:33 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:54:33.988 263745 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:54:34 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e131 do_prune osdmap full prune enabled Feb 20 04:54:34 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e132 e132: 6 total, 6 up, 6 in Feb 20 04:54:34 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e132: 6 total, 6 up, 6 in Feb 20 04:54:34 localhost nova_compute[280804]: 2026-02-20 09:54:34.410 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:54:34 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:54:34.428 263745 INFO neutron.agent.dhcp.agent [-] 
Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:54:33Z, description=, device_id=1da85bfd-7366-4e0e-940f-74e5654dc669, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=2f468d04-8ebc-4e45-a8ff-7b1e8963276f, ip_allocation=immediate, mac_address=fa:16:3e:2c:c5:cd, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T08:22:50Z, description=, dns_domain=, id=84efa4de-646c-469c-b16c-6ab7c3e948cf, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=91bce661d685472eb3e7cacab17bf52a, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['38eee90c-2cdd-4118-ba55-5401f7e61e1e'], tags=[], tenant_id=91bce661d685472eb3e7cacab17bf52a, updated_at=2026-02-20T08:22:56Z, vlan_transparent=None, network_id=84efa4de-646c-469c-b16c-6ab7c3e948cf, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1790, status=DOWN, tags=[], tenant_id=, updated_at=2026-02-20T09:54:33Z on network 84efa4de-646c-469c-b16c-6ab7c3e948cf#033[00m Feb 20 04:54:34 localhost sshd[317164]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:54:34 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 2 addresses Feb 20 04:54:34 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:54:34 localhost podman[317181]: 2026-02-20 09:54:34.666187998 +0000 UTC 
m=+0.063480936 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:54:34 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:54:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c. Feb 20 04:54:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217. Feb 20 04:54:34 localhost podman[317196]: 2026-02-20 09:54:34.781214838 +0000 UTC m=+0.090134612 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:54:34 localhost podman[317196]: 2026-02-20 09:54:34.785941876 +0000 UTC m=+0.094861620 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=ceilometer_agent_compute, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 
'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3) Feb 20 04:54:34 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully. 
Feb 20 04:54:34 localhost podman[317194]: 2026-02-20 09:54:34.876417866 +0000 UTC m=+0.189773500 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, io.openshift.tags=minimal rhel9, release=1770267347, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, vcs-type=git, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, 
version=9.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, architecture=x86_64, org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, build-date=2026-02-05T04:57:10Z) Feb 20 04:54:34 localhost podman[317194]: 2026-02-20 09:54:34.890790812 +0000 UTC m=+0.204146386 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, distribution-scope=public, org.opencontainers.image.created=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, managed_by=edpm_ansible, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=ubi9/ubi-minimal, io.openshift.tags=minimal rhel9, vcs-type=git, build-date=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, io.openshift.expose-services=, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.7, architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc.) 
Feb 20 04:54:34 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. Feb 20 04:54:34 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:54:34.916 263745 INFO neutron.agent.dhcp.agent [None req-da72c434-8396-4716-96b7-1e71095ee22a - - - - - -] DHCP configuration for ports {'2f468d04-8ebc-4e45-a8ff-7b1e8963276f'} is completed#033[00m Feb 20 04:54:35 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e132 do_prune osdmap full prune enabled Feb 20 04:54:35 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e133 e133: 6 total, 6 up, 6 in Feb 20 04:54:35 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e133: 6 total, 6 up, 6 in Feb 20 04:54:35 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v231: 177 pgs: 177 active+clean; 145 MiB data, 778 MiB used, 41 GiB / 42 GiB avail; 190 KiB/s rd, 19 KiB/s wr, 262 op/s Feb 20 04:54:36 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e133 do_prune osdmap full prune enabled Feb 20 04:54:36 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e134 e134: 6 total, 6 up, 6 in Feb 20 04:54:36 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e134: 6 total, 6 up, 6 in Feb 20 04:54:37 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e134 do_prune osdmap full prune enabled Feb 20 04:54:37 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e135 e135: 6 total, 6 up, 6 in Feb 20 04:54:37 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e135: 6 total, 6 up, 6 in Feb 20 04:54:37 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v234: 177 pgs: 177 active+clean; 145 MiB data, 778 MiB used, 41 GiB / 42 GiB avail; 190 KiB/s rd, 19 KiB/s wr, 262 op/s Feb 20 04:54:37 localhost nova_compute[280804]: 2026-02-20 09:54:37.647 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:54:37 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:54:37 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e135 do_prune osdmap full prune enabled Feb 20 04:54:37 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e136 e136: 6 total, 6 up, 6 in Feb 20 04:54:37 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e136: 6 total, 6 up, 6 in Feb 20 04:54:38 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0) Feb 20 04:54:38 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:54:38 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0) Feb 20 04:54:38 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:54:38 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0) Feb 20 04:54:38 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:54:38 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0) Feb 20 04:54:38 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0) Feb 20 04:54:38 localhost 
ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:54:38 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:54:38 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:54:38 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:54:38 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:54:38 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0) Feb 20 04:54:38 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:54:38 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e136 do_prune osdmap full prune enabled Feb 20 04:54:38 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e137 e137: 6 total, 6 up, 6 in Feb 20 04:54:38 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e137: 6 total, 6 up, 6 in Feb 20 04:54:39 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:54:39 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 04:54:39 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 20 04:54:39 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", 
"entity": "client.admin"} : dispatch Feb 20 04:54:39 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 20 04:54:39 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:54:39 localhost ceph-mgr[286565]: [progress INFO root] update: starting ev a48dfe90-13f4-4e04-a981-dee548167289 (Updating node-proxy deployment (+3 -> 3)) Feb 20 04:54:39 localhost ceph-mgr[286565]: [progress INFO root] complete: finished ev a48dfe90-13f4-4e04-a981-dee548167289 (Updating node-proxy deployment (+3 -> 3)) Feb 20 04:54:39 localhost ceph-mgr[286565]: [progress INFO root] Completed event a48dfe90-13f4-4e04-a981-dee548167289 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Feb 20 04:54:39 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 20 04:54:39 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 20 04:54:39 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:54:39 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:54:39 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:54:39 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 04:54:39 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:54:39 localhost nova_compute[280804]: 2026-02-20 
09:54:39.455 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:54:39 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v237: 177 pgs: 177 active+clean; 145 MiB data, 782 MiB used, 41 GiB / 42 GiB avail; 145 KiB/s rd, 8.7 KiB/s wr, 188 op/s Feb 20 04:54:39 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:54:39.616 263745 INFO neutron.agent.linux.ip_lib [None req-57681d04-0cfb-4b06-af66-45f7105dc488 - - - - - -] Device tapb836465a-41 cannot be used as it has no MAC address#033[00m Feb 20 04:54:39 localhost nova_compute[280804]: 2026-02-20 09:54:39.642 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:54:39 localhost kernel: device tapb836465a-41 entered promiscuous mode Feb 20 04:54:39 localhost NetworkManager[5967]: [1771581279.6522] manager: (tapb836465a-41): new Generic device (/org/freedesktop/NetworkManager/Devices/35) Feb 20 04:54:39 localhost nova_compute[280804]: 2026-02-20 09:54:39.652 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:54:39 localhost ovn_controller[155916]: 2026-02-20T09:54:39Z|00163|binding|INFO|Claiming lport b836465a-4162-45f5-bbef-1739aea35f08 for this chassis. Feb 20 04:54:39 localhost ovn_controller[155916]: 2026-02-20T09:54:39Z|00164|binding|INFO|b836465a-4162-45f5-bbef-1739aea35f08: Claiming unknown Feb 20 04:54:39 localhost systemd-udevd[317393]: Network interface NamePolicy= disabled on kernel command line. 
Feb 20 04:54:39 localhost ovn_metadata_agent[161761]: 2026-02-20 09:54:39.667 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:1::2/64', 'neutron:device_id': 'dhcp77c820b4-9646-5dae-aabe-b13a62b01d06-453b90d0-b4fb-4558-8bf9-800079950dc0', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-453b90d0-b4fb-4558-8bf9-800079950dc0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c44e13adebb4610b7c0cd2fdc62a5b7', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=af372beb-5642-47c6-bc23-c8610b7fd06d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=b836465a-4162-45f5-bbef-1739aea35f08) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:54:39 localhost ovn_metadata_agent[161761]: 2026-02-20 09:54:39.669 161766 INFO neutron.agent.ovn.metadata.agent [-] Port b836465a-4162-45f5-bbef-1739aea35f08 in datapath 453b90d0-b4fb-4558-8bf9-800079950dc0 bound to our chassis#033[00m Feb 20 04:54:39 localhost ovn_metadata_agent[161761]: 2026-02-20 09:54:39.671 161766 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 453b90d0-b4fb-4558-8bf9-800079950dc0 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 20 04:54:39 localhost ovn_metadata_agent[161761]: 2026-02-20 09:54:39.672 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[7dc0f7ab-2584-460f-9a54-cc57e822801c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:54:39 localhost journal[229367]: ethtool ioctl error on tapb836465a-41: No such device Feb 20 04:54:39 localhost ovn_controller[155916]: 2026-02-20T09:54:39Z|00165|binding|INFO|Setting lport b836465a-4162-45f5-bbef-1739aea35f08 ovn-installed in OVS Feb 20 04:54:39 localhost ovn_controller[155916]: 2026-02-20T09:54:39Z|00166|binding|INFO|Setting lport b836465a-4162-45f5-bbef-1739aea35f08 up in Southbound Feb 20 04:54:39 localhost nova_compute[280804]: 2026-02-20 09:54:39.692 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:54:39 localhost journal[229367]: ethtool ioctl error on tapb836465a-41: No such device Feb 20 04:54:39 localhost journal[229367]: ethtool ioctl error on tapb836465a-41: No such device Feb 20 04:54:39 localhost journal[229367]: ethtool ioctl error on tapb836465a-41: No such device Feb 20 04:54:39 localhost journal[229367]: ethtool ioctl error on tapb836465a-41: No such device Feb 20 04:54:39 localhost journal[229367]: ethtool ioctl error on tapb836465a-41: No such device Feb 20 04:54:39 localhost journal[229367]: ethtool ioctl error on tapb836465a-41: No such device Feb 20 04:54:39 localhost journal[229367]: ethtool ioctl error on tapb836465a-41: No such device Feb 20 04:54:39 localhost nova_compute[280804]: 2026-02-20 09:54:39.727 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:54:39 localhost nova_compute[280804]: 2026-02-20 09:54:39.755 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 
__log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:54:39 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 20 04:54:39 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1736566385' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 20 04:54:39 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 20 04:54:39 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1736566385' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 20 04:54:40 localhost podman[317464]: Feb 20 04:54:40 localhost podman[317464]: 2026-02-20 09:54:40.576298013 +0000 UTC m=+0.084179823 container create 738cf82c572a3299be0672403306c8174a3215c007cf936e4a1350196a024030 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-453b90d0-b4fb-4558-8bf9-800079950dc0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0) Feb 20 04:54:40 localhost systemd[1]: Started libpod-conmon-738cf82c572a3299be0672403306c8174a3215c007cf936e4a1350196a024030.scope. Feb 20 04:54:40 localhost podman[317464]: 2026-02-20 09:54:40.535578688 +0000 UTC m=+0.043460528 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 20 04:54:40 localhost systemd[1]: Started libcrun container. 
Feb 20 04:54:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4994eef0c2aae39d38f26ed450df3f0c3ad6fc7021f9549af03296731dce9c69/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 20 04:54:40 localhost podman[317464]: 2026-02-20 09:54:40.65471636 +0000 UTC m=+0.162598170 container init 738cf82c572a3299be0672403306c8174a3215c007cf936e4a1350196a024030 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-453b90d0-b4fb-4558-8bf9-800079950dc0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Feb 20 04:54:40 localhost podman[317464]: 2026-02-20 09:54:40.664320417 +0000 UTC m=+0.172202227 container start 738cf82c572a3299be0672403306c8174a3215c007cf936e4a1350196a024030 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-453b90d0-b4fb-4558-8bf9-800079950dc0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:54:40 localhost dnsmasq[317483]: started, version 2.85 cachesize 150 Feb 20 04:54:40 localhost dnsmasq[317483]: DNS service limited to local subnets Feb 20 04:54:40 localhost dnsmasq[317483]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 20 04:54:40 localhost dnsmasq[317483]: warning: no upstream servers 
configured Feb 20 04:54:40 localhost dnsmasq-dhcp[317483]: DHCPv6, static leases only on 2001:db8:1::, lease time 1d Feb 20 04:54:40 localhost dnsmasq[317483]: read /var/lib/neutron/dhcp/453b90d0-b4fb-4558-8bf9-800079950dc0/addn_hosts - 0 addresses Feb 20 04:54:40 localhost dnsmasq-dhcp[317483]: read /var/lib/neutron/dhcp/453b90d0-b4fb-4558-8bf9-800079950dc0/host Feb 20 04:54:40 localhost dnsmasq-dhcp[317483]: read /var/lib/neutron/dhcp/453b90d0-b4fb-4558-8bf9-800079950dc0/opts Feb 20 04:54:40 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:54:40.826 263745 INFO neutron.agent.dhcp.agent [None req-fbbec11f-f2db-4d23-bb15-cda16e9e52c1 - - - - - -] DHCP configuration for ports {'12070e8d-15a4-494f-ac5d-d469d607e53b'} is completed#033[00m Feb 20 04:54:41 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v238: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 217 KiB/s rd, 18 KiB/s wr, 300 op/s Feb 20 04:54:41 localhost neutron_sriov_agent[256551]: 2026-02-20 09:54:41.599 2 INFO neutron.agent.securitygroups_rpc [None req-eb743b9b-8319-4b6c-9522-759dec99d8f5 061949b19d2146debcdb4e85c8db9eec b9f8945a2560410b988e395a1db7710f - - default default] Security group member updated ['9c30e397-a710-4013-bf42-b0dd9762b00a']#033[00m Feb 20 04:54:42 localhost neutron_sriov_agent[256551]: 2026-02-20 09:54:42.265 2 INFO neutron.agent.securitygroups_rpc [None req-b2b645bc-d759-4ba7-b30a-1010fa24d49e 061949b19d2146debcdb4e85c8db9eec b9f8945a2560410b988e395a1db7710f - - default default] Security group member updated ['9c30e397-a710-4013-bf42-b0dd9762b00a']#033[00m Feb 20 04:54:42 localhost sshd[317484]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:54:42 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:54:42.619 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, 
binding:vnic_type=normal, created_at=2026-02-20T09:54:42Z, description=, device_id=112ff0d9-32cb-4021-a1c4-20c4dbfa4300, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=d9114e00-0ef8-4b33-ba43-ffb1bb865b47, ip_allocation=immediate, mac_address=fa:16:3e:dc:8a:49, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:54:36Z, description=, dns_domain=, id=453b90d0-b4fb-4558-8bf9-800079950dc0, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-1656491246, port_security_enabled=True, project_id=1c44e13adebb4610b7c0cd2fdc62a5b7, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=51054, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1806, status=ACTIVE, subnets=['f0bb4381-1a19-4384-b5cf-35f6d56902dc'], tags=[], tenant_id=1c44e13adebb4610b7c0cd2fdc62a5b7, updated_at=2026-02-20T09:54:38Z, vlan_transparent=None, network_id=453b90d0-b4fb-4558-8bf9-800079950dc0, port_security_enabled=False, project_id=1c44e13adebb4610b7c0cd2fdc62a5b7, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1846, status=DOWN, tags=[], tenant_id=1c44e13adebb4610b7c0cd2fdc62a5b7, updated_at=2026-02-20T09:54:42Z on network 453b90d0-b4fb-4558-8bf9-800079950dc0#033[00m Feb 20 04:54:42 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:54:42 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e137 do_prune osdmap full prune enabled Feb 20 04:54:42 localhost nova_compute[280804]: 2026-02-20 09:54:42.684 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:54:42 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e138 e138: 6 total, 6 up, 6 in Feb 20 04:54:42 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e138: 6 total, 6 up, 6 in Feb 20 04:54:42 localhost dnsmasq[317483]: read /var/lib/neutron/dhcp/453b90d0-b4fb-4558-8bf9-800079950dc0/addn_hosts - 1 addresses Feb 20 04:54:42 localhost dnsmasq-dhcp[317483]: read /var/lib/neutron/dhcp/453b90d0-b4fb-4558-8bf9-800079950dc0/host Feb 20 04:54:42 localhost dnsmasq-dhcp[317483]: read /var/lib/neutron/dhcp/453b90d0-b4fb-4558-8bf9-800079950dc0/opts Feb 20 04:54:42 localhost podman[317503]: 2026-02-20 09:54:42.828863416 +0000 UTC m=+0.066320823 container kill 738cf82c572a3299be0672403306c8174a3215c007cf936e4a1350196a024030 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-453b90d0-b4fb-4558-8bf9-800079950dc0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:54:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 04:54:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. Feb 20 04:54:42 localhost systemd[1]: tmp-crun.Qg0L1R.mount: Deactivated successfully. 
Feb 20 04:54:42 localhost podman[317518]: 2026-02-20 09:54:42.938768529 +0000 UTC m=+0.082879328 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3) Feb 20 04:54:42 localhost 
podman[317518]: 2026-02-20 09:54:42.977831978 +0000 UTC m=+0.121942767 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3) Feb 20 04:54:42 localhost podman[317517]: 2026-02-20 09:54:42.989560123 +0000 UTC 
m=+0.132419718 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Feb 20 04:54:42 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. 
Feb 20 04:54:43 localhost podman[317517]: 2026-02-20 09:54:43.023894457 +0000 UTC m=+0.166754032 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible) Feb 20 04:54:43 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. 
Feb 20 04:54:43 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:54:43.078 263745 INFO neutron.agent.dhcp.agent [None req-79a7844b-b42c-48aa-8e52-ec0a62f70699 - - - - - -] DHCP configuration for ports {'d9114e00-0ef8-4b33-ba43-ffb1bb865b47'} is completed#033[00m Feb 20 04:54:43 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v240: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 194 KiB/s rd, 16 KiB/s wr, 268 op/s Feb 20 04:54:43 localhost ceph-mgr[286565]: [progress INFO root] Writing back 50 completed events Feb 20 04:54:43 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 20 04:54:43 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:54:43 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:54:43.854 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:54:42Z, description=, device_id=112ff0d9-32cb-4021-a1c4-20c4dbfa4300, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=d9114e00-0ef8-4b33-ba43-ffb1bb865b47, ip_allocation=immediate, mac_address=fa:16:3e:dc:8a:49, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:54:36Z, description=, dns_domain=, id=453b90d0-b4fb-4558-8bf9-800079950dc0, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-1656491246, port_security_enabled=True, project_id=1c44e13adebb4610b7c0cd2fdc62a5b7, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=51054, qos_policy_id=None, 
revision_number=2, router:external=False, shared=False, standard_attr_id=1806, status=ACTIVE, subnets=['f0bb4381-1a19-4384-b5cf-35f6d56902dc'], tags=[], tenant_id=1c44e13adebb4610b7c0cd2fdc62a5b7, updated_at=2026-02-20T09:54:38Z, vlan_transparent=None, network_id=453b90d0-b4fb-4558-8bf9-800079950dc0, port_security_enabled=False, project_id=1c44e13adebb4610b7c0cd2fdc62a5b7, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1846, status=DOWN, tags=[], tenant_id=1c44e13adebb4610b7c0cd2fdc62a5b7, updated_at=2026-02-20T09:54:42Z on network 453b90d0-b4fb-4558-8bf9-800079950dc0#033[00m Feb 20 04:54:44 localhost dnsmasq[317483]: read /var/lib/neutron/dhcp/453b90d0-b4fb-4558-8bf9-800079950dc0/addn_hosts - 1 addresses Feb 20 04:54:44 localhost dnsmasq-dhcp[317483]: read /var/lib/neutron/dhcp/453b90d0-b4fb-4558-8bf9-800079950dc0/host Feb 20 04:54:44 localhost podman[317587]: 2026-02-20 09:54:44.038536568 +0000 UTC m=+0.057946258 container kill 738cf82c572a3299be0672403306c8174a3215c007cf936e4a1350196a024030 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-453b90d0-b4fb-4558-8bf9-800079950dc0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Feb 20 04:54:44 localhost dnsmasq-dhcp[317483]: read /var/lib/neutron/dhcp/453b90d0-b4fb-4558-8bf9-800079950dc0/opts Feb 20 04:54:44 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:54:44.238 263745 INFO neutron.agent.dhcp.agent [None req-d5f5d590-c2f4-46bf-89f6-c336741986bb - - - - - -] DHCP configuration for ports {'d9114e00-0ef8-4b33-ba43-ffb1bb865b47'} is completed#033[00m Feb 20 04:54:44 localhost 
nova_compute[280804]: 2026-02-20 09:54:44.496 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:54:44 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:54:45 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v241: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 153 KiB/s rd, 13 KiB/s wr, 213 op/s Feb 20 04:54:45 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e138 do_prune osdmap full prune enabled Feb 20 04:54:45 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e139 e139: 6 total, 6 up, 6 in Feb 20 04:54:45 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e139: 6 total, 6 up, 6 in Feb 20 04:54:46 localhost podman[241347]: time="2026-02-20T09:54:46Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 20 04:54:46 localhost podman[241347]: @ - - [20/Feb/2026:09:54:46 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 159534 "" "Go-http-client/1.1" Feb 20 04:54:46 localhost podman[241347]: @ - - [20/Feb/2026:09:54:46 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19253 "" "Go-http-client/1.1" Feb 20 04:54:46 localhost dnsmasq[317483]: read /var/lib/neutron/dhcp/453b90d0-b4fb-4558-8bf9-800079950dc0/addn_hosts - 0 addresses Feb 20 04:54:46 localhost podman[317623]: 2026-02-20 09:54:46.211327566 +0000 UTC m=+0.119715537 container kill 738cf82c572a3299be0672403306c8174a3215c007cf936e4a1350196a024030 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-453b90d0-b4fb-4558-8bf9-800079950dc0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true) Feb 20 04:54:46 localhost dnsmasq-dhcp[317483]: read /var/lib/neutron/dhcp/453b90d0-b4fb-4558-8bf9-800079950dc0/host Feb 20 04:54:46 localhost dnsmasq-dhcp[317483]: read /var/lib/neutron/dhcp/453b90d0-b4fb-4558-8bf9-800079950dc0/opts Feb 20 04:54:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1. Feb 20 04:54:46 localhost podman[317635]: 2026-02-20 09:54:46.32722052 +0000 UTC m=+0.089935717 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 20 04:54:46 localhost podman[317635]: 2026-02-20 09:54:46.343733664 +0000 UTC m=+0.106448861 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, 
name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 20 04:54:46 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully. Feb 20 04:54:46 localhost nova_compute[280804]: 2026-02-20 09:54:46.777 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:54:46 localhost ovn_controller[155916]: 2026-02-20T09:54:46Z|00167|binding|INFO|Releasing lport b836465a-4162-45f5-bbef-1739aea35f08 from this chassis (sb_readonly=0) Feb 20 04:54:46 localhost kernel: device tapb836465a-41 left promiscuous mode Feb 20 04:54:46 localhost ovn_controller[155916]: 2026-02-20T09:54:46Z|00168|binding|INFO|Setting lport b836465a-4162-45f5-bbef-1739aea35f08 down in Southbound Feb 20 04:54:46 localhost ovn_metadata_agent[161761]: 2026-02-20 09:54:46.788 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], 
ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:1::2/64', 'neutron:device_id': 'dhcp77c820b4-9646-5dae-aabe-b13a62b01d06-453b90d0-b4fb-4558-8bf9-800079950dc0', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-453b90d0-b4fb-4558-8bf9-800079950dc0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c44e13adebb4610b7c0cd2fdc62a5b7', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=af372beb-5642-47c6-bc23-c8610b7fd06d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=b836465a-4162-45f5-bbef-1739aea35f08) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:54:46 localhost ovn_metadata_agent[161761]: 2026-02-20 09:54:46.790 161766 INFO neutron.agent.ovn.metadata.agent [-] Port b836465a-4162-45f5-bbef-1739aea35f08 in datapath 453b90d0-b4fb-4558-8bf9-800079950dc0 unbound from our chassis#033[00m Feb 20 04:54:46 localhost ovn_metadata_agent[161761]: 2026-02-20 09:54:46.792 161766 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 453b90d0-b4fb-4558-8bf9-800079950dc0 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 20 04:54:46 localhost ovn_metadata_agent[161761]: 2026-02-20 09:54:46.793 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[5fba5969-778c-4491-ad43-0475e7054e9d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:54:46 localhost nova_compute[280804]: 2026-02-20 09:54:46.801 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 
26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:54:46 localhost neutron_sriov_agent[256551]: 2026-02-20 09:54:46.919 2 INFO neutron.agent.securitygroups_rpc [None req-bce28f21-3f23-462f-a383-948742167547 f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']#033[00m Feb 20 04:54:47 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v243: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 77 KiB/s rd, 8.2 KiB/s wr, 113 op/s Feb 20 04:54:47 localhost neutron_sriov_agent[256551]: 2026-02-20 09:54:47.561 2 INFO neutron.agent.securitygroups_rpc [None req-4cd6b0db-c578-42d5-a167-b93bcbfe0117 d89d4d1c90df43a18f75b47e31c78f62 7365eb83b07c4401a5a58afb3f122ce5 - - default default] Security group member updated ['921ea913-835d-427d-a3f2-d35699dcd043']#033[00m Feb 20 04:54:47 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:54:47 localhost nova_compute[280804]: 2026-02-20 09:54:47.687 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:54:48 localhost neutron_sriov_agent[256551]: 2026-02-20 09:54:48.446 2 INFO neutron.agent.securitygroups_rpc [None req-8cc87b89-3410-419f-80d0-39ae3addedda f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']#033[00m Feb 20 04:54:49 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v244: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 5.6 KiB/s rd, 1.4 KiB/s wr, 9 op/s Feb 20 04:54:49 localhost neutron_sriov_agent[256551]: 2026-02-20 09:54:49.527 2 INFO neutron.agent.securitygroups_rpc [None 
req-fe10c928-22e1-439f-ab59-773382d09580 d89d4d1c90df43a18f75b47e31c78f62 7365eb83b07c4401a5a58afb3f122ce5 - - default default] Security group member updated ['921ea913-835d-427d-a3f2-d35699dcd043']#033[00m Feb 20 04:54:49 localhost nova_compute[280804]: 2026-02-20 09:54:49.545 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:54:49 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 20 04:54:49 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3040086044' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 20 04:54:49 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 20 04:54:49 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/3040086044' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 20 04:54:50 localhost podman[317684]: 2026-02-20 09:54:50.717066568 +0000 UTC m=+0.061416051 container kill 738cf82c572a3299be0672403306c8174a3215c007cf936e4a1350196a024030 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-453b90d0-b4fb-4558-8bf9-800079950dc0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Feb 20 04:54:50 localhost dnsmasq[317483]: exiting on receipt of SIGTERM Feb 20 04:54:50 localhost systemd[1]: libpod-738cf82c572a3299be0672403306c8174a3215c007cf936e4a1350196a024030.scope: Deactivated successfully. Feb 20 04:54:50 localhost podman[317698]: 2026-02-20 09:54:50.789869724 +0000 UTC m=+0.055358008 container died 738cf82c572a3299be0672403306c8174a3215c007cf936e4a1350196a024030 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-453b90d0-b4fb-4558-8bf9-800079950dc0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:54:50 localhost systemd[1]: tmp-crun.XLzBnw.mount: Deactivated successfully. 
Feb 20 04:54:50 localhost podman[317698]: 2026-02-20 09:54:50.831273997 +0000 UTC m=+0.096762221 container cleanup 738cf82c572a3299be0672403306c8174a3215c007cf936e4a1350196a024030 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-453b90d0-b4fb-4558-8bf9-800079950dc0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2) Feb 20 04:54:50 localhost systemd[1]: libpod-conmon-738cf82c572a3299be0672403306c8174a3215c007cf936e4a1350196a024030.scope: Deactivated successfully. Feb 20 04:54:50 localhost podman[317700]: 2026-02-20 09:54:50.87459442 +0000 UTC m=+0.131965356 container remove 738cf82c572a3299be0672403306c8174a3215c007cf936e4a1350196a024030 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-453b90d0-b4fb-4558-8bf9-800079950dc0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3) Feb 20 04:54:51 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v245: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 41 KiB/s rd, 2.3 KiB/s wr, 54 op/s Feb 20 04:54:51 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:54:51.519 263745 INFO neutron.agent.dhcp.agent [None req-b816d269-bb2c-450a-8388-d6f4450cfb67 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:54:51 localhost neutron_dhcp_agent[263741]: 2026-02-20 
09:54:51.609 263745 INFO neutron.agent.linux.ip_lib [None req-dd56ee12-f554-4e40-93fe-e17490a1db48 - - - - - -] Device tap0de39a2f-3a cannot be used as it has no MAC address#033[00m Feb 20 04:54:51 localhost nova_compute[280804]: 2026-02-20 09:54:51.630 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:54:51 localhost kernel: device tap0de39a2f-3a entered promiscuous mode Feb 20 04:54:51 localhost NetworkManager[5967]: [1771581291.6382] manager: (tap0de39a2f-3a): new Generic device (/org/freedesktop/NetworkManager/Devices/36) Feb 20 04:54:51 localhost nova_compute[280804]: 2026-02-20 09:54:51.639 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:54:51 localhost ovn_controller[155916]: 2026-02-20T09:54:51Z|00169|binding|INFO|Claiming lport 0de39a2f-3a51-400e-a8a2-62e994858084 for this chassis. Feb 20 04:54:51 localhost ovn_controller[155916]: 2026-02-20T09:54:51Z|00170|binding|INFO|0de39a2f-3a51-400e-a8a2-62e994858084: Claiming unknown Feb 20 04:54:51 localhost systemd-udevd[317738]: Network interface NamePolicy= disabled on kernel command line. 
Feb 20 04:54:51 localhost ovn_metadata_agent[161761]: 2026-02-20 09:54:51.653 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:ffff::2/64', 'neutron:device_id': 'dhcp77c820b4-9646-5dae-aabe-b13a62b01d06-47e28fcd-a950-487c-a419-bd884e953d11', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-47e28fcd-a950-487c-a419-bd884e953d11', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '54500b20d6a643669fbf357242dde27f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5af64d9a-ed7a-428f-a239-1d025f36e1fb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=0de39a2f-3a51-400e-a8a2-62e994858084) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:54:51 localhost ovn_metadata_agent[161761]: 2026-02-20 09:54:51.655 161766 INFO neutron.agent.ovn.metadata.agent [-] Port 0de39a2f-3a51-400e-a8a2-62e994858084 in datapath 47e28fcd-a950-487c-a419-bd884e953d11 bound to our chassis#033[00m Feb 20 04:54:51 localhost ovn_metadata_agent[161761]: 2026-02-20 09:54:51.656 161766 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 47e28fcd-a950-487c-a419-bd884e953d11 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 20 04:54:51 localhost ovn_metadata_agent[161761]: 2026-02-20 09:54:51.657 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[6a430e73-ad7a-4389-aae3-eef1b5139166]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:54:51 localhost journal[229367]: ethtool ioctl error on tap0de39a2f-3a: No such device Feb 20 04:54:51 localhost journal[229367]: ethtool ioctl error on tap0de39a2f-3a: No such device Feb 20 04:54:51 localhost ovn_controller[155916]: 2026-02-20T09:54:51Z|00171|binding|INFO|Setting lport 0de39a2f-3a51-400e-a8a2-62e994858084 ovn-installed in OVS Feb 20 04:54:51 localhost ovn_controller[155916]: 2026-02-20T09:54:51Z|00172|binding|INFO|Setting lport 0de39a2f-3a51-400e-a8a2-62e994858084 up in Southbound Feb 20 04:54:51 localhost journal[229367]: ethtool ioctl error on tap0de39a2f-3a: No such device Feb 20 04:54:51 localhost nova_compute[280804]: 2026-02-20 09:54:51.678 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:54:51 localhost journal[229367]: ethtool ioctl error on tap0de39a2f-3a: No such device Feb 20 04:54:51 localhost journal[229367]: ethtool ioctl error on tap0de39a2f-3a: No such device Feb 20 04:54:51 localhost journal[229367]: ethtool ioctl error on tap0de39a2f-3a: No such device Feb 20 04:54:51 localhost journal[229367]: ethtool ioctl error on tap0de39a2f-3a: No such device Feb 20 04:54:51 localhost journal[229367]: ethtool ioctl error on tap0de39a2f-3a: No such device Feb 20 04:54:51 localhost nova_compute[280804]: 2026-02-20 09:54:51.709 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:54:51 localhost systemd[1]: 
var-lib-containers-storage-overlay-4994eef0c2aae39d38f26ed450df3f0c3ad6fc7021f9549af03296731dce9c69-merged.mount: Deactivated successfully. Feb 20 04:54:51 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-738cf82c572a3299be0672403306c8174a3215c007cf936e4a1350196a024030-userdata-shm.mount: Deactivated successfully. Feb 20 04:54:51 localhost systemd[1]: run-netns-qdhcp\x2d453b90d0\x2db4fb\x2d4558\x2d8bf9\x2d800079950dc0.mount: Deactivated successfully. Feb 20 04:54:51 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e139 do_prune osdmap full prune enabled Feb 20 04:54:51 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:54:51.728 263745 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:54:51 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e140 e140: 6 total, 6 up, 6 in Feb 20 04:54:51 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e140: 6 total, 6 up, 6 in Feb 20 04:54:51 localhost nova_compute[280804]: 2026-02-20 09:54:51.794 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:54:52 localhost neutron_sriov_agent[256551]: 2026-02-20 09:54:52.148 2 INFO neutron.agent.securitygroups_rpc [None req-ef251e71-dc68-4712-81f3-aa7e64c344fa d89d4d1c90df43a18f75b47e31c78f62 7365eb83b07c4401a5a58afb3f122ce5 - - default default] Security group member updated ['921ea913-835d-427d-a3f2-d35699dcd043']#033[00m Feb 20 04:54:52 localhost podman[317809]: Feb 20 04:54:52 localhost podman[317809]: 2026-02-20 09:54:52.657847343 +0000 UTC m=+0.085240481 container create 1c0fe35730af3b63b1aaa772b5feb4b93debc5620f1a9c626c314065e7fd729b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-47e28fcd-a950-487c-a419-bd884e953d11, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Feb 20 04:54:52 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:54:52 localhost nova_compute[280804]: 2026-02-20 09:54:52.689 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:54:52 localhost systemd[1]: Started libpod-conmon-1c0fe35730af3b63b1aaa772b5feb4b93debc5620f1a9c626c314065e7fd729b.scope. Feb 20 04:54:52 localhost podman[317809]: 2026-02-20 09:54:52.616650046 +0000 UTC m=+0.044043204 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 20 04:54:52 localhost systemd[1]: Started libcrun container. 
Feb 20 04:54:52 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e44607525178dca85b1327cb4dff450e0b48e213f9f443a3a15d119915db26e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 20 04:54:52 localhost podman[317809]: 2026-02-20 09:54:52.74592933 +0000 UTC m=+0.173322498 container init 1c0fe35730af3b63b1aaa772b5feb4b93debc5620f1a9c626c314065e7fd729b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-47e28fcd-a950-487c-a419-bd884e953d11, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Feb 20 04:54:52 localhost podman[317809]: 2026-02-20 09:54:52.755017273 +0000 UTC m=+0.182410411 container start 1c0fe35730af3b63b1aaa772b5feb4b93debc5620f1a9c626c314065e7fd729b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-47e28fcd-a950-487c-a419-bd884e953d11, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Feb 20 04:54:52 localhost dnsmasq[317827]: started, version 2.85 cachesize 150 Feb 20 04:54:52 localhost dnsmasq[317827]: DNS service limited to local subnets Feb 20 04:54:52 localhost dnsmasq[317827]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 20 04:54:52 localhost dnsmasq[317827]: warning: no upstream servers 
configured Feb 20 04:54:52 localhost dnsmasq-dhcp[317827]: DHCPv6, static leases only on 2001:db8:0:ffff::, lease time 1d Feb 20 04:54:52 localhost dnsmasq[317827]: read /var/lib/neutron/dhcp/47e28fcd-a950-487c-a419-bd884e953d11/addn_hosts - 0 addresses Feb 20 04:54:52 localhost dnsmasq-dhcp[317827]: read /var/lib/neutron/dhcp/47e28fcd-a950-487c-a419-bd884e953d11/host Feb 20 04:54:52 localhost dnsmasq-dhcp[317827]: read /var/lib/neutron/dhcp/47e28fcd-a950-487c-a419-bd884e953d11/opts Feb 20 04:54:52 localhost nova_compute[280804]: 2026-02-20 09:54:52.782 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:54:52 localhost neutron_sriov_agent[256551]: 2026-02-20 09:54:52.931 2 INFO neutron.agent.securitygroups_rpc [None req-ffde2a61-a3bf-4d03-bb6c-67c1d37d15bf f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']#033[00m Feb 20 04:54:52 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:54:52.952 263745 INFO neutron.agent.dhcp.agent [None req-71f09adc-708a-401d-bd4d-33de5074d363 - - - - - -] DHCP configuration for ports {'bbc3d1b0-83b7-4df4-b7ff-293aeb161bc4'} is completed#033[00m Feb 20 04:54:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 20 04:54:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:54:53 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "3ad5ffb2-84e1-4f79-a933-80d3c5a63164", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 04:54:53 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3ad5ffb2-84e1-4f79-a933-80d3c5a63164, vol_name:cephfs) < "" Feb 20 04:54:53 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists Feb 20 04:54:53 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists Feb 20 04:54:53 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists Feb 20 04:54:53 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:54:53.466+0000 7f74524d4640 -1 client.0 error registering admin socket command: (17) File exists Feb 20 04:54:53 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists Feb 20 04:54:53 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists Feb 20 04:54:53 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:54:53.466+0000 7f74524d4640 -1 client.0 error registering admin socket command: (17) File exists Feb 20 04:54:53 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:54:53.466+0000 7f74524d4640 -1 client.0 error registering admin socket command: (17) File exists Feb 20 04:54:53 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 
2026-02-20T09:54:53.466+0000 7f74524d4640 -1 client.0 error registering admin socket command: (17) File exists
Feb 20 04:54:53 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:54:53.466+0000 7f74524d4640 -1 client.0 error registering admin socket command: (17) File exists
Feb 20 04:54:53 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v247: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 40 KiB/s rd, 1.6 KiB/s wr, 52 op/s
Feb 20 04:54:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections..
Feb 20 04:54:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: []
Feb 20 04:54:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections..
Feb 20 04:54:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: []
Feb 20 04:54:53 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 20 04:54:53 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2344507398' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 04:54:53 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 20 04:54:53 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2344507398' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 04:54:53 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3ad5ffb2-84e1-4f79-a933-80d3c5a63164/.meta.tmp'
Feb 20 04:54:53 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3ad5ffb2-84e1-4f79-a933-80d3c5a63164/.meta.tmp' to config b'/volumes/_nogroup/3ad5ffb2-84e1-4f79-a933-80d3c5a63164/.meta'
Feb 20 04:54:53 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3ad5ffb2-84e1-4f79-a933-80d3c5a63164, vol_name:cephfs) < ""
Feb 20 04:54:53 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3ad5ffb2-84e1-4f79-a933-80d3c5a63164", "format": "json"}]: dispatch
Feb 20 04:54:53 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3ad5ffb2-84e1-4f79-a933-80d3c5a63164, vol_name:cephfs) < ""
Feb 20 04:54:53 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3ad5ffb2-84e1-4f79-a933-80d3c5a63164, vol_name:cephfs) < ""
Feb 20 04:54:54 localhost nova_compute[280804]: 2026-02-20 09:54:54.573 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 04:54:54 localhost neutron_sriov_agent[256551]: 2026-02-20 09:54:54.603 2 INFO neutron.agent.securitygroups_rpc [None req-3a014f90-3d7f-4958-9a67-4182f974566f 5f5cb06fd4c045b1bfc3f890392c3165 54500b20d6a643669fbf357242dde27f - - default default] Security group member updated ['b8016b60-fce7-435d-b877-28993ceda4d5']#033[00m
Feb 20 04:54:54 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : mgrmap e47: np0005625202.arwxwo(active, since 6m), standbys: np0005625203.lonygy, np0005625204.exgrzx
Feb 20 04:54:54 localhost neutron_sriov_agent[256551]: 2026-02-20 09:54:54.966 2 INFO neutron.agent.securitygroups_rpc [None req-208eb96e-53cb-409e-b54b-a8915a18b91f f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']#033[00m
Feb 20 04:54:55 localhost neutron_sriov_agent[256551]: 2026-02-20 09:54:55.023 2 INFO neutron.agent.securitygroups_rpc [None req-d0404fc6-f217-496c-b11d-08ac05ffdfb7 d89d4d1c90df43a18f75b47e31c78f62 7365eb83b07c4401a5a58afb3f122ce5 - - default default] Security group member updated ['921ea913-835d-427d-a3f2-d35699dcd043']#033[00m
Feb 20 04:54:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.
Feb 20 04:54:55 localhost podman[317843]: 2026-02-20 09:54:55.43500357 +0000 UTC m=+0.073982949 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 20 04:54:55 localhost podman[317843]: 2026-02-20 09:54:55.471807709 +0000 UTC m=+0.110787098 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 20 04:54:55 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully.
Feb 20 04:54:55 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v248: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 54 KiB/s rd, 4.8 KiB/s wr, 71 op/s
Feb 20 04:54:56 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3ad5ffb2-84e1-4f79-a933-80d3c5a63164", "format": "json"}]: dispatch
Feb 20 04:54:56 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:3ad5ffb2-84e1-4f79-a933-80d3c5a63164, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 20 04:54:56 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:3ad5ffb2-84e1-4f79-a933-80d3c5a63164, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 20 04:54:56 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3ad5ffb2-84e1-4f79-a933-80d3c5a63164' of type subvolume
Feb 20 04:54:56 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:54:56.291+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3ad5ffb2-84e1-4f79-a933-80d3c5a63164' of type subvolume
Feb 20 04:54:56 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3ad5ffb2-84e1-4f79-a933-80d3c5a63164", "force": true, "format": "json"}]: dispatch
Feb 20 04:54:56 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3ad5ffb2-84e1-4f79-a933-80d3c5a63164, vol_name:cephfs) < ""
Feb 20 04:54:56 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/3ad5ffb2-84e1-4f79-a933-80d3c5a63164'' moved to trashcan
Feb 20 04:54:56 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 20 04:54:56 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3ad5ffb2-84e1-4f79-a933-80d3c5a63164, vol_name:cephfs) < ""
Feb 20 04:54:56 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists
Feb 20 04:54:56 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists
Feb 20 04:54:56 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists
Feb 20 04:54:56 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists
Feb 20 04:54:56 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:54:56.322+0000 7f7455cdb640 -1 client.0 error registering admin socket command: (17) File exists
Feb 20 04:54:56 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists
Feb 20 04:54:56 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:54:56.322+0000 7f7455cdb640 -1 client.0 error registering admin socket command: (17) File exists
Feb 20 04:54:56 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:54:56.322+0000 7f7455cdb640 -1 client.0 error registering admin socket command: (17) File exists
Feb 20 04:54:56 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:54:56.322+0000 7f7455cdb640 -1 client.0 error registering admin socket command: (17) File exists
Feb 20 04:54:56 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:54:56.322+0000 7f7455cdb640 -1 client.0 error registering admin socket command: (17) File exists
Feb 20 04:54:56 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists
Feb 20 04:54:56 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists
Feb 20 04:54:56 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists
Feb 20 04:54:56 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists
Feb 20 04:54:56 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists
Feb 20 04:54:56 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:54:56.353+0000 7f7454cd9640 -1 client.0 error registering admin socket command: (17) File exists
Feb 20 04:54:56 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:54:56.353+0000 7f7454cd9640 -1 client.0 error registering admin socket command: (17) File exists
Feb 20 04:54:56 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:54:56.353+0000 7f7454cd9640 -1 client.0 error registering admin socket command: (17) File exists
Feb 20 04:54:56 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:54:56.353+0000 7f7454cd9640 -1 client.0 error registering admin socket command: (17) File exists
Feb 20 04:54:56 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:54:56.353+0000 7f7454cd9640 -1 client.0 error registering admin socket command: (17) File exists
Feb 20 04:54:56 localhost ovn_metadata_agent[161761]: 2026-02-20 09:54:56.606 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'c2:c2:14', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '46:d0:69:88:97:47'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 20 04:54:56 localhost nova_compute[280804]: 2026-02-20 09:54:56.606 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 04:54:56 localhost ovn_metadata_agent[161761]: 2026-02-20 09:54:56.608 161766 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 20 04:54:56 localhost neutron_sriov_agent[256551]: 2026-02-20 09:54:56.669 2 INFO neutron.agent.securitygroups_rpc [None req-36db6997-ce64-4f72-86bb-117bda3b0094 061949b19d2146debcdb4e85c8db9eec b9f8945a2560410b988e395a1db7710f - - default default] Security group member updated ['9c30e397-a710-4013-bf42-b0dd9762b00a']#033[00m
Feb 20 04:54:57 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v249: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 53 KiB/s rd, 4.7 KiB/s wr, 70 op/s
Feb 20 04:54:57 localhost nova_compute[280804]: 2026-02-20 09:54:57.512 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 20 04:54:57 localhost sshd[317891]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:54:57 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 04:54:57 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e140 do_prune osdmap full prune enabled
Feb 20 04:54:57 localhost nova_compute[280804]: 2026-02-20 09:54:57.738 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 04:54:57 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e141 e141: 6 total, 6 up, 6 in
Feb 20 04:54:57 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e141: 6 total, 6 up, 6 in
Feb 20 04:54:57 localhost neutron_sriov_agent[256551]: 2026-02-20 09:54:57.786 2 INFO neutron.agent.securitygroups_rpc [None req-a1f738f8-2f94-4ebf-b02a-59bcf9971aeb 51a4789e7d0b404b9882e0c26f7229be 1c44e13adebb4610b7c0cd2fdc62a5b7 - - default default] Security group member updated ['000c42d1-648a-4f56-b7e6-024a1e270fb9']#033[00m
Feb 20 04:54:58 localhost neutron_sriov_agent[256551]: 2026-02-20 09:54:58.069 2 INFO neutron.agent.securitygroups_rpc [None req-865b7338-5884-4631-b540-e349cbd12cfd f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']#033[00m
Feb 20 04:54:58 localhost openstack_network_exporter[243776]: ERROR 09:54:58 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 04:54:58 localhost openstack_network_exporter[243776]:
Feb 20 04:54:58 localhost openstack_network_exporter[243776]: ERROR 09:54:58 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 04:54:58 localhost openstack_network_exporter[243776]:
Feb 20 04:54:58 localhost nova_compute[280804]: 2026-02-20 09:54:58.511 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 20 04:54:58 localhost nova_compute[280804]: 2026-02-20 09:54:58.512 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 20 04:54:58 localhost nova_compute[280804]: 2026-02-20 09:54:58.513 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 20 04:54:58 localhost nova_compute[280804]: 2026-02-20 09:54:58.533 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 20 04:54:58 localhost ovn_metadata_agent[161761]: 2026-02-20 09:54:58.609 161766 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0a83b6be-9fe2-42ef-8768-88847d97b165, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 20 04:54:58 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : mgrmap e48: np0005625202.arwxwo(active, since 6m), standbys: np0005625203.lonygy, np0005625204.exgrzx
Feb 20 04:54:59 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v251: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 8.5 KiB/s wr, 36 op/s
Feb 20 04:54:59 localhost nova_compute[280804]: 2026-02-20 09:54:59.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 20 04:54:59 localhost neutron_sriov_agent[256551]: 2026-02-20 09:54:59.624 2 INFO neutron.agent.securitygroups_rpc [None req-1d54fa0a-8062-4c9c-94e3-875dfdd78a26 f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']#033[00m
Feb 20 04:54:59 localhost nova_compute[280804]: 2026-02-20 09:54:59.625 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 04:55:00 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:55:00.449 263745 INFO neutron.agent.linux.ip_lib [None req-0d00ad96-3e66-4b59-956f-6e255e4d07cc - - - - - -] Device tap3a34cbbc-1b cannot be used as it has no MAC address#033[00m
Feb 20 04:55:00 localhost neutron_sriov_agent[256551]: 2026-02-20 09:55:00.459 2 INFO neutron.agent.securitygroups_rpc [None req-4a29bc3c-e3e4-4d65-b800-f3b558d9c704 5f5cb06fd4c045b1bfc3f890392c3165 54500b20d6a643669fbf357242dde27f - - default default] Security group member updated ['b8016b60-fce7-435d-b877-28993ceda4d5']#033[00m
Feb 20 04:55:00 localhost nova_compute[280804]: 2026-02-20 09:55:00.476 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 04:55:00 localhost kernel: device tap3a34cbbc-1b entered promiscuous mode
Feb 20 04:55:00 localhost NetworkManager[5967]: [1771581300.4860] manager: (tap3a34cbbc-1b): new Generic device (/org/freedesktop/NetworkManager/Devices/37)
Feb 20 04:55:00 localhost nova_compute[280804]: 2026-02-20 09:55:00.486 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 04:55:00 localhost systemd-udevd[317904]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 04:55:00 localhost ovn_controller[155916]: 2026-02-20T09:55:00Z|00173|binding|INFO|Claiming lport 3a34cbbc-1b7d-420f-adec-4ad0a44c55c2 for this chassis.
Feb 20 04:55:00 localhost ovn_controller[155916]: 2026-02-20T09:55:00Z|00174|binding|INFO|3a34cbbc-1b7d-420f-adec-4ad0a44c55c2: Claiming unknown
Feb 20 04:55:00 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:00.507 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe12:d3eb/64', 'neutron:device_id': 'dhcp77c820b4-9646-5dae-aabe-b13a62b01d06-fd6976a5-b4b6-40ba-b00f-870b9c1f83fb', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fd6976a5-b4b6-40ba-b00f-870b9c1f83fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '54500b20d6a643669fbf357242dde27f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9e4677e1-7900-44e5-8221-6ef95344759d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=3a34cbbc-1b7d-420f-adec-4ad0a44c55c2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 20 04:55:00 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:00.509 161766 INFO neutron.agent.ovn.metadata.agent [-] Port 3a34cbbc-1b7d-420f-adec-4ad0a44c55c2 in datapath fd6976a5-b4b6-40ba-b00f-870b9c1f83fb bound to our chassis#033[00m
Feb 20 04:55:00 localhost nova_compute[280804]: 2026-02-20 09:55:00.511 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 20 04:55:00 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:00.511 161766 DEBUG neutron.agent.ovn.metadata.agent [-] Port edd42dbf-895e-41bc-91ae-9523148d4412 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m
Feb 20 04:55:00 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:00.512 161766 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fd6976a5-b4b6-40ba-b00f-870b9c1f83fb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 20 04:55:00 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:00.513 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[b3eab432-443e-4d2c-b082-c80eb7b4db9e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 20 04:55:00 localhost journal[229367]: ethtool ioctl error on tap3a34cbbc-1b: No such device
Feb 20 04:55:00 localhost journal[229367]: ethtool ioctl error on tap3a34cbbc-1b: No such device
Feb 20 04:55:00 localhost ovn_controller[155916]: 2026-02-20T09:55:00Z|00175|binding|INFO|Setting lport 3a34cbbc-1b7d-420f-adec-4ad0a44c55c2 ovn-installed in OVS
Feb 20 04:55:00 localhost ovn_controller[155916]: 2026-02-20T09:55:00Z|00176|binding|INFO|Setting lport 3a34cbbc-1b7d-420f-adec-4ad0a44c55c2 up in Southbound
Feb 20 04:55:00 localhost journal[229367]: ethtool ioctl error on tap3a34cbbc-1b: No such device
Feb 20 04:55:00 localhost nova_compute[280804]: 2026-02-20 09:55:00.528 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 04:55:00 localhost journal[229367]: ethtool ioctl error on tap3a34cbbc-1b: No such device
Feb 20 04:55:00 localhost journal[229367]: ethtool ioctl error on tap3a34cbbc-1b: No such device
Feb 20 04:55:00 localhost journal[229367]: ethtool ioctl error on tap3a34cbbc-1b: No such device
Feb 20 04:55:00 localhost journal[229367]: ethtool ioctl error on tap3a34cbbc-1b: No such device
Feb 20 04:55:00 localhost journal[229367]: ethtool ioctl error on tap3a34cbbc-1b: No such device
Feb 20 04:55:00 localhost nova_compute[280804]: 2026-02-20 09:55:00.559 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 04:55:00 localhost nova_compute[280804]: 2026-02-20 09:55:00.588 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 04:55:01 localhost podman[317975]:
Feb 20 04:55:01 localhost podman[317975]: 2026-02-20 09:55:01.405279851 +0000 UTC m=+0.089082305 container create 579c8d604fe8ce5d42a9497b23dfafe3c14c989cdddd026a6dde76c2c94eaff3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fd6976a5-b4b6-40ba-b00f-870b9c1f83fb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 04:55:01 localhost systemd[1]: Started libpod-conmon-579c8d604fe8ce5d42a9497b23dfafe3c14c989cdddd026a6dde76c2c94eaff3.scope.
Feb 20 04:55:01 localhost systemd[1]: Started libcrun container.
Feb 20 04:55:01 localhost podman[317975]: 2026-02-20 09:55:01.361568777 +0000 UTC m=+0.045371281 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 04:55:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/671f10ab55df263c06aa87b3cbcd9460cc658e61bbf8d295e867060afe2d2014/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 04:55:01 localhost podman[317975]: 2026-02-20 09:55:01.476285648 +0000 UTC m=+0.160088092 container init 579c8d604fe8ce5d42a9497b23dfafe3c14c989cdddd026a6dde76c2c94eaff3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fd6976a5-b4b6-40ba-b00f-870b9c1f83fb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb 20 04:55:01 localhost podman[317975]: 2026-02-20 09:55:01.484678524 +0000 UTC m=+0.168480978 container start 579c8d604fe8ce5d42a9497b23dfafe3c14c989cdddd026a6dde76c2c94eaff3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fd6976a5-b4b6-40ba-b00f-870b9c1f83fb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 04:55:01 localhost dnsmasq[317993]: started, version 2.85 cachesize 150
Feb 20 04:55:01 localhost dnsmasq[317993]: DNS service limited to local subnets
Feb 20 04:55:01 localhost dnsmasq[317993]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 04:55:01 localhost dnsmasq[317993]: warning: no upstream servers configured
Feb 20 04:55:01 localhost dnsmasq-dhcp[317993]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Feb 20 04:55:01 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v252: 177 pgs: 177 active+clean; 145 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 22 KiB/s rd, 7.9 KiB/s wr, 32 op/s
Feb 20 04:55:01 localhost dnsmasq[317993]: read /var/lib/neutron/dhcp/fd6976a5-b4b6-40ba-b00f-870b9c1f83fb/addn_hosts - 0 addresses
Feb 20 04:55:01 localhost dnsmasq-dhcp[317993]: read /var/lib/neutron/dhcp/fd6976a5-b4b6-40ba-b00f-870b9c1f83fb/host
Feb 20 04:55:01 localhost dnsmasq-dhcp[317993]: read /var/lib/neutron/dhcp/fd6976a5-b4b6-40ba-b00f-870b9c1f83fb/opts
Feb 20 04:55:01 localhost nova_compute[280804]: 2026-02-20 09:55:01.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 20 04:55:01 localhost nova_compute[280804]: 2026-02-20 09:55:01.511 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 20 04:55:01 localhost nova_compute[280804]: 2026-02-20 09:55:01.511 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 20 04:55:01 localhost nova_compute[280804]: 2026-02-20 09:55:01.533 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 20 04:55:01 localhost nova_compute[280804]: 2026-02-20 09:55:01.533 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 20 04:55:01 localhost nova_compute[280804]: 2026-02-20 09:55:01.534 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 20 04:55:01 localhost nova_compute[280804]: 2026-02-20 09:55:01.534 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Auditing locally available compute resources for np0005625202.localdomain (node: np0005625202.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 20 04:55:01 localhost nova_compute[280804]: 2026-02-20 09:55:01.534 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 20 04:55:01 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:55:01.731 263745 INFO neutron.agent.dhcp.agent [None req-c5b6222d-98f5-4fff-aba8-8a66430b0c1a - - - - - -] DHCP configuration for ports {'bb82f6f8-c436-4e71-bb12-97545a0440ce'} is completed#033[00m
Feb 20 04:55:01 localhost dnsmasq[317993]: exiting on receipt of SIGTERM
Feb 20 04:55:01 localhost podman[318032]: 2026-02-20 09:55:01.905119201 +0000 UTC m=+0.060781685 container kill 579c8d604fe8ce5d42a9497b23dfafe3c14c989cdddd026a6dde76c2c94eaff3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fd6976a5-b4b6-40ba-b00f-870b9c1f83fb, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb 20 04:55:01 localhost systemd[1]: libpod-579c8d604fe8ce5d42a9497b23dfafe3c14c989cdddd026a6dde76c2c94eaff3.scope: Deactivated successfully.
Feb 20 04:55:01 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 04:55:01 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/674329819' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 04:55:01 localhost podman[318045]: 2026-02-20 09:55:01.97245228 +0000 UTC m=+0.053547321 container died 579c8d604fe8ce5d42a9497b23dfafe3c14c989cdddd026a6dde76c2c94eaff3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fd6976a5-b4b6-40ba-b00f-870b9c1f83fb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 20 04:55:01 localhost nova_compute[280804]: 2026-02-20 09:55:01.981 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 20 04:55:02 localhost podman[318045]: 2026-02-20 09:55:02.007596034 +0000 UTC m=+0.088691035 container cleanup 579c8d604fe8ce5d42a9497b23dfafe3c14c989cdddd026a6dde76c2c94eaff3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fd6976a5-b4b6-40ba-b00f-870b9c1f83fb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 20 04:55:02 localhost systemd[1]: libpod-conmon-579c8d604fe8ce5d42a9497b23dfafe3c14c989cdddd026a6dde76c2c94eaff3.scope: Deactivated successfully.
Feb 20 04:55:02 localhost podman[318047]: 2026-02-20 09:55:02.054555696 +0000 UTC m=+0.127545499 container remove 579c8d604fe8ce5d42a9497b23dfafe3c14c989cdddd026a6dde76c2c94eaff3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fd6976a5-b4b6-40ba-b00f-870b9c1f83fb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127) Feb 20 04:55:02 localhost kernel: device tap3a34cbbc-1b left promiscuous mode Feb 20 04:55:02 localhost nova_compute[280804]: 2026-02-20 09:55:02.105 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:02 localhost ovn_controller[155916]: 2026-02-20T09:55:02Z|00177|binding|INFO|Releasing lport 3a34cbbc-1b7d-420f-adec-4ad0a44c55c2 from this chassis (sb_readonly=0) Feb 20 04:55:02 localhost ovn_controller[155916]: 2026-02-20T09:55:02Z|00178|binding|INFO|Setting lport 3a34cbbc-1b7d-420f-adec-4ad0a44c55c2 down in Southbound Feb 20 04:55:02 localhost nova_compute[280804]: 2026-02-20 09:55:02.125 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:02 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:02.160 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], 
external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe12:d3eb/64', 'neutron:device_id': 'dhcp77c820b4-9646-5dae-aabe-b13a62b01d06-fd6976a5-b4b6-40ba-b00f-870b9c1f83fb', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fd6976a5-b4b6-40ba-b00f-870b9c1f83fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '54500b20d6a643669fbf357242dde27f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9e4677e1-7900-44e5-8221-6ef95344759d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=3a34cbbc-1b7d-420f-adec-4ad0a44c55c2) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:55:02 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:02.161 161766 INFO neutron.agent.ovn.metadata.agent [-] Port 3a34cbbc-1b7d-420f-adec-4ad0a44c55c2 in datapath fd6976a5-b4b6-40ba-b00f-870b9c1f83fb unbound from our chassis#033[00m Feb 20 04:55:02 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:02.164 161766 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fd6976a5-b4b6-40ba-b00f-870b9c1f83fb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 20 04:55:02 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:02.165 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[4a7cf58f-4ecd-4119-bdb0-9fdb3faa491d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:55:02 localhost nova_compute[280804]: 2026-02-20 09:55:02.250 280808 WARNING nova.virt.libvirt.driver [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] This 
host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 20 04:55:02 localhost nova_compute[280804]: 2026-02-20 09:55:02.253 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Hypervisor/Node resource view: name=np0005625202.localdomain free_ram=11494MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": 
"pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 20 04:55:02 localhost nova_compute[280804]: 2026-02-20 09:55:02.253 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:55:02 localhost nova_compute[280804]: 2026-02-20 09:55:02.254 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:55:02 localhost nova_compute[280804]: 2026-02-20 09:55:02.318 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 20 04:55:02 localhost nova_compute[280804]: 2026-02-20 09:55:02.319 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Final resource view: name=np0005625202.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 20 04:55:02 localhost 
nova_compute[280804]: 2026-02-20 09:55:02.342 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:55:02 localhost systemd[1]: var-lib-containers-storage-overlay-671f10ab55df263c06aa87b3cbcd9460cc658e61bbf8d295e867060afe2d2014-merged.mount: Deactivated successfully. Feb 20 04:55:02 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-579c8d604fe8ce5d42a9497b23dfafe3c14c989cdddd026a6dde76c2c94eaff3-userdata-shm.mount: Deactivated successfully. Feb 20 04:55:02 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:55:02.416 263745 INFO neutron.agent.dhcp.agent [None req-a60c3060-9143-416a-8145-83c9e8e88a23 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:55:02 localhost systemd[1]: run-netns-qdhcp\x2dfd6976a5\x2db4b6\x2d40ba\x2db00f\x2d870b9c1f83fb.mount: Deactivated successfully. Feb 20 04:55:02 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:55:02 localhost nova_compute[280804]: 2026-02-20 09:55:02.740 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:02 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 20 04:55:02 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/2447234794' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 20 04:55:02 localhost nova_compute[280804]: 2026-02-20 09:55:02.799 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:55:02 localhost nova_compute[280804]: 2026-02-20 09:55:02.805 280808 DEBUG nova.compute.provider_tree [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 20 04:55:03 localhost nova_compute[280804]: 2026-02-20 09:55:03.064 280808 DEBUG nova.scheduler.client.report [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Inventory has not changed for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 20 04:55:03 localhost nova_compute[280804]: 2026-02-20 09:55:03.067 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Compute_service record updated for np0005625202.localdomain:np0005625202.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 20 04:55:03 localhost nova_compute[280804]: 2026-02-20 09:55:03.068 280808 DEBUG 
oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:55:03 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v253: 177 pgs: 177 active+clean; 145 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 22 KiB/s rd, 7.7 KiB/s wr, 31 op/s Feb 20 04:55:03 localhost neutron_sriov_agent[256551]: 2026-02-20 09:55:03.815 2 INFO neutron.agent.securitygroups_rpc [None req-06243d38-8e6a-45a5-a465-51e5b9c1322f f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']#033[00m Feb 20 04:55:03 localhost neutron_sriov_agent[256551]: 2026-02-20 09:55:03.927 2 INFO neutron.agent.securitygroups_rpc [None req-befc33c8-098a-487a-a3bd-757b9f93e81d 3ace3fc0d46241ffa2d6d0b16953a588 8aa5b5a34cfe458d96fea87261361db1 - - default default] Security group member updated ['f947e5dd-708d-45fc-8f3d-3e71e4aec5b2']#033[00m Feb 20 04:55:03 localhost neutron_sriov_agent[256551]: 2026-02-20 09:55:03.956 2 INFO neutron.agent.securitygroups_rpc [None req-d0194b4e-ea8b-4209-af3b-a875851573ce 51a4789e7d0b404b9882e0c26f7229be 1c44e13adebb4610b7c0cd2fdc62a5b7 - - default default] Security group member updated ['000c42d1-648a-4f56-b7e6-024a1e270fb9']#033[00m Feb 20 04:55:04 localhost nova_compute[280804]: 2026-02-20 09:55:04.064 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:55:04 localhost sshd[318101]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:55:04 localhost nova_compute[280804]: 2026-02-20 09:55:04.510 280808 DEBUG 
oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:55:04 localhost nova_compute[280804]: 2026-02-20 09:55:04.663 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c. Feb 20 04:55:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217. Feb 20 04:55:04 localhost neutron_sriov_agent[256551]: 2026-02-20 09:55:04.959 2 INFO neutron.agent.securitygroups_rpc [None req-a41b9e5e-24c7-4e83-9a8a-33aa24dfcce3 061949b19d2146debcdb4e85c8db9eec b9f8945a2560410b988e395a1db7710f - - default default] Security group member updated ['9c30e397-a710-4013-bf42-b0dd9762b00a']#033[00m Feb 20 04:55:05 localhost systemd[1]: tmp-crun.3OQfAI.mount: Deactivated successfully. Feb 20 04:55:05 localhost podman[318103]: 2026-02-20 09:55:05.036963478 +0000 UTC m=+0.092812266 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, release=1770267347, architecture=x86_64, io.buildah.version=1.33.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., build-date=2026-02-05T04:57:10Z, managed_by=edpm_ansible, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, org.opencontainers.image.created=2026-02-05T04:57:10Z, distribution-scope=public, name=ubi9/ubi-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, 
url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.7, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container) Feb 20 04:55:05 localhost podman[318103]: 2026-02-20 09:55:05.054752716 +0000 UTC m=+0.110601564 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9/ubi-minimal, release=1770267347, config_id=openstack_network_exporter, version=9.7, vcs-type=git, maintainer=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-05T04:57:10Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, distribution-scope=public, org.opencontainers.image.created=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, vendor=Red Hat, Inc.) Feb 20 04:55:05 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. 
Feb 20 04:55:05 localhost podman[318104]: 2026-02-20 09:55:05.13232668 +0000 UTC m=+0.186331118 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
config_id=ceilometer_agent_compute) Feb 20 04:55:05 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e141 do_prune osdmap full prune enabled Feb 20 04:55:05 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e142 e142: 6 total, 6 up, 6 in Feb 20 04:55:05 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e142: 6 total, 6 up, 6 in Feb 20 04:55:05 localhost podman[318104]: 2026-02-20 09:55:05.174811891 +0000 UTC m=+0.228816369 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3) Feb 20 04:55:05 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully. Feb 20 04:55:05 localhost neutron_sriov_agent[256551]: 2026-02-20 09:55:05.262 2 INFO neutron.agent.securitygroups_rpc [None req-563cc02b-9571-4400-9563-57af3068006b f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']#033[00m Feb 20 04:55:05 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v255: 177 pgs: 177 active+clean; 145 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 4.5 KiB/s rd, 7.5 KiB/s wr, 10 op/s Feb 20 04:55:05 localhost nova_compute[280804]: 2026-02-20 09:55:05.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:55:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:05.920 161766 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:55:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:05.921 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:55:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:05.921 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:55:06 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e142 do_prune osdmap full prune enabled Feb 20 04:55:06 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e143 e143: 6 total, 6 up, 6 in Feb 20 04:55:06 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e143: 6 total, 6 up, 6 in Feb 20 04:55:07 localhost neutron_sriov_agent[256551]: 2026-02-20 09:55:07.467 2 INFO neutron.agent.securitygroups_rpc [None req-63a5189c-53e8-40d9-9654-04d02c3d7a9f 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['b2e5856c-f1df-4bbc-8f9c-41698aa249c6']#033[00m Feb 20 04:55:07 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v257: 177 pgs: 177 active+clean; 145 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 4.2 KiB/s rd, 3.2 KiB/s wr, 8 op/s Feb 20 04:55:07 localhost nova_compute[280804]: 2026-02-20 09:55:07.742 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:07 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:55:07 localhost sshd[318138]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:55:08 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e143 do_prune osdmap full prune enabled Feb 20 04:55:08 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e144 e144: 6 total, 6 up, 6 in Feb 20 04:55:08 
localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e144: 6 total, 6 up, 6 in Feb 20 04:55:08 localhost neutron_sriov_agent[256551]: 2026-02-20 09:55:08.629 2 INFO neutron.agent.securitygroups_rpc [None req-08efe737-22a8-48a9-b7b8-e1929d8369c8 061949b19d2146debcdb4e85c8db9eec b9f8945a2560410b988e395a1db7710f - - default default] Security group member updated ['9c30e397-a710-4013-bf42-b0dd9762b00a']#033[00m Feb 20 04:55:08 localhost neutron_sriov_agent[256551]: 2026-02-20 09:55:08.957 2 INFO neutron.agent.securitygroups_rpc [None req-3b1327f4-a56e-425d-b764-df7dfb03463f f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']#033[00m Feb 20 04:55:09 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e144 do_prune osdmap full prune enabled Feb 20 04:55:09 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e145 e145: 6 total, 6 up, 6 in Feb 20 04:55:09 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e145: 6 total, 6 up, 6 in Feb 20 04:55:09 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v260: 177 pgs: 177 active+clean; 145 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 3.2 KiB/s rd, 236 B/s wr, 7 op/s Feb 20 04:55:09 localhost nova_compute[280804]: 2026-02-20 09:55:09.709 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:10 localhost neutron_sriov_agent[256551]: 2026-02-20 09:55:10.379 2 INFO neutron.agent.securitygroups_rpc [None req-9a8b2a58-7fed-4400-9808-58a86984ca8a 3ace3fc0d46241ffa2d6d0b16953a588 8aa5b5a34cfe458d96fea87261361db1 - - default default] Security group member updated ['f947e5dd-708d-45fc-8f3d-3e71e4aec5b2']#033[00m Feb 20 04:55:11 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v261: 177 pgs: 177 active+clean; 145 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 
42 KiB/s rd, 3.3 KiB/s wr, 57 op/s Feb 20 04:55:11 localhost neutron_sriov_agent[256551]: 2026-02-20 09:55:11.789 2 INFO neutron.agent.securitygroups_rpc [None req-9a050612-ac54-4498-bab7-ac1559fb18bf 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['b2e5856c-f1df-4bbc-8f9c-41698aa249c6']#033[00m Feb 20 04:55:11 localhost neutron_sriov_agent[256551]: 2026-02-20 09:55:11.909 2 INFO neutron.agent.securitygroups_rpc [None req-36106926-50cd-429d-aa41-30ad5718ad39 f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']#033[00m Feb 20 04:55:12 localhost neutron_sriov_agent[256551]: 2026-02-20 09:55:12.345 2 INFO neutron.agent.securitygroups_rpc [None req-9a050612-ac54-4498-bab7-ac1559fb18bf 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['b2e5856c-f1df-4bbc-8f9c-41698aa249c6']#033[00m Feb 20 04:55:12 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:55:12 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e145 do_prune osdmap full prune enabled Feb 20 04:55:12 localhost nova_compute[280804]: 2026-02-20 09:55:12.779 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:12 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e146 e146: 6 total, 6 up, 6 in Feb 20 04:55:12 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e146: 6 total, 6 up, 6 in Feb 20 04:55:13 localhost neutron_sriov_agent[256551]: 2026-02-20 09:55:13.071 2 INFO neutron.agent.securitygroups_rpc [None req-5b112f63-69ca-4cb8-af5b-6c4c1770d4b6 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - 
default default] Security group member updated ['b2e5856c-f1df-4bbc-8f9c-41698aa249c6']#033[00m Feb 20 04:55:13 localhost neutron_sriov_agent[256551]: 2026-02-20 09:55:13.286 2 INFO neutron.agent.securitygroups_rpc [None req-624534fd-50bc-4fd0-a351-6a4578734382 061949b19d2146debcdb4e85c8db9eec b9f8945a2560410b988e395a1db7710f - - default default] Security group member updated ['9c30e397-a710-4013-bf42-b0dd9762b00a']#033[00m Feb 20 04:55:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 04:55:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. Feb 20 04:55:13 localhost systemd[1]: tmp-crun.RzkvLF.mount: Deactivated successfully. Feb 20 04:55:13 localhost podman[318140]: 2026-02-20 09:55:13.460528293 +0000 UTC m=+0.095825296 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20260127) Feb 20 04:55:13 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v263: 177 pgs: 177 active+clean; 145 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 42 KiB/s rd, 3.3 KiB/s wr, 57 op/s Feb 20 04:55:13 localhost podman[318141]: 2026-02-20 09:55:13.508010588 +0000 UTC m=+0.139242471 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Feb 20 04:55:13 localhost podman[318141]: 2026-02-20 09:55:13.517781331 +0000 UTC m=+0.149013174 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true) Feb 20 04:55:13 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. Feb 20 04:55:13 localhost podman[318140]: 2026-02-20 09:55:13.542847415 +0000 UTC m=+0.178144458 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 
9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0) Feb 20 04:55:13 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. Feb 20 04:55:14 localhost neutron_sriov_agent[256551]: 2026-02-20 09:55:14.512 2 INFO neutron.agent.securitygroups_rpc [None req-cfd67c24-e1f2-4ca4-a53e-99c158a0738a 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['b2e5856c-f1df-4bbc-8f9c-41698aa249c6']#033[00m Feb 20 04:55:14 localhost nova_compute[280804]: 2026-02-20 09:55:14.760 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:15 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v264: 177 pgs: 177 active+clean; 145 MiB data, 791 MiB used, 41 GiB / 42 GiB avail; 69 KiB/s rd, 4.0 KiB/s wr, 91 op/s Feb 20 04:55:16 localhost neutron_sriov_agent[256551]: 2026-02-20 09:55:16.015 2 INFO neutron.agent.securitygroups_rpc [None req-7271509a-2575-4000-bc96-a2fd7601216d f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']#033[00m Feb 20 04:55:16 localhost podman[241347]: time="2026-02-20T09:55:16Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 20 04:55:16 localhost podman[241347]: @ - - [20/Feb/2026:09:55:16 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 159540 "" "Go-http-client/1.1" Feb 20 04:55:16 localhost podman[241347]: @ - - [20/Feb/2026:09:55:16 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 
200 19256 "" "Go-http-client/1.1" Feb 20 04:55:16 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e146 do_prune osdmap full prune enabled Feb 20 04:55:16 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e147 e147: 6 total, 6 up, 6 in Feb 20 04:55:16 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e147: 6 total, 6 up, 6 in Feb 20 04:55:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:55:17.264 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:55:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:55:17.264 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:55:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:55:17.264 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:55:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:55:17.264 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:55:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:55:17.265 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:55:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:55:17.265 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:55:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:55:17.265 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:55:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:55:17.265 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:55:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:55:17.265 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:55:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:55:17.265 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:55:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:55:17.265 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:55:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:55:17.266 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:55:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:55:17.266 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:55:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:55:17.266 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:55:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:55:17.266 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:55:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:55:17.266 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:55:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:55:17.266 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:55:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:55:17.266 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:55:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:55:17.266 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:55:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:55:17.267 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:55:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:55:17.267 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:55:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:55:17.267 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:55:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:55:17.267 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:55:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:55:17.267 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:55:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:55:17.267 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:55:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1. 
Feb 20 04:55:17 localhost podman[318185]: 2026-02-20 09:55:17.446013646 +0000 UTC m=+0.080421132 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 20 04:55:17 localhost podman[318185]: 2026-02-20 09:55:17.483965126 +0000 UTC m=+0.118372572 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 20 04:55:17 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully. Feb 20 04:55:17 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v266: 177 pgs: 177 active+clean; 145 MiB data, 791 MiB used, 41 GiB / 42 GiB avail; 61 KiB/s rd, 3.5 KiB/s wr, 79 op/s Feb 20 04:55:17 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:55:17 localhost nova_compute[280804]: 2026-02-20 09:55:17.814 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:17 localhost neutron_sriov_agent[256551]: 2026-02-20 09:55:17.983 2 INFO neutron.agent.securitygroups_rpc [None req-99b1b964-36a8-407d-a34f-bd7246c382f8 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['b2e5856c-f1df-4bbc-8f9c-41698aa249c6']#033[00m Feb 20 04:55:17 localhost neutron_sriov_agent[256551]: 2026-02-20 09:55:17.987 2 INFO neutron.agent.securitygroups_rpc [None req-9d512101-812d-472a-bd29-050847053b0a f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']#033[00m Feb 20 04:55:18 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e147 do_prune osdmap full prune enabled Feb 20 04:55:18 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e148 e148: 6 total, 6 up, 6 in Feb 20 04:55:18 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e148: 6 total, 6 up, 6 in Feb 20 04:55:18 localhost neutron_sriov_agent[256551]: 2026-02-20 09:55:18.377 2 INFO 
neutron.agent.securitygroups_rpc [None req-6afebda2-88bb-41f1-8c70-a0608f1757d1 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['b2e5856c-f1df-4bbc-8f9c-41698aa249c6']#033[00m Feb 20 04:55:19 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v268: 177 pgs: 177 active+clean; 145 MiB data, 791 MiB used, 41 GiB / 42 GiB avail; 40 KiB/s rd, 1.9 KiB/s wr, 54 op/s Feb 20 04:55:19 localhost nova_compute[280804]: 2026-02-20 09:55:19.761 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:20 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:55:20.182 263745 INFO neutron.agent.linux.ip_lib [None req-9875daee-2e85-438e-a2bf-b4e526326a03 - - - - - -] Device tapf2d15624-dc cannot be used as it has no MAC address#033[00m Feb 20 04:55:20 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:55:20.193 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:55:19Z, description=, device_id=7db32386-15c5-419a-be79-301a246ee8ee, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=ef7827bb-09c2-4481-bb3e-b188d0b2b90a, ip_allocation=immediate, mac_address=fa:16:3e:ad:2d:ce, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T08:22:50Z, description=, dns_domain=, id=84efa4de-646c-469c-b16c-6ab7c3e948cf, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=91bce661d685472eb3e7cacab17bf52a, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, 
qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['38eee90c-2cdd-4118-ba55-5401f7e61e1e'], tags=[], tenant_id=91bce661d685472eb3e7cacab17bf52a, updated_at=2026-02-20T08:22:56Z, vlan_transparent=None, network_id=84efa4de-646c-469c-b16c-6ab7c3e948cf, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2018, status=DOWN, tags=[], tenant_id=, updated_at=2026-02-20T09:55:19Z on network 84efa4de-646c-469c-b16c-6ab7c3e948cf#033[00m Feb 20 04:55:20 localhost nova_compute[280804]: 2026-02-20 09:55:20.206 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:20 localhost kernel: device tapf2d15624-dc entered promiscuous mode Feb 20 04:55:20 localhost NetworkManager[5967]: [1771581320.2160] manager: (tapf2d15624-dc): new Generic device (/org/freedesktop/NetworkManager/Devices/38) Feb 20 04:55:20 localhost nova_compute[280804]: 2026-02-20 09:55:20.216 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:20 localhost ovn_controller[155916]: 2026-02-20T09:55:20Z|00179|binding|INFO|Claiming lport f2d15624-dcf7-46c8-bbd2-fba03dee39e4 for this chassis. Feb 20 04:55:20 localhost ovn_controller[155916]: 2026-02-20T09:55:20Z|00180|binding|INFO|f2d15624-dcf7-46c8-bbd2-fba03dee39e4: Claiming unknown Feb 20 04:55:20 localhost systemd-udevd[318218]: Network interface NamePolicy= disabled on kernel command line. 
Feb 20 04:55:20 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:20.231 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fed1:8d6b/64', 'neutron:device_id': 'dhcp77c820b4-9646-5dae-aabe-b13a62b01d06-824ff542-5b14-408d-a3d6-7fb0782c41d4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-824ff542-5b14-408d-a3d6-7fb0782c41d4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '54500b20d6a643669fbf357242dde27f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=403acff4-c975-4f62-b970-8c4a1688669b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=f2d15624-dcf7-46c8-bbd2-fba03dee39e4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:55:20 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:20.233 161766 INFO neutron.agent.ovn.metadata.agent [-] Port f2d15624-dcf7-46c8-bbd2-fba03dee39e4 in datapath 824ff542-5b14-408d-a3d6-7fb0782c41d4 bound to our chassis#033[00m Feb 20 04:55:20 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:20.234 161766 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 824ff542-5b14-408d-a3d6-7fb0782c41d4 or it has no MAC or IP addresses configured, tearing the namespace down if needed 
_get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 20 04:55:20 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:20.237 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[b03566b4-36a9-403d-a402-cf555cf6351b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:55:20 localhost journal[229367]: ethtool ioctl error on tapf2d15624-dc: No such device Feb 20 04:55:20 localhost journal[229367]: ethtool ioctl error on tapf2d15624-dc: No such device Feb 20 04:55:20 localhost ovn_controller[155916]: 2026-02-20T09:55:20Z|00181|binding|INFO|Setting lport f2d15624-dcf7-46c8-bbd2-fba03dee39e4 ovn-installed in OVS Feb 20 04:55:20 localhost ovn_controller[155916]: 2026-02-20T09:55:20Z|00182|binding|INFO|Setting lport f2d15624-dcf7-46c8-bbd2-fba03dee39e4 up in Southbound Feb 20 04:55:20 localhost nova_compute[280804]: 2026-02-20 09:55:20.257 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:20 localhost journal[229367]: ethtool ioctl error on tapf2d15624-dc: No such device Feb 20 04:55:20 localhost journal[229367]: ethtool ioctl error on tapf2d15624-dc: No such device Feb 20 04:55:20 localhost journal[229367]: ethtool ioctl error on tapf2d15624-dc: No such device Feb 20 04:55:20 localhost journal[229367]: ethtool ioctl error on tapf2d15624-dc: No such device Feb 20 04:55:20 localhost journal[229367]: ethtool ioctl error on tapf2d15624-dc: No such device Feb 20 04:55:20 localhost journal[229367]: ethtool ioctl error on tapf2d15624-dc: No such device Feb 20 04:55:20 localhost nova_compute[280804]: 2026-02-20 09:55:20.297 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:20 localhost nova_compute[280804]: 2026-02-20 09:55:20.326 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 
[POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:20 localhost podman[318261]: 2026-02-20 09:55:20.425056438 +0000 UTC m=+0.059295885 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:55:20 localhost systemd[1]: tmp-crun.tqV0N7.mount: Deactivated successfully. Feb 20 04:55:20 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 3 addresses Feb 20 04:55:20 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:55:20 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:55:20 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:55:20.730 263745 INFO neutron.agent.dhcp.agent [None req-5c80adb5-c098-4bda-9fcb-b9e60860992e - - - - - -] DHCP configuration for ports {'ef7827bb-09c2-4481-bb3e-b188d0b2b90a'} is completed#033[00m Feb 20 04:55:21 localhost podman[318324]: Feb 20 04:55:21 localhost podman[318324]: 2026-02-20 09:55:21.056238536 +0000 UTC m=+0.087063590 container create 6f0ed7c648f44e8e3ce1447f514344598a9fdff6b463f31af304e8ceb8ade7ce (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-824ff542-5b14-408d-a3d6-7fb0782c41d4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS) Feb 20 04:55:21 localhost neutron_sriov_agent[256551]: 2026-02-20 09:55:21.056 2 INFO neutron.agent.securitygroups_rpc [None req-0c0c6410-fbc5-4b85-ab45-3c003033a966 f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']#033[00m Feb 20 04:55:21 localhost systemd[1]: Started libpod-conmon-6f0ed7c648f44e8e3ce1447f514344598a9fdff6b463f31af304e8ceb8ade7ce.scope. Feb 20 04:55:21 localhost podman[318324]: 2026-02-20 09:55:21.014156466 +0000 UTC m=+0.044981510 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 20 04:55:21 localhost systemd[1]: Started libcrun container. Feb 20 04:55:21 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caa45097dca84523c815e123e69dfb4c5a505fd4ac0f33a1bf8629f974a59848/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 20 04:55:21 localhost podman[318324]: 2026-02-20 09:55:21.134556191 +0000 UTC m=+0.165381255 container init 6f0ed7c648f44e8e3ce1447f514344598a9fdff6b463f31af304e8ceb8ade7ce (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-824ff542-5b14-408d-a3d6-7fb0782c41d4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:55:21 localhost systemd[1]: tmp-crun.xXfB7m.mount: Deactivated successfully. 
Feb 20 04:55:21 localhost podman[318324]: 2026-02-20 09:55:21.144813576 +0000 UTC m=+0.175638630 container start 6f0ed7c648f44e8e3ce1447f514344598a9fdff6b463f31af304e8ceb8ade7ce (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-824ff542-5b14-408d-a3d6-7fb0782c41d4, org.label-schema.build-date=20260127, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Feb 20 04:55:21 localhost dnsmasq[318342]: started, version 2.85 cachesize 150 Feb 20 04:55:21 localhost dnsmasq[318342]: DNS service limited to local subnets Feb 20 04:55:21 localhost dnsmasq[318342]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 20 04:55:21 localhost dnsmasq[318342]: warning: no upstream servers configured Feb 20 04:55:21 localhost dnsmasq-dhcp[318342]: DHCPv6, static leases only on 2001:db8::, lease time 1d Feb 20 04:55:21 localhost dnsmasq[318342]: read /var/lib/neutron/dhcp/824ff542-5b14-408d-a3d6-7fb0782c41d4/addn_hosts - 0 addresses Feb 20 04:55:21 localhost dnsmasq-dhcp[318342]: read /var/lib/neutron/dhcp/824ff542-5b14-408d-a3d6-7fb0782c41d4/host Feb 20 04:55:21 localhost dnsmasq-dhcp[318342]: read /var/lib/neutron/dhcp/824ff542-5b14-408d-a3d6-7fb0782c41d4/opts Feb 20 04:55:21 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:55:21.251 263745 INFO neutron.agent.dhcp.agent [None req-24a0c861-4db9-46a2-a449-c090d2a4ee46 - - - - - -] DHCP configuration for ports {'4313cdba-2279-4159-a10f-ed0a5ed13e62'} is completed#033[00m Feb 20 04:55:21 localhost dnsmasq[318342]: exiting on receipt of SIGTERM Feb 20 04:55:21 localhost podman[318360]: 2026-02-20 09:55:21.449571294 +0000 UTC 
m=+0.060752042 container kill 6f0ed7c648f44e8e3ce1447f514344598a9fdff6b463f31af304e8ceb8ade7ce (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-824ff542-5b14-408d-a3d6-7fb0782c41d4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:55:21 localhost systemd[1]: libpod-6f0ed7c648f44e8e3ce1447f514344598a9fdff6b463f31af304e8ceb8ade7ce.scope: Deactivated successfully. Feb 20 04:55:21 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v269: 177 pgs: 177 active+clean; 145 MiB data, 791 MiB used, 41 GiB / 42 GiB avail; 64 KiB/s rd, 3.2 KiB/s wr, 84 op/s Feb 20 04:55:21 localhost podman[318372]: 2026-02-20 09:55:21.521514318 +0000 UTC m=+0.057461186 container died 6f0ed7c648f44e8e3ce1447f514344598a9fdff6b463f31af304e8ceb8ade7ce (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-824ff542-5b14-408d-a3d6-7fb0782c41d4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127) Feb 20 04:55:21 localhost podman[318372]: 2026-02-20 09:55:21.553074215 +0000 UTC m=+0.089021033 container cleanup 6f0ed7c648f44e8e3ce1447f514344598a9fdff6b463f31af304e8ceb8ade7ce (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-824ff542-5b14-408d-a3d6-7fb0782c41d4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 
9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Feb 20 04:55:21 localhost systemd[1]: libpod-conmon-6f0ed7c648f44e8e3ce1447f514344598a9fdff6b463f31af304e8ceb8ade7ce.scope: Deactivated successfully. Feb 20 04:55:21 localhost podman[318374]: 2026-02-20 09:55:21.59866635 +0000 UTC m=+0.127120516 container remove 6f0ed7c648f44e8e3ce1447f514344598a9fdff6b463f31af304e8ceb8ade7ce (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-824ff542-5b14-408d-a3d6-7fb0782c41d4, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2) Feb 20 04:55:21 localhost ovn_controller[155916]: 2026-02-20T09:55:21Z|00183|binding|INFO|Releasing lport f2d15624-dcf7-46c8-bbd2-fba03dee39e4 from this chassis (sb_readonly=0) Feb 20 04:55:21 localhost nova_compute[280804]: 2026-02-20 09:55:21.611 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:21 localhost kernel: device tapf2d15624-dc left promiscuous mode Feb 20 04:55:21 localhost ovn_controller[155916]: 2026-02-20T09:55:21Z|00184|binding|INFO|Setting lport f2d15624-dcf7-46c8-bbd2-fba03dee39e4 down in Southbound Feb 20 04:55:21 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:21.623 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], 
port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcp77c820b4-9646-5dae-aabe-b13a62b01d06-824ff542-5b14-408d-a3d6-7fb0782c41d4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-824ff542-5b14-408d-a3d6-7fb0782c41d4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '54500b20d6a643669fbf357242dde27f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625202.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=403acff4-c975-4f62-b970-8c4a1688669b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=f2d15624-dcf7-46c8-bbd2-fba03dee39e4) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:55:21 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:21.625 161766 INFO neutron.agent.ovn.metadata.agent [-] Port f2d15624-dcf7-46c8-bbd2-fba03dee39e4 in datapath 824ff542-5b14-408d-a3d6-7fb0782c41d4 unbound from our chassis#033[00m Feb 20 04:55:21 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:21.626 161766 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 824ff542-5b14-408d-a3d6-7fb0782c41d4 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 20 04:55:21 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:21.627 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[6cd862b5-b330-4438-830b-d1cbbd02f4da]: (4, False) 
_call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:55:21 localhost nova_compute[280804]: 2026-02-20 09:55:21.644 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:21 localhost nova_compute[280804]: 2026-02-20 09:55:21.645 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:21 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:55:21.916 263745 INFO neutron.agent.dhcp.agent [None req-259e8dc1-0f83-474f-b787-a5ade35541da - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:55:21 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:55:21.917 263745 INFO neutron.agent.dhcp.agent [None req-259e8dc1-0f83-474f-b787-a5ade35541da - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:55:21 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:55:21.953 263745 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:55:22 localhost systemd[1]: var-lib-containers-storage-overlay-caa45097dca84523c815e123e69dfb4c5a505fd4ac0f33a1bf8629f974a59848-merged.mount: Deactivated successfully. Feb 20 04:55:22 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6f0ed7c648f44e8e3ce1447f514344598a9fdff6b463f31af304e8ceb8ade7ce-userdata-shm.mount: Deactivated successfully. Feb 20 04:55:22 localhost systemd[1]: run-netns-qdhcp\x2d824ff542\x2d5b14\x2d408d\x2da3d6\x2d7fb0782c41d4.mount: Deactivated successfully. 
Feb 20 04:55:22 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:55:22.117 263745 INFO neutron.agent.linux.ip_lib [None req-e3226985-deaf-48dc-97c0-af64c4c4fe52 - - - - - -] Device tap1d4a898d-87 cannot be used as it has no MAC address#033[00m Feb 20 04:55:22 localhost nova_compute[280804]: 2026-02-20 09:55:22.139 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:22 localhost kernel: device tap1d4a898d-87 entered promiscuous mode Feb 20 04:55:22 localhost NetworkManager[5967]: [1771581322.1482] manager: (tap1d4a898d-87): new Generic device (/org/freedesktop/NetworkManager/Devices/39) Feb 20 04:55:22 localhost nova_compute[280804]: 2026-02-20 09:55:22.150 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:22 localhost ovn_controller[155916]: 2026-02-20T09:55:22Z|00185|binding|INFO|Claiming lport 1d4a898d-8737-4915-a3c5-855f28254c0f for this chassis. 
Feb 20 04:55:22 localhost ovn_controller[155916]: 2026-02-20T09:55:22Z|00186|binding|INFO|1d4a898d-8737-4915-a3c5-855f28254c0f: Claiming unknown Feb 20 04:55:22 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:22.170 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp77c820b4-9646-5dae-aabe-b13a62b01d06-b1c2f50f-386f-4271-9374-ba939b86c805', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1c2f50f-386f-4271-9374-ba939b86c805', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9f8945a2560410b988e395a1db7710f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1699e453-3dcd-44db-bfed-b7029758b76f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=1d4a898d-8737-4915-a3c5-855f28254c0f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:55:22 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:22.173 161766 INFO neutron.agent.ovn.metadata.agent [-] Port 1d4a898d-8737-4915-a3c5-855f28254c0f in datapath b1c2f50f-386f-4271-9374-ba939b86c805 bound to our chassis#033[00m Feb 20 04:55:22 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:22.175 161766 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 
b1c2f50f-386f-4271-9374-ba939b86c805 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 20 04:55:22 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:22.176 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[4dfd430a-71fa-4e6c-94c4-11ec4c7b1fe9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:55:22 localhost ovn_controller[155916]: 2026-02-20T09:55:22Z|00187|binding|INFO|Setting lport 1d4a898d-8737-4915-a3c5-855f28254c0f ovn-installed in OVS Feb 20 04:55:22 localhost ovn_controller[155916]: 2026-02-20T09:55:22Z|00188|binding|INFO|Setting lport 1d4a898d-8737-4915-a3c5-855f28254c0f up in Southbound Feb 20 04:55:22 localhost nova_compute[280804]: 2026-02-20 09:55:22.185 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:22 localhost nova_compute[280804]: 2026-02-20 09:55:22.226 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:22 localhost nova_compute[280804]: 2026-02-20 09:55:22.279 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:22 localhost nova_compute[280804]: 2026-02-20 09:55:22.282 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:22 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:55:22 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e148 do_prune osdmap full prune enabled Feb 20 04:55:22 localhost ceph-mon[292786]: 
mon.np0005625202@0(leader).osd e149 e149: 6 total, 6 up, 6 in Feb 20 04:55:22 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e149: 6 total, 6 up, 6 in Feb 20 04:55:22 localhost nova_compute[280804]: 2026-02-20 09:55:22.848 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:23 localhost podman[318463]: Feb 20 04:55:23 localhost podman[318463]: 2026-02-20 09:55:23.118261699 +0000 UTC m=+0.093297998 container create ba849199177fdae151660c67ca44f352c37af151679d99b4213bf8b333734cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b1c2f50f-386f-4271-9374-ba939b86c805, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Feb 20 04:55:23 localhost systemd[1]: Started libpod-conmon-ba849199177fdae151660c67ca44f352c37af151679d99b4213bf8b333734cf4.scope. Feb 20 04:55:23 localhost podman[318463]: 2026-02-20 09:55:23.076289351 +0000 UTC m=+0.051325670 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 20 04:55:23 localhost systemd[1]: Started libcrun container. 
Feb 20 04:55:23 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2111e911fb9c56bc583929b001f2f586b1d92dc0d4b50c3e357656b43f1dc06/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 20 04:55:23 localhost podman[318463]: 2026-02-20 09:55:23.192965666 +0000 UTC m=+0.168001955 container init ba849199177fdae151660c67ca44f352c37af151679d99b4213bf8b333734cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b1c2f50f-386f-4271-9374-ba939b86c805, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3) Feb 20 04:55:23 localhost podman[318463]: 2026-02-20 09:55:23.200989681 +0000 UTC m=+0.176025980 container start ba849199177fdae151660c67ca44f352c37af151679d99b4213bf8b333734cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b1c2f50f-386f-4271-9374-ba939b86c805, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:55:23 localhost dnsmasq[318481]: started, version 2.85 cachesize 150 Feb 20 04:55:23 localhost dnsmasq[318481]: DNS service limited to local subnets Feb 20 04:55:23 localhost dnsmasq[318481]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 20 04:55:23 localhost dnsmasq[318481]: warning: no upstream servers 
configured Feb 20 04:55:23 localhost dnsmasq-dhcp[318481]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 20 04:55:23 localhost dnsmasq[318481]: read /var/lib/neutron/dhcp/b1c2f50f-386f-4271-9374-ba939b86c805/addn_hosts - 0 addresses Feb 20 04:55:23 localhost dnsmasq-dhcp[318481]: read /var/lib/neutron/dhcp/b1c2f50f-386f-4271-9374-ba939b86c805/host Feb 20 04:55:23 localhost dnsmasq-dhcp[318481]: read /var/lib/neutron/dhcp/b1c2f50f-386f-4271-9374-ba939b86c805/opts Feb 20 04:55:23 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:55:23.304 263745 INFO neutron.agent.dhcp.agent [None req-28320e47-c882-41bd-baef-f466f5aa1904 - - - - - -] DHCP configuration for ports {'f476502e-076d-49aa-b906-701df53b3abb'} is completed#033[00m Feb 20 04:55:23 localhost neutron_sriov_agent[256551]: 2026-02-20 09:55:23.363 2 INFO neutron.agent.securitygroups_rpc [None req-6678ed04-c4d3-4555-9813-927645c955fd 5f5cb06fd4c045b1bfc3f890392c3165 54500b20d6a643669fbf357242dde27f - - default default] Security group member updated ['b8016b60-fce7-435d-b877-28993ceda4d5']#033[00m Feb 20 04:55:23 localhost ceph-mgr[286565]: [balancer INFO root] Optimize plan auto_2026-02-20_09:55:23 Feb 20 04:55:23 localhost ceph-mgr[286565]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Feb 20 04:55:23 localhost ceph-mgr[286565]: [balancer INFO root] do_upmap Feb 20 04:55:23 localhost ceph-mgr[286565]: [balancer INFO root] pools ['vms', 'images', 'volumes', '.mgr', 'manila_metadata', 'manila_data', 'backups'] Feb 20 04:55:23 localhost ceph-mgr[286565]: [balancer INFO root] prepared 0/10 changes Feb 20 04:55:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 20 04:55:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:55:23 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:55:23.462 263745 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:55:23 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v271: 177 pgs: 177 active+clean; 145 MiB data, 791 MiB used, 41 GiB / 42 GiB avail; 36 KiB/s rd, 2.4 KiB/s wr, 49 op/s Feb 20 04:55:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 04:55:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:55:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 04:55:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:55:23 localhost neutron_sriov_agent[256551]: 2026-02-20 09:55:23.545 2 INFO neutron.agent.securitygroups_rpc [None req-08ae8aba-2609-40c5-899b-086a86995061 f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']#033[00m Feb 20 04:55:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] _maybe_adjust Feb 20 04:55:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:55:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Feb 20 04:55:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:55:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003325274375348967 of space, bias 1.0, pg target 0.6650548750697934 quantized to 32 (current 32) Feb 20 04:55:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] 
effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:55:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 4.5438418946584776e-07 of space, bias 1.0, pg target 9.072537649668094e-05 quantized to 32 (current 32) Feb 20 04:55:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:55:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Feb 20 04:55:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:55:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 20 04:55:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:55:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 20 04:55:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:55:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 6.815762841987716e-06 of space, bias 4.0, pg target 0.005425347222222221 quantized to 16 (current 16) Feb 20 04:55:23 localhost ceph-mgr[286565]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Feb 20 04:55:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 20 04:55:23 localhost ceph-mgr[286565]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Feb 20 04:55:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 20 04:55:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: vms, start_after= 
Feb 20 04:55:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 20 04:55:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: images, start_after= Feb 20 04:55:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: images, start_after= Feb 20 04:55:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 20 04:55:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 20 04:55:24 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:55:24.291 263745 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:55:24 localhost nova_compute[280804]: 2026-02-20 09:55:24.796 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:24 localhost systemd[1]: tmp-crun.3lFZVL.mount: Deactivated successfully. 
Feb 20 04:55:24 localhost podman[318499]: 2026-02-20 09:55:24.918975321 +0000 UTC m=+0.066185430 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:55:24 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 2 addresses Feb 20 04:55:24 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:55:24 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:55:25 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v272: 177 pgs: 177 active+clean; 145 MiB data, 791 MiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 2.1 KiB/s wr, 45 op/s Feb 20 04:55:26 localhost neutron_sriov_agent[256551]: 2026-02-20 09:55:26.160 2 INFO neutron.agent.securitygroups_rpc [None req-65f746b2-e25c-42a6-a0af-3bc4d3abfc01 5f5cb06fd4c045b1bfc3f890392c3165 54500b20d6a643669fbf357242dde27f - - default default] Security group member updated ['b8016b60-fce7-435d-b877-28993ceda4d5']#033[00m Feb 20 04:55:26 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:55:26.161 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:55:24Z, description=, device_id=9b5f24e5-e9ff-4a93-ab1c-6e8d3736b724, 
device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=d3d5067d-0621-4d2c-b957-d3cc6e186e1e, ip_allocation=immediate, mac_address=fa:16:3e:a2:b5:4d, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T08:22:50Z, description=, dns_domain=, id=84efa4de-646c-469c-b16c-6ab7c3e948cf, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=91bce661d685472eb3e7cacab17bf52a, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['38eee90c-2cdd-4118-ba55-5401f7e61e1e'], tags=[], tenant_id=91bce661d685472eb3e7cacab17bf52a, updated_at=2026-02-20T08:22:56Z, vlan_transparent=None, network_id=84efa4de-646c-469c-b16c-6ab7c3e948cf, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2050, status=DOWN, tags=[], tenant_id=, updated_at=2026-02-20T09:55:25Z on network 84efa4de-646c-469c-b16c-6ab7c3e948cf#033[00m Feb 20 04:55:26 localhost sshd[318519]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:55:26 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1ece01ec-80d8-4513-b3ec-6bcff358bcd5", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 04:55:26 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1ece01ec-80d8-4513-b3ec-6bcff358bcd5, vol_name:cephfs) < "" Feb 20 04:55:26 
localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1ece01ec-80d8-4513-b3ec-6bcff358bcd5/.meta.tmp' Feb 20 04:55:26 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1ece01ec-80d8-4513-b3ec-6bcff358bcd5/.meta.tmp' to config b'/volumes/_nogroup/1ece01ec-80d8-4513-b3ec-6bcff358bcd5/.meta' Feb 20 04:55:26 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1ece01ec-80d8-4513-b3ec-6bcff358bcd5, vol_name:cephfs) < "" Feb 20 04:55:26 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1ece01ec-80d8-4513-b3ec-6bcff358bcd5", "format": "json"}]: dispatch Feb 20 04:55:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e. Feb 20 04:55:26 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1ece01ec-80d8-4513-b3ec-6bcff358bcd5, vol_name:cephfs) < "" Feb 20 04:55:26 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1ece01ec-80d8-4513-b3ec-6bcff358bcd5, vol_name:cephfs) < "" Feb 20 04:55:26 localhost systemd[1]: tmp-crun.SUfiP3.mount: Deactivated successfully. 
Feb 20 04:55:26 localhost podman[318539]: 2026-02-20 09:55:26.38088024 +0000 UTC m=+0.064407372 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Feb 20 04:55:26 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 3 addresses Feb 20 04:55:26 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:55:26 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:55:26 localhost systemd[1]: tmp-crun.24ZpRY.mount: Deactivated successfully. 
Feb 20 04:55:26 localhost podman[318551]: 2026-02-20 09:55:26.438412365 +0000 UTC m=+0.071502862 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 20 04:55:26 localhost podman[318551]: 2026-02-20 09:55:26.473868608 +0000 UTC m=+0.106959165 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', 
'--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 20 04:55:26 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully. 
Feb 20 04:55:26 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:55:26.578 263745 INFO neutron.agent.dhcp.agent [None req-c630d85f-273f-43de-842c-8f6ec6f7991c - - - - - -] DHCP configuration for ports {'d3d5067d-0621-4d2c-b957-d3cc6e186e1e'} is completed#033[00m Feb 20 04:55:27 localhost ovn_controller[155916]: 2026-02-20T09:55:27Z|00189|binding|INFO|Removing iface tap0de39a2f-3a ovn-installed in OVS Feb 20 04:55:27 localhost ovn_controller[155916]: 2026-02-20T09:55:27Z|00190|binding|INFO|Removing lport 0de39a2f-3a51-400e-a8a2-62e994858084 ovn-installed in OVS Feb 20 04:55:27 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:27.199 161766 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port daf9f26d-37e9-4241-ad54-5ce68dc666bb with type ""#033[00m Feb 20 04:55:27 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:27.201 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:ffff::2/64', 'neutron:device_id': 'dhcp77c820b4-9646-5dae-aabe-b13a62b01d06-47e28fcd-a950-487c-a419-bd884e953d11', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-47e28fcd-a950-487c-a419-bd884e953d11', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '54500b20d6a643669fbf357242dde27f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625202.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], 
datapath=5af64d9a-ed7a-428f-a239-1d025f36e1fb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=0de39a2f-3a51-400e-a8a2-62e994858084) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:55:27 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:27.204 161766 INFO neutron.agent.ovn.metadata.agent [-] Port 0de39a2f-3a51-400e-a8a2-62e994858084 in datapath 47e28fcd-a950-487c-a419-bd884e953d11 unbound from our chassis#033[00m Feb 20 04:55:27 localhost dnsmasq[317827]: exiting on receipt of SIGTERM Feb 20 04:55:27 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:27.205 161766 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 47e28fcd-a950-487c-a419-bd884e953d11 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 20 04:55:27 localhost nova_compute[280804]: 2026-02-20 09:55:27.205 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:27 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:27.206 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[38e22ca5-4afc-40b7-867c-fa6b62452f25]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:55:27 localhost systemd[1]: libpod-1c0fe35730af3b63b1aaa772b5feb4b93debc5620f1a9c626c314065e7fd729b.scope: Deactivated successfully. 
Feb 20 04:55:27 localhost podman[318600]: 2026-02-20 09:55:27.206509322 +0000 UTC m=+0.056947401 container kill 1c0fe35730af3b63b1aaa772b5feb4b93debc5620f1a9c626c314065e7fd729b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-47e28fcd-a950-487c-a419-bd884e953d11, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127) Feb 20 04:55:27 localhost nova_compute[280804]: 2026-02-20 09:55:27.209 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:27 localhost podman[318612]: 2026-02-20 09:55:27.280729697 +0000 UTC m=+0.057897227 container died 1c0fe35730af3b63b1aaa772b5feb4b93debc5620f1a9c626c314065e7fd729b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-47e28fcd-a950-487c-a419-bd884e953d11, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Feb 20 04:55:27 localhost podman[318612]: 2026-02-20 09:55:27.305741929 +0000 UTC m=+0.082909389 container cleanup 1c0fe35730af3b63b1aaa772b5feb4b93debc5620f1a9c626c314065e7fd729b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-47e28fcd-a950-487c-a419-bd884e953d11, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:55:27 localhost systemd[1]: libpod-conmon-1c0fe35730af3b63b1aaa772b5feb4b93debc5620f1a9c626c314065e7fd729b.scope: Deactivated successfully. Feb 20 04:55:27 localhost neutron_sriov_agent[256551]: 2026-02-20 09:55:27.329 2 INFO neutron.agent.securitygroups_rpc [None req-09c13c9e-9ba3-4904-bcac-09b2f5e1651f 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['b2e5856c-f1df-4bbc-8f9c-41698aa249c6']#033[00m Feb 20 04:55:27 localhost podman[318614]: 2026-02-20 09:55:27.352817734 +0000 UTC m=+0.123681985 container remove 1c0fe35730af3b63b1aaa772b5feb4b93debc5620f1a9c626c314065e7fd729b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-47e28fcd-a950-487c-a419-bd884e953d11, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3) Feb 20 04:55:27 localhost nova_compute[280804]: 2026-02-20 09:55:27.364 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:27 localhost kernel: device tap0de39a2f-3a left promiscuous mode Feb 20 04:55:27 localhost systemd[1]: var-lib-containers-storage-overlay-7e44607525178dca85b1327cb4dff450e0b48e213f9f443a3a15d119915db26e-merged.mount: Deactivated successfully. 
Feb 20 04:55:27 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1c0fe35730af3b63b1aaa772b5feb4b93debc5620f1a9c626c314065e7fd729b-userdata-shm.mount: Deactivated successfully. Feb 20 04:55:27 localhost nova_compute[280804]: 2026-02-20 09:55:27.377 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:27 localhost systemd[1]: run-netns-qdhcp\x2d47e28fcd\x2da950\x2d487c\x2da419\x2dbd884e953d11.mount: Deactivated successfully. Feb 20 04:55:27 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:55:27.395 263745 INFO neutron.agent.dhcp.agent [None req-26240cb2-c1ab-4759-a997-493946a4548b - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:55:27 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v273: 177 pgs: 177 active+clean; 145 MiB data, 791 MiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 1.9 KiB/s wr, 39 op/s Feb 20 04:55:27 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:55:27 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:55:27.809 263745 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:55:27 localhost nova_compute[280804]: 2026-02-20 09:55:27.886 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:28 localhost openstack_network_exporter[243776]: ERROR 09:55:28 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 20 04:55:28 localhost openstack_network_exporter[243776]: Feb 20 04:55:28 localhost openstack_network_exporter[243776]: ERROR 09:55:28 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 20 04:55:28 localhost 
openstack_network_exporter[243776]: Feb 20 04:55:28 localhost neutron_sriov_agent[256551]: 2026-02-20 09:55:28.219 2 INFO neutron.agent.securitygroups_rpc [None req-df2a63c6-7355-4f99-a5c3-49ea7a77359b 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['b2e5856c-f1df-4bbc-8f9c-41698aa249c6']#033[00m Feb 20 04:55:28 localhost nova_compute[280804]: 2026-02-20 09:55:28.252 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:28 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:55:28.438 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:55:28Z, description=, device_id=9b5f24e5-e9ff-4a93-ab1c-6e8d3736b724, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=f83f4401-d8d5-4c78-9833-69203ab238c9, ip_allocation=immediate, mac_address=fa:16:3e:2f:f5:2b, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:55:19Z, description=, dns_domain=, id=b1c2f50f-386f-4271-9374-ba939b86c805, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPTestJSON-1742767130, port_security_enabled=True, project_id=b9f8945a2560410b988e395a1db7710f, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=65464, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2014, status=ACTIVE, subnets=['da2e6292-614d-4b33-bf33-1f0e79cadaaa'], tags=[], tenant_id=b9f8945a2560410b988e395a1db7710f, updated_at=2026-02-20T09:55:20Z, vlan_transparent=None, network_id=b1c2f50f-386f-4271-9374-ba939b86c805, 
port_security_enabled=False, project_id=b9f8945a2560410b988e395a1db7710f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2063, status=DOWN, tags=[], tenant_id=b9f8945a2560410b988e395a1db7710f, updated_at=2026-02-20T09:55:28Z on network b1c2f50f-386f-4271-9374-ba939b86c805#033[00m Feb 20 04:55:28 localhost dnsmasq[318481]: read /var/lib/neutron/dhcp/b1c2f50f-386f-4271-9374-ba939b86c805/addn_hosts - 1 addresses Feb 20 04:55:28 localhost dnsmasq-dhcp[318481]: read /var/lib/neutron/dhcp/b1c2f50f-386f-4271-9374-ba939b86c805/host Feb 20 04:55:28 localhost podman[318661]: 2026-02-20 09:55:28.859898037 +0000 UTC m=+0.063467417 container kill ba849199177fdae151660c67ca44f352c37af151679d99b4213bf8b333734cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b1c2f50f-386f-4271-9374-ba939b86c805, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Feb 20 04:55:28 localhost dnsmasq-dhcp[318481]: read /var/lib/neutron/dhcp/b1c2f50f-386f-4271-9374-ba939b86c805/opts Feb 20 04:55:29 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:55:29.137 263745 INFO neutron.agent.dhcp.agent [None req-cf8acdc2-28d3-4406-a3ee-3f4ecad01e8f - - - - - -] DHCP configuration for ports {'f83f4401-d8d5-4c78-9833-69203ab238c9'} is completed#033[00m Feb 20 04:55:29 localhost neutron_sriov_agent[256551]: 2026-02-20 09:55:29.153 2 INFO neutron.agent.securitygroups_rpc [None req-abf59bbc-b23d-40a5-812c-25dd2e2a84ba 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['b2e5856c-f1df-4bbc-8f9c-41698aa249c6']#033[00m Feb 20 
04:55:29 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v274: 177 pgs: 177 active+clean; 145 MiB data, 791 MiB used, 41 GiB / 42 GiB avail; 25 KiB/s rd, 4.0 KiB/s wr, 33 op/s Feb 20 04:55:29 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "1ece01ec-80d8-4513-b3ec-6bcff358bcd5", "snap_name": "76272f77-26a4-41f3-97ff-7d9d42de32a4", "format": "json"}]: dispatch Feb 20 04:55:29 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:76272f77-26a4-41f3-97ff-7d9d42de32a4, sub_name:1ece01ec-80d8-4513-b3ec-6bcff358bcd5, vol_name:cephfs) < "" Feb 20 04:55:29 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:76272f77-26a4-41f3-97ff-7d9d42de32a4, sub_name:1ece01ec-80d8-4513-b3ec-6bcff358bcd5, vol_name:cephfs) < "" Feb 20 04:55:29 localhost nova_compute[280804]: 2026-02-20 09:55:29.839 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:30 localhost neutron_sriov_agent[256551]: 2026-02-20 09:55:30.329 2 INFO neutron.agent.securitygroups_rpc [None req-c4977344-b1fb-45a9-a725-767a5df232d2 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['b2e5856c-f1df-4bbc-8f9c-41698aa249c6']#033[00m Feb 20 04:55:30 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:55:30.764 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:55:30Z, description=, 
device_id=1ab8134a-bddf-428c-9f0b-604112325b1f, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=93b0d396-8eac-41ab-9e0b-580e9d2ed47c, ip_allocation=immediate, mac_address=fa:16:3e:e8:79:5f, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T08:22:50Z, description=, dns_domain=, id=84efa4de-646c-469c-b16c-6ab7c3e948cf, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=91bce661d685472eb3e7cacab17bf52a, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['38eee90c-2cdd-4118-ba55-5401f7e61e1e'], tags=[], tenant_id=91bce661d685472eb3e7cacab17bf52a, updated_at=2026-02-20T08:22:56Z, vlan_transparent=None, network_id=84efa4de-646c-469c-b16c-6ab7c3e948cf, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2073, status=DOWN, tags=[], tenant_id=, updated_at=2026-02-20T09:55:30Z on network 84efa4de-646c-469c-b16c-6ab7c3e948cf#033[00m Feb 20 04:55:30 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:55:30.876 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:55:28Z, description=, device_id=9b5f24e5-e9ff-4a93-ab1c-6e8d3736b724, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=f83f4401-d8d5-4c78-9833-69203ab238c9, ip_allocation=immediate, mac_address=fa:16:3e:2f:f5:2b, name=, network=admin_state_up=True, 
availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:55:19Z, description=, dns_domain=, id=b1c2f50f-386f-4271-9374-ba939b86c805, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPTestJSON-1742767130, port_security_enabled=True, project_id=b9f8945a2560410b988e395a1db7710f, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=65464, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2014, status=ACTIVE, subnets=['da2e6292-614d-4b33-bf33-1f0e79cadaaa'], tags=[], tenant_id=b9f8945a2560410b988e395a1db7710f, updated_at=2026-02-20T09:55:20Z, vlan_transparent=None, network_id=b1c2f50f-386f-4271-9374-ba939b86c805, port_security_enabled=False, project_id=b9f8945a2560410b988e395a1db7710f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2063, status=DOWN, tags=[], tenant_id=b9f8945a2560410b988e395a1db7710f, updated_at=2026-02-20T09:55:28Z on network b1c2f50f-386f-4271-9374-ba939b86c805#033[00m Feb 20 04:55:30 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 4 addresses Feb 20 04:55:30 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:55:30 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:55:30 localhost podman[318700]: 2026-02-20 09:55:30.968933473 +0000 UTC m=+0.063583339 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, 
io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS) Feb 20 04:55:31 localhost neutron_sriov_agent[256551]: 2026-02-20 09:55:31.054 2 INFO neutron.agent.securitygroups_rpc [None req-3c26b833-1dd4-4db0-a9c6-673823ae81db 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['b2e5856c-f1df-4bbc-8f9c-41698aa249c6']#033[00m Feb 20 04:55:31 localhost dnsmasq[318481]: read /var/lib/neutron/dhcp/b1c2f50f-386f-4271-9374-ba939b86c805/addn_hosts - 1 addresses Feb 20 04:55:31 localhost dnsmasq-dhcp[318481]: read /var/lib/neutron/dhcp/b1c2f50f-386f-4271-9374-ba939b86c805/host Feb 20 04:55:31 localhost podman[318733]: 2026-02-20 09:55:31.111891564 +0000 UTC m=+0.063036785 container kill ba849199177fdae151660c67ca44f352c37af151679d99b4213bf8b333734cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b1c2f50f-386f-4271-9374-ba939b86c805, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127) Feb 20 04:55:31 localhost dnsmasq-dhcp[318481]: read /var/lib/neutron/dhcp/b1c2f50f-386f-4271-9374-ba939b86c805/opts Feb 20 04:55:31 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:55:31.204 263745 INFO neutron.agent.dhcp.agent [None req-856cdfd5-e612-4ce7-bbfd-22ceedf5b3b8 - - - - - -] DHCP configuration for ports {'93b0d396-8eac-41ab-9e0b-580e9d2ed47c'} is completed#033[00m Feb 20 04:55:31 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:55:31.364 263745 INFO neutron.agent.dhcp.agent [None req-e9113de8-130f-4714-a805-f782fa97d94b - - - - - -] DHCP configuration for 
ports {'f83f4401-d8d5-4c78-9833-69203ab238c9'} is completed#033[00m Feb 20 04:55:31 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v275: 177 pgs: 177 active+clean; 145 MiB data, 791 MiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 2.7 KiB/s wr, 1 op/s Feb 20 04:55:32 localhost neutron_sriov_agent[256551]: 2026-02-20 09:55:32.591 2 INFO neutron.agent.securitygroups_rpc [None req-2e3d6688-1793-41b9-a2f2-f815ee6132fe 061949b19d2146debcdb4e85c8db9eec b9f8945a2560410b988e395a1db7710f - - default default] Security group member updated ['9c30e397-a710-4013-bf42-b0dd9762b00a']#033[00m Feb 20 04:55:32 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:55:32.628 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:55:32Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=3763b70e-2a08-4fd7-a31d-f5e2d841bd45, ip_allocation=immediate, mac_address=fa:16:3e:b3:45:be, name=tempest-FloatingIPTestJSON-682456783, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:55:19Z, description=, dns_domain=, id=b1c2f50f-386f-4271-9374-ba939b86c805, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPTestJSON-1742767130, port_security_enabled=True, project_id=b9f8945a2560410b988e395a1db7710f, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=65464, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2014, status=ACTIVE, subnets=['da2e6292-614d-4b33-bf33-1f0e79cadaaa'], tags=[], tenant_id=b9f8945a2560410b988e395a1db7710f, updated_at=2026-02-20T09:55:20Z, vlan_transparent=None, network_id=b1c2f50f-386f-4271-9374-ba939b86c805, 
port_security_enabled=True, project_id=b9f8945a2560410b988e395a1db7710f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['9c30e397-a710-4013-bf42-b0dd9762b00a'], standard_attr_id=2080, status=DOWN, tags=[], tenant_id=b9f8945a2560410b988e395a1db7710f, updated_at=2026-02-20T09:55:32Z on network b1c2f50f-386f-4271-9374-ba939b86c805#033[00m Feb 20 04:55:32 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:55:32 localhost dnsmasq[318481]: read /var/lib/neutron/dhcp/b1c2f50f-386f-4271-9374-ba939b86c805/addn_hosts - 2 addresses Feb 20 04:55:32 localhost podman[318774]: 2026-02-20 09:55:32.849892431 +0000 UTC m=+0.061738990 container kill ba849199177fdae151660c67ca44f352c37af151679d99b4213bf8b333734cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b1c2f50f-386f-4271-9374-ba939b86c805, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Feb 20 04:55:32 localhost dnsmasq-dhcp[318481]: read /var/lib/neutron/dhcp/b1c2f50f-386f-4271-9374-ba939b86c805/host Feb 20 04:55:32 localhost dnsmasq-dhcp[318481]: read /var/lib/neutron/dhcp/b1c2f50f-386f-4271-9374-ba939b86c805/opts Feb 20 04:55:32 localhost nova_compute[280804]: 2026-02-20 09:55:32.887 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:32 localhost neutron_sriov_agent[256551]: 2026-02-20 09:55:32.970 2 INFO neutron.agent.securitygroups_rpc [None req-d888e083-a1eb-4717-8e36-0999aadb5157 
809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['b2e5856c-f1df-4bbc-8f9c-41698aa249c6']#033[00m Feb 20 04:55:33 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:55:33.048 263745 INFO neutron.agent.dhcp.agent [None req-59e871ee-5326-4c37-88dc-133b1e6af14e - - - - - -] DHCP configuration for ports {'3763b70e-2a08-4fd7-a31d-f5e2d841bd45'} is completed#033[00m Feb 20 04:55:33 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "1ece01ec-80d8-4513-b3ec-6bcff358bcd5", "snap_name": "94e02352-6a28-4288-9072-c7133a6151bf", "format": "json"}]: dispatch Feb 20 04:55:33 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:94e02352-6a28-4288-9072-c7133a6151bf, sub_name:1ece01ec-80d8-4513-b3ec-6bcff358bcd5, vol_name:cephfs) < "" Feb 20 04:55:33 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:94e02352-6a28-4288-9072-c7133a6151bf, sub_name:1ece01ec-80d8-4513-b3ec-6bcff358bcd5, vol_name:cephfs) < "" Feb 20 04:55:33 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v276: 177 pgs: 177 active+clean; 145 MiB data, 791 MiB used, 41 GiB / 42 GiB avail; 477 B/s rd, 2.5 KiB/s wr, 1 op/s Feb 20 04:55:34 localhost nova_compute[280804]: 2026-02-20 09:55:34.842 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:34 localhost neutron_sriov_agent[256551]: 2026-02-20 09:55:34.958 2 INFO neutron.agent.securitygroups_rpc [None req-3b974f9c-5163-47e8-89d2-00acf380ad82 061949b19d2146debcdb4e85c8db9eec b9f8945a2560410b988e395a1db7710f - - default default] 
Security group member updated ['9c30e397-a710-4013-bf42-b0dd9762b00a']#033[00m Feb 20 04:55:35 localhost podman[318827]: 2026-02-20 09:55:35.223605549 +0000 UTC m=+0.108598569 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:55:35 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 3 addresses Feb 20 04:55:35 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:55:35 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:55:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c. Feb 20 04:55:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217. Feb 20 04:55:35 localhost systemd[1]: tmp-crun.u0DMYB.mount: Deactivated successfully. 
Feb 20 04:55:35 localhost dnsmasq[318481]: read /var/lib/neutron/dhcp/b1c2f50f-386f-4271-9374-ba939b86c805/addn_hosts - 1 addresses Feb 20 04:55:35 localhost dnsmasq-dhcp[318481]: read /var/lib/neutron/dhcp/b1c2f50f-386f-4271-9374-ba939b86c805/host Feb 20 04:55:35 localhost dnsmasq-dhcp[318481]: read /var/lib/neutron/dhcp/b1c2f50f-386f-4271-9374-ba939b86c805/opts Feb 20 04:55:35 localhost podman[318839]: 2026-02-20 09:55:35.265486254 +0000 UTC m=+0.085825647 container kill ba849199177fdae151660c67ca44f352c37af151679d99b4213bf8b333734cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b1c2f50f-386f-4271-9374-ba939b86c805, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Feb 20 04:55:35 localhost podman[318853]: 2026-02-20 09:55:35.401006685 +0000 UTC m=+0.149336713 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, release=1770267347, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.buildah.version=1.33.7, version=9.7, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., vcs-type=git) Feb 20 04:55:35 localhost podman[318853]: 2026-02-20 09:55:35.411691042 +0000 UTC m=+0.160021030 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., 
io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, org.opencontainers.image.created=2026-02-05T04:57:10Z, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, distribution-scope=public, container_name=openstack_network_exporter, release=1770267347, build-date=2026-02-05T04:57:10Z, name=ubi9/ubi-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vendor=Red Hat, Inc., version=9.7, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7) Feb 20 04:55:35 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. 
Feb 20 04:55:35 localhost podman[318854]: 2026-02-20 09:55:35.36655991 +0000 UTC m=+0.108976229 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, 
io.buildah.version=1.41.3) Feb 20 04:55:35 localhost podman[318854]: 2026-02-20 09:55:35.497605011 +0000 UTC m=+0.240021400 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
managed_by=edpm_ansible) Feb 20 04:55:35 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v277: 177 pgs: 177 active+clean; 145 MiB data, 791 MiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 4.4 KiB/s wr, 2 op/s Feb 20 04:55:35 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully. Feb 20 04:55:36 localhost dnsmasq[318481]: read /var/lib/neutron/dhcp/b1c2f50f-386f-4271-9374-ba939b86c805/addn_hosts - 0 addresses Feb 20 04:55:36 localhost dnsmasq-dhcp[318481]: read /var/lib/neutron/dhcp/b1c2f50f-386f-4271-9374-ba939b86c805/host Feb 20 04:55:36 localhost podman[318926]: 2026-02-20 09:55:36.445029516 +0000 UTC m=+0.059910790 container kill ba849199177fdae151660c67ca44f352c37af151679d99b4213bf8b333734cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b1c2f50f-386f-4271-9374-ba939b86c805, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Feb 20 04:55:36 localhost dnsmasq-dhcp[318481]: read /var/lib/neutron/dhcp/b1c2f50f-386f-4271-9374-ba939b86c805/opts Feb 20 04:55:36 localhost nova_compute[280804]: 2026-02-20 09:55:36.617 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:36 localhost kernel: device tap1d4a898d-87 left promiscuous mode Feb 20 04:55:36 localhost ovn_controller[155916]: 2026-02-20T09:55:36Z|00191|binding|INFO|Releasing lport 1d4a898d-8737-4915-a3c5-855f28254c0f from this chassis (sb_readonly=0) Feb 20 04:55:36 localhost ovn_controller[155916]: 2026-02-20T09:55:36Z|00192|binding|INFO|Setting lport 1d4a898d-8737-4915-a3c5-855f28254c0f 
down in Southbound Feb 20 04:55:36 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:36.628 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp77c820b4-9646-5dae-aabe-b13a62b01d06-b1c2f50f-386f-4271-9374-ba939b86c805', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1c2f50f-386f-4271-9374-ba939b86c805', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9f8945a2560410b988e395a1db7710f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625202.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1699e453-3dcd-44db-bfed-b7029758b76f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=1d4a898d-8737-4915-a3c5-855f28254c0f) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:55:36 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:36.630 161766 INFO neutron.agent.ovn.metadata.agent [-] Port 1d4a898d-8737-4915-a3c5-855f28254c0f in datapath b1c2f50f-386f-4271-9374-ba939b86c805 unbound from our chassis#033[00m Feb 20 04:55:36 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:36.632 161766 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b1c2f50f-386f-4271-9374-ba939b86c805, tearing the namespace down if 
needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 20 04:55:36 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:36.633 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[9aff1ebe-4d6a-45bb-8ea4-591017adbaac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:55:36 localhost nova_compute[280804]: 2026-02-20 09:55:36.641 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:36 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "1ece01ec-80d8-4513-b3ec-6bcff358bcd5", "snap_name": "94e02352-6a28-4288-9072-c7133a6151bf_2956b5fe-4a3f-4e13-8c50-0bbaa50928f8", "force": true, "format": "json"}]: dispatch Feb 20 04:55:36 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:94e02352-6a28-4288-9072-c7133a6151bf_2956b5fe-4a3f-4e13-8c50-0bbaa50928f8, sub_name:1ece01ec-80d8-4513-b3ec-6bcff358bcd5, vol_name:cephfs) < "" Feb 20 04:55:36 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1ece01ec-80d8-4513-b3ec-6bcff358bcd5/.meta.tmp' Feb 20 04:55:36 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1ece01ec-80d8-4513-b3ec-6bcff358bcd5/.meta.tmp' to config b'/volumes/_nogroup/1ece01ec-80d8-4513-b3ec-6bcff358bcd5/.meta' Feb 20 04:55:36 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:94e02352-6a28-4288-9072-c7133a6151bf_2956b5fe-4a3f-4e13-8c50-0bbaa50928f8, 
sub_name:1ece01ec-80d8-4513-b3ec-6bcff358bcd5, vol_name:cephfs) < "" Feb 20 04:55:36 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "1ece01ec-80d8-4513-b3ec-6bcff358bcd5", "snap_name": "94e02352-6a28-4288-9072-c7133a6151bf", "force": true, "format": "json"}]: dispatch Feb 20 04:55:36 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:94e02352-6a28-4288-9072-c7133a6151bf, sub_name:1ece01ec-80d8-4513-b3ec-6bcff358bcd5, vol_name:cephfs) < "" Feb 20 04:55:36 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1ece01ec-80d8-4513-b3ec-6bcff358bcd5/.meta.tmp' Feb 20 04:55:36 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1ece01ec-80d8-4513-b3ec-6bcff358bcd5/.meta.tmp' to config b'/volumes/_nogroup/1ece01ec-80d8-4513-b3ec-6bcff358bcd5/.meta' Feb 20 04:55:36 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:94e02352-6a28-4288-9072-c7133a6151bf, sub_name:1ece01ec-80d8-4513-b3ec-6bcff358bcd5, vol_name:cephfs) < "" Feb 20 04:55:37 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 20 04:55:37 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/870555994' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 20 04:55:37 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 20 04:55:37 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/870555994' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 20 04:55:37 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v278: 177 pgs: 177 active+clean; 145 MiB data, 791 MiB used, 41 GiB / 42 GiB avail; 4.4 KiB/s wr, 1 op/s Feb 20 04:55:37 localhost neutron_sriov_agent[256551]: 2026-02-20 09:55:37.533 2 INFO neutron.agent.securitygroups_rpc [None req-a4ab168f-7329-4461-8236-8f00fb1e3c92 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['b2e5856c-f1df-4bbc-8f9c-41698aa249c6']#033[00m Feb 20 04:55:37 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:55:37 localhost nova_compute[280804]: 2026-02-20 09:55:37.924 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:38 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 2 addresses Feb 20 04:55:38 localhost podman[318964]: 2026-02-20 09:55:38.290572372 +0000 UTC m=+0.057438784 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, 
io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:55:38 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:55:38 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:55:38 localhost neutron_sriov_agent[256551]: 2026-02-20 09:55:38.911 2 INFO neutron.agent.securitygroups_rpc [None req-eb4533df-2883-48ce-b913-efeaaf3f9e10 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['b2e5856c-f1df-4bbc-8f9c-41698aa249c6']#033[00m Feb 20 04:55:39 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v279: 177 pgs: 177 active+clean; 146 MiB data, 791 MiB used, 41 GiB / 42 GiB avail; 682 B/s rd, 8.7 KiB/s wr, 3 op/s Feb 20 04:55:39 localhost dnsmasq[318481]: exiting on receipt of SIGTERM Feb 20 04:55:39 localhost podman[319037]: 2026-02-20 09:55:39.780881484 +0000 UTC m=+0.061422901 container kill ba849199177fdae151660c67ca44f352c37af151679d99b4213bf8b333734cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b1c2f50f-386f-4271-9374-ba939b86c805, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Feb 20 04:55:39 localhost systemd[1]: tmp-crun.FFhWqK.mount: Deactivated successfully. Feb 20 04:55:39 localhost systemd[1]: libpod-ba849199177fdae151660c67ca44f352c37af151679d99b4213bf8b333734cf4.scope: Deactivated successfully. 
Feb 20 04:55:39 localhost podman[319051]: 2026-02-20 09:55:39.854541023 +0000 UTC m=+0.060091905 container died ba849199177fdae151660c67ca44f352c37af151679d99b4213bf8b333734cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b1c2f50f-386f-4271-9374-ba939b86c805, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Feb 20 04:55:39 localhost nova_compute[280804]: 2026-02-20 09:55:39.882 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:39 localhost systemd[1]: tmp-crun.y1ssD9.mount: Deactivated successfully. Feb 20 04:55:39 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "1ece01ec-80d8-4513-b3ec-6bcff358bcd5", "snap_name": "76272f77-26a4-41f3-97ff-7d9d42de32a4_f845adef-9d7e-4723-a4a0-91acd19cabbe", "force": true, "format": "json"}]: dispatch Feb 20 04:55:39 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:76272f77-26a4-41f3-97ff-7d9d42de32a4_f845adef-9d7e-4723-a4a0-91acd19cabbe, sub_name:1ece01ec-80d8-4513-b3ec-6bcff358bcd5, vol_name:cephfs) < "" Feb 20 04:55:39 localhost podman[319051]: 2026-02-20 09:55:39.923852416 +0000 UTC m=+0.129403258 container cleanup ba849199177fdae151660c67ca44f352c37af151679d99b4213bf8b333734cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b1c2f50f-386f-4271-9374-ba939b86c805, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127) Feb 20 04:55:39 localhost systemd[1]: libpod-conmon-ba849199177fdae151660c67ca44f352c37af151679d99b4213bf8b333734cf4.scope: Deactivated successfully. Feb 20 04:55:39 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1ece01ec-80d8-4513-b3ec-6bcff358bcd5/.meta.tmp' Feb 20 04:55:39 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1ece01ec-80d8-4513-b3ec-6bcff358bcd5/.meta.tmp' to config b'/volumes/_nogroup/1ece01ec-80d8-4513-b3ec-6bcff358bcd5/.meta' Feb 20 04:55:39 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:76272f77-26a4-41f3-97ff-7d9d42de32a4_f845adef-9d7e-4723-a4a0-91acd19cabbe, sub_name:1ece01ec-80d8-4513-b3ec-6bcff358bcd5, vol_name:cephfs) < "" Feb 20 04:55:39 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "1ece01ec-80d8-4513-b3ec-6bcff358bcd5", "snap_name": "76272f77-26a4-41f3-97ff-7d9d42de32a4", "force": true, "format": "json"}]: dispatch Feb 20 04:55:39 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:76272f77-26a4-41f3-97ff-7d9d42de32a4, sub_name:1ece01ec-80d8-4513-b3ec-6bcff358bcd5, vol_name:cephfs) < "" Feb 20 04:55:39 localhost podman[319053]: 2026-02-20 09:55:39.946463853 +0000 UTC m=+0.142278954 container 
remove ba849199177fdae151660c67ca44f352c37af151679d99b4213bf8b333734cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b1c2f50f-386f-4271-9374-ba939b86c805, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true) Feb 20 04:55:39 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1ece01ec-80d8-4513-b3ec-6bcff358bcd5/.meta.tmp' Feb 20 04:55:39 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1ece01ec-80d8-4513-b3ec-6bcff358bcd5/.meta.tmp' to config b'/volumes/_nogroup/1ece01ec-80d8-4513-b3ec-6bcff358bcd5/.meta' Feb 20 04:55:39 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:76272f77-26a4-41f3-97ff-7d9d42de32a4, sub_name:1ece01ec-80d8-4513-b3ec-6bcff358bcd5, vol_name:cephfs) < "" Feb 20 04:55:40 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:55:40.279 263745 INFO neutron.agent.dhcp.agent [None req-cefacb3f-4e8e-4d5f-957a-a4c40345793e - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:55:40 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:55:40 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 04:55:40 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command 
mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 20 04:55:40 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 04:55:40 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 20 04:55:40 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:55:40 localhost ceph-mgr[286565]: [progress INFO root] update: starting ev 91321536-2259-4073-a440-1ef31dc3f3a9 (Updating node-proxy deployment (+3 -> 3)) Feb 20 04:55:40 localhost ceph-mgr[286565]: [progress INFO root] complete: finished ev 91321536-2259-4073-a440-1ef31dc3f3a9 (Updating node-proxy deployment (+3 -> 3)) Feb 20 04:55:40 localhost ceph-mgr[286565]: [progress INFO root] Completed event 91321536-2259-4073-a440-1ef31dc3f3a9 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Feb 20 04:55:40 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 20 04:55:40 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 20 04:55:40 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:55:40.393 263745 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:55:40 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 04:55:40 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' 
entity='mgr.np0005625202.arwxwo' Feb 20 04:55:40 localhost neutron_sriov_agent[256551]: 2026-02-20 09:55:40.630 2 INFO neutron.agent.securitygroups_rpc [None req-7a51814a-d3bb-4d9f-a2cf-a8e1904feac9 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['b2e5856c-f1df-4bbc-8f9c-41698aa249c6']#033[00m Feb 20 04:55:40 localhost systemd[1]: var-lib-containers-storage-overlay-a2111e911fb9c56bc583929b001f2f586b1d92dc0d4b50c3e357656b43f1dc06-merged.mount: Deactivated successfully. Feb 20 04:55:40 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ba849199177fdae151660c67ca44f352c37af151679d99b4213bf8b333734cf4-userdata-shm.mount: Deactivated successfully. Feb 20 04:55:40 localhost systemd[1]: run-netns-qdhcp\x2db1c2f50f\x2d386f\x2d4271\x2d9374\x2dba939b86c805.mount: Deactivated successfully. Feb 20 04:55:41 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:55:41.062 263745 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:55:41 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v280: 177 pgs: 177 active+clean; 146 MiB data, 791 MiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 7.1 KiB/s wr, 16 op/s Feb 20 04:55:41 localhost neutron_sriov_agent[256551]: 2026-02-20 09:55:41.679 2 INFO neutron.agent.securitygroups_rpc [None req-45da2e31-24b0-4a9e-8fbd-b3074d839731 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['b2e5856c-f1df-4bbc-8f9c-41698aa249c6']#033[00m Feb 20 04:55:41 localhost nova_compute[280804]: 2026-02-20 09:55:41.786 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:42 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e149 do_prune osdmap full prune enabled Feb 20 04:55:42 localhost ceph-mon[292786]: 
mon.np0005625202@0(leader).osd e150 e150: 6 total, 6 up, 6 in Feb 20 04:55:42 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e150: 6 total, 6 up, 6 in Feb 20 04:55:42 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:55:42 localhost nova_compute[280804]: 2026-02-20 09:55:42.957 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:43 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1ece01ec-80d8-4513-b3ec-6bcff358bcd5", "format": "json"}]: dispatch Feb 20 04:55:43 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1ece01ec-80d8-4513-b3ec-6bcff358bcd5, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 04:55:43 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1ece01ec-80d8-4513-b3ec-6bcff358bcd5, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 04:55:43 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1ece01ec-80d8-4513-b3ec-6bcff358bcd5' of type subvolume Feb 20 04:55:43 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:55:43.179+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1ece01ec-80d8-4513-b3ec-6bcff358bcd5' of type subvolume Feb 20 04:55:43 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": 
"1ece01ec-80d8-4513-b3ec-6bcff358bcd5", "force": true, "format": "json"}]: dispatch Feb 20 04:55:43 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1ece01ec-80d8-4513-b3ec-6bcff358bcd5, vol_name:cephfs) < "" Feb 20 04:55:43 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/1ece01ec-80d8-4513-b3ec-6bcff358bcd5'' moved to trashcan Feb 20 04:55:43 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 04:55:43 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1ece01ec-80d8-4513-b3ec-6bcff358bcd5, vol_name:cephfs) < "" Feb 20 04:55:43 localhost neutron_sriov_agent[256551]: 2026-02-20 09:55:43.319 2 INFO neutron.agent.securitygroups_rpc [None req-b9b9f9d2-32dc-4ef0-9ebd-12b2d2ae4ee8 061949b19d2146debcdb4e85c8db9eec b9f8945a2560410b988e395a1db7710f - - default default] Security group member updated ['9c30e397-a710-4013-bf42-b0dd9762b00a']#033[00m Feb 20 04:55:43 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v282: 177 pgs: 177 active+clean; 146 MiB data, 791 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 8.5 KiB/s wr, 20 op/s Feb 20 04:55:43 localhost ceph-mgr[286565]: [progress INFO root] Writing back 50 completed events Feb 20 04:55:43 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 20 04:55:43 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:55:43 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e150 do_prune osdmap full prune enabled Feb 20 04:55:43 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' 
entity='mgr.np0005625202.arwxwo' Feb 20 04:55:43 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e151 e151: 6 total, 6 up, 6 in Feb 20 04:55:43 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e151: 6 total, 6 up, 6 in Feb 20 04:55:44 localhost neutron_sriov_agent[256551]: 2026-02-20 09:55:44.164 2 INFO neutron.agent.securitygroups_rpc [None req-6065317f-9707-4b49-a2f9-16f062a24577 061949b19d2146debcdb4e85c8db9eec b9f8945a2560410b988e395a1db7710f - - default default] Security group member updated ['9c30e397-a710-4013-bf42-b0dd9762b00a']#033[00m Feb 20 04:55:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 04:55:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. Feb 20 04:55:44 localhost systemd[1]: tmp-crun.dJ4Sol.mount: Deactivated successfully. Feb 20 04:55:44 localhost podman[319131]: 2026-02-20 09:55:44.482963019 +0000 UTC m=+0.110776497 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:55:44 localhost podman[319131]: 2026-02-20 09:55:44.51798615 +0000 UTC m=+0.145799628 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:55:44 localhost systemd[1]: tmp-crun.WBgTQ3.mount: Deactivated successfully. Feb 20 04:55:44 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. 
Feb 20 04:55:44 localhost podman[319130]: 2026-02-20 09:55:44.531148434 +0000 UTC m=+0.159622600 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:55:44 localhost podman[319130]: 2026-02-20 09:55:44.611076521 +0000 UTC m=+0.239550697 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:55:44 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. 
Feb 20 04:55:44 localhost nova_compute[280804]: 2026-02-20 09:55:44.885 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:45 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v284: 177 pgs: 177 active+clean; 146 MiB data, 792 MiB used, 41 GiB / 42 GiB avail; 16 KiB/s rd, 23 KiB/s wr, 27 op/s Feb 20 04:55:46 localhost podman[241347]: time="2026-02-20T09:55:46Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 20 04:55:46 localhost podman[241347]: @ - - [20/Feb/2026:09:55:46 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157716 "" "Go-http-client/1.1" Feb 20 04:55:46 localhost podman[241347]: @ - - [20/Feb/2026:09:55:46 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18780 "" "Go-http-client/1.1" Feb 20 04:55:46 localhost sshd[319172]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:55:46 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:55:46.979 263745 INFO neutron.agent.linux.ip_lib [None req-f42ee04a-ce15-4940-b603-31cf0a6a46b8 - - - - - -] Device tap678e8901-1e cannot be used as it has no MAC address#033[00m Feb 20 04:55:47 localhost nova_compute[280804]: 2026-02-20 09:55:47.051 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:47 localhost kernel: device tap678e8901-1e entered promiscuous mode Feb 20 04:55:47 localhost NetworkManager[5967]: [1771581347.0613] manager: (tap678e8901-1e): new Generic device (/org/freedesktop/NetworkManager/Devices/40) Feb 20 04:55:47 localhost ovn_controller[155916]: 2026-02-20T09:55:47Z|00193|binding|INFO|Claiming lport 678e8901-1e8e-406c-b267-b50b42b174ca for this chassis. 
Feb 20 04:55:47 localhost ovn_controller[155916]: 2026-02-20T09:55:47Z|00194|binding|INFO|678e8901-1e8e-406c-b267-b50b42b174ca: Claiming unknown Feb 20 04:55:47 localhost systemd-udevd[319184]: Network interface NamePolicy= disabled on kernel command line. Feb 20 04:55:47 localhost nova_compute[280804]: 2026-02-20 09:55:47.068 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:47 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:47.078 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp77c820b4-9646-5dae-aabe-b13a62b01d06-dcdf821a-64cc-4f8b-8d45-21ed4ab7881f', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dcdf821a-64cc-4f8b-8d45-21ed4ab7881f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a09b3407c3143c8ae0948ccb18c1e61', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ae69ad58-c064-4586-906c-a6e022655998, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=678e8901-1e8e-406c-b267-b50b42b174ca) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:55:47 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:47.080 161766 INFO 
neutron.agent.ovn.metadata.agent [-] Port 678e8901-1e8e-406c-b267-b50b42b174ca in datapath dcdf821a-64cc-4f8b-8d45-21ed4ab7881f bound to our chassis#033[00m Feb 20 04:55:47 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:47.081 161766 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network dcdf821a-64cc-4f8b-8d45-21ed4ab7881f or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 20 04:55:47 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:47.085 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[d17df8ab-cecb-42a5-9906-e8bc438f70f9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:55:47 localhost journal[229367]: ethtool ioctl error on tap678e8901-1e: No such device Feb 20 04:55:47 localhost journal[229367]: ethtool ioctl error on tap678e8901-1e: No such device Feb 20 04:55:47 localhost ovn_controller[155916]: 2026-02-20T09:55:47Z|00195|binding|INFO|Setting lport 678e8901-1e8e-406c-b267-b50b42b174ca ovn-installed in OVS Feb 20 04:55:47 localhost ovn_controller[155916]: 2026-02-20T09:55:47Z|00196|binding|INFO|Setting lport 678e8901-1e8e-406c-b267-b50b42b174ca up in Southbound Feb 20 04:55:47 localhost journal[229367]: ethtool ioctl error on tap678e8901-1e: No such device Feb 20 04:55:47 localhost nova_compute[280804]: 2026-02-20 09:55:47.109 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:47 localhost journal[229367]: ethtool ioctl error on tap678e8901-1e: No such device Feb 20 04:55:47 localhost journal[229367]: ethtool ioctl error on tap678e8901-1e: No such device Feb 20 04:55:47 localhost journal[229367]: ethtool ioctl error on tap678e8901-1e: No such device Feb 20 04:55:47 localhost journal[229367]: ethtool ioctl error on tap678e8901-1e: 
No such device Feb 20 04:55:47 localhost journal[229367]: ethtool ioctl error on tap678e8901-1e: No such device Feb 20 04:55:47 localhost nova_compute[280804]: 2026-02-20 09:55:47.141 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:47 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:47.163 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:4e:b0 10.100.0.2 2001:db8::f816:3eff:fef0:4eb0'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fef0:4eb0/64', 'neutron:device_id': 'ovnmeta-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=cebd560e-7047-4cc1-9642-f5b7ec377d58) old=Port_Binding(mac=['fa:16:3e:f0:4e:b0 2001:db8::f816:3eff:fef0:4eb0'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fef0:4eb0/64', 'neutron:device_id': 'ovnmeta-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 
'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:55:47 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:47.165 161766 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port cebd560e-7047-4cc1-9642-f5b7ec377d58 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 updated#033[00m Feb 20 04:55:47 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:47.168 161766 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 811e2462-6872-485d-9c09-d2dd9cb25273, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 20 04:55:47 localhost nova_compute[280804]: 2026-02-20 09:55:47.168 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:47 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:47.169 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[6ba04d37-ba92-420e-8ad0-daac7f8e722f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:55:47 localhost neutron_sriov_agent[256551]: 2026-02-20 09:55:47.493 2 INFO neutron.agent.securitygroups_rpc [None req-4aebfeae-a91f-4757-b68b-fe43601c173b 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['b2e5856c-f1df-4bbc-8f9c-41698aa249c6']#033[00m Feb 20 04:55:47 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v285: 177 pgs: 177 active+clean; 146 MiB data, 792 MiB used, 41 GiB / 42 GiB avail; 15 KiB/s rd, 
16 KiB/s wr, 24 op/s Feb 20 04:55:47 localhost snmpd[69161]: empty variable list in _query Feb 20 04:55:47 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:55:47 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e151 do_prune osdmap full prune enabled Feb 20 04:55:47 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e152 e152: 6 total, 6 up, 6 in Feb 20 04:55:47 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e152: 6 total, 6 up, 6 in Feb 20 04:55:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1. Feb 20 04:55:47 localhost podman[319248]: 2026-02-20 09:55:47.947945066 +0000 UTC m=+0.080078702 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 20 04:55:47 localhost nova_compute[280804]: 2026-02-20 09:55:47.959 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:47 localhost podman[319248]: 2026-02-20 09:55:47.960786712 +0000 UTC m=+0.092920358 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 20 04:55:47 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully. 
Feb 20 04:55:48 localhost podman[319265]: Feb 20 04:55:48 localhost podman[319265]: 2026-02-20 09:55:48.023974779 +0000 UTC m=+0.101968060 container create d9990f5dd46538fe423112bc9b901015be74513a3f0a797b3b31938405892b4e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dcdf821a-64cc-4f8b-8d45-21ed4ab7881f, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:55:48 localhost systemd[1]: Started libpod-conmon-d9990f5dd46538fe423112bc9b901015be74513a3f0a797b3b31938405892b4e.scope. Feb 20 04:55:48 localhost podman[319265]: 2026-02-20 09:55:47.981012765 +0000 UTC m=+0.059006056 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 20 04:55:48 localhost systemd[1]: Started libcrun container. 
Feb 20 04:55:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/623fcb43301c4a439ef26931824d03cabb093bd08098de81cdfb8bc3a7aed9d2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 20 04:55:48 localhost podman[319265]: 2026-02-20 09:55:48.100805164 +0000 UTC m=+0.178798405 container init d9990f5dd46538fe423112bc9b901015be74513a3f0a797b3b31938405892b4e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dcdf821a-64cc-4f8b-8d45-21ed4ab7881f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS) Feb 20 04:55:48 localhost podman[319265]: 2026-02-20 09:55:48.112183879 +0000 UTC m=+0.190177120 container start d9990f5dd46538fe423112bc9b901015be74513a3f0a797b3b31938405892b4e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dcdf821a-64cc-4f8b-8d45-21ed4ab7881f, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Feb 20 04:55:48 localhost dnsmasq[319296]: started, version 2.85 cachesize 150 Feb 20 04:55:48 localhost dnsmasq[319296]: DNS service limited to local subnets Feb 20 04:55:48 localhost dnsmasq[319296]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 20 04:55:48 localhost dnsmasq[319296]: warning: no upstream servers 
configured Feb 20 04:55:48 localhost dnsmasq-dhcp[319296]: DHCPv6, static leases only on 2001:db8::, lease time 1d Feb 20 04:55:48 localhost dnsmasq[319296]: read /var/lib/neutron/dhcp/dcdf821a-64cc-4f8b-8d45-21ed4ab7881f/addn_hosts - 0 addresses Feb 20 04:55:48 localhost dnsmasq-dhcp[319296]: read /var/lib/neutron/dhcp/dcdf821a-64cc-4f8b-8d45-21ed4ab7881f/host Feb 20 04:55:48 localhost dnsmasq-dhcp[319296]: read /var/lib/neutron/dhcp/dcdf821a-64cc-4f8b-8d45-21ed4ab7881f/opts Feb 20 04:55:48 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:55:48.159 263745 INFO neutron.agent.dhcp.agent [None req-f42ee04a-ce15-4940-b603-31cf0a6a46b8 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:55:46Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=397aa88f-4b6e-4708-86db-e066e28dd9ec, ip_allocation=immediate, mac_address=fa:16:3e:b9:56:35, name=tempest-PortsIpV6TestJSON-2036434554, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:55:44Z, description=, dns_domain=, id=dcdf821a-64cc-4f8b-8d45-21ed4ab7881f, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsIpV6TestJSON-617029853, port_security_enabled=True, project_id=3a09b3407c3143c8ae0948ccb18c1e61, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=41807, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2138, status=ACTIVE, subnets=['f5be945d-acff-4bc6-a124-8368383780fc'], tags=[], tenant_id=3a09b3407c3143c8ae0948ccb18c1e61, updated_at=2026-02-20T09:55:45Z, vlan_transparent=None, network_id=dcdf821a-64cc-4f8b-8d45-21ed4ab7881f, port_security_enabled=True, 
project_id=3a09b3407c3143c8ae0948ccb18c1e61, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['b2e5856c-f1df-4bbc-8f9c-41698aa249c6'], standard_attr_id=2156, status=DOWN, tags=[], tenant_id=3a09b3407c3143c8ae0948ccb18c1e61, updated_at=2026-02-20T09:55:46Z on network dcdf821a-64cc-4f8b-8d45-21ed4ab7881f#033[00m Feb 20 04:55:48 localhost dnsmasq[319296]: read /var/lib/neutron/dhcp/dcdf821a-64cc-4f8b-8d45-21ed4ab7881f/addn_hosts - 1 addresses Feb 20 04:55:48 localhost dnsmasq-dhcp[319296]: read /var/lib/neutron/dhcp/dcdf821a-64cc-4f8b-8d45-21ed4ab7881f/host Feb 20 04:55:48 localhost dnsmasq-dhcp[319296]: read /var/lib/neutron/dhcp/dcdf821a-64cc-4f8b-8d45-21ed4ab7881f/opts Feb 20 04:55:48 localhost podman[319314]: 2026-02-20 09:55:48.30759481 +0000 UTC m=+0.053639582 container kill d9990f5dd46538fe423112bc9b901015be74513a3f0a797b3b31938405892b4e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dcdf821a-64cc-4f8b-8d45-21ed4ab7881f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:55:48 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:55:48.359 263745 INFO neutron.agent.dhcp.agent [None req-3568635d-7b2d-417f-a1d4-6146ac99aa38 - - - - - -] DHCP configuration for ports {'c4cbdc50-66da-4589-bdbe-85ee079d4ef9'} is completed#033[00m Feb 20 04:55:48 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:55:48.510 263745 INFO neutron.agent.dhcp.agent [None req-31df1bf7-a1ea-458b-ac46-58796d68841a - - - - - -] DHCP configuration for ports {'397aa88f-4b6e-4708-86db-e066e28dd9ec'} is completed#033[00m Feb 20 04:55:48 localhost dnsmasq[264017]: read 
/var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 1 addresses Feb 20 04:55:48 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:55:48 localhost podman[319353]: 2026-02-20 09:55:48.515612809 +0000 UTC m=+0.059417907 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Feb 20 04:55:48 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:55:48 localhost neutron_sriov_agent[256551]: 2026-02-20 09:55:48.622 2 INFO neutron.agent.securitygroups_rpc [None req-f40f35c0-4148-4e05-a7c2-3455638d7684 f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']#033[00m Feb 20 04:55:48 localhost neutron_sriov_agent[256551]: 2026-02-20 09:55:48.998 2 INFO neutron.agent.securitygroups_rpc [None req-b8e91bcb-8145-4128-b349-c2aa5e79d87e 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['b2e5856c-f1df-4bbc-8f9c-41698aa249c6']#033[00m Feb 20 04:55:49 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:55:49.081 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:55:48Z, 
description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=1bfe5f8c-2dd2-4624-b60f-955e8fb7f762, ip_allocation=immediate, mac_address=fa:16:3e:26:68:4f, name=tempest-PortsIpV6TestJSON-751630672, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:55:44Z, description=, dns_domain=, id=dcdf821a-64cc-4f8b-8d45-21ed4ab7881f, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsIpV6TestJSON-617029853, port_security_enabled=True, project_id=3a09b3407c3143c8ae0948ccb18c1e61, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=41807, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2138, status=ACTIVE, subnets=['f5be945d-acff-4bc6-a124-8368383780fc'], tags=[], tenant_id=3a09b3407c3143c8ae0948ccb18c1e61, updated_at=2026-02-20T09:55:45Z, vlan_transparent=None, network_id=dcdf821a-64cc-4f8b-8d45-21ed4ab7881f, port_security_enabled=True, project_id=3a09b3407c3143c8ae0948ccb18c1e61, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['b2e5856c-f1df-4bbc-8f9c-41698aa249c6'], standard_attr_id=2168, status=DOWN, tags=[], tenant_id=3a09b3407c3143c8ae0948ccb18c1e61, updated_at=2026-02-20T09:55:48Z on network dcdf821a-64cc-4f8b-8d45-21ed4ab7881f#033[00m Feb 20 04:55:49 localhost dnsmasq[319296]: read /var/lib/neutron/dhcp/dcdf821a-64cc-4f8b-8d45-21ed4ab7881f/addn_hosts - 2 addresses Feb 20 04:55:49 localhost dnsmasq-dhcp[319296]: read /var/lib/neutron/dhcp/dcdf821a-64cc-4f8b-8d45-21ed4ab7881f/host Feb 20 04:55:49 localhost dnsmasq-dhcp[319296]: read /var/lib/neutron/dhcp/dcdf821a-64cc-4f8b-8d45-21ed4ab7881f/opts Feb 20 04:55:49 localhost podman[319391]: 2026-02-20 09:55:49.284308002 +0000 UTC m=+0.067432122 container kill d9990f5dd46538fe423112bc9b901015be74513a3f0a797b3b31938405892b4e 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dcdf821a-64cc-4f8b-8d45-21ed4ab7881f, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true) Feb 20 04:55:49 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v287: 177 pgs: 177 active+clean; 146 MiB data, 792 MiB used, 41 GiB / 42 GiB avail; 1.2 MiB/s rd, 19 KiB/s wr, 6 op/s Feb 20 04:55:49 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:55:49.566 263745 INFO neutron.agent.dhcp.agent [None req-ef7254d1-db0a-483a-bdd4-84a73c8adc83 - - - - - -] DHCP configuration for ports {'1bfe5f8c-2dd2-4624-b60f-955e8fb7f762'} is completed#033[00m Feb 20 04:55:49 localhost nova_compute[280804]: 2026-02-20 09:55:49.889 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:50 localhost neutron_sriov_agent[256551]: 2026-02-20 09:55:50.057 2 INFO neutron.agent.securitygroups_rpc [None req-005508de-f69c-4da2-9936-4351a4d76fde 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['b2e5856c-f1df-4bbc-8f9c-41698aa249c6']#033[00m Feb 20 04:55:50 localhost neutron_sriov_agent[256551]: 2026-02-20 09:55:50.182 2 INFO neutron.agent.securitygroups_rpc [None req-8c2a2fb8-3a21-4d90-b744-6c75dba74fae f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']#033[00m Feb 20 04:55:50 localhost podman[319430]: 2026-02-20 09:55:50.280585311 +0000 UTC m=+0.059538331 container kill 
d9990f5dd46538fe423112bc9b901015be74513a3f0a797b3b31938405892b4e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dcdf821a-64cc-4f8b-8d45-21ed4ab7881f, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:55:50 localhost dnsmasq[319296]: read /var/lib/neutron/dhcp/dcdf821a-64cc-4f8b-8d45-21ed4ab7881f/addn_hosts - 1 addresses Feb 20 04:55:50 localhost dnsmasq-dhcp[319296]: read /var/lib/neutron/dhcp/dcdf821a-64cc-4f8b-8d45-21ed4ab7881f/host Feb 20 04:55:50 localhost dnsmasq-dhcp[319296]: read /var/lib/neutron/dhcp/dcdf821a-64cc-4f8b-8d45-21ed4ab7881f/opts Feb 20 04:55:50 localhost neutron_sriov_agent[256551]: 2026-02-20 09:55:50.918 2 INFO neutron.agent.securitygroups_rpc [None req-8cf476ac-f2bb-4715-a401-741101924898 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['b2e5856c-f1df-4bbc-8f9c-41698aa249c6']#033[00m Feb 20 04:55:51 localhost dnsmasq[319296]: read /var/lib/neutron/dhcp/dcdf821a-64cc-4f8b-8d45-21ed4ab7881f/addn_hosts - 0 addresses Feb 20 04:55:51 localhost dnsmasq-dhcp[319296]: read /var/lib/neutron/dhcp/dcdf821a-64cc-4f8b-8d45-21ed4ab7881f/host Feb 20 04:55:51 localhost dnsmasq-dhcp[319296]: read /var/lib/neutron/dhcp/dcdf821a-64cc-4f8b-8d45-21ed4ab7881f/opts Feb 20 04:55:51 localhost podman[319469]: 2026-02-20 09:55:51.136038924 +0000 UTC m=+0.051126524 container kill d9990f5dd46538fe423112bc9b901015be74513a3f0a797b3b31938405892b4e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dcdf821a-64cc-4f8b-8d45-21ed4ab7881f, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Feb 20 04:55:51 localhost nova_compute[280804]: 2026-02-20 09:55:51.511 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:55:51 localhost nova_compute[280804]: 2026-02-20 09:55:51.512 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Feb 20 04:55:51 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v288: 177 pgs: 177 active+clean; 146 MiB data, 792 MiB used, 41 GiB / 42 GiB avail; 2.6 MiB/s rd, 16 KiB/s wr, 14 op/s Feb 20 04:55:51 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5d6cbe8b-e61d-44af-be13-78bc30a91a96", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 04:55:51 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5d6cbe8b-e61d-44af-be13-78bc30a91a96, vol_name:cephfs) < "" Feb 20 04:55:51 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5d6cbe8b-e61d-44af-be13-78bc30a91a96/.meta.tmp' Feb 20 04:55:51 localhost ceph-mgr[286565]: [volumes INFO 
volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5d6cbe8b-e61d-44af-be13-78bc30a91a96/.meta.tmp' to config b'/volumes/_nogroup/5d6cbe8b-e61d-44af-be13-78bc30a91a96/.meta' Feb 20 04:55:51 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5d6cbe8b-e61d-44af-be13-78bc30a91a96, vol_name:cephfs) < "" Feb 20 04:55:51 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5d6cbe8b-e61d-44af-be13-78bc30a91a96", "format": "json"}]: dispatch Feb 20 04:55:51 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5d6cbe8b-e61d-44af-be13-78bc30a91a96, vol_name:cephfs) < "" Feb 20 04:55:51 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5d6cbe8b-e61d-44af-be13-78bc30a91a96, vol_name:cephfs) < "" Feb 20 04:55:51 localhost dnsmasq[319296]: exiting on receipt of SIGTERM Feb 20 04:55:51 localhost podman[319507]: 2026-02-20 09:55:51.796171631 +0000 UTC m=+0.056181310 container kill d9990f5dd46538fe423112bc9b901015be74513a3f0a797b3b31938405892b4e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dcdf821a-64cc-4f8b-8d45-21ed4ab7881f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2) Feb 20 04:55:51 localhost systemd[1]: 
libpod-d9990f5dd46538fe423112bc9b901015be74513a3f0a797b3b31938405892b4e.scope: Deactivated successfully. Feb 20 04:55:51 localhost podman[319519]: 2026-02-20 09:55:51.877487127 +0000 UTC m=+0.065016559 container died d9990f5dd46538fe423112bc9b901015be74513a3f0a797b3b31938405892b4e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dcdf821a-64cc-4f8b-8d45-21ed4ab7881f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS) Feb 20 04:55:51 localhost systemd[1]: tmp-crun.2gNVxE.mount: Deactivated successfully. Feb 20 04:55:51 localhost podman[319519]: 2026-02-20 09:55:51.913754381 +0000 UTC m=+0.101283773 container cleanup d9990f5dd46538fe423112bc9b901015be74513a3f0a797b3b31938405892b4e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dcdf821a-64cc-4f8b-8d45-21ed4ab7881f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127) Feb 20 04:55:51 localhost systemd[1]: libpod-conmon-d9990f5dd46538fe423112bc9b901015be74513a3f0a797b3b31938405892b4e.scope: Deactivated successfully. 
Feb 20 04:55:51 localhost podman[319521]: 2026-02-20 09:55:51.95314577 +0000 UTC m=+0.131971698 container remove d9990f5dd46538fe423112bc9b901015be74513a3f0a797b3b31938405892b4e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dcdf821a-64cc-4f8b-8d45-21ed4ab7881f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:55:51 localhost ovn_controller[155916]: 2026-02-20T09:55:51Z|00197|binding|INFO|Releasing lport 678e8901-1e8e-406c-b267-b50b42b174ca from this chassis (sb_readonly=0) Feb 20 04:55:51 localhost nova_compute[280804]: 2026-02-20 09:55:51.967 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:51 localhost ovn_controller[155916]: 2026-02-20T09:55:51Z|00198|binding|INFO|Setting lport 678e8901-1e8e-406c-b267-b50b42b174ca down in Southbound Feb 20 04:55:51 localhost kernel: device tap678e8901-1e left promiscuous mode Feb 20 04:55:51 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:51.976 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp77c820b4-9646-5dae-aabe-b13a62b01d06-dcdf821a-64cc-4f8b-8d45-21ed4ab7881f', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 
'neutron:network_name': 'neutron-dcdf821a-64cc-4f8b-8d45-21ed4ab7881f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a09b3407c3143c8ae0948ccb18c1e61', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625202.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ae69ad58-c064-4586-906c-a6e022655998, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=678e8901-1e8e-406c-b267-b50b42b174ca) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:55:51 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:51.978 161766 INFO neutron.agent.ovn.metadata.agent [-] Port 678e8901-1e8e-406c-b267-b50b42b174ca in datapath dcdf821a-64cc-4f8b-8d45-21ed4ab7881f unbound from our chassis#033[00m Feb 20 04:55:51 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:51.980 161766 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network dcdf821a-64cc-4f8b-8d45-21ed4ab7881f or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 20 04:55:51 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:51.981 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[dbe80f5b-6e02-4838-97e2-9d7ca44d5bb7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:55:51 localhost nova_compute[280804]: 2026-02-20 09:55:51.993 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:52 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:55:52.011 263745 INFO 
neutron.agent.dhcp.agent [None req-4fe7eddc-4f87-4d66-99d0-959872969d25 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:55:52 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:55:52.225 263745 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:55:52 localhost nova_compute[280804]: 2026-02-20 09:55:52.505 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:52 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:52.624 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:4e:b0 2001:db8::f816:3eff:fef0:4eb0'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fef0:4eb0/64', 'neutron:device_id': 'ovnmeta-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=cebd560e-7047-4cc1-9642-f5b7ec377d58) old=Port_Binding(mac=['fa:16:3e:f0:4e:b0 10.100.0.2 2001:db8::f816:3eff:fef0:4eb0'], 
external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fef0:4eb0/64', 'neutron:device_id': 'ovnmeta-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:55:52 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:52.626 161766 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port cebd560e-7047-4cc1-9642-f5b7ec377d58 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 updated#033[00m Feb 20 04:55:52 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:52.629 161766 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 811e2462-6872-485d-9c09-d2dd9cb25273, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 20 04:55:52 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:52.630 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[c146d2c0-5122-46a0-8293-5ee91dcfd4fd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:55:52 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:55:52 localhost systemd[1]: var-lib-containers-storage-overlay-623fcb43301c4a439ef26931824d03cabb093bd08098de81cdfb8bc3a7aed9d2-merged.mount: Deactivated successfully. 
Feb 20 04:55:52 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d9990f5dd46538fe423112bc9b901015be74513a3f0a797b3b31938405892b4e-userdata-shm.mount: Deactivated successfully. Feb 20 04:55:52 localhost systemd[1]: run-netns-qdhcp\x2ddcdf821a\x2d64cc\x2d4f8b\x2d8d45\x2d21ed4ab7881f.mount: Deactivated successfully. Feb 20 04:55:53 localhost nova_compute[280804]: 2026-02-20 09:55:53.009 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 04:55:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:55:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 04:55:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:55:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 20 04:55:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:55:53 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v289: 177 pgs: 177 active+clean; 146 MiB data, 792 MiB used, 41 GiB / 42 GiB avail; 2.1 MiB/s rd, 13 KiB/s wr, 11 op/s Feb 20 04:55:54 localhost nova_compute[280804]: 2026-02-20 09:55:54.918 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:55 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:55:55.010 263745 INFO neutron.agent.linux.ip_lib [None req-e8affa53-0ab6-4cfb-af58-b155bc8a5e3e - - - - - -] Device tapea48be00-a2 cannot be used as it has no MAC address#033[00m Feb 20 04:55:55 localhost nova_compute[280804]: 2026-02-20 09:55:55.040 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:55 localhost kernel: device tapea48be00-a2 entered promiscuous mode Feb 20 04:55:55 localhost NetworkManager[5967]: [1771581355.0507] manager: (tapea48be00-a2): new Generic device (/org/freedesktop/NetworkManager/Devices/41) Feb 20 04:55:55 localhost ovn_controller[155916]: 2026-02-20T09:55:55Z|00199|binding|INFO|Claiming lport ea48be00-a2cd-4321-9150-d0fc187f96bf for this chassis. Feb 20 04:55:55 localhost ovn_controller[155916]: 2026-02-20T09:55:55Z|00200|binding|INFO|ea48be00-a2cd-4321-9150-d0fc187f96bf: Claiming unknown Feb 20 04:55:55 localhost nova_compute[280804]: 2026-02-20 09:55:55.052 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:55 localhost systemd-udevd[319559]: Network interface NamePolicy= disabled on kernel command line. 
Feb 20 04:55:55 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:55.066 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.101.0.3/28', 'neutron:device_id': 'dhcp77c820b4-9646-5dae-aabe-b13a62b01d06-11635c27-2f1f-4cdc-b82c-c6286cc4d35d', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-11635c27-2f1f-4cdc-b82c-c6286cc4d35d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8aa5b5a34cfe458d96fea87261361db1', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc8e7d65-d3d5-471b-9cfb-e06b83af88aa, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=ea48be00-a2cd-4321-9150-d0fc187f96bf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:55:55 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:55.068 161766 INFO neutron.agent.ovn.metadata.agent [-] Port ea48be00-a2cd-4321-9150-d0fc187f96bf in datapath 11635c27-2f1f-4cdc-b82c-c6286cc4d35d bound to our chassis#033[00m Feb 20 04:55:55 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:55.072 161766 DEBUG neutron.agent.ovn.metadata.agent [-] Port 50ad488c-947f-4c23-9020-fdac680a8a5a IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Feb 20 04:55:55 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:55.072 161766 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 11635c27-2f1f-4cdc-b82c-c6286cc4d35d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 20 04:55:55 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:55.074 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[01531d7d-bb93-429b-bd08-9e740d192186]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:55:55 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "5d6cbe8b-e61d-44af-be13-78bc30a91a96", "snap_name": "da20d4b5-2abc-49bc-a78c-39ce3cdadf16", "format": "json"}]: dispatch Feb 20 04:55:55 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:da20d4b5-2abc-49bc-a78c-39ce3cdadf16, sub_name:5d6cbe8b-e61d-44af-be13-78bc30a91a96, vol_name:cephfs) < "" Feb 20 04:55:55 localhost journal[229367]: ethtool ioctl error on tapea48be00-a2: No such device Feb 20 04:55:55 localhost journal[229367]: ethtool ioctl error on tapea48be00-a2: No such device Feb 20 04:55:55 localhost ovn_controller[155916]: 2026-02-20T09:55:55Z|00201|binding|INFO|Setting lport ea48be00-a2cd-4321-9150-d0fc187f96bf ovn-installed in OVS Feb 20 04:55:55 localhost ovn_controller[155916]: 2026-02-20T09:55:55Z|00202|binding|INFO|Setting lport ea48be00-a2cd-4321-9150-d0fc187f96bf up in Southbound Feb 20 04:55:55 localhost journal[229367]: ethtool ioctl error on tapea48be00-a2: No such device Feb 20 04:55:55 localhost nova_compute[280804]: 2026-02-20 09:55:55.095 280808 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:55 localhost journal[229367]: ethtool ioctl error on tapea48be00-a2: No such device Feb 20 04:55:55 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:da20d4b5-2abc-49bc-a78c-39ce3cdadf16, sub_name:5d6cbe8b-e61d-44af-be13-78bc30a91a96, vol_name:cephfs) < "" Feb 20 04:55:55 localhost journal[229367]: ethtool ioctl error on tapea48be00-a2: No such device Feb 20 04:55:55 localhost journal[229367]: ethtool ioctl error on tapea48be00-a2: No such device Feb 20 04:55:55 localhost journal[229367]: ethtool ioctl error on tapea48be00-a2: No such device Feb 20 04:55:55 localhost journal[229367]: ethtool ioctl error on tapea48be00-a2: No such device Feb 20 04:55:55 localhost nova_compute[280804]: 2026-02-20 09:55:55.133 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:55 localhost nova_compute[280804]: 2026-02-20 09:55:55.163 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:55 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v290: 177 pgs: 177 active+clean; 192 MiB data, 856 MiB used, 41 GiB / 42 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 52 op/s Feb 20 04:55:56 localhost sshd[319623]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:55:56 localhost neutron_sriov_agent[256551]: 2026-02-20 09:55:56.011 2 INFO neutron.agent.securitygroups_rpc [None req-29e4bc85-2be1-46ee-a8e4-a169ea695f47 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['72ed92b6-af24-4274-854b-a52220405faf']#033[00m Feb 20 04:55:56 localhost ovn_metadata_agent[161761]: 2026-02-20 
09:55:56.032 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:4e:b0 10.100.0.2 2001:db8::f816:3eff:fef0:4eb0'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fef0:4eb0/64', 'neutron:device_id': 'ovnmeta-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=cebd560e-7047-4cc1-9642-f5b7ec377d58) old=Port_Binding(mac=['fa:16:3e:f0:4e:b0 2001:db8::f816:3eff:fef0:4eb0'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fef0:4eb0/64', 'neutron:device_id': 'ovnmeta-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:55:56 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:56.034 161766 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port cebd560e-7047-4cc1-9642-f5b7ec377d58 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 updated#033[00m Feb 20 04:55:56 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:56.037 161766 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 811e2462-6872-485d-9c09-d2dd9cb25273, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 20 04:55:56 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:56.038 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[69b62a44-989a-45b3-9b0c-90bc70fbac2e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:55:56 localhost podman[319632]: Feb 20 04:55:56 localhost podman[319632]: 2026-02-20 09:55:56.134099775 +0000 UTC m=+0.095171089 container create 0ce51f708f4cf144a463deb0544015f22450b8ec300338efabfa1534ebb2548b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-11635c27-2f1f-4cdc-b82c-c6286cc4d35d, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:55:56 localhost systemd[1]: Started libpod-conmon-0ce51f708f4cf144a463deb0544015f22450b8ec300338efabfa1534ebb2548b.scope. Feb 20 04:55:56 localhost systemd[1]: tmp-crun.iUfeOz.mount: Deactivated successfully. 
Feb 20 04:55:56 localhost podman[319632]: 2026-02-20 09:55:56.090304232 +0000 UTC m=+0.051375556 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 20 04:55:56 localhost systemd[1]: Started libcrun container. Feb 20 04:55:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cecfd5375396ec077518f56ec9d4e6b7688786b1044734124129e9e3b994a25/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 20 04:55:56 localhost podman[319632]: 2026-02-20 09:55:56.218865429 +0000 UTC m=+0.179936713 container init 0ce51f708f4cf144a463deb0544015f22450b8ec300338efabfa1534ebb2548b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-11635c27-2f1f-4cdc-b82c-c6286cc4d35d, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:55:56 localhost podman[319632]: 2026-02-20 09:55:56.230030755 +0000 UTC m=+0.191102039 container start 0ce51f708f4cf144a463deb0544015f22450b8ec300338efabfa1534ebb2548b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-11635c27-2f1f-4cdc-b82c-c6286cc4d35d, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Feb 20 04:55:56 localhost dnsmasq[319650]: started, version 2.85 cachesize 150 Feb 20 04:55:56 localhost dnsmasq[319650]: DNS service limited to local subnets Feb 20 04:55:56 localhost 
dnsmasq[319650]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 20 04:55:56 localhost dnsmasq[319650]: warning: no upstream servers configured Feb 20 04:55:56 localhost dnsmasq-dhcp[319650]: DHCP, static leases only on 10.101.0.0, lease time 1d Feb 20 04:55:56 localhost dnsmasq[319650]: read /var/lib/neutron/dhcp/11635c27-2f1f-4cdc-b82c-c6286cc4d35d/addn_hosts - 0 addresses Feb 20 04:55:56 localhost dnsmasq-dhcp[319650]: read /var/lib/neutron/dhcp/11635c27-2f1f-4cdc-b82c-c6286cc4d35d/host Feb 20 04:55:56 localhost dnsmasq-dhcp[319650]: read /var/lib/neutron/dhcp/11635c27-2f1f-4cdc-b82c-c6286cc4d35d/opts Feb 20 04:55:56 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:55:56.315 263745 INFO neutron.agent.dhcp.agent [None req-52937846-c283-4e5b-bcaa-9312616b8bb1 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:55:54Z, description=, device_id=40707009-5dc5-44c2-8d25-acba20c2e4ac, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=175d834a-2641-4afb-be34-637aba597c2f, ip_allocation=immediate, mac_address=fa:16:3e:5d:48:3d, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:55:52Z, description=, dns_domain=, id=11635c27-2f1f-4cdc-b82c-c6286cc4d35d, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1332813085, port_security_enabled=True, project_id=8aa5b5a34cfe458d96fea87261361db1, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=40579, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2180, status=ACTIVE, 
subnets=['547816d0-b166-49f8-b480-0886a9e084b9'], tags=[], tenant_id=8aa5b5a34cfe458d96fea87261361db1, updated_at=2026-02-20T09:55:53Z, vlan_transparent=None, network_id=11635c27-2f1f-4cdc-b82c-c6286cc4d35d, port_security_enabled=False, project_id=8aa5b5a34cfe458d96fea87261361db1, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2205, status=DOWN, tags=[], tenant_id=8aa5b5a34cfe458d96fea87261361db1, updated_at=2026-02-20T09:55:54Z on network 11635c27-2f1f-4cdc-b82c-c6286cc4d35d#033[00m Feb 20 04:55:56 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:55:56.377 263745 INFO neutron.agent.dhcp.agent [None req-dc27b650-5269-4ad7-946e-9f55e29ba4d5 - - - - - -] DHCP configuration for ports {'20567b70-e3f0-4d46-9438-aad289a34205'} is completed#033[00m Feb 20 04:55:56 localhost dnsmasq[319650]: read /var/lib/neutron/dhcp/11635c27-2f1f-4cdc-b82c-c6286cc4d35d/addn_hosts - 1 addresses Feb 20 04:55:56 localhost dnsmasq-dhcp[319650]: read /var/lib/neutron/dhcp/11635c27-2f1f-4cdc-b82c-c6286cc4d35d/host Feb 20 04:55:56 localhost podman[319667]: 2026-02-20 09:55:56.538362309 +0000 UTC m=+0.060415607 container kill 0ce51f708f4cf144a463deb0544015f22450b8ec300338efabfa1534ebb2548b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-11635c27-2f1f-4cdc-b82c-c6286cc4d35d, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Feb 20 04:55:56 localhost dnsmasq-dhcp[319650]: read /var/lib/neutron/dhcp/11635c27-2f1f-4cdc-b82c-c6286cc4d35d/opts Feb 20 04:55:56 localhost neutron_sriov_agent[256551]: 2026-02-20 09:55:56.539 2 INFO neutron.agent.securitygroups_rpc [None 
req-35ffc450-9844-472b-bd23-e1de49029696 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['b2e5856c-f1df-4bbc-8f9c-41698aa249c6']#033[00m Feb 20 04:55:56 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:56.751 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'c2:c2:14', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '46:d0:69:88:97:47'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:55:56 localhost nova_compute[280804]: 2026-02-20 09:55:56.751 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:56 localhost ovn_metadata_agent[161761]: 2026-02-20 09:55:56.754 161766 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Feb 20 04:55:56 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:55:56.818 263745 INFO neutron.agent.dhcp.agent [None req-e544f947-f420-47a5-8a56-5f04bd827262 - - - - - -] DHCP configuration for ports {'175d834a-2641-4afb-be34-637aba597c2f'} is completed#033[00m Feb 20 04:55:56 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9754040c-efe1-4b31-bb60-2e1a242177cd", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 04:55:56 
localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9754040c-efe1-4b31-bb60-2e1a242177cd, vol_name:cephfs) < "" Feb 20 04:55:56 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9754040c-efe1-4b31-bb60-2e1a242177cd/.meta.tmp' Feb 20 04:55:56 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9754040c-efe1-4b31-bb60-2e1a242177cd/.meta.tmp' to config b'/volumes/_nogroup/9754040c-efe1-4b31-bb60-2e1a242177cd/.meta' Feb 20 04:55:56 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9754040c-efe1-4b31-bb60-2e1a242177cd, vol_name:cephfs) < "" Feb 20 04:55:56 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9754040c-efe1-4b31-bb60-2e1a242177cd", "format": "json"}]: dispatch Feb 20 04:55:56 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9754040c-efe1-4b31-bb60-2e1a242177cd, vol_name:cephfs) < "" Feb 20 04:55:56 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9754040c-efe1-4b31-bb60-2e1a242177cd, vol_name:cephfs) < "" Feb 20 04:55:56 localhost neutron_sriov_agent[256551]: 2026-02-20 09:55:56.991 2 INFO neutron.agent.securitygroups_rpc [None req-add9b10e-24c8-47e6-9727-38256205ffd5 f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated 
['a85f25c3-88e2-4d71-a8d4-72a266c1246c']#033[00m Feb 20 04:55:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e. Feb 20 04:55:57 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:55:57.100 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:55:54Z, description=, device_id=40707009-5dc5-44c2-8d25-acba20c2e4ac, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=175d834a-2641-4afb-be34-637aba597c2f, ip_allocation=immediate, mac_address=fa:16:3e:5d:48:3d, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:55:52Z, description=, dns_domain=, id=11635c27-2f1f-4cdc-b82c-c6286cc4d35d, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1332813085, port_security_enabled=True, project_id=8aa5b5a34cfe458d96fea87261361db1, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=40579, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2180, status=ACTIVE, subnets=['547816d0-b166-49f8-b480-0886a9e084b9'], tags=[], tenant_id=8aa5b5a34cfe458d96fea87261361db1, updated_at=2026-02-20T09:55:53Z, vlan_transparent=None, network_id=11635c27-2f1f-4cdc-b82c-c6286cc4d35d, port_security_enabled=False, project_id=8aa5b5a34cfe458d96fea87261361db1, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2205, status=DOWN, tags=[], tenant_id=8aa5b5a34cfe458d96fea87261361db1, updated_at=2026-02-20T09:55:54Z on network 11635c27-2f1f-4cdc-b82c-c6286cc4d35d#033[00m Feb 20 04:55:57 localhost 
podman[319690]: 2026-02-20 09:55:57.206845524 +0000 UTC m=+0.095945870 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 20 04:55:57 localhost podman[319690]: 2026-02-20 09:55:57.221793352 +0000 UTC m=+0.110893738 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', 
'--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 20 04:55:57 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully. 
Feb 20 04:55:57 localhost dnsmasq[319650]: read /var/lib/neutron/dhcp/11635c27-2f1f-4cdc-b82c-c6286cc4d35d/addn_hosts - 1 addresses Feb 20 04:55:57 localhost dnsmasq-dhcp[319650]: read /var/lib/neutron/dhcp/11635c27-2f1f-4cdc-b82c-c6286cc4d35d/host Feb 20 04:55:57 localhost podman[319728]: 2026-02-20 09:55:57.353642186 +0000 UTC m=+0.068820090 container kill 0ce51f708f4cf144a463deb0544015f22450b8ec300338efabfa1534ebb2548b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-11635c27-2f1f-4cdc-b82c-c6286cc4d35d, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS) Feb 20 04:55:57 localhost dnsmasq-dhcp[319650]: read /var/lib/neutron/dhcp/11635c27-2f1f-4cdc-b82c-c6286cc4d35d/opts Feb 20 04:55:57 localhost nova_compute[280804]: 2026-02-20 09:55:57.488 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:57 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v291: 177 pgs: 177 active+clean; 192 MiB data, 856 MiB used, 41 GiB / 42 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 52 op/s Feb 20 04:55:57 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:55:57.673 263745 INFO neutron.agent.dhcp.agent [None req-9f5053c1-0245-4de1-8469-b6fed900a831 - - - - - -] DHCP configuration for ports {'175d834a-2641-4afb-be34-637aba597c2f'} is completed#033[00m Feb 20 04:55:57 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:55:58 localhost nova_compute[280804]: 2026-02-20 09:55:58.010 280808 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:58 localhost openstack_network_exporter[243776]: ERROR 09:55:58 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 20 04:55:58 localhost openstack_network_exporter[243776]: Feb 20 04:55:58 localhost openstack_network_exporter[243776]: ERROR 09:55:58 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 20 04:55:58 localhost openstack_network_exporter[243776]: Feb 20 04:55:58 localhost neutron_sriov_agent[256551]: 2026-02-20 09:55:58.487 2 INFO neutron.agent.securitygroups_rpc [None req-a5703a13-6375-4e7f-aba2-f531a9b12f0a f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']#033[00m Feb 20 04:55:58 localhost nova_compute[280804]: 2026-02-20 09:55:58.531 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:55:58 localhost nova_compute[280804]: 2026-02-20 09:55:58.532 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 20 04:55:58 localhost nova_compute[280804]: 2026-02-20 09:55:58.532 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 20 04:55:58 localhost nova_compute[280804]: 2026-02-20 09:55:58.568 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] 
Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 20 04:55:58 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "5d6cbe8b-e61d-44af-be13-78bc30a91a96", "snap_name": "da20d4b5-2abc-49bc-a78c-39ce3cdadf16", "target_sub_name": "f9ac42b7-680c-41fc-8784-6176baa738f7", "format": "json"}]: dispatch Feb 20 04:55:58 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:da20d4b5-2abc-49bc-a78c-39ce3cdadf16, sub_name:5d6cbe8b-e61d-44af-be13-78bc30a91a96, target_sub_name:f9ac42b7-680c-41fc-8784-6176baa738f7, vol_name:cephfs) < "" Feb 20 04:55:58 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/f9ac42b7-680c-41fc-8784-6176baa738f7/.meta.tmp' Feb 20 04:55:58 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f9ac42b7-680c-41fc-8784-6176baa738f7/.meta.tmp' to config b'/volumes/_nogroup/f9ac42b7-680c-41fc-8784-6176baa738f7/.meta' Feb 20 04:55:58 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.clone_index] tracking-id 37fdf36b-6344-4ae3-b850-b833ae6925fb for path b'/volumes/_nogroup/f9ac42b7-680c-41fc-8784-6176baa738f7' Feb 20 04:55:58 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/5d6cbe8b-e61d-44af-be13-78bc30a91a96/.meta.tmp' Feb 20 04:55:58 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5d6cbe8b-e61d-44af-be13-78bc30a91a96/.meta.tmp' to config b'/volumes/_nogroup/5d6cbe8b-e61d-44af-be13-78bc30a91a96/.meta' 
Feb 20 04:55:58 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 04:55:58 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:da20d4b5-2abc-49bc-a78c-39ce3cdadf16, sub_name:5d6cbe8b-e61d-44af-be13-78bc30a91a96, target_sub_name:f9ac42b7-680c-41fc-8784-6176baa738f7, vol_name:cephfs) < "" Feb 20 04:55:58 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f9ac42b7-680c-41fc-8784-6176baa738f7", "format": "json"}]: dispatch Feb 20 04:55:58 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f9ac42b7-680c-41fc-8784-6176baa738f7, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 04:55:58 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists Feb 20 04:55:58 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists Feb 20 04:55:58 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists Feb 20 04:55:58 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists Feb 20 04:55:58 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists Feb 20 04:55:58 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:55:58.832+0000 7f74594e2640 -1 client.0 error registering admin socket command: (17) File exists Feb 20 04:55:58 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:55:58.832+0000 7f74594e2640 -1 client.0 error registering admin socket command: (17) File exists Feb 20 04:55:58 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 
2026-02-20T09:55:58.832+0000 7f74594e2640 -1 client.0 error registering admin socket command: (17) File exists Feb 20 04:55:58 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:55:58.832+0000 7f74594e2640 -1 client.0 error registering admin socket command: (17) File exists Feb 20 04:55:58 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:55:58.832+0000 7f74594e2640 -1 client.0 error registering admin socket command: (17) File exists Feb 20 04:55:58 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f9ac42b7-680c-41fc-8784-6176baa738f7, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 04:55:58 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/f9ac42b7-680c-41fc-8784-6176baa738f7 Feb 20 04:55:58 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, f9ac42b7-680c-41fc-8784-6176baa738f7) Feb 20 04:55:58 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists Feb 20 04:55:58 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists Feb 20 04:55:58 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists Feb 20 04:55:58 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists Feb 20 04:55:58 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists Feb 20 04:55:58 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:55:58.866+0000 7f7457cdf640 -1 client.0 error registering admin socket command: (17) File exists Feb 20 04:55:58 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:55:58.866+0000 7f7457cdf640 -1 client.0 error registering 
admin socket command: (17) File exists Feb 20 04:55:58 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:55:58.866+0000 7f7457cdf640 -1 client.0 error registering admin socket command: (17) File exists Feb 20 04:55:58 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:55:58.866+0000 7f7457cdf640 -1 client.0 error registering admin socket command: (17) File exists Feb 20 04:55:58 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:55:58.866+0000 7f7457cdf640 -1 client.0 error registering admin socket command: (17) File exists Feb 20 04:55:58 localhost neutron_sriov_agent[256551]: 2026-02-20 09:55:58.871 2 INFO neutron.agent.securitygroups_rpc [None req-ca73f77f-8256-4f4e-b317-bc2e72fd527f 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['72ed92b6-af24-4274-854b-a52220405faf']#033[00m Feb 20 04:55:58 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, f9ac42b7-680c-41fc-8784-6176baa738f7) -- by 0 seconds Feb 20 04:55:58 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/f9ac42b7-680c-41fc-8784-6176baa738f7/.meta.tmp' Feb 20 04:55:58 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f9ac42b7-680c-41fc-8784-6176baa738f7/.meta.tmp' to config b'/volumes/_nogroup/f9ac42b7-680c-41fc-8784-6176baa738f7/.meta' Feb 20 04:55:59 localhost sshd[319772]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:55:59 localhost nova_compute[280804]: 2026-02-20 09:55:59.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:55:59 localhost nova_compute[280804]: 2026-02-20 09:55:59.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:55:59 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v292: 177 pgs: 177 active+clean; 177 MiB data, 855 MiB used, 41 GiB / 42 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 58 op/s Feb 20 04:55:59 localhost neutron_sriov_agent[256551]: 2026-02-20 09:55:59.676 2 INFO neutron.agent.securitygroups_rpc [None req-ca73f77f-8256-4f4e-b317-bc2e72fd527f 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['72ed92b6-af24-4274-854b-a52220405faf']#033[00m Feb 20 04:55:59 localhost nova_compute[280804]: 2026-02-20 09:55:59.925 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:55:59 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : mgrmap e49: np0005625202.arwxwo(active, since 7m), standbys: np0005625203.lonygy, np0005625204.exgrzx Feb 20 04:56:00 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e1c014f1-0af0-4079-b9b5-c123fb6102a7", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 04:56:00 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e1c014f1-0af0-4079-b9b5-c123fb6102a7, vol_name:cephfs) < "" Feb 20 04:56:00 localhost neutron_sriov_agent[256551]: 2026-02-20 
09:56:00.500 2 INFO neutron.agent.securitygroups_rpc [None req-78688fc9-1f65-4ea8-8870-27e8d247cb32 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['72ed92b6-af24-4274-854b-a52220405faf']#033[00m Feb 20 04:56:01 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v293: 177 pgs: 177 active+clean; 146 MiB data, 806 MiB used, 41 GiB / 42 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 60 op/s Feb 20 04:56:01 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:56:01.524 263745 INFO neutron.agent.linux.ip_lib [None req-0ea4ed38-bf21-49c7-ad01-ba3e5c88072c - - - - - -] Device tap4587225c-83 cannot be used as it has no MAC address#033[00m Feb 20 04:56:01 localhost neutron_sriov_agent[256551]: 2026-02-20 09:56:01.582 2 INFO neutron.agent.securitygroups_rpc [None req-a4e037d0-314c-4de3-aad9-537a96cc703d 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['72ed92b6-af24-4274-854b-a52220405faf']#033[00m Feb 20 04:56:01 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:56:01.600 263745 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:56:01 localhost nova_compute[280804]: 2026-02-20 09:56:01.613 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:01 localhost kernel: device tap4587225c-83 entered promiscuous mode Feb 20 04:56:01 localhost ovn_controller[155916]: 2026-02-20T09:56:01Z|00203|binding|INFO|Claiming lport 4587225c-836b-4484-a56a-56606fd1234a for this chassis. 
Feb 20 04:56:01 localhost ovn_controller[155916]: 2026-02-20T09:56:01Z|00204|binding|INFO|4587225c-836b-4484-a56a-56606fd1234a: Claiming unknown Feb 20 04:56:01 localhost NetworkManager[5967]: [1771581361.6245] manager: (tap4587225c-83): new Generic device (/org/freedesktop/NetworkManager/Devices/42) Feb 20 04:56:01 localhost nova_compute[280804]: 2026-02-20 09:56:01.623 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:01 localhost systemd-udevd[319784]: Network interface NamePolicy= disabled on kernel command line. Feb 20 04:56:01 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:01.633 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.102.0.3/28', 'neutron:device_id': 'dhcp77c820b4-9646-5dae-aabe-b13a62b01d06-22b9494b-a4bd-4626-ac7c-7b5aed75fd87', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-22b9494b-a4bd-4626-ac7c-7b5aed75fd87', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8aa5b5a34cfe458d96fea87261361db1', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a86f7daa-2233-4d07-8e7c-4f48ce1f7d52, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=4587225c-836b-4484-a56a-56606fd1234a) old=Port_Binding(chassis=[]) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:56:01 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:01.635 161766 INFO neutron.agent.ovn.metadata.agent [-] Port 4587225c-836b-4484-a56a-56606fd1234a in datapath 22b9494b-a4bd-4626-ac7c-7b5aed75fd87 bound to our chassis#033[00m Feb 20 04:56:01 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:01.638 161766 DEBUG neutron.agent.ovn.metadata.agent [-] Port bb2691d9-6708-41e0-8070-a7e1fecaa75f IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Feb 20 04:56:01 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:01.639 161766 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 22b9494b-a4bd-4626-ac7c-7b5aed75fd87, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 20 04:56:01 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:01.640 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[20752afe-f6e4-43ab-8162-418bd4fbb171]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:56:01 localhost journal[229367]: ethtool ioctl error on tap4587225c-83: No such device Feb 20 04:56:01 localhost ovn_controller[155916]: 2026-02-20T09:56:01Z|00205|binding|INFO|Setting lport 4587225c-836b-4484-a56a-56606fd1234a ovn-installed in OVS Feb 20 04:56:01 localhost journal[229367]: ethtool ioctl error on tap4587225c-83: No such device Feb 20 04:56:01 localhost ovn_controller[155916]: 2026-02-20T09:56:01Z|00206|binding|INFO|Setting lport 4587225c-836b-4484-a56a-56606fd1234a up in Southbound Feb 20 04:56:01 localhost nova_compute[280804]: 2026-02-20 09:56:01.662 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m 
Feb 20 04:56:01 localhost journal[229367]: ethtool ioctl error on tap4587225c-83: No such device Feb 20 04:56:01 localhost journal[229367]: ethtool ioctl error on tap4587225c-83: No such device Feb 20 04:56:01 localhost journal[229367]: ethtool ioctl error on tap4587225c-83: No such device Feb 20 04:56:01 localhost journal[229367]: ethtool ioctl error on tap4587225c-83: No such device Feb 20 04:56:01 localhost journal[229367]: ethtool ioctl error on tap4587225c-83: No such device Feb 20 04:56:01 localhost journal[229367]: ethtool ioctl error on tap4587225c-83: No such device Feb 20 04:56:01 localhost nova_compute[280804]: 2026-02-20 09:56:01.705 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:01 localhost nova_compute[280804]: 2026-02-20 09:56:01.738 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:02 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/5d6cbe8b-e61d-44af-be13-78bc30a91a96/.snap/da20d4b5-2abc-49bc-a78c-39ce3cdadf16/bbf0f863-043d-4da2-8b19-9df5a0284c2a' to b'/volumes/_nogroup/f9ac42b7-680c-41fc-8784-6176baa738f7/55ef06bf-1cf0-426b-9c3b-781d79ab0892' Feb 20 04:56:02 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e1c014f1-0af0-4079-b9b5-c123fb6102a7/.meta.tmp' Feb 20 04:56:02 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e1c014f1-0af0-4079-b9b5-c123fb6102a7/.meta.tmp' to config b'/volumes/_nogroup/e1c014f1-0af0-4079-b9b5-c123fb6102a7/.meta' Feb 20 04:56:02 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, 
size:1073741824, sub_name:e1c014f1-0af0-4079-b9b5-c123fb6102a7, vol_name:cephfs) < "" Feb 20 04:56:02 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e1c014f1-0af0-4079-b9b5-c123fb6102a7", "format": "json"}]: dispatch Feb 20 04:56:02 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e1c014f1-0af0-4079-b9b5-c123fb6102a7, vol_name:cephfs) < "" Feb 20 04:56:02 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/f9ac42b7-680c-41fc-8784-6176baa738f7/.meta.tmp' Feb 20 04:56:02 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f9ac42b7-680c-41fc-8784-6176baa738f7/.meta.tmp' to config b'/volumes/_nogroup/f9ac42b7-680c-41fc-8784-6176baa738f7/.meta' Feb 20 04:56:02 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e1c014f1-0af0-4079-b9b5-c123fb6102a7, vol_name:cephfs) < "" Feb 20 04:56:02 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.clone_index] untracking 37fdf36b-6344-4ae3-b850-b833ae6925fb Feb 20 04:56:02 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5d6cbe8b-e61d-44af-be13-78bc30a91a96/.meta.tmp' Feb 20 04:56:02 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5d6cbe8b-e61d-44af-be13-78bc30a91a96/.meta.tmp' to config b'/volumes/_nogroup/5d6cbe8b-e61d-44af-be13-78bc30a91a96/.meta' Feb 20 04:56:02 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config 
b'/volumes/_nogroup/f9ac42b7-680c-41fc-8784-6176baa738f7/.meta.tmp' Feb 20 04:56:02 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f9ac42b7-680c-41fc-8784-6176baa738f7/.meta.tmp' to config b'/volumes/_nogroup/f9ac42b7-680c-41fc-8784-6176baa738f7/.meta' Feb 20 04:56:02 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, f9ac42b7-680c-41fc-8784-6176baa738f7) Feb 20 04:56:02 localhost nova_compute[280804]: 2026-02-20 09:56:02.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:56:02 localhost nova_compute[280804]: 2026-02-20 09:56:02.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:56:02 localhost nova_compute[280804]: 2026-02-20 09:56:02.510 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 20 04:56:02 localhost podman[319854]: Feb 20 04:56:02 localhost podman[319854]: 2026-02-20 09:56:02.653043962 +0000 UTC m=+0.095283023 container create 7180fb70bfe44957ad30038bd25fe46ae9e37d506c346fee5e064e5babf306c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-22b9494b-a4bd-4626-ac7c-7b5aed75fd87, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true) Feb 20 04:56:02 localhost podman[319854]: 2026-02-20 09:56:02.606999678 +0000 UTC m=+0.049238769 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 20 04:56:02 localhost systemd[1]: Started libpod-conmon-7180fb70bfe44957ad30038bd25fe46ae9e37d506c346fee5e064e5babf306c4.scope. Feb 20 04:56:02 localhost systemd[1]: tmp-crun.pn22eT.mount: Deactivated successfully. Feb 20 04:56:02 localhost systemd[1]: Started libcrun container. 
Feb 20 04:56:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17fe402b864cf21b5a6b41dc6d8eaa6662ff8cf735b7b3cc1abc178b94e68963/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 20 04:56:02 localhost podman[319854]: 2026-02-20 09:56:02.73915342 +0000 UTC m=+0.181392481 container init 7180fb70bfe44957ad30038bd25fe46ae9e37d506c346fee5e064e5babf306c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-22b9494b-a4bd-4626-ac7c-7b5aed75fd87, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Feb 20 04:56:02 localhost podman[319854]: 2026-02-20 09:56:02.745691294 +0000 UTC m=+0.187930355 container start 7180fb70bfe44957ad30038bd25fe46ae9e37d506c346fee5e064e5babf306c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-22b9494b-a4bd-4626-ac7c-7b5aed75fd87, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:56:02 localhost dnsmasq[319872]: started, version 2.85 cachesize 150 Feb 20 04:56:02 localhost dnsmasq[319872]: DNS service limited to local subnets Feb 20 04:56:02 localhost dnsmasq[319872]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 20 04:56:02 localhost dnsmasq[319872]: warning: no upstream servers 
configured Feb 20 04:56:02 localhost dnsmasq-dhcp[319872]: DHCP, static leases only on 10.102.0.0, lease time 1d Feb 20 04:56:02 localhost dnsmasq[319872]: read /var/lib/neutron/dhcp/22b9494b-a4bd-4626-ac7c-7b5aed75fd87/addn_hosts - 0 addresses Feb 20 04:56:02 localhost dnsmasq-dhcp[319872]: read /var/lib/neutron/dhcp/22b9494b-a4bd-4626-ac7c-7b5aed75fd87/host Feb 20 04:56:02 localhost dnsmasq-dhcp[319872]: read /var/lib/neutron/dhcp/22b9494b-a4bd-4626-ac7c-7b5aed75fd87/opts Feb 20 04:56:02 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:56:02 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:56:02.816 263745 INFO neutron.agent.dhcp.agent [None req-75c8b270-7de8-4899-b29e-b9b4b907322f - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:56:01Z, description=, device_id=40707009-5dc5-44c2-8d25-acba20c2e4ac, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=e604990b-54b6-48ed-bcfc-cc2069102208, ip_allocation=immediate, mac_address=fa:16:3e:d6:6f:77, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:55:57Z, description=, dns_domain=, id=22b9494b-a4bd-4626-ac7c-7b5aed75fd87, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-572988427, port_security_enabled=True, project_id=8aa5b5a34cfe458d96fea87261361db1, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=14986, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2223, status=ACTIVE, subnets=['55ceee1b-4b16-40cc-b5d0-43d8b4dc1d0f'], tags=[], 
tenant_id=8aa5b5a34cfe458d96fea87261361db1, updated_at=2026-02-20T09:55:59Z, vlan_transparent=None, network_id=22b9494b-a4bd-4626-ac7c-7b5aed75fd87, port_security_enabled=False, project_id=8aa5b5a34cfe458d96fea87261361db1, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2232, status=DOWN, tags=[], tenant_id=8aa5b5a34cfe458d96fea87261361db1, updated_at=2026-02-20T09:56:01Z on network 22b9494b-a4bd-4626-ac7c-7b5aed75fd87#033[00m Feb 20 04:56:03 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:56:03.014 263745 INFO neutron.agent.dhcp.agent [None req-24be38b5-3687-4814-97a1-c45d2e652b12 - - - - - -] DHCP configuration for ports {'2cd6357b-5509-4da4-907f-59f8ce9ea061'} is completed#033[00m Feb 20 04:56:03 localhost nova_compute[280804]: 2026-02-20 09:56:03.043 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:03 localhost dnsmasq[319872]: read /var/lib/neutron/dhcp/22b9494b-a4bd-4626-ac7c-7b5aed75fd87/addn_hosts - 1 addresses Feb 20 04:56:03 localhost dnsmasq-dhcp[319872]: read /var/lib/neutron/dhcp/22b9494b-a4bd-4626-ac7c-7b5aed75fd87/host Feb 20 04:56:03 localhost dnsmasq-dhcp[319872]: read /var/lib/neutron/dhcp/22b9494b-a4bd-4626-ac7c-7b5aed75fd87/opts Feb 20 04:56:03 localhost podman[319890]: 2026-02-20 09:56:03.045826261 +0000 UTC m=+0.084683112 container kill 7180fb70bfe44957ad30038bd25fe46ae9e37d506c346fee5e064e5babf306c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-22b9494b-a4bd-4626-ac7c-7b5aed75fd87, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20260127) Feb 20 04:56:03 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:56:03.260 263745 INFO neutron.agent.dhcp.agent [None req-33f012a9-3a74-4955-bcf6-cbb82564dd92 - - - - - -] DHCP configuration for ports {'e604990b-54b6-48ed-bcfc-cc2069102208'} is completed#033[00m Feb 20 04:56:03 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "e1c014f1-0af0-4079-b9b5-c123fb6102a7", "new_size": 2147483648, "format": "json"}]: dispatch Feb 20 04:56:03 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:e1c014f1-0af0-4079-b9b5-c123fb6102a7, vol_name:cephfs) < "" Feb 20 04:56:03 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:e1c014f1-0af0-4079-b9b5-c123fb6102a7, vol_name:cephfs) < "" Feb 20 04:56:03 localhost nova_compute[280804]: 2026-02-20 09:56:03.506 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:56:03 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v294: 177 pgs: 177 active+clean; 146 MiB data, 806 MiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 1.8 MiB/s wr, 54 op/s Feb 20 04:56:03 localhost nova_compute[280804]: 2026-02-20 09:56:03.529 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:56:03 localhost nova_compute[280804]: 2026-02-20 
09:56:03.553 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:56:03 localhost nova_compute[280804]: 2026-02-20 09:56:03.553 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:56:03 localhost nova_compute[280804]: 2026-02-20 09:56:03.554 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:56:03 localhost nova_compute[280804]: 2026-02-20 09:56:03.554 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Auditing locally available compute resources for np0005625202.localdomain (node: np0005625202.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 20 04:56:03 localhost nova_compute[280804]: 2026-02-20 09:56:03.555 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:56:03 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:03.756 161766 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, 
table=Chassis_Private, record=0a83b6be-9fe2-42ef-8768-88847d97b165, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 20 04:56:03 localhost neutron_sriov_agent[256551]: 2026-02-20 09:56:03.903 2 INFO neutron.agent.securitygroups_rpc [None req-7d0bf5e0-9e1d-414c-8190-249e450828ca 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['b2e5856c-f1df-4bbc-8f9c-41698aa249c6']#033[00m Feb 20 04:56:04 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 20 04:56:04 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/1963234622' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 20 04:56:04 localhost nova_compute[280804]: 2026-02-20 09:56:04.019 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:56:04 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:04.201 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:4e:b0 10.100.0.3'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ovnmeta-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 
'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=cebd560e-7047-4cc1-9642-f5b7ec377d58) old=Port_Binding(mac=['fa:16:3e:f0:4e:b0 10.100.0.2 2001:db8::f816:3eff:fef0:4eb0'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fef0:4eb0/64', 'neutron:device_id': 'ovnmeta-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:56:04 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:04.203 161766 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port cebd560e-7047-4cc1-9642-f5b7ec377d58 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 updated#033[00m Feb 20 04:56:04 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:04.207 161766 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 811e2462-6872-485d-9c09-d2dd9cb25273, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 20 04:56:04 localhost ovn_metadata_agent[161761]: 2026-02-20 
09:56:04.208 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[6aa4bb27-ce9a-4813-9290-e187e19e7273]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:56:04 localhost nova_compute[280804]: 2026-02-20 09:56:04.227 280808 WARNING nova.virt.libvirt.driver [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 20 04:56:04 localhost nova_compute[280804]: 2026-02-20 09:56:04.229 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Hypervisor/Node resource view: name=np0005625202.localdomain free_ram=11530MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": 
"type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 20 04:56:04 localhost nova_compute[280804]: 2026-02-20 09:56:04.230 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:56:04 localhost nova_compute[280804]: 2026-02-20 09:56:04.231 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:56:04 localhost nova_compute[280804]: 2026-02-20 09:56:04.459 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 20 04:56:04 localhost nova_compute[280804]: 2026-02-20 09:56:04.460 280808 DEBUG 
nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Final resource view: name=np0005625202.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 20 04:56:04 localhost nova_compute[280804]: 2026-02-20 09:56:04.520 280808 DEBUG nova.scheduler.client.report [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Refreshing inventories for resource provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Feb 20 04:56:04 localhost nova_compute[280804]: 2026-02-20 09:56:04.610 280808 DEBUG nova.scheduler.client.report [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Updating ProviderTree inventory for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Feb 20 04:56:04 localhost nova_compute[280804]: 2026-02-20 09:56:04.610 280808 DEBUG nova.compute.provider_tree [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Updating inventory in ProviderTree for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 
'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Feb 20 04:56:04 localhost nova_compute[280804]: 2026-02-20 09:56:04.627 280808 DEBUG nova.scheduler.client.report [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Refreshing aggregate associations for resource provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Feb 20 04:56:04 localhost nova_compute[280804]: 2026-02-20 09:56:04.645 280808 DEBUG nova.scheduler.client.report [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Refreshing trait associations for resource provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6, traits: COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_1_2,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE2,HW_CPU_X86_AVX2,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_CLMUL,HW_CPU_X86_F16C,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X
86_AMD_SVM,HW_CPU_X86_FMA3,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_ABM,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Feb 20 04:56:04 localhost nova_compute[280804]: 2026-02-20 09:56:04.664 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:56:04 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 20 04:56:04 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/462116900' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 20 04:56:04 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 20 04:56:04 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/462116900' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 20 04:56:04 localhost nova_compute[280804]: 2026-02-20 09:56:04.926 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:05 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 20 04:56:05 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/542996593' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 20 04:56:05 localhost nova_compute[280804]: 2026-02-20 09:56:05.066 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.403s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:56:05 localhost nova_compute[280804]: 2026-02-20 09:56:05.073 280808 DEBUG nova.compute.provider_tree [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 20 04:56:05 localhost nova_compute[280804]: 2026-02-20 09:56:05.088 280808 DEBUG nova.scheduler.client.report [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Inventory has not changed for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 20 04:56:05 localhost nova_compute[280804]: 2026-02-20 09:56:05.090 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Compute_service record updated for np0005625202.localdomain:np0005625202.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 20 04:56:05 localhost nova_compute[280804]: 2026-02-20 09:56:05.091 280808 DEBUG 
oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.860s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:56:05 localhost nova_compute[280804]: 2026-02-20 09:56:05.511 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:56:05 localhost nova_compute[280804]: 2026-02-20 09:56:05.512 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:56:05 localhost nova_compute[280804]: 2026-02-20 09:56:05.513 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:56:05 localhost nova_compute[280804]: 2026-02-20 09:56:05.514 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Feb 20 04:56:05 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v295: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 43 KiB/s rd, 1.8 MiB/s wr, 70 op/s Feb 20 04:56:05 localhost nova_compute[280804]: 2026-02-20 09:56:05.531 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] There are 0 instances to clean _run_pending_deletes 
/usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Feb 20 04:56:05 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:56:05.842 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:56:01Z, description=, device_id=40707009-5dc5-44c2-8d25-acba20c2e4ac, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=e604990b-54b6-48ed-bcfc-cc2069102208, ip_allocation=immediate, mac_address=fa:16:3e:d6:6f:77, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:55:57Z, description=, dns_domain=, id=22b9494b-a4bd-4626-ac7c-7b5aed75fd87, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-572988427, port_security_enabled=True, project_id=8aa5b5a34cfe458d96fea87261361db1, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=14986, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2223, status=ACTIVE, subnets=['55ceee1b-4b16-40cc-b5d0-43d8b4dc1d0f'], tags=[], tenant_id=8aa5b5a34cfe458d96fea87261361db1, updated_at=2026-02-20T09:55:59Z, vlan_transparent=None, network_id=22b9494b-a4bd-4626-ac7c-7b5aed75fd87, port_security_enabled=False, project_id=8aa5b5a34cfe458d96fea87261361db1, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2232, status=DOWN, tags=[], tenant_id=8aa5b5a34cfe458d96fea87261361db1, updated_at=2026-02-20T09:56:01Z on network 22b9494b-a4bd-4626-ac7c-7b5aed75fd87#033[00m Feb 20 04:56:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:05.921 161766 DEBUG oslo_concurrency.lockutils [-] Acquiring lock 
"_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:56:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:05.922 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:56:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:05.922 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:56:06 localhost dnsmasq[319872]: read /var/lib/neutron/dhcp/22b9494b-a4bd-4626-ac7c-7b5aed75fd87/addn_hosts - 1 addresses Feb 20 04:56:06 localhost podman[319973]: 2026-02-20 09:56:06.053025739 +0000 UTC m=+0.058995598 container kill 7180fb70bfe44957ad30038bd25fe46ae9e37d506c346fee5e064e5babf306c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-22b9494b-a4bd-4626-ac7c-7b5aed75fd87, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Feb 20 04:56:06 localhost dnsmasq-dhcp[319872]: read /var/lib/neutron/dhcp/22b9494b-a4bd-4626-ac7c-7b5aed75fd87/host Feb 20 04:56:06 localhost dnsmasq-dhcp[319872]: read /var/lib/neutron/dhcp/22b9494b-a4bd-4626-ac7c-7b5aed75fd87/opts Feb 20 04:56:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c. Feb 20 04:56:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217. Feb 20 04:56:06 localhost podman[319986]: 2026-02-20 09:56:06.170044859 +0000 UTC m=+0.089686174 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, maintainer=Red Hat, Inc., architecture=x86_64, version=9.7, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, managed_by=edpm_ansible, build-date=2026-02-05T04:57:10Z, release=1770267347, vcs-type=git, io.buildah.version=1.33.7, vendor=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, distribution-scope=public, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, org.opencontainers.image.created=2026-02-05T04:57:10Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, io.openshift.tags=minimal rhel9, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 04:56:06 localhost podman[319986]: 2026-02-20 09:56:06.186182189 +0000 UTC m=+0.105823464 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, io.openshift.expose-services=, architecture=x86_64, config_id=openstack_network_exporter, build-date=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base 
Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1770267347, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, distribution-scope=public, version=9.7, maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.33.7) Feb 20 04:56:06 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. Feb 20 04:56:06 localhost systemd[1]: tmp-crun.CyADxx.mount: Deactivated successfully. Feb 20 04:56:06 localhost podman[319987]: 2026-02-20 09:56:06.284953353 +0000 UTC m=+0.203123409 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3) Feb 20 04:56:06 localhost podman[319987]: 2026-02-20 09:56:06.299074498 +0000 UTC m=+0.217244594 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:56:06 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully. Feb 20 04:56:06 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:56:06.370 263745 INFO neutron.agent.dhcp.agent [None req-ea5e6a70-f29b-46b0-83ca-933687bf0190 - - - - - -] DHCP configuration for ports {'e604990b-54b6-48ed-bcfc-cc2069102208'} is completed#033[00m Feb 20 04:56:06 localhost nova_compute[280804]: 2026-02-20 09:56:06.529 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:56:06 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e1c014f1-0af0-4079-b9b5-c123fb6102a7", "format": "json"}]: dispatch Feb 20 04:56:06 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:e1c014f1-0af0-4079-b9b5-c123fb6102a7, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 04:56:06 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing 
_cmd_fs_clone_status(clone_name:e1c014f1-0af0-4079-b9b5-c123fb6102a7, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 04:56:06 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:56:06.687+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e1c014f1-0af0-4079-b9b5-c123fb6102a7' of type subvolume Feb 20 04:56:06 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e1c014f1-0af0-4079-b9b5-c123fb6102a7' of type subvolume Feb 20 04:56:06 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e1c014f1-0af0-4079-b9b5-c123fb6102a7", "force": true, "format": "json"}]: dispatch Feb 20 04:56:06 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e1c014f1-0af0-4079-b9b5-c123fb6102a7, vol_name:cephfs) < "" Feb 20 04:56:06 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/e1c014f1-0af0-4079-b9b5-c123fb6102a7'' moved to trashcan Feb 20 04:56:06 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 04:56:06 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e1c014f1-0af0-4079-b9b5-c123fb6102a7, vol_name:cephfs) < "" Feb 20 04:56:07 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v296: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 25 KiB/s wr, 34 op/s Feb 20 04:56:07 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 
inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:56:07 localhost neutron_sriov_agent[256551]: 2026-02-20 09:56:07.897 2 INFO neutron.agent.securitygroups_rpc [None req-8965acb2-2b16-4d89-a227-154eee5fe38f 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['72ed92b6-af24-4274-854b-a52220405faf']#033[00m Feb 20 04:56:08 localhost nova_compute[280804]: 2026-02-20 09:56:08.077 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:08 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:08.381 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:4e:b0 10.100.0.3 2001:db8::f816:3eff:fef0:4eb0'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8::f816:3eff:fef0:4eb0/64', 'neutron:device_id': 'ovnmeta-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=cebd560e-7047-4cc1-9642-f5b7ec377d58) 
old=Port_Binding(mac=['fa:16:3e:f0:4e:b0 10.100.0.3'], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ovnmeta-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:56:08 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:08.382 161766 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port cebd560e-7047-4cc1-9642-f5b7ec377d58 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 updated#033[00m Feb 20 04:56:08 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:08.386 161766 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 811e2462-6872-485d-9c09-d2dd9cb25273, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 20 04:56:08 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:08.387 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[e94e6b10-a4ca-49ad-ab50-1d9e8cf6b261]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:56:08 localhost neutron_sriov_agent[256551]: 2026-02-20 09:56:08.684 2 INFO neutron.agent.securitygroups_rpc [None req-d51c801f-c66b-4697-8723-78081587d201 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['72ed92b6-af24-4274-854b-a52220405faf']#033[00m Feb 20 04:56:08 localhost sshd[320031]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:56:09 localhost 
neutron_sriov_agent[256551]: 2026-02-20 09:56:09.498 2 INFO neutron.agent.securitygroups_rpc [None req-166aeacb-5366-40db-a13d-35c7cc5a7a14 f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']#033[00m Feb 20 04:56:09 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v297: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 31 KiB/s rd, 31 KiB/s wr, 47 op/s Feb 20 04:56:09 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ec3b06e0-45bf-4e34-99f4-1c4bd5601e66", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 04:56:09 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ec3b06e0-45bf-4e34-99f4-1c4bd5601e66, vol_name:cephfs) < "" Feb 20 04:56:09 localhost nova_compute[280804]: 2026-02-20 09:56:09.973 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:10 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ec3b06e0-45bf-4e34-99f4-1c4bd5601e66/.meta.tmp' Feb 20 04:56:10 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ec3b06e0-45bf-4e34-99f4-1c4bd5601e66/.meta.tmp' to config b'/volumes/_nogroup/ec3b06e0-45bf-4e34-99f4-1c4bd5601e66/.meta' Feb 20 04:56:10 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, 
sub_name:ec3b06e0-45bf-4e34-99f4-1c4bd5601e66, vol_name:cephfs) < "" Feb 20 04:56:10 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ec3b06e0-45bf-4e34-99f4-1c4bd5601e66", "format": "json"}]: dispatch Feb 20 04:56:10 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ec3b06e0-45bf-4e34-99f4-1c4bd5601e66, vol_name:cephfs) < "" Feb 20 04:56:10 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:56:10.027 263745 INFO neutron.agent.linux.ip_lib [None req-11c81d5c-b0b8-457a-a618-3c6bdc8d7b2b - - - - - -] Device tap43083585-4d cannot be used as it has no MAC address#033[00m Feb 20 04:56:10 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ec3b06e0-45bf-4e34-99f4-1c4bd5601e66, vol_name:cephfs) < "" Feb 20 04:56:10 localhost nova_compute[280804]: 2026-02-20 09:56:10.051 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:10 localhost kernel: device tap43083585-4d entered promiscuous mode Feb 20 04:56:10 localhost NetworkManager[5967]: [1771581370.0612] manager: (tap43083585-4d): new Generic device (/org/freedesktop/NetworkManager/Devices/43) Feb 20 04:56:10 localhost nova_compute[280804]: 2026-02-20 09:56:10.061 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:10 localhost ovn_controller[155916]: 2026-02-20T09:56:10Z|00207|binding|INFO|Claiming lport 43083585-4dda-4941-b87a-0b2777b6c844 for this chassis. 
Feb 20 04:56:10 localhost ovn_controller[155916]: 2026-02-20T09:56:10Z|00208|binding|INFO|43083585-4dda-4941-b87a-0b2777b6c844: Claiming unknown Feb 20 04:56:10 localhost systemd-udevd[320043]: Network interface NamePolicy= disabled on kernel command line. Feb 20 04:56:10 localhost journal[229367]: ethtool ioctl error on tap43083585-4d: No such device Feb 20 04:56:10 localhost ovn_controller[155916]: 2026-02-20T09:56:10Z|00209|binding|INFO|Setting lport 43083585-4dda-4941-b87a-0b2777b6c844 ovn-installed in OVS Feb 20 04:56:10 localhost nova_compute[280804]: 2026-02-20 09:56:10.100 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:10 localhost journal[229367]: ethtool ioctl error on tap43083585-4d: No such device Feb 20 04:56:10 localhost journal[229367]: ethtool ioctl error on tap43083585-4d: No such device Feb 20 04:56:10 localhost journal[229367]: ethtool ioctl error on tap43083585-4d: No such device Feb 20 04:56:10 localhost journal[229367]: ethtool ioctl error on tap43083585-4d: No such device Feb 20 04:56:10 localhost journal[229367]: ethtool ioctl error on tap43083585-4d: No such device Feb 20 04:56:10 localhost journal[229367]: ethtool ioctl error on tap43083585-4d: No such device Feb 20 04:56:10 localhost nova_compute[280804]: 2026-02-20 09:56:10.130 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:10 localhost journal[229367]: ethtool ioctl error on tap43083585-4d: No such device Feb 20 04:56:10 localhost ovn_controller[155916]: 2026-02-20T09:56:10Z|00210|binding|INFO|Setting lport 43083585-4dda-4941-b87a-0b2777b6c844 up in Southbound Feb 20 04:56:10 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:10.142 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, 
old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.103.0.3/28', 'neutron:device_id': 'dhcp77c820b4-9646-5dae-aabe-b13a62b01d06-40a71fcf-409f-48bb-8490-446a998be9a8', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-40a71fcf-409f-48bb-8490-446a998be9a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8aa5b5a34cfe458d96fea87261361db1', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bc2bce9a-0b82-42ab-bd97-2e9b2ca6955a, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=43083585-4dda-4941-b87a-0b2777b6c844) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:56:10 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:10.144 161766 INFO neutron.agent.ovn.metadata.agent [-] Port 43083585-4dda-4941-b87a-0b2777b6c844 in datapath 40a71fcf-409f-48bb-8490-446a998be9a8 bound to our chassis#033[00m Feb 20 04:56:10 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:10.148 161766 DEBUG neutron.agent.ovn.metadata.agent [-] Port d2dddfbf-0a96-47f4-a9f6-b1e236a3de28 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Feb 20 04:56:10 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:10.148 161766 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 
40a71fcf-409f-48bb-8490-446a998be9a8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 20 04:56:10 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:10.149 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[0f328391-657f-45de-89ca-5770e26e3fa6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:56:10 localhost nova_compute[280804]: 2026-02-20 09:56:10.162 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:10 localhost nova_compute[280804]: 2026-02-20 09:56:10.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:56:11 localhost podman[320114]: Feb 20 04:56:11 localhost podman[320114]: 2026-02-20 09:56:11.012876432 +0000 UTC m=+0.093710652 container create aa6367778540a37f8ce062f71a8611825609e10d6ae233b9ff04ff4dbfc73bb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-40a71fcf-409f-48bb-8490-446a998be9a8, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:56:11 localhost neutron_sriov_agent[256551]: 2026-02-20 09:56:11.013 2 INFO neutron.agent.securitygroups_rpc [None req-7e54c9a7-f5f5-46c1-ae1b-688f8acab697 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated 
['46f15231-c0dd-46d4-9abc-adba5985e75b']#033[00m Feb 20 04:56:11 localhost systemd[1]: Started libpod-conmon-aa6367778540a37f8ce062f71a8611825609e10d6ae233b9ff04ff4dbfc73bb0.scope. Feb 20 04:56:11 localhost podman[320114]: 2026-02-20 09:56:10.969869169 +0000 UTC m=+0.050703439 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 20 04:56:11 localhost systemd[1]: Started libcrun container. Feb 20 04:56:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7714af4cff06d407034e74672352eca2b4f45cdfca745c422d2d589b71924f9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 20 04:56:11 localhost podman[320114]: 2026-02-20 09:56:11.098678482 +0000 UTC m=+0.179512712 container init aa6367778540a37f8ce062f71a8611825609e10d6ae233b9ff04ff4dbfc73bb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-40a71fcf-409f-48bb-8490-446a998be9a8, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3) Feb 20 04:56:11 localhost podman[320114]: 2026-02-20 09:56:11.108521164 +0000 UTC m=+0.189355394 container start aa6367778540a37f8ce062f71a8611825609e10d6ae233b9ff04ff4dbfc73bb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-40a71fcf-409f-48bb-8490-446a998be9a8, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true) Feb 20 
04:56:11 localhost dnsmasq[320133]: started, version 2.85 cachesize 150 Feb 20 04:56:11 localhost dnsmasq[320133]: DNS service limited to local subnets Feb 20 04:56:11 localhost dnsmasq[320133]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 20 04:56:11 localhost dnsmasq[320133]: warning: no upstream servers configured Feb 20 04:56:11 localhost dnsmasq-dhcp[320133]: DHCP, static leases only on 10.103.0.0, lease time 1d Feb 20 04:56:11 localhost dnsmasq[320133]: read /var/lib/neutron/dhcp/40a71fcf-409f-48bb-8490-446a998be9a8/addn_hosts - 0 addresses Feb 20 04:56:11 localhost dnsmasq-dhcp[320133]: read /var/lib/neutron/dhcp/40a71fcf-409f-48bb-8490-446a998be9a8/host Feb 20 04:56:11 localhost dnsmasq-dhcp[320133]: read /var/lib/neutron/dhcp/40a71fcf-409f-48bb-8490-446a998be9a8/opts Feb 20 04:56:11 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:56:11.175 263745 INFO neutron.agent.dhcp.agent [None req-d202c761-435b-4c90-9fde-b2fc7e4ea0c8 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:56:09Z, description=, device_id=40707009-5dc5-44c2-8d25-acba20c2e4ac, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=88759385-c61e-4f58-89ab-450cafdee77c, ip_allocation=immediate, mac_address=fa:16:3e:37:b7:28, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:56:06Z, description=, dns_domain=, id=40a71fcf-409f-48bb-8490-446a998be9a8, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-77827477, port_security_enabled=True, project_id=8aa5b5a34cfe458d96fea87261361db1, provider:network_type=geneve, 
provider:physical_network=None, provider:segmentation_id=48866, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2240, status=ACTIVE, subnets=['cc33c58a-f879-48d7-931e-cb419c5d410c'], tags=[], tenant_id=8aa5b5a34cfe458d96fea87261361db1, updated_at=2026-02-20T09:56:08Z, vlan_transparent=None, network_id=40a71fcf-409f-48bb-8490-446a998be9a8, port_security_enabled=False, project_id=8aa5b5a34cfe458d96fea87261361db1, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2262, status=DOWN, tags=[], tenant_id=8aa5b5a34cfe458d96fea87261361db1, updated_at=2026-02-20T09:56:09Z on network 40a71fcf-409f-48bb-8490-446a998be9a8#033[00m Feb 20 04:56:11 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:56:11.270 263745 INFO neutron.agent.dhcp.agent [None req-93f0861e-c4d0-495b-ba74-b8781caa8fd2 - - - - - -] DHCP configuration for ports {'7ff19f9f-4fb5-47ad-b87e-c62945d72d06'} is completed#033[00m Feb 20 04:56:11 localhost neutron_sriov_agent[256551]: 2026-02-20 09:56:11.326 2 INFO neutron.agent.securitygroups_rpc [None req-4b1a20ca-0949-416f-91ae-525739a1e77a f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']#033[00m Feb 20 04:56:11 localhost dnsmasq[320133]: read /var/lib/neutron/dhcp/40a71fcf-409f-48bb-8490-446a998be9a8/addn_hosts - 1 addresses Feb 20 04:56:11 localhost dnsmasq-dhcp[320133]: read /var/lib/neutron/dhcp/40a71fcf-409f-48bb-8490-446a998be9a8/host Feb 20 04:56:11 localhost dnsmasq-dhcp[320133]: read /var/lib/neutron/dhcp/40a71fcf-409f-48bb-8490-446a998be9a8/opts Feb 20 04:56:11 localhost podman[320151]: 2026-02-20 09:56:11.428199439 +0000 UTC m=+0.068863311 container kill aa6367778540a37f8ce062f71a8611825609e10d6ae233b9ff04ff4dbfc73bb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-40a71fcf-409f-48bb-8490-446a998be9a8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:56:11 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v298: 177 pgs: 177 active+clean; 146 MiB data, 801 MiB used, 41 GiB / 42 GiB avail; 24 KiB/s rd, 20 KiB/s wr, 38 op/s Feb 20 04:56:11 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:56:11.734 263745 INFO neutron.agent.dhcp.agent [None req-5e2c0cf9-80d0-4f51-8dde-c3cb08307e82 - - - - - -] DHCP configuration for ports {'88759385-c61e-4f58-89ab-450cafdee77c'} is completed#033[00m Feb 20 04:56:12 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:56:13 localhost nova_compute[280804]: 2026-02-20 09:56:13.114 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:13 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "ec3b06e0-45bf-4e34-99f4-1c4bd5601e66", "new_size": 2147483648, "format": "json"}]: dispatch Feb 20 04:56:13 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:ec3b06e0-45bf-4e34-99f4-1c4bd5601e66, vol_name:cephfs) < "" Feb 20 04:56:13 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, 
sub_name:ec3b06e0-45bf-4e34-99f4-1c4bd5601e66, vol_name:cephfs) < "" Feb 20 04:56:13 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v299: 177 pgs: 177 active+clean; 146 MiB data, 801 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 19 KiB/s wr, 33 op/s Feb 20 04:56:13 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:56:13.558 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:56:09Z, description=, device_id=40707009-5dc5-44c2-8d25-acba20c2e4ac, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=88759385-c61e-4f58-89ab-450cafdee77c, ip_allocation=immediate, mac_address=fa:16:3e:37:b7:28, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:56:06Z, description=, dns_domain=, id=40a71fcf-409f-48bb-8490-446a998be9a8, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-77827477, port_security_enabled=True, project_id=8aa5b5a34cfe458d96fea87261361db1, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=48866, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2240, status=ACTIVE, subnets=['cc33c58a-f879-48d7-931e-cb419c5d410c'], tags=[], tenant_id=8aa5b5a34cfe458d96fea87261361db1, updated_at=2026-02-20T09:56:08Z, vlan_transparent=None, network_id=40a71fcf-409f-48bb-8490-446a998be9a8, port_security_enabled=False, project_id=8aa5b5a34cfe458d96fea87261361db1, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2262, status=DOWN, tags=[], tenant_id=8aa5b5a34cfe458d96fea87261361db1, updated_at=2026-02-20T09:56:09Z on network 
40a71fcf-409f-48bb-8490-446a998be9a8#033[00m Feb 20 04:56:13 localhost neutron_sriov_agent[256551]: 2026-02-20 09:56:13.743 2 INFO neutron.agent.securitygroups_rpc [None req-93b1773d-c2eb-4652-8e8d-0c460cd5364e 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['46f15231-c0dd-46d4-9abc-adba5985e75b', '446482cb-8c18-450e-acf7-2fbe583929b8']#033[00m Feb 20 04:56:13 localhost dnsmasq[320133]: read /var/lib/neutron/dhcp/40a71fcf-409f-48bb-8490-446a998be9a8/addn_hosts - 1 addresses Feb 20 04:56:13 localhost dnsmasq-dhcp[320133]: read /var/lib/neutron/dhcp/40a71fcf-409f-48bb-8490-446a998be9a8/host Feb 20 04:56:13 localhost podman[320190]: 2026-02-20 09:56:13.812857103 +0000 UTC m=+0.062183643 container kill aa6367778540a37f8ce062f71a8611825609e10d6ae233b9ff04ff4dbfc73bb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-40a71fcf-409f-48bb-8490-446a998be9a8, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127) Feb 20 04:56:13 localhost dnsmasq-dhcp[320133]: read /var/lib/neutron/dhcp/40a71fcf-409f-48bb-8490-446a998be9a8/opts Feb 20 04:56:14 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:56:14.215 263745 INFO neutron.agent.dhcp.agent [None req-d4b7e366-8583-4150-a0e2-02d75c810dca - - - - - -] DHCP configuration for ports {'88759385-c61e-4f58-89ab-450cafdee77c'} is completed#033[00m Feb 20 04:56:14 localhost neutron_sriov_agent[256551]: 2026-02-20 09:56:14.365 2 INFO neutron.agent.securitygroups_rpc [None req-6fbfa532-f4c6-42e9-b707-63e0a42ce0d3 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated 
['446482cb-8c18-450e-acf7-2fbe583929b8']#033[00m Feb 20 04:56:14 localhost nova_compute[280804]: 2026-02-20 09:56:14.975 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:15 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:15.066 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:4e:b0 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=cebd560e-7047-4cc1-9642-f5b7ec377d58) old=Port_Binding(mac=['fa:16:3e:f0:4e:b0 10.100.0.3 2001:db8::f816:3eff:fef0:4eb0'], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8::f816:3eff:fef0:4eb0/64', 'neutron:device_id': 'ovnmeta-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 
'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:56:15 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:15.067 161766 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port cebd560e-7047-4cc1-9642-f5b7ec377d58 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 updated#033[00m Feb 20 04:56:15 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:15.071 161766 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 811e2462-6872-485d-9c09-d2dd9cb25273, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 20 04:56:15 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:15.072 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[4ba32e13-e5ee-4814-8a23-cce51f10026e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:56:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 04:56:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. 
Feb 20 04:56:15 localhost podman[320210]: 2026-02-20 09:56:15.446582901 +0000 UTC m=+0.079084113 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible) Feb 20 04:56:15 localhost podman[320210]: 2026-02-20 09:56:15.49017944 +0000 UTC m=+0.122680652 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Feb 20 04:56:15 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. 
Feb 20 04:56:15 localhost podman[320211]: 2026-02-20 09:56:15.506912445 +0000 UTC m=+0.135408680 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Feb 20 04:56:15 localhost 
podman[320211]: 2026-02-20 09:56:15.510923251 +0000 UTC m=+0.139419476 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:56:15 localhost systemd[1]: 
eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. Feb 20 04:56:15 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v300: 177 pgs: 177 active+clean; 146 MiB data, 801 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 26 KiB/s wr, 35 op/s Feb 20 04:56:16 localhost podman[241347]: time="2026-02-20T09:56:16Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 20 04:56:16 localhost podman[241347]: @ - - [20/Feb/2026:09:56:16 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 163187 "" "Go-http-client/1.1" Feb 20 04:56:16 localhost podman[241347]: @ - - [20/Feb/2026:09:56:16 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20191 "" "Go-http-client/1.1" Feb 20 04:56:16 localhost sshd[320254]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:56:16 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ec3b06e0-45bf-4e34-99f4-1c4bd5601e66", "format": "json"}]: dispatch Feb 20 04:56:16 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ec3b06e0-45bf-4e34-99f4-1c4bd5601e66, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 04:56:16 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ec3b06e0-45bf-4e34-99f4-1c4bd5601e66, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 04:56:16 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ec3b06e0-45bf-4e34-99f4-1c4bd5601e66' of type subvolume Feb 20 04:56:16 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:56:16.549+0000 7f74524d4640 -1 
mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ec3b06e0-45bf-4e34-99f4-1c4bd5601e66' of type subvolume Feb 20 04:56:16 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ec3b06e0-45bf-4e34-99f4-1c4bd5601e66", "force": true, "format": "json"}]: dispatch Feb 20 04:56:16 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ec3b06e0-45bf-4e34-99f4-1c4bd5601e66, vol_name:cephfs) < "" Feb 20 04:56:16 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ec3b06e0-45bf-4e34-99f4-1c4bd5601e66'' moved to trashcan Feb 20 04:56:16 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 04:56:16 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ec3b06e0-45bf-4e34-99f4-1c4bd5601e66, vol_name:cephfs) < "" Feb 20 04:56:17 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v301: 177 pgs: 177 active+clean; 146 MiB data, 801 MiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 13 KiB/s wr, 18 op/s Feb 20 04:56:17 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:56:17 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 20 04:56:17 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/2119990363' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 20 04:56:17 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 20 04:56:17 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2119990363' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 20 04:56:18 localhost nova_compute[280804]: 2026-02-20 09:56:18.149 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1. Feb 20 04:56:18 localhost podman[320256]: 2026-02-20 09:56:18.44586351 +0000 UTC m=+0.079706669 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 20 04:56:18 localhost podman[320256]: 2026-02-20 
09:56:18.461171426 +0000 UTC m=+0.095014535 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 20 04:56:18 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully. 
Feb 20 04:56:19 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:19.247 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:4e:b0 10.100.0.2 2001:db8::f816:3eff:fef0:4eb0'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fef0:4eb0/64', 'neutron:device_id': 'ovnmeta-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=cebd560e-7047-4cc1-9642-f5b7ec377d58) old=Port_Binding(mac=['fa:16:3e:f0:4e:b0 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:56:19 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:19.249 161766 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port cebd560e-7047-4cc1-9642-f5b7ec377d58 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 updated#033[00m Feb 20 04:56:19 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:19.252 161766 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 811e2462-6872-485d-9c09-d2dd9cb25273, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 20 04:56:19 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:19.253 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[e90874c2-1421-4332-a0c7-f68fbcc1ba4c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:56:19 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v302: 177 pgs: 177 active+clean; 146 MiB data, 801 MiB used, 41 GiB / 42 GiB avail; 12 KiB/s rd, 18 KiB/s wr, 20 op/s Feb 20 04:56:19 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "15ed10cc-a478-4127-a35f-dcce0e8ec529", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 04:56:19 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:15ed10cc-a478-4127-a35f-dcce0e8ec529, vol_name:cephfs) < "" Feb 20 04:56:19 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/15ed10cc-a478-4127-a35f-dcce0e8ec529/.meta.tmp' Feb 20 04:56:19 localhost ceph-mgr[286565]: [volumes INFO 
volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/15ed10cc-a478-4127-a35f-dcce0e8ec529/.meta.tmp' to config b'/volumes/_nogroup/15ed10cc-a478-4127-a35f-dcce0e8ec529/.meta' Feb 20 04:56:19 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:15ed10cc-a478-4127-a35f-dcce0e8ec529, vol_name:cephfs) < "" Feb 20 04:56:19 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "15ed10cc-a478-4127-a35f-dcce0e8ec529", "format": "json"}]: dispatch Feb 20 04:56:19 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:15ed10cc-a478-4127-a35f-dcce0e8ec529, vol_name:cephfs) < "" Feb 20 04:56:19 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:15ed10cc-a478-4127-a35f-dcce0e8ec529, vol_name:cephfs) < "" Feb 20 04:56:19 localhost nova_compute[280804]: 2026-02-20 09:56:19.979 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:20 localhost systemd[1]: tmp-crun.3HO2C0.mount: Deactivated successfully. 
Feb 20 04:56:20 localhost dnsmasq[320133]: read /var/lib/neutron/dhcp/40a71fcf-409f-48bb-8490-446a998be9a8/addn_hosts - 0 addresses Feb 20 04:56:20 localhost podman[320295]: 2026-02-20 09:56:20.181011742 +0000 UTC m=+0.075245940 container kill aa6367778540a37f8ce062f71a8611825609e10d6ae233b9ff04ff4dbfc73bb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-40a71fcf-409f-48bb-8490-446a998be9a8, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Feb 20 04:56:20 localhost dnsmasq-dhcp[320133]: read /var/lib/neutron/dhcp/40a71fcf-409f-48bb-8490-446a998be9a8/host Feb 20 04:56:20 localhost dnsmasq-dhcp[320133]: read /var/lib/neutron/dhcp/40a71fcf-409f-48bb-8490-446a998be9a8/opts Feb 20 04:56:20 localhost neutron_sriov_agent[256551]: 2026-02-20 09:56:20.352 2 INFO neutron.agent.securitygroups_rpc [None req-27e863e6-abb7-4d79-8929-35ee419d3ab5 f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']#033[00m Feb 20 04:56:20 localhost nova_compute[280804]: 2026-02-20 09:56:20.431 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:20 localhost ovn_controller[155916]: 2026-02-20T09:56:20Z|00211|binding|INFO|Releasing lport 43083585-4dda-4941-b87a-0b2777b6c844 from this chassis (sb_readonly=0) Feb 20 04:56:20 localhost kernel: device tap43083585-4d left promiscuous mode Feb 20 04:56:20 localhost ovn_controller[155916]: 2026-02-20T09:56:20Z|00212|binding|INFO|Setting lport 43083585-4dda-4941-b87a-0b2777b6c844 down in Southbound Feb 20 
04:56:20 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:20.439 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.103.0.3/28', 'neutron:device_id': 'dhcp77c820b4-9646-5dae-aabe-b13a62b01d06-40a71fcf-409f-48bb-8490-446a998be9a8', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-40a71fcf-409f-48bb-8490-446a998be9a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8aa5b5a34cfe458d96fea87261361db1', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bc2bce9a-0b82-42ab-bd97-2e9b2ca6955a, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=43083585-4dda-4941-b87a-0b2777b6c844) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:56:20 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:20.441 161766 INFO neutron.agent.ovn.metadata.agent [-] Port 43083585-4dda-4941-b87a-0b2777b6c844 in datapath 40a71fcf-409f-48bb-8490-446a998be9a8 unbound from our chassis#033[00m Feb 20 04:56:20 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:20.444 161766 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 40a71fcf-409f-48bb-8490-446a998be9a8, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 20 04:56:20 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:20.445 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[bb1a3c46-8afa-4681-bbba-d1ea1d7cec5d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:56:20 localhost nova_compute[280804]: 2026-02-20 09:56:20.452 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:20 localhost neutron_sriov_agent[256551]: 2026-02-20 09:56:20.845 2 INFO neutron.agent.securitygroups_rpc [None req-5bc16860-c455-4be4-9017-f7ba050a5b1d f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']#033[00m Feb 20 04:56:20 localhost neutron_sriov_agent[256551]: 2026-02-20 09:56:20.948 2 INFO neutron.agent.securitygroups_rpc [None req-3d50957a-c50d-404e-a697-bd588426aa5b 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['9c894fef-e625-4d2d-ad79-9f0215b19661']#033[00m Feb 20 04:56:21 localhost neutron_sriov_agent[256551]: 2026-02-20 09:56:21.135 2 INFO neutron.agent.securitygroups_rpc [None req-ab2c767f-db90-4059-9416-3c9c50626a18 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['72ed92b6-af24-4274-854b-a52220405faf']#033[00m Feb 20 04:56:21 localhost dnsmasq[320133]: exiting on receipt of SIGTERM Feb 20 04:56:21 localhost podman[320335]: 2026-02-20 09:56:21.484237467 +0000 UTC m=+0.045713926 container kill aa6367778540a37f8ce062f71a8611825609e10d6ae233b9ff04ff4dbfc73bb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-40a71fcf-409f-48bb-8490-446a998be9a8, org.label-schema.schema-version=1.0, 
org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3) Feb 20 04:56:21 localhost systemd[1]: libpod-aa6367778540a37f8ce062f71a8611825609e10d6ae233b9ff04ff4dbfc73bb0.scope: Deactivated successfully. Feb 20 04:56:21 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v303: 177 pgs: 177 active+clean; 146 MiB data, 802 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 12 KiB/s wr, 21 op/s Feb 20 04:56:21 localhost podman[320350]: 2026-02-20 09:56:21.543484832 +0000 UTC m=+0.041347170 container died aa6367778540a37f8ce062f71a8611825609e10d6ae233b9ff04ff4dbfc73bb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-40a71fcf-409f-48bb-8490-446a998be9a8, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true) Feb 20 04:56:21 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-aa6367778540a37f8ce062f71a8611825609e10d6ae233b9ff04ff4dbfc73bb0-userdata-shm.mount: Deactivated successfully. Feb 20 04:56:21 localhost systemd[1]: var-lib-containers-storage-overlay-d7714af4cff06d407034e74672352eca2b4f45cdfca745c422d2d589b71924f9-merged.mount: Deactivated successfully. 
Feb 20 04:56:21 localhost podman[320350]: 2026-02-20 09:56:21.585707204 +0000 UTC m=+0.083569492 container remove aa6367778540a37f8ce062f71a8611825609e10d6ae233b9ff04ff4dbfc73bb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-40a71fcf-409f-48bb-8490-446a998be9a8, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Feb 20 04:56:21 localhost systemd[1]: libpod-conmon-aa6367778540a37f8ce062f71a8611825609e10d6ae233b9ff04ff4dbfc73bb0.scope: Deactivated successfully. Feb 20 04:56:21 localhost systemd[1]: run-netns-qdhcp\x2d40a71fcf\x2d409f\x2d48bb\x2d8490\x2d446a998be9a8.mount: Deactivated successfully. Feb 20 04:56:21 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:56:21.809 263745 INFO neutron.agent.dhcp.agent [None req-2f16d550-fd51-4bde-8f6f-c129c3f39d68 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:56:21 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:56:21.811 263745 INFO neutron.agent.dhcp.agent [None req-2f16d550-fd51-4bde-8f6f-c129c3f39d68 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:56:21 localhost neutron_sriov_agent[256551]: 2026-02-20 09:56:21.823 2 INFO neutron.agent.securitygroups_rpc [None req-325d197d-f2bb-472d-a6df-be02729b4a1c 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['72ed92b6-af24-4274-854b-a52220405faf']#033[00m Feb 20 04:56:21 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:56:21.865 263745 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:56:22 localhost 
neutron_sriov_agent[256551]: 2026-02-20 09:56:22.608 2 INFO neutron.agent.securitygroups_rpc [None req-521cfad5-05c2-4b59-9313-296ec36811c0 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['72ed92b6-af24-4274-854b-a52220405faf']#033[00m Feb 20 04:56:22 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:56:22 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:56:22.817 263745 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:56:23 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "15ed10cc-a478-4127-a35f-dcce0e8ec529", "format": "json"}]: dispatch Feb 20 04:56:23 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:15ed10cc-a478-4127-a35f-dcce0e8ec529, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 04:56:23 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:15ed10cc-a478-4127-a35f-dcce0e8ec529, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 04:56:23 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '15ed10cc-a478-4127-a35f-dcce0e8ec529' of type subvolume Feb 20 04:56:23 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:56:23.071+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '15ed10cc-a478-4127-a35f-dcce0e8ec529' of type subvolume Feb 20 04:56:23 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' 
entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "15ed10cc-a478-4127-a35f-dcce0e8ec529", "force": true, "format": "json"}]: dispatch Feb 20 04:56:23 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:15ed10cc-a478-4127-a35f-dcce0e8ec529, vol_name:cephfs) < "" Feb 20 04:56:23 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/15ed10cc-a478-4127-a35f-dcce0e8ec529'' moved to trashcan Feb 20 04:56:23 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 04:56:23 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:15ed10cc-a478-4127-a35f-dcce0e8ec529, vol_name:cephfs) < "" Feb 20 04:56:23 localhost neutron_sriov_agent[256551]: 2026-02-20 09:56:23.116 2 INFO neutron.agent.securitygroups_rpc [None req-264095f7-8549-4a1d-9c14-cf140323ad0c 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['72ed92b6-af24-4274-854b-a52220405faf']#033[00m Feb 20 04:56:23 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:56:23.139 263745 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:56:23 localhost nova_compute[280804]: 2026-02-20 09:56:23.184 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:23 localhost nova_compute[280804]: 2026-02-20 09:56:23.341 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:23 localhost ceph-mgr[286565]: [balancer INFO root] Optimize plan auto_2026-02-20_09:56:23 Feb 20 04:56:23 
localhost ceph-mgr[286565]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Feb 20 04:56:23 localhost ceph-mgr[286565]: [balancer INFO root] do_upmap Feb 20 04:56:23 localhost ceph-mgr[286565]: [balancer INFO root] pools ['backups', 'volumes', 'manila_data', 'manila_metadata', 'vms', '.mgr', 'images'] Feb 20 04:56:23 localhost ceph-mgr[286565]: [balancer INFO root] prepared 0/10 changes Feb 20 04:56:23 localhost neutron_sriov_agent[256551]: 2026-02-20 09:56:23.458 2 INFO neutron.agent.securitygroups_rpc [None req-8e5e38ae-f36c-4a7a-929b-4d665cde8908 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['72ed92b6-af24-4274-854b-a52220405faf']#033[00m Feb 20 04:56:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 04:56:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:56:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 04:56:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:56:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 20 04:56:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:56:23 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v304: 177 pgs: 177 active+clean; 146 MiB data, 802 MiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 12 KiB/s wr, 17 op/s Feb 20 04:56:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] _maybe_adjust Feb 20 04:56:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:56:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Feb 20 04:56:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:56:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003325274375348967 of space, bias 1.0, pg target 0.6650548750697934 quantized to 32 (current 32) Feb 20 04:56:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:56:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 8.17891541038526e-07 of space, bias 1.0, pg target 0.0001633056776940257 quantized to 32 (current 32) Feb 20 04:56:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:56:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Feb 20 04:56:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:56:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 20 04:56:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] 
effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:56:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 1.1813988926112042e-06 of space, bias 1.0, pg target 0.00023509837962962962 quantized to 32 (current 32) Feb 20 04:56:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:56:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 5.207242811278615e-05 of space, bias 4.0, pg target 0.041449652777777776 quantized to 16 (current 16) Feb 20 04:56:23 localhost ceph-mgr[286565]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Feb 20 04:56:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 20 04:56:23 localhost ceph-mgr[286565]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Feb 20 04:56:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 20 04:56:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 20 04:56:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 20 04:56:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: images, start_after= Feb 20 04:56:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 20 04:56:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: images, start_after= Feb 20 04:56:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 20 04:56:24 localhost neutron_sriov_agent[256551]: 2026-02-20 09:56:24.671 2 INFO neutron.agent.securitygroups_rpc [None req-7c70538f-1d84-485c-beb6-53999b2ce1d2 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['9c894fef-e625-4d2d-ad79-9f0215b19661', 
'6e36724b-9ab8-4bfe-9f74-069d82055697', '5fe0aa03-55bd-43ef-a38b-499c4a5e8b30']#033[00m Feb 20 04:56:25 localhost nova_compute[280804]: 2026-02-20 09:56:25.084 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:25 localhost neutron_sriov_agent[256551]: 2026-02-20 09:56:25.116 2 INFO neutron.agent.securitygroups_rpc [None req-22f50294-5f51-4ab3-8b7c-31c2f02c0d3d 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['72ed92b6-af24-4274-854b-a52220405faf']#033[00m Feb 20 04:56:25 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:56:25.167 263745 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:56:25 localhost dnsmasq[319872]: read /var/lib/neutron/dhcp/22b9494b-a4bd-4626-ac7c-7b5aed75fd87/addn_hosts - 0 addresses Feb 20 04:56:25 localhost podman[320393]: 2026-02-20 09:56:25.349455768 +0000 UTC m=+0.059122542 container kill 7180fb70bfe44957ad30038bd25fe46ae9e37d506c346fee5e064e5babf306c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-22b9494b-a4bd-4626-ac7c-7b5aed75fd87, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:56:25 localhost dnsmasq-dhcp[319872]: read /var/lib/neutron/dhcp/22b9494b-a4bd-4626-ac7c-7b5aed75fd87/host Feb 20 04:56:25 localhost dnsmasq-dhcp[319872]: read /var/lib/neutron/dhcp/22b9494b-a4bd-4626-ac7c-7b5aed75fd87/opts Feb 20 04:56:25 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v305: 177 pgs: 177 active+clean; 146 MiB data, 802 MiB used, 41 GiB / 
42 GiB avail; 21 KiB/s rd, 21 KiB/s wr, 32 op/s Feb 20 04:56:25 localhost neutron_sriov_agent[256551]: 2026-02-20 09:56:25.789 2 INFO neutron.agent.securitygroups_rpc [None req-f3d891d9-b12f-41a6-9c43-2a59a14444d4 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['6e36724b-9ab8-4bfe-9f74-069d82055697', '5fe0aa03-55bd-43ef-a38b-499c4a5e8b30']#033[00m Feb 20 04:56:25 localhost nova_compute[280804]: 2026-02-20 09:56:25.904 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:25 localhost ovn_controller[155916]: 2026-02-20T09:56:25Z|00213|binding|INFO|Releasing lport 4587225c-836b-4484-a56a-56606fd1234a from this chassis (sb_readonly=0) Feb 20 04:56:25 localhost ovn_controller[155916]: 2026-02-20T09:56:25Z|00214|binding|INFO|Setting lport 4587225c-836b-4484-a56a-56606fd1234a down in Southbound Feb 20 04:56:25 localhost kernel: device tap4587225c-83 left promiscuous mode Feb 20 04:56:25 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:25.913 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.102.0.3/28', 'neutron:device_id': 'dhcp77c820b4-9646-5dae-aabe-b13a62b01d06-22b9494b-a4bd-4626-ac7c-7b5aed75fd87', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-22b9494b-a4bd-4626-ac7c-7b5aed75fd87', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8aa5b5a34cfe458d96fea87261361db1', 'neutron:revision_number': '3', 
'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625202.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a86f7daa-2233-4d07-8e7c-4f48ce1f7d52, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=4587225c-836b-4484-a56a-56606fd1234a) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:56:25 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:25.915 161766 INFO neutron.agent.ovn.metadata.agent [-] Port 4587225c-836b-4484-a56a-56606fd1234a in datapath 22b9494b-a4bd-4626-ac7c-7b5aed75fd87 unbound from our chassis#033[00m Feb 20 04:56:25 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:25.917 161766 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 22b9494b-a4bd-4626-ac7c-7b5aed75fd87, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 20 04:56:25 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:25.918 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[36fb2740-dbf6-407b-ace7-b10f390864a2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:56:25 localhost nova_compute[280804]: 2026-02-20 09:56:25.928 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:26 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "da4b4a91-1d53-4396-80dd-4a8521885337", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 04:56:26 localhost 
ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:da4b4a91-1d53-4396-80dd-4a8521885337, vol_name:cephfs) < "" Feb 20 04:56:26 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/da4b4a91-1d53-4396-80dd-4a8521885337/.meta.tmp' Feb 20 04:56:26 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/da4b4a91-1d53-4396-80dd-4a8521885337/.meta.tmp' to config b'/volumes/_nogroup/da4b4a91-1d53-4396-80dd-4a8521885337/.meta' Feb 20 04:56:26 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:da4b4a91-1d53-4396-80dd-4a8521885337, vol_name:cephfs) < "" Feb 20 04:56:26 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "da4b4a91-1d53-4396-80dd-4a8521885337", "format": "json"}]: dispatch Feb 20 04:56:26 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:da4b4a91-1d53-4396-80dd-4a8521885337, vol_name:cephfs) < "" Feb 20 04:56:26 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:da4b4a91-1d53-4396-80dd-4a8521885337, vol_name:cephfs) < "" Feb 20 04:56:26 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:26.627 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:4e:b0 
2001:db8::f816:3eff:fef0:4eb0'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fef0:4eb0/64', 'neutron:device_id': 'ovnmeta-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '18', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=cebd560e-7047-4cc1-9642-f5b7ec377d58) old=Port_Binding(mac=['fa:16:3e:f0:4e:b0 10.100.0.2 2001:db8::f816:3eff:fef0:4eb0'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fef0:4eb0/64', 'neutron:device_id': 'ovnmeta-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:56:26 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:26.629 161766 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port cebd560e-7047-4cc1-9642-f5b7ec377d58 in datapath 
811e2462-6872-485d-9c09-d2dd9cb25273 updated#033[00m Feb 20 04:56:26 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:26.633 161766 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 811e2462-6872-485d-9c09-d2dd9cb25273, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 20 04:56:26 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:26.634 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[cfbd0a44-7924-4398-8220-58e949902b0b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:56:26 localhost dnsmasq[319872]: exiting on receipt of SIGTERM Feb 20 04:56:26 localhost podman[320434]: 2026-02-20 09:56:26.74852264 +0000 UTC m=+0.062327187 container kill 7180fb70bfe44957ad30038bd25fe46ae9e37d506c346fee5e064e5babf306c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-22b9494b-a4bd-4626-ac7c-7b5aed75fd87, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Feb 20 04:56:26 localhost systemd[1]: libpod-7180fb70bfe44957ad30038bd25fe46ae9e37d506c346fee5e064e5babf306c4.scope: Deactivated successfully. 
Feb 20 04:56:26 localhost podman[320449]: 2026-02-20 09:56:26.831655509 +0000 UTC m=+0.062537213 container died 7180fb70bfe44957ad30038bd25fe46ae9e37d506c346fee5e064e5babf306c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-22b9494b-a4bd-4626-ac7c-7b5aed75fd87, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:56:26 localhost systemd[1]: tmp-crun.rCmGZB.mount: Deactivated successfully. Feb 20 04:56:26 localhost podman[320449]: 2026-02-20 09:56:26.865597151 +0000 UTC m=+0.096478805 container cleanup 7180fb70bfe44957ad30038bd25fe46ae9e37d506c346fee5e064e5babf306c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-22b9494b-a4bd-4626-ac7c-7b5aed75fd87, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:56:26 localhost systemd[1]: libpod-conmon-7180fb70bfe44957ad30038bd25fe46ae9e37d506c346fee5e064e5babf306c4.scope: Deactivated successfully. 
Feb 20 04:56:26 localhost podman[320450]: 2026-02-20 09:56:26.908035219 +0000 UTC m=+0.131777903 container remove 7180fb70bfe44957ad30038bd25fe46ae9e37d506c346fee5e064e5babf306c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-22b9494b-a4bd-4626-ac7c-7b5aed75fd87, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Feb 20 04:56:27 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:56:27.322 263745 INFO neutron.agent.dhcp.agent [None req-f11a904b-b7fd-4856-8b28-3e666afa2d39 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:56:27 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:56:27.323 263745 INFO neutron.agent.dhcp.agent [None req-f11a904b-b7fd-4856-8b28-3e666afa2d39 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:56:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e. 
Feb 20 04:56:27 localhost podman[320478]: 2026-02-20 09:56:27.442922074 +0000 UTC m=+0.078732413 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 20 04:56:27 localhost podman[320478]: 2026-02-20 09:56:27.452315754 +0000 UTC m=+0.088126063 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , 
managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 20 04:56:27 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully. 
Feb 20 04:56:27 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:56:27.517 263745 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:56:27 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v306: 177 pgs: 177 active+clean; 146 MiB data, 802 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 15 KiB/s wr, 31 op/s Feb 20 04:56:27 localhost neutron_sriov_agent[256551]: 2026-02-20 09:56:27.558 2 INFO neutron.agent.securitygroups_rpc [None req-a6f56626-9080-4b48-8909-d5cbdaffd977 f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']#033[00m Feb 20 04:56:27 localhost systemd[1]: var-lib-containers-storage-overlay-17fe402b864cf21b5a6b41dc6d8eaa6662ff8cf735b7b3cc1abc178b94e68963-merged.mount: Deactivated successfully. Feb 20 04:56:27 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7180fb70bfe44957ad30038bd25fe46ae9e37d506c346fee5e064e5babf306c4-userdata-shm.mount: Deactivated successfully. Feb 20 04:56:27 localhost systemd[1]: run-netns-qdhcp\x2d22b9494b\x2da4bd\x2d4626\x2dac7c\x2d7b5aed75fd87.mount: Deactivated successfully. 
Feb 20 04:56:27 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:56:27 localhost nova_compute[280804]: 2026-02-20 09:56:27.985 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:28 localhost openstack_network_exporter[243776]: ERROR 09:56:28 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 20 04:56:28 localhost openstack_network_exporter[243776]: Feb 20 04:56:28 localhost openstack_network_exporter[243776]: ERROR 09:56:28 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 20 04:56:28 localhost openstack_network_exporter[243776]: Feb 20 04:56:28 localhost nova_compute[280804]: 2026-02-20 09:56:28.186 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:28 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:56:28.276 263745 INFO neutron.agent.linux.ip_lib [None req-537e3203-edc1-4141-8dd5-0f991dc9efe3 - - - - - -] Device tapc4bc1257-24 cannot be used as it has no MAC address#033[00m Feb 20 04:56:28 localhost nova_compute[280804]: 2026-02-20 09:56:28.300 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:28 localhost kernel: device tapc4bc1257-24 entered promiscuous mode Feb 20 04:56:28 localhost NetworkManager[5967]: [1771581388.3118] manager: (tapc4bc1257-24): new Generic device (/org/freedesktop/NetworkManager/Devices/44) Feb 20 04:56:28 localhost systemd-udevd[320510]: Network interface NamePolicy= disabled on kernel command line. 
Feb 20 04:56:28 localhost ovn_controller[155916]: 2026-02-20T09:56:28Z|00215|binding|INFO|Claiming lport c4bc1257-2433-4dc5-b4b1-8e986440f7d8 for this chassis. Feb 20 04:56:28 localhost ovn_controller[155916]: 2026-02-20T09:56:28Z|00216|binding|INFO|c4bc1257-2433-4dc5-b4b1-8e986440f7d8: Claiming unknown Feb 20 04:56:28 localhost nova_compute[280804]: 2026-02-20 09:56:28.314 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:28 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:28.323 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp77c820b4-9646-5dae-aabe-b13a62b01d06-34dc61c2-2cd5-48a1-a54d-350e15f73770', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34dc61c2-2cd5-48a1-a54d-350e15f73770', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '62dd1d7e0cf547678612304aba2895e2', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1b4d5592-ecf2-48cc-b3b1-c6ba46f9e5e6, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=c4bc1257-2433-4dc5-b4b1-8e986440f7d8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:56:28 localhost 
ovn_metadata_agent[161761]: 2026-02-20 09:56:28.325 161766 INFO neutron.agent.ovn.metadata.agent [-] Port c4bc1257-2433-4dc5-b4b1-8e986440f7d8 in datapath 34dc61c2-2cd5-48a1-a54d-350e15f73770 bound to our chassis#033[00m Feb 20 04:56:28 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:28.326 161766 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 34dc61c2-2cd5-48a1-a54d-350e15f73770 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 20 04:56:28 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:28.330 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[d57814a6-d339-4868-bbe9-80d07148f7cd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:56:28 localhost journal[229367]: ethtool ioctl error on tapc4bc1257-24: No such device Feb 20 04:56:28 localhost nova_compute[280804]: 2026-02-20 09:56:28.340 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:28 localhost journal[229367]: ethtool ioctl error on tapc4bc1257-24: No such device Feb 20 04:56:28 localhost ovn_controller[155916]: 2026-02-20T09:56:28Z|00217|binding|INFO|Setting lport c4bc1257-2433-4dc5-b4b1-8e986440f7d8 ovn-installed in OVS Feb 20 04:56:28 localhost ovn_controller[155916]: 2026-02-20T09:56:28Z|00218|binding|INFO|Setting lport c4bc1257-2433-4dc5-b4b1-8e986440f7d8 up in Southbound Feb 20 04:56:28 localhost nova_compute[280804]: 2026-02-20 09:56:28.346 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:28 localhost nova_compute[280804]: 2026-02-20 09:56:28.349 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:28 localhost journal[229367]: ethtool ioctl error on tapc4bc1257-24: No such device Feb 20 04:56:28 localhost journal[229367]: ethtool ioctl error on tapc4bc1257-24: No such device Feb 20 04:56:28 localhost journal[229367]: ethtool ioctl error on tapc4bc1257-24: No such device Feb 20 04:56:28 localhost journal[229367]: ethtool ioctl error on tapc4bc1257-24: No such device Feb 20 04:56:28 localhost journal[229367]: ethtool ioctl error on tapc4bc1257-24: No such device Feb 20 04:56:28 localhost journal[229367]: ethtool ioctl error on tapc4bc1257-24: No such device Feb 20 04:56:28 localhost nova_compute[280804]: 2026-02-20 09:56:28.389 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:28 localhost nova_compute[280804]: 2026-02-20 09:56:28.422 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:28 localhost neutron_sriov_agent[256551]: 2026-02-20 09:56:28.587 2 INFO neutron.agent.securitygroups_rpc [None req-6a8272e4-f5a1-42d2-a801-cea63c76a8af f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']#033[00m Feb 20 04:56:28 localhost neutron_sriov_agent[256551]: 2026-02-20 09:56:28.898 2 INFO neutron.agent.securitygroups_rpc [None req-222fdb52-3334-45ab-8f45-945b32b8d031 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['b2e5856c-f1df-4bbc-8f9c-41698aa249c6']#033[00m Feb 20 04:56:29 localhost systemd[1]: tmp-crun.uV7Es5.mount: Deactivated successfully. 
Feb 20 04:56:29 localhost podman[320576]: 2026-02-20 09:56:29.118302449 +0000 UTC m=+0.066135439 container kill 0ce51f708f4cf144a463deb0544015f22450b8ec300338efabfa1534ebb2548b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-11635c27-2f1f-4cdc-b82c-c6286cc4d35d, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:56:29 localhost dnsmasq[319650]: read /var/lib/neutron/dhcp/11635c27-2f1f-4cdc-b82c-c6286cc4d35d/addn_hosts - 0 addresses Feb 20 04:56:29 localhost dnsmasq-dhcp[319650]: read /var/lib/neutron/dhcp/11635c27-2f1f-4cdc-b82c-c6286cc4d35d/host Feb 20 04:56:29 localhost dnsmasq-dhcp[319650]: read /var/lib/neutron/dhcp/11635c27-2f1f-4cdc-b82c-c6286cc4d35d/opts Feb 20 04:56:29 localhost podman[320615]: Feb 20 04:56:29 localhost podman[320615]: 2026-02-20 09:56:29.318371796 +0000 UTC m=+0.100741248 container create e928c0d022757b006281548a3875e3fb44b7a1ddbd4e57fba884064b0cc5cd8e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-34dc61c2-2cd5-48a1-a54d-350e15f73770, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2) Feb 20 04:56:29 localhost nova_compute[280804]: 2026-02-20 09:56:29.356 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:29 localhost kernel: device tapea48be00-a2 
left promiscuous mode Feb 20 04:56:29 localhost ovn_controller[155916]: 2026-02-20T09:56:29Z|00219|binding|INFO|Releasing lport ea48be00-a2cd-4321-9150-d0fc187f96bf from this chassis (sb_readonly=0) Feb 20 04:56:29 localhost ovn_controller[155916]: 2026-02-20T09:56:29Z|00220|binding|INFO|Setting lport ea48be00-a2cd-4321-9150-d0fc187f96bf down in Southbound Feb 20 04:56:29 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:29.368 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.101.0.3/28', 'neutron:device_id': 'dhcp77c820b4-9646-5dae-aabe-b13a62b01d06-11635c27-2f1f-4cdc-b82c-c6286cc4d35d', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-11635c27-2f1f-4cdc-b82c-c6286cc4d35d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8aa5b5a34cfe458d96fea87261361db1', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625202.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc8e7d65-d3d5-471b-9cfb-e06b83af88aa, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=ea48be00-a2cd-4321-9150-d0fc187f96bf) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:56:29 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:29.370 161766 INFO neutron.agent.ovn.metadata.agent [-] Port 
ea48be00-a2cd-4321-9150-d0fc187f96bf in datapath 11635c27-2f1f-4cdc-b82c-c6286cc4d35d unbound from our chassis#033[00m Feb 20 04:56:29 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:29.372 161766 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 11635c27-2f1f-4cdc-b82c-c6286cc4d35d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 20 04:56:29 localhost podman[320615]: 2026-02-20 09:56:29.273479113 +0000 UTC m=+0.055848595 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 20 04:56:29 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:29.373 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[8d9f9e05-8ba4-4e13-8a67-f8230e945cd6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:56:29 localhost systemd[1]: Started libpod-conmon-e928c0d022757b006281548a3875e3fb44b7a1ddbd4e57fba884064b0cc5cd8e.scope. Feb 20 04:56:29 localhost nova_compute[280804]: 2026-02-20 09:56:29.389 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:29 localhost systemd[1]: Started libcrun container. 
Feb 20 04:56:29 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ea4906542ed4e2ec849c3f88a420a45081cc28f0b788ad127ab64ee6cf6d626/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 20 04:56:29 localhost podman[320615]: 2026-02-20 09:56:29.417883981 +0000 UTC m=+0.200253443 container init e928c0d022757b006281548a3875e3fb44b7a1ddbd4e57fba884064b0cc5cd8e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-34dc61c2-2cd5-48a1-a54d-350e15f73770, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Feb 20 04:56:29 localhost podman[320615]: 2026-02-20 09:56:29.427522447 +0000 UTC m=+0.209891909 container start e928c0d022757b006281548a3875e3fb44b7a1ddbd4e57fba884064b0cc5cd8e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-34dc61c2-2cd5-48a1-a54d-350e15f73770, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:56:29 localhost dnsmasq[320639]: started, version 2.85 cachesize 150 Feb 20 04:56:29 localhost dnsmasq[320639]: DNS service limited to local subnets Feb 20 04:56:29 localhost dnsmasq[320639]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 20 04:56:29 localhost dnsmasq[320639]: warning: no upstream servers 
configured Feb 20 04:56:29 localhost dnsmasq-dhcp[320639]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 20 04:56:29 localhost dnsmasq[320639]: read /var/lib/neutron/dhcp/34dc61c2-2cd5-48a1-a54d-350e15f73770/addn_hosts - 0 addresses Feb 20 04:56:29 localhost dnsmasq-dhcp[320639]: read /var/lib/neutron/dhcp/34dc61c2-2cd5-48a1-a54d-350e15f73770/host Feb 20 04:56:29 localhost dnsmasq-dhcp[320639]: read /var/lib/neutron/dhcp/34dc61c2-2cd5-48a1-a54d-350e15f73770/opts Feb 20 04:56:29 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v307: 177 pgs: 177 active+clean; 146 MiB data, 802 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 19 KiB/s wr, 32 op/s Feb 20 04:56:29 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:56:29.553 263745 INFO neutron.agent.dhcp.agent [None req-c383b7db-fc27-4d75-ae87-a69e15a3fa93 - - - - - -] DHCP configuration for ports {'dee4bf28-462f-4e5a-bb37-08fba06228d7'} is completed#033[00m Feb 20 04:56:29 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "da4b4a91-1d53-4396-80dd-4a8521885337", "format": "json"}]: dispatch Feb 20 04:56:29 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:da4b4a91-1d53-4396-80dd-4a8521885337, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 04:56:29 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:da4b4a91-1d53-4396-80dd-4a8521885337, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 04:56:29 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'da4b4a91-1d53-4396-80dd-4a8521885337' of type subvolume Feb 20 04:56:29 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:56:29.621+0000 7f74524d4640 -1 mgr.server 
reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'da4b4a91-1d53-4396-80dd-4a8521885337' of type subvolume Feb 20 04:56:29 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "da4b4a91-1d53-4396-80dd-4a8521885337", "force": true, "format": "json"}]: dispatch Feb 20 04:56:29 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:da4b4a91-1d53-4396-80dd-4a8521885337, vol_name:cephfs) < "" Feb 20 04:56:29 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/da4b4a91-1d53-4396-80dd-4a8521885337'' moved to trashcan Feb 20 04:56:29 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 04:56:29 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:da4b4a91-1d53-4396-80dd-4a8521885337, vol_name:cephfs) < "" Feb 20 04:56:30 localhost nova_compute[280804]: 2026-02-20 09:56:30.018 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:30 localhost dnsmasq[319650]: exiting on receipt of SIGTERM Feb 20 04:56:30 localhost podman[320655]: 2026-02-20 09:56:30.100165563 +0000 UTC m=+0.071245024 container kill 0ce51f708f4cf144a463deb0544015f22450b8ec300338efabfa1534ebb2548b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-11635c27-2f1f-4cdc-b82c-c6286cc4d35d, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true) Feb 20 04:56:30 localhost systemd[1]: libpod-0ce51f708f4cf144a463deb0544015f22450b8ec300338efabfa1534ebb2548b.scope: Deactivated successfully. Feb 20 04:56:30 localhost podman[320670]: 2026-02-20 09:56:30.158309898 +0000 UTC m=+0.043152867 container died 0ce51f708f4cf144a463deb0544015f22450b8ec300338efabfa1534ebb2548b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-11635c27-2f1f-4cdc-b82c-c6286cc4d35d, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:56:30 localhost systemd[1]: tmp-crun.9x8E8R.mount: Deactivated successfully. Feb 20 04:56:30 localhost podman[320670]: 2026-02-20 09:56:30.196812652 +0000 UTC m=+0.081655621 container cleanup 0ce51f708f4cf144a463deb0544015f22450b8ec300338efabfa1534ebb2548b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-11635c27-2f1f-4cdc-b82c-c6286cc4d35d, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Feb 20 04:56:30 localhost systemd[1]: libpod-conmon-0ce51f708f4cf144a463deb0544015f22450b8ec300338efabfa1534ebb2548b.scope: Deactivated successfully. 
Feb 20 04:56:30 localhost podman[320671]: 2026-02-20 09:56:30.243697717 +0000 UTC m=+0.119913997 container remove 0ce51f708f4cf144a463deb0544015f22450b8ec300338efabfa1534ebb2548b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-11635c27-2f1f-4cdc-b82c-c6286cc4d35d, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:56:30 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:56:30.270 263745 INFO neutron.agent.dhcp.agent [None req-9379f427-a755-4357-b1f9-4caec579ac04 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:56:30 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:56:30.271 263745 INFO neutron.agent.dhcp.agent [None req-9379f427-a755-4357-b1f9-4caec579ac04 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:56:30 localhost dnsmasq[320639]: exiting on receipt of SIGTERM Feb 20 04:56:30 localhost systemd[1]: libpod-e928c0d022757b006281548a3875e3fb44b7a1ddbd4e57fba884064b0cc5cd8e.scope: Deactivated successfully. 
Feb 20 04:56:30 localhost podman[320714]: 2026-02-20 09:56:30.407197173 +0000 UTC m=+0.066857808 container kill e928c0d022757b006281548a3875e3fb44b7a1ddbd4e57fba884064b0cc5cd8e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-34dc61c2-2cd5-48a1-a54d-350e15f73770, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:56:30 localhost nova_compute[280804]: 2026-02-20 09:56:30.448 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:30 localhost podman[320727]: 2026-02-20 09:56:30.482088573 +0000 UTC m=+0.061415393 container died e928c0d022757b006281548a3875e3fb44b7a1ddbd4e57fba884064b0cc5cd8e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-34dc61c2-2cd5-48a1-a54d-350e15f73770, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2) Feb 20 04:56:30 localhost podman[320727]: 2026-02-20 09:56:30.509932333 +0000 UTC m=+0.089259113 container cleanup e928c0d022757b006281548a3875e3fb44b7a1ddbd4e57fba884064b0cc5cd8e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-34dc61c2-2cd5-48a1-a54d-350e15f73770, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:56:30 localhost systemd[1]: libpod-conmon-e928c0d022757b006281548a3875e3fb44b7a1ddbd4e57fba884064b0cc5cd8e.scope: Deactivated successfully. Feb 20 04:56:30 localhost podman[320734]: 2026-02-20 09:56:30.555693969 +0000 UTC m=+0.122421214 container remove e928c0d022757b006281548a3875e3fb44b7a1ddbd4e57fba884064b0cc5cd8e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-34dc61c2-2cd5-48a1-a54d-350e15f73770, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true) Feb 20 04:56:31 localhost systemd[1]: var-lib-containers-storage-overlay-3ea4906542ed4e2ec849c3f88a420a45081cc28f0b788ad127ab64ee6cf6d626-merged.mount: Deactivated successfully. Feb 20 04:56:31 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e928c0d022757b006281548a3875e3fb44b7a1ddbd4e57fba884064b0cc5cd8e-userdata-shm.mount: Deactivated successfully. Feb 20 04:56:31 localhost systemd[1]: var-lib-containers-storage-overlay-2cecfd5375396ec077518f56ec9d4e6b7688786b1044734124129e9e3b994a25-merged.mount: Deactivated successfully. Feb 20 04:56:31 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0ce51f708f4cf144a463deb0544015f22450b8ec300338efabfa1534ebb2548b-userdata-shm.mount: Deactivated successfully. Feb 20 04:56:31 localhost systemd[1]: run-netns-qdhcp\x2d11635c27\x2d2f1f\x2d4cdc\x2db82c\x2dc6286cc4d35d.mount: Deactivated successfully. 
Feb 20 04:56:31 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v308: 177 pgs: 177 active+clean; 146 MiB data, 802 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 14 KiB/s wr, 30 op/s Feb 20 04:56:31 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:31.685 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ce:83:b0 10.100.0.18 10.100.0.3'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.3/28', 'neutron:device_id': 'ovnmeta-34dc61c2-2cd5-48a1-a54d-350e15f73770', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34dc61c2-2cd5-48a1-a54d-350e15f73770', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '62dd1d7e0cf547678612304aba2895e2', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1b4d5592-ecf2-48cc-b3b1-c6ba46f9e5e6, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=dee4bf28-462f-4e5a-bb37-08fba06228d7) old=Port_Binding(mac=['fa:16:3e:ce:83:b0 10.100.0.3'], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ovnmeta-34dc61c2-2cd5-48a1-a54d-350e15f73770', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34dc61c2-2cd5-48a1-a54d-350e15f73770', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '62dd1d7e0cf547678612304aba2895e2', 'neutron:revision_number': '2', 
'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:56:31 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:31.688 161766 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port dee4bf28-462f-4e5a-bb37-08fba06228d7 in datapath 34dc61c2-2cd5-48a1-a54d-350e15f73770 updated#033[00m Feb 20 04:56:31 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:31.691 161766 DEBUG neutron.agent.ovn.metadata.agent [-] Port ff3e8619-6a83-452e-abac-a678643b587c IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Feb 20 04:56:31 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:31.691 161766 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 34dc61c2-2cd5-48a1-a54d-350e15f73770, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 20 04:56:31 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:31.693 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[aad3b530-d59d-402c-a964-e8cbd436f286]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:56:31 localhost neutron_sriov_agent[256551]: 2026-02-20 09:56:31.797 2 INFO neutron.agent.securitygroups_rpc [None req-c16a47d9-8c3c-4273-8f35-4d2edcf8a46b f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']#033[00m Feb 20 04:56:32 localhost podman[320806]: Feb 20 04:56:32 localhost podman[320806]: 2026-02-20 09:56:32.799200573 +0000 UTC m=+0.090506737 container create 427b8015e241d8fcc7e764b141f2581769b61a04a041b7ecc6e6d602d41df340 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-34dc61c2-2cd5-48a1-a54d-350e15f73770, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127) Feb 20 04:56:32 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:56:32 localhost systemd[1]: Started libpod-conmon-427b8015e241d8fcc7e764b141f2581769b61a04a041b7ecc6e6d602d41df340.scope. Feb 20 04:56:32 localhost systemd[1]: Started libcrun container. Feb 20 04:56:32 localhost podman[320806]: 2026-02-20 09:56:32.757267708 +0000 UTC m=+0.048573872 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 20 04:56:32 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5f57c0c75078bf4958127caeef20809a60545cd1027bccd3e2a79cf0727d059/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 20 04:56:32 localhost podman[320806]: 2026-02-20 09:56:32.869340027 +0000 UTC m=+0.160646171 container init 427b8015e241d8fcc7e764b141f2581769b61a04a041b7ecc6e6d602d41df340 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-34dc61c2-2cd5-48a1-a54d-350e15f73770, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:56:32 localhost podman[320806]: 
2026-02-20 09:56:32.87888193 +0000 UTC m=+0.170188064 container start 427b8015e241d8fcc7e764b141f2581769b61a04a041b7ecc6e6d602d41df340 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-34dc61c2-2cd5-48a1-a54d-350e15f73770, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true) Feb 20 04:56:32 localhost dnsmasq[320825]: started, version 2.85 cachesize 150 Feb 20 04:56:32 localhost dnsmasq[320825]: DNS service limited to local subnets Feb 20 04:56:32 localhost dnsmasq[320825]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 20 04:56:32 localhost dnsmasq[320825]: warning: no upstream servers configured Feb 20 04:56:32 localhost dnsmasq-dhcp[320825]: DHCP, static leases only on 10.100.0.16, lease time 1d Feb 20 04:56:32 localhost dnsmasq-dhcp[320825]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 20 04:56:32 localhost dnsmasq[320825]: read /var/lib/neutron/dhcp/34dc61c2-2cd5-48a1-a54d-350e15f73770/addn_hosts - 0 addresses Feb 20 04:56:32 localhost dnsmasq-dhcp[320825]: read /var/lib/neutron/dhcp/34dc61c2-2cd5-48a1-a54d-350e15f73770/host Feb 20 04:56:32 localhost dnsmasq-dhcp[320825]: read /var/lib/neutron/dhcp/34dc61c2-2cd5-48a1-a54d-350e15f73770/opts Feb 20 04:56:32 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "9754040c-efe1-4b31-bb60-2e1a242177cd", "snap_name": "81bbf3cf-dd05-49ca-a680-c731aa18f72b", "format": "json"}]: dispatch Feb 20 04:56:32 localhost 
ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:81bbf3cf-dd05-49ca-a680-c731aa18f72b, sub_name:9754040c-efe1-4b31-bb60-2e1a242177cd, vol_name:cephfs) < "" Feb 20 04:56:33 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:81bbf3cf-dd05-49ca-a680-c731aa18f72b, sub_name:9754040c-efe1-4b31-bb60-2e1a242177cd, vol_name:cephfs) < "" Feb 20 04:56:33 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:56:33.092 263745 INFO neutron.agent.dhcp.agent [None req-234ea4d0-1884-4897-b04d-aff7ea385f82 - - - - - -] DHCP configuration for ports {'dee4bf28-462f-4e5a-bb37-08fba06228d7', 'c4bc1257-2433-4dc5-b4b1-8e986440f7d8'} is completed#033[00m Feb 20 04:56:33 localhost neutron_sriov_agent[256551]: 2026-02-20 09:56:33.112 2 INFO neutron.agent.securitygroups_rpc [None req-b75820d6-6baf-4494-b7b1-8acd63dcbbd9 f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']#033[00m Feb 20 04:56:33 localhost neutron_sriov_agent[256551]: 2026-02-20 09:56:33.145 2 INFO neutron.agent.securitygroups_rpc [None req-8fddf0ed-4d67-47fd-a98a-ec6a15c12895 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['72ed92b6-af24-4274-854b-a52220405faf']#033[00m Feb 20 04:56:33 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:56:33.199 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:56:32Z, description=, device_id=, device_owner=, dns_assignment=[, ], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[, ], 
id=19087a30-fd15-42f1-b0fc-5cae591cd8b7, ip_allocation=immediate, mac_address=fa:16:3e:64:78:be, name=tempest-PortsTestJSON-237399456, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:56:25Z, description=, dns_domain=, id=34dc61c2-2cd5-48a1-a54d-350e15f73770, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsTestJSON-717230830, port_security_enabled=True, project_id=62dd1d7e0cf547678612304aba2895e2, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=4041, qos_policy_id=None, revision_number=3, router:external=False, shared=False, standard_attr_id=2312, status=ACTIVE, subnets=['bb8ca766-99e6-495d-81cd-1fee4418e257', 'd52f82c4-f67a-4371-94c3-2e2c4373e82c'], tags=[], tenant_id=62dd1d7e0cf547678612304aba2895e2, updated_at=2026-02-20T09:56:29Z, vlan_transparent=None, network_id=34dc61c2-2cd5-48a1-a54d-350e15f73770, port_security_enabled=True, project_id=62dd1d7e0cf547678612304aba2895e2, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['72ed92b6-af24-4274-854b-a52220405faf'], standard_attr_id=2333, status=DOWN, tags=[], tenant_id=62dd1d7e0cf547678612304aba2895e2, updated_at=2026-02-20T09:56:32Z on network 34dc61c2-2cd5-48a1-a54d-350e15f73770#033[00m Feb 20 04:56:33 localhost nova_compute[280804]: 2026-02-20 09:56:33.211 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:33 localhost dnsmasq[320825]: read /var/lib/neutron/dhcp/34dc61c2-2cd5-48a1-a54d-350e15f73770/addn_hosts - 2 addresses Feb 20 04:56:33 localhost dnsmasq-dhcp[320825]: read /var/lib/neutron/dhcp/34dc61c2-2cd5-48a1-a54d-350e15f73770/host Feb 20 04:56:33 localhost podman[320844]: 2026-02-20 09:56:33.437671921 +0000 UTC m=+0.058968198 container kill 427b8015e241d8fcc7e764b141f2581769b61a04a041b7ecc6e6d602d41df340 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-34dc61c2-2cd5-48a1-a54d-350e15f73770, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127) Feb 20 04:56:33 localhost dnsmasq-dhcp[320825]: read /var/lib/neutron/dhcp/34dc61c2-2cd5-48a1-a54d-350e15f73770/opts Feb 20 04:56:33 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v309: 177 pgs: 177 active+clean; 146 MiB data, 802 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 14 KiB/s wr, 16 op/s Feb 20 04:56:33 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:56:33.644 263745 INFO neutron.agent.dhcp.agent [None req-2e384f4c-06d6-4bcb-b42d-3e5cecd221e0 - - - - - -] DHCP configuration for ports {'19087a30-fd15-42f1-b0fc-5cae591cd8b7'} is completed#033[00m Feb 20 04:56:33 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e152 do_prune osdmap full prune enabled Feb 20 04:56:33 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e153 e153: 6 total, 6 up, 6 in Feb 20 04:56:33 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e153: 6 total, 6 up, 6 in Feb 20 04:56:34 localhost nova_compute[280804]: 2026-02-20 09:56:34.128 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:56:34 localhost neutron_sriov_agent[256551]: 2026-02-20 09:56:34.446 2 INFO neutron.agent.securitygroups_rpc [None req-ceb6fbf0-e236-46d5-ab31-4b9208acd398 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member 
updated ['72ed92b6-af24-4274-854b-a52220405faf']#033[00m Feb 20 04:56:34 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:56:34.504 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:56:32Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=19087a30-fd15-42f1-b0fc-5cae591cd8b7, ip_allocation=immediate, mac_address=fa:16:3e:64:78:be, name=tempest-PortsTestJSON-237399456, network_id=34dc61c2-2cd5-48a1-a54d-350e15f73770, port_security_enabled=True, project_id=62dd1d7e0cf547678612304aba2895e2, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['72ed92b6-af24-4274-854b-a52220405faf'], standard_attr_id=2333, status=DOWN, tags=[], tenant_id=62dd1d7e0cf547678612304aba2895e2, updated_at=2026-02-20T09:56:33Z on network 34dc61c2-2cd5-48a1-a54d-350e15f73770#033[00m Feb 20 04:56:34 localhost dnsmasq[320825]: read /var/lib/neutron/dhcp/34dc61c2-2cd5-48a1-a54d-350e15f73770/addn_hosts - 1 addresses Feb 20 04:56:34 localhost dnsmasq-dhcp[320825]: read /var/lib/neutron/dhcp/34dc61c2-2cd5-48a1-a54d-350e15f73770/host Feb 20 04:56:34 localhost dnsmasq-dhcp[320825]: read /var/lib/neutron/dhcp/34dc61c2-2cd5-48a1-a54d-350e15f73770/opts Feb 20 04:56:34 localhost podman[320881]: 2026-02-20 09:56:34.726518953 +0000 UTC m=+0.047304319 container kill 427b8015e241d8fcc7e764b141f2581769b61a04a041b7ecc6e6d602d41df340 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-34dc61c2-2cd5-48a1-a54d-350e15f73770, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127) Feb 20 04:56:34 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e153 do_prune osdmap full prune enabled Feb 20 04:56:34 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e154 e154: 6 total, 6 up, 6 in Feb 20 04:56:34 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e154: 6 total, 6 up, 6 in Feb 20 04:56:35 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:56:35.010 263745 INFO neutron.agent.dhcp.agent [None req-5e276d85-7c28-4a7f-945e-eb4af937c6db - - - - - -] DHCP configuration for ports {'19087a30-fd15-42f1-b0fc-5cae591cd8b7'} is completed#033[00m Feb 20 04:56:35 localhost nova_compute[280804]: 2026-02-20 09:56:35.021 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:35 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v312: 177 pgs: 177 active+clean; 146 MiB data, 802 MiB used, 41 GiB / 42 GiB avail; 4.6 KiB/s rd, 18 KiB/s wr, 11 op/s Feb 20 04:56:35 localhost neutron_sriov_agent[256551]: 2026-02-20 09:56:35.743 2 INFO neutron.agent.securitygroups_rpc [None req-21b5cfcb-ef7f-4dc6-82f5-46fe7ab7fc9a 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['72ed92b6-af24-4274-854b-a52220405faf']#033[00m Feb 20 04:56:35 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:56:35.775 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:56:32Z, description=, device_id=, device_owner=, dns_assignment=[, ], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[, ], 
id=19087a30-fd15-42f1-b0fc-5cae591cd8b7, ip_allocation=immediate, mac_address=fa:16:3e:64:78:be, name=tempest-PortsTestJSON-237399456, network_id=34dc61c2-2cd5-48a1-a54d-350e15f73770, port_security_enabled=True, project_id=62dd1d7e0cf547678612304aba2895e2, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=3, security_groups=['72ed92b6-af24-4274-854b-a52220405faf'], standard_attr_id=2333, status=DOWN, tags=[], tenant_id=62dd1d7e0cf547678612304aba2895e2, updated_at=2026-02-20T09:56:35Z on network 34dc61c2-2cd5-48a1-a54d-350e15f73770#033[00m Feb 20 04:56:35 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e154 do_prune osdmap full prune enabled Feb 20 04:56:35 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e155 e155: 6 total, 6 up, 6 in Feb 20 04:56:35 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e155: 6 total, 6 up, 6 in Feb 20 04:56:36 localhost dnsmasq[320825]: read /var/lib/neutron/dhcp/34dc61c2-2cd5-48a1-a54d-350e15f73770/addn_hosts - 2 addresses Feb 20 04:56:36 localhost dnsmasq-dhcp[320825]: read /var/lib/neutron/dhcp/34dc61c2-2cd5-48a1-a54d-350e15f73770/host Feb 20 04:56:36 localhost podman[320920]: 2026-02-20 09:56:36.027291902 +0000 UTC m=+0.044315309 container kill 427b8015e241d8fcc7e764b141f2581769b61a04a041b7ecc6e6d602d41df340 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-34dc61c2-2cd5-48a1-a54d-350e15f73770, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:56:36 localhost dnsmasq-dhcp[320825]: read /var/lib/neutron/dhcp/34dc61c2-2cd5-48a1-a54d-350e15f73770/opts Feb 20 04:56:36 localhost neutron_dhcp_agent[263741]: 
2026-02-20 09:56:36.282 263745 INFO neutron.agent.dhcp.agent [None req-34f7e094-60cd-43d9-8e07-c6f2d3f0d91b - - - - - -] DHCP configuration for ports {'19087a30-fd15-42f1-b0fc-5cae591cd8b7'} is completed#033[00m Feb 20 04:56:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c. Feb 20 04:56:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217. Feb 20 04:56:36 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "9754040c-efe1-4b31-bb60-2e1a242177cd", "snap_name": "81bbf3cf-dd05-49ca-a680-c731aa18f72b_083f2f39-54db-4760-baba-9aefd6c5b6fc", "force": true, "format": "json"}]: dispatch Feb 20 04:56:36 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:81bbf3cf-dd05-49ca-a680-c731aa18f72b_083f2f39-54db-4760-baba-9aefd6c5b6fc, sub_name:9754040c-efe1-4b31-bb60-2e1a242177cd, vol_name:cephfs) < "" Feb 20 04:56:36 localhost podman[320943]: 2026-02-20 09:56:36.440553225 +0000 UTC m=+0.070180347 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible) Feb 20 04:56:36 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9754040c-efe1-4b31-bb60-2e1a242177cd/.meta.tmp' Feb 20 04:56:36 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9754040c-efe1-4b31-bb60-2e1a242177cd/.meta.tmp' to config b'/volumes/_nogroup/9754040c-efe1-4b31-bb60-2e1a242177cd/.meta' Feb 20 04:56:36 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:81bbf3cf-dd05-49ca-a680-c731aa18f72b_083f2f39-54db-4760-baba-9aefd6c5b6fc, sub_name:9754040c-efe1-4b31-bb60-2e1a242177cd, 
vol_name:cephfs) < "" Feb 20 04:56:36 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "9754040c-efe1-4b31-bb60-2e1a242177cd", "snap_name": "81bbf3cf-dd05-49ca-a680-c731aa18f72b", "force": true, "format": "json"}]: dispatch Feb 20 04:56:36 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:81bbf3cf-dd05-49ca-a680-c731aa18f72b, sub_name:9754040c-efe1-4b31-bb60-2e1a242177cd, vol_name:cephfs) < "" Feb 20 04:56:36 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9754040c-efe1-4b31-bb60-2e1a242177cd/.meta.tmp' Feb 20 04:56:36 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9754040c-efe1-4b31-bb60-2e1a242177cd/.meta.tmp' to config b'/volumes/_nogroup/9754040c-efe1-4b31-bb60-2e1a242177cd/.meta' Feb 20 04:56:36 localhost podman[320942]: 2026-02-20 09:56:36.509743823 +0000 UTC m=+0.139226480 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, io.buildah.version=1.33.7, release=1770267347, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck 
openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, io.openshift.expose-services=, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=9.7, maintainer=Red Hat, Inc., build-date=2026-02-05T04:57:10Z, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, org.opencontainers.image.created=2026-02-05T04:57:10Z, name=ubi9/ubi-minimal, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=ubi9-minimal-container) Feb 20 04:56:36 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:81bbf3cf-dd05-49ca-a680-c731aa18f72b, sub_name:9754040c-efe1-4b31-bb60-2e1a242177cd, vol_name:cephfs) < "" Feb 20 04:56:36 localhost podman[320942]: 2026-02-20 09:56:36.524739552 +0000 UTC m=+0.154222229 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, build-date=2026-02-05T04:57:10Z, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., distribution-scope=public, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=9.7, url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, name=ubi9/ubi-minimal, container_name=openstack_network_exporter, architecture=x86_64, managed_by=edpm_ansible, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., io.openshift.expose-services=) Feb 20 04:56:36 localhost podman[320943]: 2026-02-20 09:56:36.525010669 +0000 
UTC m=+0.154637791 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:56:36 localhost systemd[1]: 
0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. Feb 20 04:56:36 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully. Feb 20 04:56:36 localhost neutron_sriov_agent[256551]: 2026-02-20 09:56:36.677 2 INFO neutron.agent.securitygroups_rpc [None req-5f75f844-b31b-4010-9a93-efcf0b2c4eb8 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['72ed92b6-af24-4274-854b-a52220405faf']#033[00m Feb 20 04:56:36 localhost neutron_sriov_agent[256551]: 2026-02-20 09:56:36.901 2 INFO neutron.agent.securitygroups_rpc [None req-a5e98a26-c124-4fdc-9abc-b12558eae8ef f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']#033[00m Feb 20 04:56:36 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e155 do_prune osdmap full prune enabled Feb 20 04:56:36 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e156 e156: 6 total, 6 up, 6 in Feb 20 04:56:36 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e156: 6 total, 6 up, 6 in Feb 20 04:56:36 localhost dnsmasq[320825]: read /var/lib/neutron/dhcp/34dc61c2-2cd5-48a1-a54d-350e15f73770/addn_hosts - 0 addresses Feb 20 04:56:36 localhost dnsmasq-dhcp[320825]: read /var/lib/neutron/dhcp/34dc61c2-2cd5-48a1-a54d-350e15f73770/host Feb 20 04:56:36 localhost dnsmasq-dhcp[320825]: read /var/lib/neutron/dhcp/34dc61c2-2cd5-48a1-a54d-350e15f73770/opts Feb 20 04:56:36 localhost podman[320998]: 2026-02-20 09:56:36.982297842 +0000 UTC m=+0.073494694 container kill 427b8015e241d8fcc7e764b141f2581769b61a04a041b7ecc6e6d602d41df340 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-34dc61c2-2cd5-48a1-a54d-350e15f73770, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2) Feb 20 04:56:37 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v315: 177 pgs: 177 active+clean; 146 MiB data, 802 MiB used, 41 GiB / 42 GiB avail; 8.7 KiB/s rd, 23 KiB/s wr, 18 op/s Feb 20 04:56:37 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:56:38 localhost neutron_sriov_agent[256551]: 2026-02-20 09:56:38.027 2 INFO neutron.agent.securitygroups_rpc [None req-a827ab5a-214a-4a1d-a84d-cac050b991d6 f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']#033[00m Feb 20 04:56:38 localhost dnsmasq[320825]: exiting on receipt of SIGTERM Feb 20 04:56:38 localhost systemd[1]: libpod-427b8015e241d8fcc7e764b141f2581769b61a04a041b7ecc6e6d602d41df340.scope: Deactivated successfully. 
Feb 20 04:56:38 localhost podman[321037]: 2026-02-20 09:56:38.122911984 +0000 UTC m=+0.066071537 container kill 427b8015e241d8fcc7e764b141f2581769b61a04a041b7ecc6e6d602d41df340 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-34dc61c2-2cd5-48a1-a54d-350e15f73770, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2) Feb 20 04:56:38 localhost podman[321049]: 2026-02-20 09:56:38.19350204 +0000 UTC m=+0.061225798 container died 427b8015e241d8fcc7e764b141f2581769b61a04a041b7ecc6e6d602d41df340 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-34dc61c2-2cd5-48a1-a54d-350e15f73770, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127) Feb 20 04:56:38 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-427b8015e241d8fcc7e764b141f2581769b61a04a041b7ecc6e6d602d41df340-userdata-shm.mount: Deactivated successfully. Feb 20 04:56:38 localhost systemd[1]: var-lib-containers-storage-overlay-b5f57c0c75078bf4958127caeef20809a60545cd1027bccd3e2a79cf0727d059-merged.mount: Deactivated successfully. 
Feb 20 04:56:38 localhost nova_compute[280804]: 2026-02-20 09:56:38.253 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:38 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f9ac42b7-680c-41fc-8784-6176baa738f7", "format": "json"}]: dispatch Feb 20 04:56:38 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f9ac42b7-680c-41fc-8784-6176baa738f7, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 04:56:38 localhost podman[321049]: 2026-02-20 09:56:38.30936906 +0000 UTC m=+0.177092768 container cleanup 427b8015e241d8fcc7e764b141f2581769b61a04a041b7ecc6e6d602d41df340 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-34dc61c2-2cd5-48a1-a54d-350e15f73770, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:56:38 localhost systemd[1]: libpod-conmon-427b8015e241d8fcc7e764b141f2581769b61a04a041b7ecc6e6d602d41df340.scope: Deactivated successfully. 
Feb 20 04:56:38 localhost podman[321056]: 2026-02-20 09:56:38.332963146 +0000 UTC m=+0.187222686 container remove 427b8015e241d8fcc7e764b141f2581769b61a04a041b7ecc6e6d602d41df340 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-34dc61c2-2cd5-48a1-a54d-350e15f73770, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true) Feb 20 04:56:38 localhost ovn_controller[155916]: 2026-02-20T09:56:38Z|00221|binding|INFO|Removing iface tapc4bc1257-24 ovn-installed in OVS Feb 20 04:56:38 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:38.942 161766 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port ff3e8619-6a83-452e-abac-a678643b587c with type ""#033[00m Feb 20 04:56:38 localhost ovn_controller[155916]: 2026-02-20T09:56:38Z|00222|binding|INFO|Removing lport c4bc1257-2433-4dc5-b4b1-8e986440f7d8 ovn-installed in OVS Feb 20 04:56:38 localhost nova_compute[280804]: 2026-02-20 09:56:38.944 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:38 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:38.944 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.19/28 10.100.0.2/28', 'neutron:device_id': 
'dhcp77c820b4-9646-5dae-aabe-b13a62b01d06-34dc61c2-2cd5-48a1-a54d-350e15f73770', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34dc61c2-2cd5-48a1-a54d-350e15f73770', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '62dd1d7e0cf547678612304aba2895e2', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625202.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1b4d5592-ecf2-48cc-b3b1-c6ba46f9e5e6, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=c4bc1257-2433-4dc5-b4b1-8e986440f7d8) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:56:38 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:38.947 161766 INFO neutron.agent.ovn.metadata.agent [-] Port c4bc1257-2433-4dc5-b4b1-8e986440f7d8 in datapath 34dc61c2-2cd5-48a1-a54d-350e15f73770 unbound from our chassis#033[00m Feb 20 04:56:38 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:38.949 161766 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 34dc61c2-2cd5-48a1-a54d-350e15f73770, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 20 04:56:38 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:38.951 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[387b8bb8-c8c5-4864-8124-404db61cc1f0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:56:38 localhost nova_compute[280804]: 2026-02-20 09:56:38.951 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:38 localhost ceph-mon[292786]: 
mon.np0005625202@0(leader).osd e156 do_prune osdmap full prune enabled Feb 20 04:56:38 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e157 e157: 6 total, 6 up, 6 in Feb 20 04:56:38 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e157: 6 total, 6 up, 6 in Feb 20 04:56:39 localhost podman[321127]: Feb 20 04:56:39 localhost podman[321127]: 2026-02-20 09:56:39.215947282 +0000 UTC m=+0.079765410 container create dd7e6139b2eba3d1b2a8ed0ac21b5d2527071d7486240f02a0a8adccdfa839fe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-34dc61c2-2cd5-48a1-a54d-350e15f73770, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:56:39 localhost systemd[1]: Started libpod-conmon-dd7e6139b2eba3d1b2a8ed0ac21b5d2527071d7486240f02a0a8adccdfa839fe.scope. Feb 20 04:56:39 localhost systemd[1]: tmp-crun.Sj2Fns.mount: Deactivated successfully. Feb 20 04:56:39 localhost podman[321127]: 2026-02-20 09:56:39.181494687 +0000 UTC m=+0.045312855 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 20 04:56:39 localhost systemd[1]: Started libcrun container. 
Feb 20 04:56:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39611c390a058405d8dc8e3211089c1f70f2f09f6d54fc2aa23b46f2ccced298/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 20 04:56:39 localhost podman[321127]: 2026-02-20 09:56:39.301748523 +0000 UTC m=+0.165566651 container init dd7e6139b2eba3d1b2a8ed0ac21b5d2527071d7486240f02a0a8adccdfa839fe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-34dc61c2-2cd5-48a1-a54d-350e15f73770, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Feb 20 04:56:39 localhost podman[321127]: 2026-02-20 09:56:39.31066803 +0000 UTC m=+0.174486168 container start dd7e6139b2eba3d1b2a8ed0ac21b5d2527071d7486240f02a0a8adccdfa839fe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-34dc61c2-2cd5-48a1-a54d-350e15f73770, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Feb 20 04:56:39 localhost dnsmasq[321145]: started, version 2.85 cachesize 150 Feb 20 04:56:39 localhost dnsmasq[321145]: DNS service limited to local subnets Feb 20 04:56:39 localhost dnsmasq[321145]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 20 04:56:39 localhost dnsmasq[321145]: warning: no upstream servers 
configured Feb 20 04:56:39 localhost dnsmasq-dhcp[321145]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 20 04:56:39 localhost dnsmasq[321145]: read /var/lib/neutron/dhcp/34dc61c2-2cd5-48a1-a54d-350e15f73770/addn_hosts - 0 addresses Feb 20 04:56:39 localhost dnsmasq-dhcp[321145]: read /var/lib/neutron/dhcp/34dc61c2-2cd5-48a1-a54d-350e15f73770/host Feb 20 04:56:39 localhost dnsmasq-dhcp[321145]: read /var/lib/neutron/dhcp/34dc61c2-2cd5-48a1-a54d-350e15f73770/opts Feb 20 04:56:39 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:56:39.434 263745 INFO neutron.agent.dhcp.agent [None req-219308aa-0dce-4bdd-982e-e0455573da32 - - - - - -] DHCP configuration for ports {'dee4bf28-462f-4e5a-bb37-08fba06228d7', 'c4bc1257-2433-4dc5-b4b1-8e986440f7d8'} is completed#033[00m Feb 20 04:56:39 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v317: 177 pgs: 177 active+clean; 146 MiB data, 802 MiB used, 41 GiB / 42 GiB avail; 45 KiB/s rd, 16 KiB/s wr, 65 op/s Feb 20 04:56:39 localhost dnsmasq[321145]: exiting on receipt of SIGTERM Feb 20 04:56:39 localhost podman[321163]: 2026-02-20 09:56:39.604127569 +0000 UTC m=+0.059453521 container kill dd7e6139b2eba3d1b2a8ed0ac21b5d2527071d7486240f02a0a8adccdfa839fe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-34dc61c2-2cd5-48a1-a54d-350e15f73770, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Feb 20 04:56:39 localhost systemd[1]: libpod-dd7e6139b2eba3d1b2a8ed0ac21b5d2527071d7486240f02a0a8adccdfa839fe.scope: Deactivated successfully. 
Feb 20 04:56:39 localhost podman[321175]: 2026-02-20 09:56:39.677979941 +0000 UTC m=+0.058336021 container died dd7e6139b2eba3d1b2a8ed0ac21b5d2527071d7486240f02a0a8adccdfa839fe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-34dc61c2-2cd5-48a1-a54d-350e15f73770, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Feb 20 04:56:39 localhost podman[321175]: 2026-02-20 09:56:39.706652163 +0000 UTC m=+0.087008203 container cleanup dd7e6139b2eba3d1b2a8ed0ac21b5d2527071d7486240f02a0a8adccdfa839fe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-34dc61c2-2cd5-48a1-a54d-350e15f73770, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:56:39 localhost systemd[1]: libpod-conmon-dd7e6139b2eba3d1b2a8ed0ac21b5d2527071d7486240f02a0a8adccdfa839fe.scope: Deactivated successfully. 
Feb 20 04:56:39 localhost podman[321177]: 2026-02-20 09:56:39.760757092 +0000 UTC m=+0.134946268 container remove dd7e6139b2eba3d1b2a8ed0ac21b5d2527071d7486240f02a0a8adccdfa839fe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-34dc61c2-2cd5-48a1-a54d-350e15f73770, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Feb 20 04:56:39 localhost nova_compute[280804]: 2026-02-20 09:56:39.802 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:39 localhost kernel: device tapc4bc1257-24 left promiscuous mode Feb 20 04:56:39 localhost nova_compute[280804]: 2026-02-20 09:56:39.812 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:39 localhost nova_compute[280804]: 2026-02-20 09:56:39.831 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:39 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:56:39.852 263745 INFO neutron.agent.dhcp.agent [None req-f0de5705-e06d-4b0b-a635-e8b66447e984 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:56:39 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:56:39.853 263745 INFO neutron.agent.dhcp.agent [None req-f0de5705-e06d-4b0b-a635-e8b66447e984 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:56:39 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e157 do_prune osdmap full prune enabled Feb 20 04:56:39 
localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e158 e158: 6 total, 6 up, 6 in Feb 20 04:56:39 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e158: 6 total, 6 up, 6 in Feb 20 04:56:40 localhost nova_compute[280804]: 2026-02-20 09:56:40.024 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:40 localhost systemd[1]: var-lib-containers-storage-overlay-39611c390a058405d8dc8e3211089c1f70f2f09f6d54fc2aa23b46f2ccced298-merged.mount: Deactivated successfully. Feb 20 04:56:40 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dd7e6139b2eba3d1b2a8ed0ac21b5d2527071d7486240f02a0a8adccdfa839fe-userdata-shm.mount: Deactivated successfully. Feb 20 04:56:40 localhost systemd[1]: run-netns-qdhcp\x2d34dc61c2\x2d2cd5\x2d48a1\x2da54d\x2d350e15f73770.mount: Deactivated successfully. Feb 20 04:56:41 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:56:41 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 04:56:41 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 20 04:56:41 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 04:56:41 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 20 04:56:41 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:56:41 localhost 
ceph-mgr[286565]: [progress INFO root] update: starting ev d76231ad-a5cb-450b-a588-d7aa0033c7ee (Updating node-proxy deployment (+3 -> 3)) Feb 20 04:56:41 localhost ceph-mgr[286565]: [progress INFO root] complete: finished ev d76231ad-a5cb-450b-a588-d7aa0033c7ee (Updating node-proxy deployment (+3 -> 3)) Feb 20 04:56:41 localhost ceph-mgr[286565]: [progress INFO root] Completed event d76231ad-a5cb-450b-a588-d7aa0033c7ee (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Feb 20 04:56:41 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 20 04:56:41 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 20 04:56:41 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v319: 177 pgs: 177 active+clean; 146 MiB data, 820 MiB used, 41 GiB / 42 GiB avail; 94 KiB/s rd, 16 KiB/s wr, 132 op/s Feb 20 04:56:41 localhost neutron_sriov_agent[256551]: 2026-02-20 09:56:41.720 2 INFO neutron.agent.securitygroups_rpc [None req-13f83c28-0ec5-483d-8133-f11a853f0aba f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']#033[00m Feb 20 04:56:42 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f9ac42b7-680c-41fc-8784-6176baa738f7, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 04:56:42 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 04:56:42 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:56:42 localhost ceph-mgr[286565]: log_channel(audit) 
log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f9ac42b7-680c-41fc-8784-6176baa738f7", "format": "json"}]: dispatch Feb 20 04:56:42 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f9ac42b7-680c-41fc-8784-6176baa738f7, vol_name:cephfs) < "" Feb 20 04:56:42 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f9ac42b7-680c-41fc-8784-6176baa738f7, vol_name:cephfs) < "" Feb 20 04:56:42 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "9754040c-efe1-4b31-bb60-2e1a242177cd", "snap_name": "8b7ef8b3-43af-41f4-8a24-ef7ee4fe0934", "format": "json"}]: dispatch Feb 20 04:56:42 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:8b7ef8b3-43af-41f4-8a24-ef7ee4fe0934, sub_name:9754040c-efe1-4b31-bb60-2e1a242177cd, vol_name:cephfs) < "" Feb 20 04:56:42 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:8b7ef8b3-43af-41f4-8a24-ef7ee4fe0934, sub_name:9754040c-efe1-4b31-bb60-2e1a242177cd, vol_name:cephfs) < "" Feb 20 04:56:42 localhost neutron_sriov_agent[256551]: 2026-02-20 09:56:42.620 2 INFO neutron.agent.securitygroups_rpc [None req-92a7b9d6-6b07-465f-9755-118a416fc381 f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']#033[00m Feb 20 04:56:42 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:56:42.681 263745 INFO neutron.agent.linux.ip_lib [None 
req-b4551449-1e0c-40a1-bde9-e75e65c02406 - - - - - -] Device tap1ad2bca8-eb cannot be used as it has no MAC address#033[00m Feb 20 04:56:42 localhost nova_compute[280804]: 2026-02-20 09:56:42.733 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:42 localhost kernel: device tap1ad2bca8-eb entered promiscuous mode Feb 20 04:56:42 localhost ovn_controller[155916]: 2026-02-20T09:56:42Z|00223|binding|INFO|Claiming lport 1ad2bca8-eb88-428b-85c9-ec3a36819749 for this chassis. Feb 20 04:56:42 localhost ovn_controller[155916]: 2026-02-20T09:56:42Z|00224|binding|INFO|1ad2bca8-eb88-428b-85c9-ec3a36819749: Claiming unknown Feb 20 04:56:42 localhost NetworkManager[5967]: [1771581402.7419] manager: (tap1ad2bca8-eb): new Generic device (/org/freedesktop/NetworkManager/Devices/45) Feb 20 04:56:42 localhost nova_compute[280804]: 2026-02-20 09:56:42.741 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:42 localhost systemd-udevd[321300]: Network interface NamePolicy= disabled on kernel command line. 
Feb 20 04:56:42 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:42.753 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp77c820b4-9646-5dae-aabe-b13a62b01d06-1b263d89-a9bd-4e8c-ba1c-797a615fed4b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1b263d89-a9bd-4e8c-ba1c-797a615fed4b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '62dd1d7e0cf547678612304aba2895e2', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12ff90ec-c935-4548-9bd2-f97ed2a11db4, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=1ad2bca8-eb88-428b-85c9-ec3a36819749) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:56:42 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:42.755 161766 INFO neutron.agent.ovn.metadata.agent [-] Port 1ad2bca8-eb88-428b-85c9-ec3a36819749 in datapath 1b263d89-a9bd-4e8c-ba1c-797a615fed4b bound to our chassis#033[00m Feb 20 04:56:42 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:42.756 161766 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 1b263d89-a9bd-4e8c-ba1c-797a615fed4b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 20 04:56:42 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:42.760 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[02ed85c3-0313-4584-98fa-f1005dfe7998]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:56:42 localhost journal[229367]: ethtool ioctl error on tap1ad2bca8-eb: No such device Feb 20 04:56:42 localhost journal[229367]: ethtool ioctl error on tap1ad2bca8-eb: No such device Feb 20 04:56:42 localhost journal[229367]: ethtool ioctl error on tap1ad2bca8-eb: No such device Feb 20 04:56:42 localhost journal[229367]: ethtool ioctl error on tap1ad2bca8-eb: No such device Feb 20 04:56:42 localhost ovn_controller[155916]: 2026-02-20T09:56:42Z|00225|binding|INFO|Setting lport 1ad2bca8-eb88-428b-85c9-ec3a36819749 ovn-installed in OVS Feb 20 04:56:42 localhost ovn_controller[155916]: 2026-02-20T09:56:42Z|00226|binding|INFO|Setting lport 1ad2bca8-eb88-428b-85c9-ec3a36819749 up in Southbound Feb 20 04:56:42 localhost nova_compute[280804]: 2026-02-20 09:56:42.785 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:42 localhost nova_compute[280804]: 2026-02-20 09:56:42.787 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:42 localhost journal[229367]: ethtool ioctl error on tap1ad2bca8-eb: No such device Feb 20 04:56:42 localhost journal[229367]: ethtool ioctl error on tap1ad2bca8-eb: No such device Feb 20 04:56:42 localhost journal[229367]: ethtool ioctl error on tap1ad2bca8-eb: No such device Feb 20 04:56:42 localhost journal[229367]: ethtool ioctl error on tap1ad2bca8-eb: No such device Feb 20 04:56:42 localhost nova_compute[280804]: 2026-02-20 09:56:42.821 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 
__log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:42 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:56:42 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e158 do_prune osdmap full prune enabled Feb 20 04:56:42 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e159 e159: 6 total, 6 up, 6 in Feb 20 04:56:42 localhost nova_compute[280804]: 2026-02-20 09:56:42.847 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:42 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e159: 6 total, 6 up, 6 in Feb 20 04:56:43 localhost nova_compute[280804]: 2026-02-20 09:56:43.256 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:43 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v321: 177 pgs: 177 active+clean; 146 MiB data, 820 MiB used, 41 GiB / 42 GiB avail; 88 KiB/s rd, 15 KiB/s wr, 123 op/s Feb 20 04:56:43 localhost ceph-mgr[286565]: [progress INFO root] Writing back 50 completed events Feb 20 04:56:43 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 20 04:56:43 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:56:43 localhost podman[321371]: Feb 20 04:56:43 localhost podman[321371]: 2026-02-20 09:56:43.687650942 +0000 UTC m=+0.092619073 container create 0c225b016586cb69a59a21f4b3f8d4f5ed16e9a97629861bc1a9d178cb5a3311 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1b263d89-a9bd-4e8c-ba1c-797a615fed4b, 
org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2) Feb 20 04:56:43 localhost systemd[1]: Started libpod-conmon-0c225b016586cb69a59a21f4b3f8d4f5ed16e9a97629861bc1a9d178cb5a3311.scope. Feb 20 04:56:43 localhost podman[321371]: 2026-02-20 09:56:43.642776769 +0000 UTC m=+0.047744930 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 20 04:56:43 localhost systemd[1]: Started libcrun container. Feb 20 04:56:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34d48b6ad595004840b9114e53be8c32f097f8b519937650645d24d09267e044/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 20 04:56:43 localhost podman[321371]: 2026-02-20 09:56:43.768611723 +0000 UTC m=+0.173579854 container init 0c225b016586cb69a59a21f4b3f8d4f5ed16e9a97629861bc1a9d178cb5a3311 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1b263d89-a9bd-4e8c-ba1c-797a615fed4b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true) Feb 20 04:56:43 localhost podman[321371]: 2026-02-20 09:56:43.779064511 +0000 UTC m=+0.184032642 container start 0c225b016586cb69a59a21f4b3f8d4f5ed16e9a97629861bc1a9d178cb5a3311 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1b263d89-a9bd-4e8c-ba1c-797a615fed4b, org.label-schema.vendor=CentOS, maintainer=OpenStack 
Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true) Feb 20 04:56:43 localhost dnsmasq[321389]: started, version 2.85 cachesize 150 Feb 20 04:56:43 localhost dnsmasq[321389]: DNS service limited to local subnets Feb 20 04:56:43 localhost dnsmasq[321389]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 20 04:56:43 localhost dnsmasq[321389]: warning: no upstream servers configured Feb 20 04:56:43 localhost dnsmasq-dhcp[321389]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 20 04:56:43 localhost dnsmasq[321389]: read /var/lib/neutron/dhcp/1b263d89-a9bd-4e8c-ba1c-797a615fed4b/addn_hosts - 0 addresses Feb 20 04:56:43 localhost dnsmasq-dhcp[321389]: read /var/lib/neutron/dhcp/1b263d89-a9bd-4e8c-ba1c-797a615fed4b/host Feb 20 04:56:43 localhost dnsmasq-dhcp[321389]: read /var/lib/neutron/dhcp/1b263d89-a9bd-4e8c-ba1c-797a615fed4b/opts Feb 20 04:56:43 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:56:43.910 263745 INFO neutron.agent.dhcp.agent [None req-d692ec3b-b2fb-467a-b29f-3bc98d545212 - - - - - -] DHCP configuration for ports {'0ceac840-7607-4be9-973b-b16296ebf834'} is completed#033[00m Feb 20 04:56:44 localhost neutron_sriov_agent[256551]: 2026-02-20 09:56:44.144 2 INFO neutron.agent.securitygroups_rpc [None req-ec1ba1f0-724c-41a9-85b6-8188470faaf7 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['72ed92b6-af24-4274-854b-a52220405faf']#033[00m Feb 20 04:56:44 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:56:44.203 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], 
binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:56:43Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=6f403b7b-a429-4672-8a4f-301bd825f1bf, ip_allocation=immediate, mac_address=fa:16:3e:7c:61:d7, name=tempest-PortsTestJSON-782720984, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:56:40Z, description=, dns_domain=, id=1b263d89-a9bd-4e8c-ba1c-797a615fed4b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsTestJSON-512937559, port_security_enabled=True, project_id=62dd1d7e0cf547678612304aba2895e2, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=31655, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2362, status=ACTIVE, subnets=['0b119301-4faf-4b5f-94fe-05438f323e69'], tags=[], tenant_id=62dd1d7e0cf547678612304aba2895e2, updated_at=2026-02-20T09:56:41Z, vlan_transparent=None, network_id=1b263d89-a9bd-4e8c-ba1c-797a615fed4b, port_security_enabled=True, project_id=62dd1d7e0cf547678612304aba2895e2, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['72ed92b6-af24-4274-854b-a52220405faf'], standard_attr_id=2378, status=DOWN, tags=[], tenant_id=62dd1d7e0cf547678612304aba2895e2, updated_at=2026-02-20T09:56:43Z on network 1b263d89-a9bd-4e8c-ba1c-797a615fed4b#033[00m Feb 20 04:56:44 localhost dnsmasq[321389]: read /var/lib/neutron/dhcp/1b263d89-a9bd-4e8c-ba1c-797a615fed4b/addn_hosts - 1 addresses Feb 20 04:56:44 localhost dnsmasq-dhcp[321389]: read /var/lib/neutron/dhcp/1b263d89-a9bd-4e8c-ba1c-797a615fed4b/host Feb 20 04:56:44 localhost podman[321407]: 2026-02-20 09:56:44.401170144 +0000 UTC m=+0.043664231 container kill 
0c225b016586cb69a59a21f4b3f8d4f5ed16e9a97629861bc1a9d178cb5a3311 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1b263d89-a9bd-4e8c-ba1c-797a615fed4b, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Feb 20 04:56:44 localhost dnsmasq-dhcp[321389]: read /var/lib/neutron/dhcp/1b263d89-a9bd-4e8c-ba1c-797a615fed4b/opts Feb 20 04:56:44 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:56:44 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:56:44.668 263745 INFO neutron.agent.dhcp.agent [None req-10d51c28-2e89-4dae-8d6f-1f3667888ac1 - - - - - -] DHCP configuration for ports {'6f403b7b-a429-4672-8a4f-301bd825f1bf'} is completed#033[00m Feb 20 04:56:44 localhost neutron_sriov_agent[256551]: 2026-02-20 09:56:44.890 2 INFO neutron.agent.securitygroups_rpc [None req-868e4387-1930-45d7-9199-5bcd1f2558e0 f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']#033[00m Feb 20 04:56:45 localhost nova_compute[280804]: 2026-02-20 09:56:45.028 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:45 localhost neutron_sriov_agent[256551]: 2026-02-20 09:56:45.178 2 INFO neutron.agent.securitygroups_rpc [None req-d92a2777-d32e-4211-954b-8d8918f6f596 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['72ed92b6-af24-4274-854b-a52220405faf']#033[00m Feb 20 04:56:45 localhost neutron_dhcp_agent[263741]: 2026-02-20 
09:56:45.238 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:56:44Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=7fd36f8d-a6c6-477e-a1f3-69cf362e6562, ip_allocation=immediate, mac_address=fa:16:3e:cd:db:55, name=tempest-PortsTestJSON-96137615, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:56:40Z, description=, dns_domain=, id=1b263d89-a9bd-4e8c-ba1c-797a615fed4b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsTestJSON-512937559, port_security_enabled=True, project_id=62dd1d7e0cf547678612304aba2895e2, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=31655, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2362, status=ACTIVE, subnets=['0b119301-4faf-4b5f-94fe-05438f323e69'], tags=[], tenant_id=62dd1d7e0cf547678612304aba2895e2, updated_at=2026-02-20T09:56:41Z, vlan_transparent=None, network_id=1b263d89-a9bd-4e8c-ba1c-797a615fed4b, port_security_enabled=True, project_id=62dd1d7e0cf547678612304aba2895e2, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['72ed92b6-af24-4274-854b-a52220405faf'], standard_attr_id=2382, status=DOWN, tags=[], tenant_id=62dd1d7e0cf547678612304aba2895e2, updated_at=2026-02-20T09:56:44Z on network 1b263d89-a9bd-4e8c-ba1c-797a615fed4b#033[00m Feb 20 04:56:45 localhost dnsmasq[321389]: read /var/lib/neutron/dhcp/1b263d89-a9bd-4e8c-ba1c-797a615fed4b/addn_hosts - 2 addresses Feb 20 04:56:45 localhost dnsmasq-dhcp[321389]: read /var/lib/neutron/dhcp/1b263d89-a9bd-4e8c-ba1c-797a615fed4b/host Feb 20 04:56:45 localhost 
podman[321445]: 2026-02-20 09:56:45.459191462 +0000 UTC m=+0.062022159 container kill 0c225b016586cb69a59a21f4b3f8d4f5ed16e9a97629861bc1a9d178cb5a3311 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1b263d89-a9bd-4e8c-ba1c-797a615fed4b, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Feb 20 04:56:45 localhost dnsmasq-dhcp[321389]: read /var/lib/neutron/dhcp/1b263d89-a9bd-4e8c-ba1c-797a615fed4b/opts Feb 20 04:56:45 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v322: 177 pgs: 177 active+clean; 146 MiB data, 821 MiB used, 41 GiB / 42 GiB avail; 1.3 MiB/s rd, 26 KiB/s wr, 158 op/s Feb 20 04:56:45 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "9754040c-efe1-4b31-bb60-2e1a242177cd", "snap_name": "877bc86e-3ae3-4788-a4cd-ecc7ec5a94e2", "format": "json"}]: dispatch Feb 20 04:56:45 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:877bc86e-3ae3-4788-a4cd-ecc7ec5a94e2, sub_name:9754040c-efe1-4b31-bb60-2e1a242177cd, vol_name:cephfs) < "" Feb 20 04:56:46 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:877bc86e-3ae3-4788-a4cd-ecc7ec5a94e2, sub_name:9754040c-efe1-4b31-bb60-2e1a242177cd, vol_name:cephfs) < "" Feb 20 04:56:46 localhost sshd[321465]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:56:46 localhost neutron_dhcp_agent[263741]: 2026-02-20 
09:56:46.076 263745 INFO neutron.agent.dhcp.agent [None req-e3627a5d-544d-483b-b43e-1db8f7641f72 - - - - - -] DHCP configuration for ports {'7fd36f8d-a6c6-477e-a1f3-69cf362e6562'} is completed#033[00m Feb 20 04:56:46 localhost podman[241347]: time="2026-02-20T09:56:46Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 20 04:56:46 localhost podman[241347]: @ - - [20/Feb/2026:09:56:46 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 159540 "" "Go-http-client/1.1" Feb 20 04:56:46 localhost podman[241347]: @ - - [20/Feb/2026:09:56:46 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19250 "" "Go-http-client/1.1" Feb 20 04:56:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 04:56:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. 
Feb 20 04:56:46 localhost podman[321467]: 2026-02-20 09:56:46.449904932 +0000 UTC m=+0.088761720 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2) Feb 20 04:56:46 localhost neutron_sriov_agent[256551]: 2026-02-20 09:56:46.485 2 INFO neutron.agent.securitygroups_rpc [None req-d647a860-3cfb-47b1-bd0e-3817969b125e f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']#033[00m Feb 20 04:56:46 localhost podman[321468]: 2026-02-20 09:56:46.510610684 +0000 UTC m=+0.146385491 
container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0) Feb 20 04:56:46 localhost podman[321467]: 2026-02-20 09:56:46.527775441 +0000 UTC m=+0.166632209 container exec_died 
76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Feb 20 04:56:46 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f9ac42b7-680c-41fc-8784-6176baa738f7", "format": "json"}]: dispatch Feb 20 04:56:46 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f9ac42b7-680c-41fc-8784-6176baa738f7, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 04:56:46 localhost podman[321468]: 2026-02-20 09:56:46.541005322 +0000 UTC m=+0.176780069 
container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Feb 20 04:56:46 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. 
Feb 20 04:56:46 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f9ac42b7-680c-41fc-8784-6176baa738f7, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 04:56:46 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f9ac42b7-680c-41fc-8784-6176baa738f7", "force": true, "format": "json"}]: dispatch Feb 20 04:56:46 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f9ac42b7-680c-41fc-8784-6176baa738f7, vol_name:cephfs) < "" Feb 20 04:56:46 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. Feb 20 04:56:46 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f9ac42b7-680c-41fc-8784-6176baa738f7'' moved to trashcan Feb 20 04:56:46 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 04:56:46 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f9ac42b7-680c-41fc-8784-6176baa738f7, vol_name:cephfs) < "" Feb 20 04:56:47 localhost neutron_sriov_agent[256551]: 2026-02-20 09:56:47.005 2 INFO neutron.agent.securitygroups_rpc [None req-5e0be3b9-12ef-421f-8325-abea826190b6 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['72ed92b6-af24-4274-854b-a52220405faf']#033[00m Feb 20 04:56:47 localhost dnsmasq[321389]: read /var/lib/neutron/dhcp/1b263d89-a9bd-4e8c-ba1c-797a615fed4b/addn_hosts - 1 addresses Feb 20 04:56:47 localhost dnsmasq-dhcp[321389]: read /var/lib/neutron/dhcp/1b263d89-a9bd-4e8c-ba1c-797a615fed4b/host Feb 20 04:56:47 localhost 
dnsmasq-dhcp[321389]: read /var/lib/neutron/dhcp/1b263d89-a9bd-4e8c-ba1c-797a615fed4b/opts Feb 20 04:56:47 localhost podman[321527]: 2026-02-20 09:56:47.264511 +0000 UTC m=+0.063380265 container kill 0c225b016586cb69a59a21f4b3f8d4f5ed16e9a97629861bc1a9d178cb5a3311 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1b263d89-a9bd-4e8c-ba1c-797a615fed4b, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:56:47 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v323: 177 pgs: 177 active+clean; 146 MiB data, 821 MiB used, 41 GiB / 42 GiB avail; 1.1 MiB/s rd, 12 KiB/s wr, 91 op/s Feb 20 04:56:47 localhost neutron_sriov_agent[256551]: 2026-02-20 09:56:47.579 2 INFO neutron.agent.securitygroups_rpc [None req-c2e08264-5759-4dbe-9f11-1020e63a5df8 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['72ed92b6-af24-4274-854b-a52220405faf']#033[00m Feb 20 04:56:47 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:56:47 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e159 do_prune osdmap full prune enabled Feb 20 04:56:47 localhost dnsmasq[321389]: read /var/lib/neutron/dhcp/1b263d89-a9bd-4e8c-ba1c-797a615fed4b/addn_hosts - 0 addresses Feb 20 04:56:47 localhost podman[321564]: 2026-02-20 09:56:47.848926522 +0000 UTC m=+0.059480192 container kill 0c225b016586cb69a59a21f4b3f8d4f5ed16e9a97629861bc1a9d178cb5a3311 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-1b263d89-a9bd-4e8c-ba1c-797a615fed4b, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Feb 20 04:56:47 localhost dnsmasq-dhcp[321389]: read /var/lib/neutron/dhcp/1b263d89-a9bd-4e8c-ba1c-797a615fed4b/host Feb 20 04:56:47 localhost dnsmasq-dhcp[321389]: read /var/lib/neutron/dhcp/1b263d89-a9bd-4e8c-ba1c-797a615fed4b/opts Feb 20 04:56:47 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e160 e160: 6 total, 6 up, 6 in Feb 20 04:56:47 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e160: 6 total, 6 up, 6 in Feb 20 04:56:48 localhost nova_compute[280804]: 2026-02-20 09:56:48.256 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:48 localhost ovn_controller[155916]: 2026-02-20T09:56:48Z|00227|binding|INFO|Removing iface tap1ad2bca8-eb ovn-installed in OVS Feb 20 04:56:48 localhost ovn_controller[155916]: 2026-02-20T09:56:48Z|00228|binding|INFO|Removing lport 1ad2bca8-eb88-428b-85c9-ec3a36819749 ovn-installed in OVS Feb 20 04:56:48 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:48.381 161766 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port f37ce690-5fa1-44a6-b685-d710cacbb00b with type ""#033[00m Feb 20 04:56:48 localhost nova_compute[280804]: 2026-02-20 09:56:48.382 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:48 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:48.384 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', 
conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp77c820b4-9646-5dae-aabe-b13a62b01d06-1b263d89-a9bd-4e8c-ba1c-797a615fed4b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1b263d89-a9bd-4e8c-ba1c-797a615fed4b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '62dd1d7e0cf547678612304aba2895e2', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625202.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12ff90ec-c935-4548-9bd2-f97ed2a11db4, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=1ad2bca8-eb88-428b-85c9-ec3a36819749) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:56:48 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:48.387 161766 INFO neutron.agent.ovn.metadata.agent [-] Port 1ad2bca8-eb88-428b-85c9-ec3a36819749 in datapath 1b263d89-a9bd-4e8c-ba1c-797a615fed4b unbound from our chassis#033[00m Feb 20 04:56:48 localhost nova_compute[280804]: 2026-02-20 09:56:48.390 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:48 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:48.391 161766 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1b263d89-a9bd-4e8c-ba1c-797a615fed4b, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 20 04:56:48 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:48.393 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[15fa15a4-ebcf-47cc-bab4-54cfdbdb608f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:56:48 localhost dnsmasq[321389]: exiting on receipt of SIGTERM Feb 20 04:56:48 localhost podman[321599]: 2026-02-20 09:56:48.42869811 +0000 UTC m=+0.058570518 container kill 0c225b016586cb69a59a21f4b3f8d4f5ed16e9a97629861bc1a9d178cb5a3311 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1b263d89-a9bd-4e8c-ba1c-797a615fed4b, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true) Feb 20 04:56:48 localhost systemd[1]: libpod-0c225b016586cb69a59a21f4b3f8d4f5ed16e9a97629861bc1a9d178cb5a3311.scope: Deactivated successfully. Feb 20 04:56:48 localhost podman[321613]: 2026-02-20 09:56:48.509969539 +0000 UTC m=+0.059487251 container died 0c225b016586cb69a59a21f4b3f8d4f5ed16e9a97629861bc1a9d178cb5a3311 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1b263d89-a9bd-4e8c-ba1c-797a615fed4b, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Feb 20 04:56:48 localhost systemd[1]: tmp-crun.xclfxs.mount: Deactivated successfully. 
Feb 20 04:56:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1. Feb 20 04:56:48 localhost podman[321613]: 2026-02-20 09:56:48.609281728 +0000 UTC m=+0.158799400 container remove 0c225b016586cb69a59a21f4b3f8d4f5ed16e9a97629861bc1a9d178cb5a3311 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1b263d89-a9bd-4e8c-ba1c-797a615fed4b, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Feb 20 04:56:48 localhost systemd[1]: libpod-conmon-0c225b016586cb69a59a21f4b3f8d4f5ed16e9a97629861bc1a9d178cb5a3311.scope: Deactivated successfully. Feb 20 04:56:48 localhost nova_compute[280804]: 2026-02-20 09:56:48.620 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:48 localhost kernel: device tap1ad2bca8-eb left promiscuous mode Feb 20 04:56:48 localhost podman[321637]: 2026-02-20 09:56:48.627632876 +0000 UTC m=+0.072711893 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': 
['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 20 04:56:48 localhost nova_compute[280804]: 2026-02-20 09:56:48.634 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:48 localhost podman[321637]: 2026-02-20 09:56:48.640737865 +0000 UTC m=+0.085816822 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 20 04:56:48 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully. 
Feb 20 04:56:48 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:56:48.665 263745 INFO neutron.agent.dhcp.agent [None req-3dce8e64-2696-4db8-adda-779a51861f81 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:56:48 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:56:48.666 263745 INFO neutron.agent.dhcp.agent [None req-3dce8e64-2696-4db8-adda-779a51861f81 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:56:48 localhost systemd[1]: var-lib-containers-storage-overlay-34d48b6ad595004840b9114e53be8c32f097f8b519937650645d24d09267e044-merged.mount: Deactivated successfully. Feb 20 04:56:48 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0c225b016586cb69a59a21f4b3f8d4f5ed16e9a97629861bc1a9d178cb5a3311-userdata-shm.mount: Deactivated successfully. Feb 20 04:56:48 localhost systemd[1]: run-netns-qdhcp\x2d1b263d89\x2da9bd\x2d4e8c\x2dba1c\x2d797a615fed4b.mount: Deactivated successfully. 
Feb 20 04:56:48 localhost nova_compute[280804]: 2026-02-20 09:56:48.953 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:49 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "9754040c-efe1-4b31-bb60-2e1a242177cd", "snap_name": "f96ea30b-5993-4393-8f64-efd08707fd5f", "format": "json"}]: dispatch Feb 20 04:56:49 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f96ea30b-5993-4393-8f64-efd08707fd5f, sub_name:9754040c-efe1-4b31-bb60-2e1a242177cd, vol_name:cephfs) < "" Feb 20 04:56:49 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f96ea30b-5993-4393-8f64-efd08707fd5f, sub_name:9754040c-efe1-4b31-bb60-2e1a242177cd, vol_name:cephfs) < "" Feb 20 04:56:49 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v325: 177 pgs: 177 active+clean; 167 MiB data, 825 MiB used, 41 GiB / 42 GiB avail; 3.1 MiB/s rd, 892 KiB/s wr, 95 op/s Feb 20 04:56:49 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "5d6cbe8b-e61d-44af-be13-78bc30a91a96", "snap_name": "da20d4b5-2abc-49bc-a78c-39ce3cdadf16_977615dd-c728-40c5-bc37-c455d4274398", "force": true, "format": "json"}]: dispatch Feb 20 04:56:49 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:da20d4b5-2abc-49bc-a78c-39ce3cdadf16_977615dd-c728-40c5-bc37-c455d4274398, sub_name:5d6cbe8b-e61d-44af-be13-78bc30a91a96, 
vol_name:cephfs) < "" Feb 20 04:56:49 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5d6cbe8b-e61d-44af-be13-78bc30a91a96/.meta.tmp' Feb 20 04:56:49 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5d6cbe8b-e61d-44af-be13-78bc30a91a96/.meta.tmp' to config b'/volumes/_nogroup/5d6cbe8b-e61d-44af-be13-78bc30a91a96/.meta' Feb 20 04:56:49 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:da20d4b5-2abc-49bc-a78c-39ce3cdadf16_977615dd-c728-40c5-bc37-c455d4274398, sub_name:5d6cbe8b-e61d-44af-be13-78bc30a91a96, vol_name:cephfs) < "" Feb 20 04:56:49 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "5d6cbe8b-e61d-44af-be13-78bc30a91a96", "snap_name": "da20d4b5-2abc-49bc-a78c-39ce3cdadf16", "force": true, "format": "json"}]: dispatch Feb 20 04:56:49 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:da20d4b5-2abc-49bc-a78c-39ce3cdadf16, sub_name:5d6cbe8b-e61d-44af-be13-78bc30a91a96, vol_name:cephfs) < "" Feb 20 04:56:49 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5d6cbe8b-e61d-44af-be13-78bc30a91a96/.meta.tmp' Feb 20 04:56:49 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5d6cbe8b-e61d-44af-be13-78bc30a91a96/.meta.tmp' to config b'/volumes/_nogroup/5d6cbe8b-e61d-44af-be13-78bc30a91a96/.meta' Feb 20 04:56:49 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:da20d4b5-2abc-49bc-a78c-39ce3cdadf16, sub_name:5d6cbe8b-e61d-44af-be13-78bc30a91a96, vol_name:cephfs) < "" Feb 20 04:56:50 localhost nova_compute[280804]: 2026-02-20 09:56:50.055 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:50 localhost neutron_sriov_agent[256551]: 2026-02-20 09:56:50.728 2 INFO neutron.agent.securitygroups_rpc [None req-6b7b9439-974f-45e5-a614-5a8be0850c72 f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']#033[00m Feb 20 04:56:51 localhost neutron_sriov_agent[256551]: 2026-02-20 09:56:51.312 2 INFO neutron.agent.securitygroups_rpc [None req-780282fe-fede-41a8-980f-26511e126244 f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']#033[00m Feb 20 04:56:51 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v326: 177 pgs: 177 active+clean; 203 MiB data, 859 MiB used, 41 GiB / 42 GiB avail; 4.8 MiB/s rd, 2.6 MiB/s wr, 146 op/s Feb 20 04:56:51 localhost sshd[321662]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:56:51 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e160 do_prune osdmap full prune enabled Feb 20 04:56:51 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e161 e161: 6 total, 6 up, 6 in Feb 20 04:56:51 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e161: 6 total, 6 up, 6 in Feb 20 04:56:52 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 20 04:56:52 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/4023810761' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 20 04:56:52 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 20 04:56:52 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4023810761' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 20 04:56:52 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:56:52 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "9754040c-efe1-4b31-bb60-2e1a242177cd", "snap_name": "2b637ee9-db13-447d-b623-b978babd5cfe", "format": "json"}]: dispatch Feb 20 04:56:52 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:2b637ee9-db13-447d-b623-b978babd5cfe, sub_name:9754040c-efe1-4b31-bb60-2e1a242177cd, vol_name:cephfs) < "" Feb 20 04:56:52 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:2b637ee9-db13-447d-b623-b978babd5cfe, sub_name:9754040c-efe1-4b31-bb60-2e1a242177cd, vol_name:cephfs) < "" Feb 20 04:56:53 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5d6cbe8b-e61d-44af-be13-78bc30a91a96", "format": "json"}]: dispatch Feb 20 04:56:53 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:5d6cbe8b-e61d-44af-be13-78bc30a91a96, 
format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 04:56:53 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:5d6cbe8b-e61d-44af-be13-78bc30a91a96, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 04:56:53 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:56:53.023+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5d6cbe8b-e61d-44af-be13-78bc30a91a96' of type subvolume Feb 20 04:56:53 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5d6cbe8b-e61d-44af-be13-78bc30a91a96' of type subvolume Feb 20 04:56:53 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5d6cbe8b-e61d-44af-be13-78bc30a91a96", "force": true, "format": "json"}]: dispatch Feb 20 04:56:53 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5d6cbe8b-e61d-44af-be13-78bc30a91a96, vol_name:cephfs) < "" Feb 20 04:56:53 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/5d6cbe8b-e61d-44af-be13-78bc30a91a96'' moved to trashcan Feb 20 04:56:53 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 04:56:53 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5d6cbe8b-e61d-44af-be13-78bc30a91a96, vol_name:cephfs) < "" Feb 20 04:56:53 localhost nova_compute[280804]: 2026-02-20 09:56:53.259 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 04:56:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:56:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 04:56:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:56:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 04:56:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:56:53 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v328: 177 pgs: 177 active+clean; 203 MiB data, 859 MiB used, 41 GiB / 42 GiB avail; 4.2 MiB/s rd, 2.8 MiB/s wr, 121 op/s Feb 20 04:56:53 localhost neutron_sriov_agent[256551]: 2026-02-20 09:56:53.866 2 INFO neutron.agent.securitygroups_rpc [None req-db944330-da94-4d69-be05-7c7a8491a44e 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['72ed92b6-af24-4274-854b-a52220405faf']#033[00m Feb 20 04:56:53 localhost ceph-mon[292786]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #52. Immutable memtables: 0. 
Feb 20 04:56:53 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:56:53.935717) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Feb 20 04:56:53 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 52 Feb 20 04:56:53 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581413935867, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2432, "num_deletes": 265, "total_data_size": 3383793, "memory_usage": 3428064, "flush_reason": "Manual Compaction"} Feb 20 04:56:53 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #53: started Feb 20 04:56:53 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581413955442, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 53, "file_size": 3309686, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 27784, "largest_seqno": 30215, "table_properties": {"data_size": 3299048, "index_size": 6823, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2757, "raw_key_size": 23780, "raw_average_key_size": 22, "raw_value_size": 3277368, "raw_average_value_size": 3043, "num_data_blocks": 288, "num_entries": 1077, "num_filter_entries": 1077, "num_deletions": 265, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771581267, "oldest_key_time": 1771581267, "file_creation_time": 1771581413, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a5b0c71e-1a28-4ac7-8b68-08edb74002f2", "db_session_id": "54EDA52XUT1SDV7DF7Y7", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}} Feb 20 04:56:53 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 19719 microseconds, and 9082 cpu microseconds. Feb 20 04:56:53 localhost ceph-mon[292786]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Feb 20 04:56:53 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:56:53.955504) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #53: 3309686 bytes OK Feb 20 04:56:53 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:56:53.955533) [db/memtable_list.cc:519] [default] Level-0 commit table #53 started Feb 20 04:56:53 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:56:53.957323) [db/memtable_list.cc:722] [default] Level-0 commit table #53: memtable #1 done Feb 20 04:56:53 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:56:53.957350) EVENT_LOG_v1 {"time_micros": 1771581413957342, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Feb 20 04:56:53 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:56:53.957398) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Feb 20 04:56:53 localhost ceph-mon[292786]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 3373308, prev total WAL file 
size 3373308, number of live WAL files 2. Feb 20 04:56:53 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000049.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 20 04:56:53 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:56:53.958296) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131373937' seq:72057594037927935, type:22 .. '7061786F73003132303439' seq:0, type:0; will stop at (end) Feb 20 04:56:53 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00 Feb 20 04:56:53 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [53(3232KB)], [51(15MB)] Feb 20 04:56:53 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581413958350, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [53], "files_L6": [51], "score": -1, "input_data_size": 19223584, "oldest_snapshot_seqno": -1} Feb 20 04:56:54 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #54: 12730 keys, 17974372 bytes, temperature: kUnknown Feb 20 04:56:54 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581414040817, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 54, "file_size": 17974372, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17900701, "index_size": 40738, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31877, "raw_key_size": 340478, "raw_average_key_size": 26, "raw_value_size": 
17683046, "raw_average_value_size": 1389, "num_data_blocks": 1552, "num_entries": 12730, "num_filter_entries": 12730, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580572, "oldest_key_time": 0, "file_creation_time": 1771581413, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a5b0c71e-1a28-4ac7-8b68-08edb74002f2", "db_session_id": "54EDA52XUT1SDV7DF7Y7", "orig_file_number": 54, "seqno_to_time_mapping": "N/A"}} Feb 20 04:56:54 localhost ceph-mon[292786]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Feb 20 04:56:54 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:56:54.041103) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 17974372 bytes Feb 20 04:56:54 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:56:54.042482) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 232.8 rd, 217.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 15.2 +0.0 blob) out(17.1 +0.0 blob), read-write-amplify(11.2) write-amplify(5.4) OK, records in: 13278, records dropped: 548 output_compression: NoCompression Feb 20 04:56:54 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:56:54.042500) EVENT_LOG_v1 {"time_micros": 1771581414042492, "job": 30, "event": "compaction_finished", "compaction_time_micros": 82565, "compaction_time_cpu_micros": 50531, "output_level": 6, "num_output_files": 1, "total_output_size": 17974372, "num_input_records": 13278, "num_output_records": 12730, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Feb 20 04:56:54 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 20 04:56:54 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581414042925, "job": 30, "event": "table_file_deletion", "file_number": 53} Feb 20 04:56:54 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000051.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 20 04:56:54 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581414044555, 
"job": 30, "event": "table_file_deletion", "file_number": 51} Feb 20 04:56:54 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:56:53.958239) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 04:56:54 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:56:54.044579) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 04:56:54 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:56:54.044583) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 04:56:54 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:56:54.044585) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 04:56:54 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:56:54.044598) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 04:56:54 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:56:54.044600) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 04:56:54 localhost neutron_sriov_agent[256551]: 2026-02-20 09:56:54.776 2 INFO neutron.agent.securitygroups_rpc [None req-2eacea7e-ffc4-4411-bf47-38f06768c1ff f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']#033[00m Feb 20 04:56:54 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e161 do_prune osdmap full prune enabled Feb 20 04:56:54 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e162 e162: 6 total, 6 up, 6 in Feb 20 04:56:54 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e162: 6 total, 6 up, 6 in Feb 20 04:56:55 localhost nova_compute[280804]: 2026-02-20 09:56:55.092 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:55 localhost neutron_sriov_agent[256551]: 2026-02-20 09:56:55.102 2 INFO neutron.agent.securitygroups_rpc [None req-fc9a7b92-030e-427f-b2ca-50d327cf6718 f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']#033[00m Feb 20 04:56:55 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v330: 177 pgs: 177 active+clean; 193 MiB data, 913 MiB used, 41 GiB / 42 GiB avail; 4.4 MiB/s rd, 5.6 MiB/s wr, 243 op/s Feb 20 04:56:55 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e162 do_prune osdmap full prune enabled Feb 20 04:56:55 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e163 e163: 6 total, 6 up, 6 in Feb 20 04:56:55 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e163: 6 total, 6 up, 6 in Feb 20 04:56:56 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "9754040c-efe1-4b31-bb60-2e1a242177cd", "snap_name": "0b99d555-2c83-4adc-81b2-c57674191232", "format": "json"}]: dispatch Feb 20 04:56:56 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:0b99d555-2c83-4adc-81b2-c57674191232, sub_name:9754040c-efe1-4b31-bb60-2e1a242177cd, vol_name:cephfs) < "" Feb 20 04:56:56 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:0b99d555-2c83-4adc-81b2-c57674191232, sub_name:9754040c-efe1-4b31-bb60-2e1a242177cd, vol_name:cephfs) < "" Feb 20 04:56:56 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e163 do_prune osdmap full prune enabled Feb 20 04:56:56 localhost ceph-mon[292786]: 
mon.np0005625202@0(leader).osd e164 e164: 6 total, 6 up, 6 in Feb 20 04:56:57 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e164: 6 total, 6 up, 6 in Feb 20 04:56:57 localhost nova_compute[280804]: 2026-02-20 09:56:57.229 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:57 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:57.230 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'c2:c2:14', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '46:d0:69:88:97:47'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:56:57 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:57.232 161766 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Feb 20 04:56:57 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:57.271 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:4e:b0 2001:db8:0:1:f816:3eff:fef0:4eb0'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fef0:4eb0/64', 'neutron:device_id': 
'ovnmeta-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '30', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=cebd560e-7047-4cc1-9642-f5b7ec377d58) old=Port_Binding(mac=['fa:16:3e:f0:4e:b0 2001:db8::f816:3eff:fef0:4eb0'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fef0:4eb0/64', 'neutron:device_id': 'ovnmeta-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '28', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:56:57 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:57.272 161766 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port cebd560e-7047-4cc1-9642-f5b7ec377d58 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 updated#033[00m Feb 20 04:56:57 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:57.274 161766 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 811e2462-6872-485d-9c09-d2dd9cb25273, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 20 04:56:57 localhost ovn_metadata_agent[161761]: 2026-02-20 09:56:57.276 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[9dbb36d4-dcfa-40a8-8b30-44ba22fc3f44]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:56:57 localhost neutron_sriov_agent[256551]: 2026-02-20 09:56:57.483 2 INFO neutron.agent.securitygroups_rpc [None req-8da4750a-6dfd-49c6-9c80-6371938bf016 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['72ed92b6-af24-4274-854b-a52220405faf']#033[00m Feb 20 04:56:57 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v333: 177 pgs: 177 active+clean; 193 MiB data, 913 MiB used, 41 GiB / 42 GiB avail; 107 KiB/s rd, 3.6 MiB/s wr, 159 op/s Feb 20 04:56:57 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:56:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e. 
Feb 20 04:56:58 localhost podman[321665]: 2026-02-20 09:56:58.004239787 +0000 UTC m=+0.090870765 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 20 04:56:58 localhost podman[321665]: 2026-02-20 09:56:58.013878523 +0000 UTC m=+0.100509571 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , 
managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 20 04:56:58 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e164 do_prune osdmap full prune enabled Feb 20 04:56:58 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully. 
Feb 20 04:56:58 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e165 e165: 6 total, 6 up, 6 in Feb 20 04:56:58 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e165: 6 total, 6 up, 6 in Feb 20 04:56:58 localhost openstack_network_exporter[243776]: ERROR 09:56:58 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 20 04:56:58 localhost openstack_network_exporter[243776]: Feb 20 04:56:58 localhost openstack_network_exporter[243776]: ERROR 09:56:58 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 20 04:56:58 localhost openstack_network_exporter[243776]: Feb 20 04:56:58 localhost nova_compute[280804]: 2026-02-20 09:56:58.261 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:56:58 localhost neutron_sriov_agent[256551]: 2026-02-20 09:56:58.260 2 INFO neutron.agent.securitygroups_rpc [None req-23120728-b18f-4316-9b23-0bff49b361e7 f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']#033[00m Feb 20 04:56:59 localhost neutron_sriov_agent[256551]: 2026-02-20 09:56:59.045 2 INFO neutron.agent.securitygroups_rpc [None req-8dad7750-521e-4bcf-b6a0-3ff2b8fade37 f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']#033[00m Feb 20 04:56:59 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v335: 177 pgs: 177 active+clean; 193 MiB data, 913 MiB used, 41 GiB / 42 GiB avail; 96 KiB/s rd, 13 KiB/s wr, 124 op/s Feb 20 04:56:59 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 20 04:56:59 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/2757870375' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 20 04:56:59 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 20 04:56:59 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2757870375' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 20 04:56:59 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "9754040c-efe1-4b31-bb60-2e1a242177cd", "snap_name": "0b99d555-2c83-4adc-81b2-c57674191232_67b12e3d-cabb-4489-8f7a-b787cf53ee59", "force": true, "format": "json"}]: dispatch Feb 20 04:56:59 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0b99d555-2c83-4adc-81b2-c57674191232_67b12e3d-cabb-4489-8f7a-b787cf53ee59, sub_name:9754040c-efe1-4b31-bb60-2e1a242177cd, vol_name:cephfs) < "" Feb 20 04:56:59 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9754040c-efe1-4b31-bb60-2e1a242177cd/.meta.tmp' Feb 20 04:56:59 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9754040c-efe1-4b31-bb60-2e1a242177cd/.meta.tmp' to config b'/volumes/_nogroup/9754040c-efe1-4b31-bb60-2e1a242177cd/.meta' Feb 20 04:56:59 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0b99d555-2c83-4adc-81b2-c57674191232_67b12e3d-cabb-4489-8f7a-b787cf53ee59, sub_name:9754040c-efe1-4b31-bb60-2e1a242177cd, vol_name:cephfs) < "" Feb 20 04:57:00 
localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "9754040c-efe1-4b31-bb60-2e1a242177cd", "snap_name": "0b99d555-2c83-4adc-81b2-c57674191232", "force": true, "format": "json"}]: dispatch Feb 20 04:57:00 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0b99d555-2c83-4adc-81b2-c57674191232, sub_name:9754040c-efe1-4b31-bb60-2e1a242177cd, vol_name:cephfs) < "" Feb 20 04:57:00 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9754040c-efe1-4b31-bb60-2e1a242177cd/.meta.tmp' Feb 20 04:57:00 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9754040c-efe1-4b31-bb60-2e1a242177cd/.meta.tmp' to config b'/volumes/_nogroup/9754040c-efe1-4b31-bb60-2e1a242177cd/.meta' Feb 20 04:57:00 localhost nova_compute[280804]: 2026-02-20 09:57:00.096 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:57:00 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0b99d555-2c83-4adc-81b2-c57674191232, sub_name:9754040c-efe1-4b31-bb60-2e1a242177cd, vol_name:cephfs) < "" Feb 20 04:57:00 localhost nova_compute[280804]: 2026-02-20 09:57:00.526 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:57:00 localhost nova_compute[280804]: 2026-02-20 09:57:00.527 280808 DEBUG 
nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 20 04:57:00 localhost nova_compute[280804]: 2026-02-20 09:57:00.527 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 20 04:57:00 localhost nova_compute[280804]: 2026-02-20 09:57:00.542 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 20 04:57:00 localhost nova_compute[280804]: 2026-02-20 09:57:00.542 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:57:00 localhost nova_compute[280804]: 2026-02-20 09:57:00.543 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:57:01 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v336: 177 pgs: 177 active+clean; 193 MiB data, 917 MiB used, 41 GiB / 42 GiB avail; 148 KiB/s rd, 14 KiB/s wr, 199 op/s Feb 20 04:57:02 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 20 04:57:02 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/3998727060' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 20 04:57:02 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 20 04:57:02 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3998727060' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 20 04:57:02 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:57:02.230 263745 INFO neutron.agent.linux.ip_lib [None req-08df36eb-c57d-4bad-b0de-0ad6022af353 - - - - - -] Device tapd4de62b2-b6 cannot be used as it has no MAC address#033[00m Feb 20 04:57:02 localhost nova_compute[280804]: 2026-02-20 09:57:02.261 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:57:02 localhost kernel: device tapd4de62b2-b6 entered promiscuous mode Feb 20 04:57:02 localhost NetworkManager[5967]: [1771581422.2741] manager: (tapd4de62b2-b6): new Generic device (/org/freedesktop/NetworkManager/Devices/46) Feb 20 04:57:02 localhost nova_compute[280804]: 2026-02-20 09:57:02.273 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:57:02 localhost ovn_controller[155916]: 2026-02-20T09:57:02Z|00229|binding|INFO|Claiming lport d4de62b2-b6cb-4b77-af79-1237680b2f9a for this chassis. Feb 20 04:57:02 localhost ovn_controller[155916]: 2026-02-20T09:57:02Z|00230|binding|INFO|d4de62b2-b6cb-4b77-af79-1237680b2f9a: Claiming unknown Feb 20 04:57:02 localhost systemd-udevd[321698]: Network interface NamePolicy= disabled on kernel command line. 
Feb 20 04:57:02 localhost ovn_metadata_agent[161761]: 2026-02-20 09:57:02.289 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp77c820b4-9646-5dae-aabe-b13a62b01d06-039b20b8-16a8-495e-968a-63fcd66a566c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-039b20b8-16a8-495e-968a-63fcd66a566c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '62dd1d7e0cf547678612304aba2895e2', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ec4f10d3-21c0-4f32-97fb-37b6b004b601, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=d4de62b2-b6cb-4b77-af79-1237680b2f9a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:57:02 localhost ovn_metadata_agent[161761]: 2026-02-20 09:57:02.291 161766 INFO neutron.agent.ovn.metadata.agent [-] Port d4de62b2-b6cb-4b77-af79-1237680b2f9a in datapath 039b20b8-16a8-495e-968a-63fcd66a566c bound to our chassis#033[00m Feb 20 04:57:02 localhost ovn_metadata_agent[161761]: 2026-02-20 09:57:02.294 161766 DEBUG neutron.agent.ovn.metadata.agent [-] Port 1b8aed8a-4745-4b4e-915f-cd721fc6c778 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Feb 20 04:57:02 localhost ovn_metadata_agent[161761]: 2026-02-20 09:57:02.294 161766 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 039b20b8-16a8-495e-968a-63fcd66a566c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 20 04:57:02 localhost ovn_metadata_agent[161761]: 2026-02-20 09:57:02.296 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[f89d150c-f672-4d04-94ce-0fb1f7434ede]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:57:02 localhost journal[229367]: ethtool ioctl error on tapd4de62b2-b6: No such device Feb 20 04:57:02 localhost journal[229367]: ethtool ioctl error on tapd4de62b2-b6: No such device Feb 20 04:57:02 localhost journal[229367]: ethtool ioctl error on tapd4de62b2-b6: No such device Feb 20 04:57:02 localhost nova_compute[280804]: 2026-02-20 09:57:02.314 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:57:02 localhost journal[229367]: ethtool ioctl error on tapd4de62b2-b6: No such device Feb 20 04:57:02 localhost ovn_controller[155916]: 2026-02-20T09:57:02Z|00231|binding|INFO|Setting lport d4de62b2-b6cb-4b77-af79-1237680b2f9a ovn-installed in OVS Feb 20 04:57:02 localhost ovn_controller[155916]: 2026-02-20T09:57:02Z|00232|binding|INFO|Setting lport d4de62b2-b6cb-4b77-af79-1237680b2f9a up in Southbound Feb 20 04:57:02 localhost nova_compute[280804]: 2026-02-20 09:57:02.321 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:57:02 localhost journal[229367]: ethtool ioctl error on tapd4de62b2-b6: No such device Feb 20 04:57:02 localhost journal[229367]: ethtool ioctl error on tapd4de62b2-b6: No 
such device Feb 20 04:57:02 localhost journal[229367]: ethtool ioctl error on tapd4de62b2-b6: No such device Feb 20 04:57:02 localhost journal[229367]: ethtool ioctl error on tapd4de62b2-b6: No such device Feb 20 04:57:02 localhost nova_compute[280804]: 2026-02-20 09:57:02.343 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:57:02 localhost nova_compute[280804]: 2026-02-20 09:57:02.376 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:57:02 localhost nova_compute[280804]: 2026-02-20 09:57:02.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:57:02 localhost nova_compute[280804]: 2026-02-20 09:57:02.511 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:57:02 localhost nova_compute[280804]: 2026-02-20 09:57:02.512 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 20 04:57:02 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:57:02 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e165 do_prune osdmap full prune enabled Feb 20 04:57:02 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e166 e166: 6 total, 6 up, 6 in Feb 20 04:57:02 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e166: 6 total, 6 up, 6 in Feb 20 04:57:03 localhost neutron_sriov_agent[256551]: 2026-02-20 09:57:03.041 2 INFO neutron.agent.securitygroups_rpc [None req-f7f4080a-576f-4ec0-afc4-2b369a4e24bc 90a02ec8973644daaf9f628e26b82aba 68587c4c15964f28ad6d155288e119b0 - - default default] Security group rule updated ['602964d2-c9d4-4795-879d-2f4697b07a9a']#033[00m Feb 20 04:57:03 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "9754040c-efe1-4b31-bb60-2e1a242177cd", "snap_name": "2b637ee9-db13-447d-b623-b978babd5cfe_82c45f12-897c-4165-a7b7-4039d7d47e93", "force": true, "format": "json"}]: dispatch Feb 20 04:57:03 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2b637ee9-db13-447d-b623-b978babd5cfe_82c45f12-897c-4165-a7b7-4039d7d47e93, sub_name:9754040c-efe1-4b31-bb60-2e1a242177cd, vol_name:cephfs) < "" Feb 20 04:57:03 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9754040c-efe1-4b31-bb60-2e1a242177cd/.meta.tmp' Feb 20 04:57:03 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed 
b'/volumes/_nogroup/9754040c-efe1-4b31-bb60-2e1a242177cd/.meta.tmp' to config b'/volumes/_nogroup/9754040c-efe1-4b31-bb60-2e1a242177cd/.meta' Feb 20 04:57:03 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2b637ee9-db13-447d-b623-b978babd5cfe_82c45f12-897c-4165-a7b7-4039d7d47e93, sub_name:9754040c-efe1-4b31-bb60-2e1a242177cd, vol_name:cephfs) < "" Feb 20 04:57:03 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "9754040c-efe1-4b31-bb60-2e1a242177cd", "snap_name": "2b637ee9-db13-447d-b623-b978babd5cfe", "force": true, "format": "json"}]: dispatch Feb 20 04:57:03 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2b637ee9-db13-447d-b623-b978babd5cfe, sub_name:9754040c-efe1-4b31-bb60-2e1a242177cd, vol_name:cephfs) < "" Feb 20 04:57:03 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9754040c-efe1-4b31-bb60-2e1a242177cd/.meta.tmp' Feb 20 04:57:03 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9754040c-efe1-4b31-bb60-2e1a242177cd/.meta.tmp' to config b'/volumes/_nogroup/9754040c-efe1-4b31-bb60-2e1a242177cd/.meta' Feb 20 04:57:03 localhost podman[321770]: Feb 20 04:57:03 localhost nova_compute[280804]: 2026-02-20 09:57:03.269 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:57:03 localhost podman[321770]: 2026-02-20 09:57:03.278209979 +0000 UTC m=+0.094034621 container create 6ce099a86f5a37e53451dc168b901e107648a82c2963f6886b541439bdc00559 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-039b20b8-16a8-495e-968a-63fcd66a566c, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:57:03 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2b637ee9-db13-447d-b623-b978babd5cfe, sub_name:9754040c-efe1-4b31-bb60-2e1a242177cd, vol_name:cephfs) < "" Feb 20 04:57:03 localhost systemd[1]: Started libpod-conmon-6ce099a86f5a37e53451dc168b901e107648a82c2963f6886b541439bdc00559.scope. Feb 20 04:57:03 localhost podman[321770]: 2026-02-20 09:57:03.22183171 +0000 UTC m=+0.037656412 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 20 04:57:03 localhost systemd[1]: tmp-crun.kK6a2z.mount: Deactivated successfully. Feb 20 04:57:03 localhost systemd[1]: Started libcrun container. 
Feb 20 04:57:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3404218bb68f9c28f13b7d41045b2ea056207d4d490dd8777c7a73ad23295011/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 20 04:57:03 localhost podman[321770]: 2026-02-20 09:57:03.361920613 +0000 UTC m=+0.177745255 container init 6ce099a86f5a37e53451dc168b901e107648a82c2963f6886b541439bdc00559 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-039b20b8-16a8-495e-968a-63fcd66a566c, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3) Feb 20 04:57:03 localhost podman[321770]: 2026-02-20 09:57:03.373546862 +0000 UTC m=+0.189371504 container start 6ce099a86f5a37e53451dc168b901e107648a82c2963f6886b541439bdc00559 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-039b20b8-16a8-495e-968a-63fcd66a566c, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:57:03 localhost dnsmasq[321788]: started, version 2.85 cachesize 150 Feb 20 04:57:03 localhost dnsmasq[321788]: DNS service limited to local subnets Feb 20 04:57:03 localhost dnsmasq[321788]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 20 04:57:03 localhost dnsmasq[321788]: warning: no upstream servers 
configured Feb 20 04:57:03 localhost dnsmasq-dhcp[321788]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 20 04:57:03 localhost dnsmasq[321788]: read /var/lib/neutron/dhcp/039b20b8-16a8-495e-968a-63fcd66a566c/addn_hosts - 0 addresses Feb 20 04:57:03 localhost dnsmasq-dhcp[321788]: read /var/lib/neutron/dhcp/039b20b8-16a8-495e-968a-63fcd66a566c/host Feb 20 04:57:03 localhost dnsmasq-dhcp[321788]: read /var/lib/neutron/dhcp/039b20b8-16a8-495e-968a-63fcd66a566c/opts Feb 20 04:57:03 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:57:03.538 263745 INFO neutron.agent.dhcp.agent [None req-c99a4c27-88ea-4ae5-adb5-9184fdcda22c - - - - - -] DHCP configuration for ports {'5d8efbfc-f220-4440-9b71-5ab21a9bb5f9', '3568ed7b-9263-43af-b4fd-ae333afc9a3b'} is completed#033[00m Feb 20 04:57:03 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v338: 177 pgs: 177 active+clean; 193 MiB data, 917 MiB used, 41 GiB / 42 GiB avail; 136 KiB/s rd, 13 KiB/s wr, 183 op/s Feb 20 04:57:03 localhost neutron_sriov_agent[256551]: 2026-02-20 09:57:03.614 2 INFO neutron.agent.securitygroups_rpc [None req-7415ebd2-08fb-4812-9524-708fe60e5aaa 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['efc53d5c-88f6-4ec9-8815-9d765811b12e']#033[00m Feb 20 04:57:03 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:57:03.662 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:57:03Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=141f1d4f-7cde-4b74-bda8-feba92b9c128, ip_allocation=immediate, mac_address=fa:16:3e:cb:ea:b8, name=tempest-PortsTestJSON-176885475, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], 
created_at=2026-02-20T09:55:54Z, description=, dns_domain=, id=039b20b8-16a8-495e-968a-63fcd66a566c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsTestJSON-test-network-638300476, port_security_enabled=True, project_id=62dd1d7e0cf547678612304aba2895e2, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=57897, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2202, status=ACTIVE, subnets=['184a1b0f-397c-4ca0-96d1-cf7c41eb214a'], tags=[], tenant_id=62dd1d7e0cf547678612304aba2895e2, updated_at=2026-02-20T09:57:00Z, vlan_transparent=None, network_id=039b20b8-16a8-495e-968a-63fcd66a566c, port_security_enabled=True, project_id=62dd1d7e0cf547678612304aba2895e2, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['efc53d5c-88f6-4ec9-8815-9d765811b12e'], standard_attr_id=2459, status=DOWN, tags=[], tenant_id=62dd1d7e0cf547678612304aba2895e2, updated_at=2026-02-20T09:57:03Z on network 039b20b8-16a8-495e-968a-63fcd66a566c#033[00m Feb 20 04:57:03 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e166 do_prune osdmap full prune enabled Feb 20 04:57:03 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e167 e167: 6 total, 6 up, 6 in Feb 20 04:57:03 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e167: 6 total, 6 up, 6 in Feb 20 04:57:03 localhost dnsmasq[321788]: read /var/lib/neutron/dhcp/039b20b8-16a8-495e-968a-63fcd66a566c/addn_hosts - 1 addresses Feb 20 04:57:03 localhost dnsmasq-dhcp[321788]: read /var/lib/neutron/dhcp/039b20b8-16a8-495e-968a-63fcd66a566c/host Feb 20 04:57:03 localhost dnsmasq-dhcp[321788]: read /var/lib/neutron/dhcp/039b20b8-16a8-495e-968a-63fcd66a566c/opts Feb 20 04:57:03 localhost podman[321804]: 2026-02-20 09:57:03.89389582 +0000 UTC m=+0.067495264 container kill 6ce099a86f5a37e53451dc168b901e107648a82c2963f6886b541439bdc00559 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-039b20b8-16a8-495e-968a-63fcd66a566c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Feb 20 04:57:03 localhost neutron_sriov_agent[256551]: 2026-02-20 09:57:03.995 2 INFO neutron.agent.securitygroups_rpc [None req-f1e881f7-c612-45f2-b27b-1f5d8fe2e21f f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']#033[00m Feb 20 04:57:04 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:57:04.176 263745 INFO neutron.agent.dhcp.agent [None req-79f0be27-2270-49d7-b540-2c3a4ac1407c - - - - - -] DHCP configuration for ports {'141f1d4f-7cde-4b74-bda8-feba92b9c128'} is completed#033[00m Feb 20 04:57:04 localhost neutron_sriov_agent[256551]: 2026-02-20 09:57:04.442 2 INFO neutron.agent.securitygroups_rpc [None req-0e7411d4-9e8e-44df-aee4-9bd8dc94f75f f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']#033[00m Feb 20 04:57:04 localhost nova_compute[280804]: 2026-02-20 09:57:04.508 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:57:04 localhost nova_compute[280804]: 2026-02-20 09:57:04.509 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:57:04 localhost nova_compute[280804]: 2026-02-20 09:57:04.532 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:57:04 localhost nova_compute[280804]: 2026-02-20 09:57:04.533 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:57:04 localhost nova_compute[280804]: 2026-02-20 09:57:04.533 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:57:04 localhost nova_compute[280804]: 2026-02-20 09:57:04.533 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Auditing locally available compute resources for np0005625202.localdomain (node: np0005625202.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 20 04:57:04 localhost nova_compute[280804]: 2026-02-20 09:57:04.534 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:57:04 localhost dnsmasq[321788]: exiting on receipt 
of SIGTERM Feb 20 04:57:04 localhost podman[321863]: 2026-02-20 09:57:04.8082441 +0000 UTC m=+0.060171880 container kill 6ce099a86f5a37e53451dc168b901e107648a82c2963f6886b541439bdc00559 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-039b20b8-16a8-495e-968a-63fcd66a566c, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Feb 20 04:57:04 localhost systemd[1]: libpod-6ce099a86f5a37e53451dc168b901e107648a82c2963f6886b541439bdc00559.scope: Deactivated successfully. Feb 20 04:57:04 localhost podman[321875]: 2026-02-20 09:57:04.887023774 +0000 UTC m=+0.066303584 container died 6ce099a86f5a37e53451dc168b901e107648a82c2963f6886b541439bdc00559 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-039b20b8-16a8-495e-968a-63fcd66a566c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.license=GPLv2) Feb 20 04:57:04 localhost systemd[1]: tmp-crun.p3f6eC.mount: Deactivated successfully. 
Feb 20 04:57:04 localhost podman[321875]: 2026-02-20 09:57:04.924869349 +0000 UTC m=+0.104149179 container cleanup 6ce099a86f5a37e53451dc168b901e107648a82c2963f6886b541439bdc00559 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-039b20b8-16a8-495e-968a-63fcd66a566c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127) Feb 20 04:57:04 localhost systemd[1]: libpod-conmon-6ce099a86f5a37e53451dc168b901e107648a82c2963f6886b541439bdc00559.scope: Deactivated successfully. Feb 20 04:57:04 localhost podman[321877]: 2026-02-20 09:57:04.970057371 +0000 UTC m=+0.138426101 container remove 6ce099a86f5a37e53451dc168b901e107648a82c2963f6886b541439bdc00559 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-039b20b8-16a8-495e-968a-63fcd66a566c, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Feb 20 04:57:04 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 20 04:57:04 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/2301409396' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 20 04:57:04 localhost nova_compute[280804]: 2026-02-20 09:57:04.992 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:57:05 localhost nova_compute[280804]: 2026-02-20 09:57:05.097 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:57:05 localhost nova_compute[280804]: 2026-02-20 09:57:05.163 280808 WARNING nova.virt.libvirt.driver [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 20 04:57:05 localhost nova_compute[280804]: 2026-02-20 09:57:05.164 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Hypervisor/Node resource view: name=np0005625202.localdomain free_ram=11523MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", 
"address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 20 04:57:05 localhost nova_compute[280804]: 2026-02-20 09:57:05.164 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:57:05 localhost nova_compute[280804]: 2026-02-20 09:57:05.164 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:57:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:57:05.189 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:6d:09 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-039b20b8-16a8-495e-968a-63fcd66a566c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-039b20b8-16a8-495e-968a-63fcd66a566c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '62dd1d7e0cf547678612304aba2895e2', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ec4f10d3-21c0-4f32-97fb-37b6b004b601, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=3568ed7b-9263-43af-b4fd-ae333afc9a3b) old=Port_Binding(mac=['fa:16:3e:68:6d:09 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-039b20b8-16a8-495e-968a-63fcd66a566c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-039b20b8-16a8-495e-968a-63fcd66a566c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '62dd1d7e0cf547678612304aba2895e2', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 
'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:57:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:57:05.191 161766 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 3568ed7b-9263-43af-b4fd-ae333afc9a3b in datapath 039b20b8-16a8-495e-968a-63fcd66a566c updated#033[00m Feb 20 04:57:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:57:05.193 161766 DEBUG neutron.agent.ovn.metadata.agent [-] Port 1b8aed8a-4745-4b4e-915f-cd721fc6c778 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Feb 20 04:57:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:57:05.193 161766 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 039b20b8-16a8-495e-968a-63fcd66a566c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 20 04:57:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:57:05.194 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[0e2e6db7-f5cc-4de6-aa27-d11528b96037]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:57:05 localhost nova_compute[280804]: 2026-02-20 09:57:05.223 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 20 04:57:05 localhost nova_compute[280804]: 2026-02-20 09:57:05.224 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Final resource view: name=np0005625202.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 20 04:57:05 localhost nova_compute[280804]: 2026-02-20 09:57:05.240 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:57:05 localhost systemd[1]: var-lib-containers-storage-overlay-3404218bb68f9c28f13b7d41045b2ea056207d4d490dd8777c7a73ad23295011-merged.mount: Deactivated successfully. Feb 20 04:57:05 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6ce099a86f5a37e53451dc168b901e107648a82c2963f6886b541439bdc00559-userdata-shm.mount: Deactivated successfully. Feb 20 04:57:05 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v340: 177 pgs: 177 active+clean; 193 MiB data, 902 MiB used, 41 GiB / 42 GiB avail; 139 KiB/s rd, 51 KiB/s wr, 190 op/s Feb 20 04:57:05 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 20 04:57:05 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/4294664811' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 20 04:57:05 localhost nova_compute[280804]: 2026-02-20 09:57:05.680 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:57:05 localhost nova_compute[280804]: 2026-02-20 09:57:05.689 280808 DEBUG nova.compute.provider_tree [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 20 04:57:05 localhost nova_compute[280804]: 2026-02-20 09:57:05.707 280808 DEBUG nova.scheduler.client.report [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Inventory has not changed for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 20 04:57:05 localhost nova_compute[280804]: 2026-02-20 09:57:05.711 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Compute_service record updated for np0005625202.localdomain:np0005625202.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 20 04:57:05 localhost nova_compute[280804]: 2026-02-20 09:57:05.711 280808 DEBUG 
oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.547s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:57:05 localhost neutron_sriov_agent[256551]: 2026-02-20 09:57:05.916 2 INFO neutron.agent.securitygroups_rpc [None req-e5b5b6c2-ec9f-4e00-8939-9097e06787f1 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['c1686fb3-a7b5-4191-9c5e-7c249c4e6c3c', 'efc53d5c-88f6-4ec9-8815-9d765811b12e']#033[00m Feb 20 04:57:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:57:05.922 161766 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:57:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:57:05.922 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:57:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:57:05.922 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:57:06 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "56e65ef4-1a61-4d5a-9d2e-c59d2323fd6c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch 
Feb 20 04:57:06 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:56e65ef4-1a61-4d5a-9d2e-c59d2323fd6c, vol_name:cephfs) < "" Feb 20 04:57:06 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/56e65ef4-1a61-4d5a-9d2e-c59d2323fd6c/.meta.tmp' Feb 20 04:57:06 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/56e65ef4-1a61-4d5a-9d2e-c59d2323fd6c/.meta.tmp' to config b'/volumes/_nogroup/56e65ef4-1a61-4d5a-9d2e-c59d2323fd6c/.meta' Feb 20 04:57:06 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:56e65ef4-1a61-4d5a-9d2e-c59d2323fd6c, vol_name:cephfs) < "" Feb 20 04:57:06 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "56e65ef4-1a61-4d5a-9d2e-c59d2323fd6c", "format": "json"}]: dispatch Feb 20 04:57:06 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:56e65ef4-1a61-4d5a-9d2e-c59d2323fd6c, vol_name:cephfs) < "" Feb 20 04:57:06 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:56e65ef4-1a61-4d5a-9d2e-c59d2323fd6c, vol_name:cephfs) < "" Feb 20 04:57:06 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "9754040c-efe1-4b31-bb60-2e1a242177cd", "snap_name": 
"f96ea30b-5993-4393-8f64-efd08707fd5f_ff9c641b-ea0b-431f-af91-e17e9c0dd44a", "force": true, "format": "json"}]: dispatch Feb 20 04:57:06 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f96ea30b-5993-4393-8f64-efd08707fd5f_ff9c641b-ea0b-431f-af91-e17e9c0dd44a, sub_name:9754040c-efe1-4b31-bb60-2e1a242177cd, vol_name:cephfs) < "" Feb 20 04:57:06 localhost neutron_sriov_agent[256551]: 2026-02-20 09:57:06.389 2 INFO neutron.agent.securitygroups_rpc [None req-15e35e99-a929-4578-aaaf-c6b96452307f 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['c1686fb3-a7b5-4191-9c5e-7c249c4e6c3c']#033[00m Feb 20 04:57:06 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9754040c-efe1-4b31-bb60-2e1a242177cd/.meta.tmp' Feb 20 04:57:06 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9754040c-efe1-4b31-bb60-2e1a242177cd/.meta.tmp' to config b'/volumes/_nogroup/9754040c-efe1-4b31-bb60-2e1a242177cd/.meta' Feb 20 04:57:06 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f96ea30b-5993-4393-8f64-efd08707fd5f_ff9c641b-ea0b-431f-af91-e17e9c0dd44a, sub_name:9754040c-efe1-4b31-bb60-2e1a242177cd, vol_name:cephfs) < "" Feb 20 04:57:06 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "9754040c-efe1-4b31-bb60-2e1a242177cd", "snap_name": "f96ea30b-5993-4393-8f64-efd08707fd5f", "force": true, "format": "json"}]: dispatch Feb 20 04:57:06 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f96ea30b-5993-4393-8f64-efd08707fd5f, sub_name:9754040c-efe1-4b31-bb60-2e1a242177cd, vol_name:cephfs) < "" Feb 20 04:57:06 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9754040c-efe1-4b31-bb60-2e1a242177cd/.meta.tmp' Feb 20 04:57:06 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9754040c-efe1-4b31-bb60-2e1a242177cd/.meta.tmp' to config b'/volumes/_nogroup/9754040c-efe1-4b31-bb60-2e1a242177cd/.meta' Feb 20 04:57:06 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f96ea30b-5993-4393-8f64-efd08707fd5f, sub_name:9754040c-efe1-4b31-bb60-2e1a242177cd, vol_name:cephfs) < "" Feb 20 04:57:06 localhost podman[321975]: Feb 20 04:57:06 localhost podman[321975]: 2026-02-20 09:57:06.544727099 +0000 UTC m=+0.078531068 container create 22a96581c257fcc7b9eb00a6c01795023d06e91bff01a7f2a6d9385b15aec6e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-039b20b8-16a8-495e-968a-63fcd66a566c, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true) Feb 20 04:57:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c. Feb 20 04:57:06 localhost systemd[1]: Started libpod-conmon-22a96581c257fcc7b9eb00a6c01795023d06e91bff01a7f2a6d9385b15aec6e4.scope. 
Feb 20 04:57:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217. Feb 20 04:57:06 localhost podman[321975]: 2026-02-20 09:57:06.506094472 +0000 UTC m=+0.039898491 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 20 04:57:06 localhost systemd[1]: tmp-crun.aCVyxK.mount: Deactivated successfully. Feb 20 04:57:06 localhost systemd[1]: Started libcrun container. Feb 20 04:57:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83dd873f8ebf602bf007d032221e8e152039c79ac12266d30a5ea151eb781554/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 20 04:57:06 localhost podman[321975]: 2026-02-20 09:57:06.643320879 +0000 UTC m=+0.177124848 container init 22a96581c257fcc7b9eb00a6c01795023d06e91bff01a7f2a6d9385b15aec6e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-039b20b8-16a8-495e-968a-63fcd66a566c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3) Feb 20 04:57:06 localhost podman[321975]: 2026-02-20 09:57:06.653439038 +0000 UTC m=+0.187243007 container start 22a96581c257fcc7b9eb00a6c01795023d06e91bff01a7f2a6d9385b15aec6e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-039b20b8-16a8-495e-968a-63fcd66a566c, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, 
org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:57:06 localhost dnsmasq[322017]: started, version 2.85 cachesize 150 Feb 20 04:57:06 localhost dnsmasq[322017]: DNS service limited to local subnets Feb 20 04:57:06 localhost dnsmasq[322017]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 20 04:57:06 localhost dnsmasq[322017]: warning: no upstream servers configured Feb 20 04:57:06 localhost dnsmasq-dhcp[322017]: DHCP, static leases only on 10.100.0.16, lease time 1d Feb 20 04:57:06 localhost dnsmasq-dhcp[322017]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 20 04:57:06 localhost dnsmasq[322017]: read /var/lib/neutron/dhcp/039b20b8-16a8-495e-968a-63fcd66a566c/addn_hosts - 1 addresses Feb 20 04:57:06 localhost dnsmasq-dhcp[322017]: read /var/lib/neutron/dhcp/039b20b8-16a8-495e-968a-63fcd66a566c/host Feb 20 04:57:06 localhost dnsmasq-dhcp[322017]: read /var/lib/neutron/dhcp/039b20b8-16a8-495e-968a-63fcd66a566c/opts Feb 20 04:57:06 localhost podman[321993]: 2026-02-20 09:57:06.695058644 +0000 UTC m=+0.089484030 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 
'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:57:06 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:57:06.706 263745 INFO neutron.agent.dhcp.agent [None req-3d096aeb-2c11-4bfd-b4c3-1676a9814060 - - - - - -] Trigger reload_allocations for port admin_state_up=False, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:57:03Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=141f1d4f-7cde-4b74-bda8-feba92b9c128, ip_allocation=immediate, mac_address=fa:16:3e:cb:ea:b8, name=tempest-PortsTestJSON-1036550654, network_id=039b20b8-16a8-495e-968a-63fcd66a566c, port_security_enabled=True, project_id=62dd1d7e0cf547678612304aba2895e2, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, 
security_groups=['c1686fb3-a7b5-4191-9c5e-7c249c4e6c3c'], standard_attr_id=2459, status=DOWN, tags=[], tenant_id=62dd1d7e0cf547678612304aba2895e2, updated_at=2026-02-20T09:57:05Z on network 039b20b8-16a8-495e-968a-63fcd66a566c#033[00m Feb 20 04:57:06 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:57:06.711 263745 INFO oslo.privsep.daemon [None req-3d096aeb-2c11-4bfd-b4c3-1676a9814060 - - - - - -] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.dhcp_release_cmd', '--privsep_sock_path', '/tmp/tmpb10dp9lv/privsep.sock']#033[00m Feb 20 04:57:06 localhost podman[321993]: 2026-02-20 09:57:06.739820293 +0000 UTC m=+0.134245679 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 
'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Feb 20 04:57:06 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully. Feb 20 04:57:06 localhost podman[321990]: 2026-02-20 09:57:06.779919169 +0000 UTC m=+0.178112004 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, config_id=openstack_network_exporter, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, release=1770267347, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., org.opencontainers.image.created=2026-02-05T04:57:10Z, name=ubi9/ubi-minimal) Feb 20 04:57:06 localhost podman[321990]: 2026-02-20 09:57:06.795830022 +0000 UTC m=+0.194022847 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, release=1770267347, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, name=ubi9/ubi-minimal, io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-05T04:57:10Z, maintainer=Red Hat, Inc., config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., build-date=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, managed_by=edpm_ansible) Feb 20 04:57:06 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. Feb 20 04:57:06 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:57:06.974 263745 INFO neutron.agent.dhcp.agent [None req-abbdf3aa-39cc-467b-986f-782251e43561 - - - - - -] DHCP configuration for ports {'d4de62b2-b6cb-4b77-af79-1237680b2f9a', '141f1d4f-7cde-4b74-bda8-feba92b9c128', '5d8efbfc-f220-4440-9b71-5ab21a9bb5f9', '3568ed7b-9263-43af-b4fd-ae333afc9a3b'} is completed#033[00m Feb 20 04:57:07 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e167 do_prune osdmap full prune enabled Feb 20 04:57:07 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 20 04:57:07 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2989862322' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 20 04:57:07 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 20 04:57:07 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/2989862322' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 20 04:57:07 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e168 e168: 6 total, 6 up, 6 in Feb 20 04:57:07 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e168: 6 total, 6 up, 6 in Feb 20 04:57:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:57:07.234 161766 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0a83b6be-9fe2-42ef-8768-88847d97b165, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 20 04:57:07 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:57:07.390 263745 INFO oslo.privsep.daemon [None req-3d096aeb-2c11-4bfd-b4c3-1676a9814060 - - - - - -] Spawned new privsep daemon via rootwrap#033[00m Feb 20 04:57:07 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:57:07.301 322037 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Feb 20 04:57:07 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:57:07.306 322037 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Feb 20 04:57:07 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:57:07.309 322037 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m Feb 20 04:57:07 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:57:07.310 322037 INFO oslo.privsep.daemon [-] privsep daemon running as pid 322037#033[00m Feb 20 04:57:07 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "397e3bc9-4e24-4148-9154-b1a0a65a1d98", "size": 1073741824, "namespace_isolated": 
true, "mode": "0755", "format": "json"}]: dispatch Feb 20 04:57:07 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:397e3bc9-4e24-4148-9154-b1a0a65a1d98, vol_name:cephfs) < "" Feb 20 04:57:07 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v342: 177 pgs: 177 active+clean; 193 MiB data, 902 MiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 50 KiB/s wr, 37 op/s Feb 20 04:57:07 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/397e3bc9-4e24-4148-9154-b1a0a65a1d98/.meta.tmp' Feb 20 04:57:07 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/397e3bc9-4e24-4148-9154-b1a0a65a1d98/.meta.tmp' to config b'/volumes/_nogroup/397e3bc9-4e24-4148-9154-b1a0a65a1d98/.meta' Feb 20 04:57:07 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:397e3bc9-4e24-4148-9154-b1a0a65a1d98, vol_name:cephfs) < "" Feb 20 04:57:07 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "397e3bc9-4e24-4148-9154-b1a0a65a1d98", "format": "json"}]: dispatch Feb 20 04:57:07 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:397e3bc9-4e24-4148-9154-b1a0a65a1d98, vol_name:cephfs) < "" Feb 20 04:57:07 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:397e3bc9-4e24-4148-9154-b1a0a65a1d98, vol_name:cephfs) < "" Feb 20 04:57:07 localhost 
dnsmasq-dhcp[322017]: DHCPRELEASE(tapd4de62b2-b6) 10.100.0.11 fa:16:3e:cb:ea:b8 Feb 20 04:57:07 localhost nova_compute[280804]: 2026-02-20 09:57:07.714 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:57:07 localhost nova_compute[280804]: 2026-02-20 09:57:07.714 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:57:07 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:57:08 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e168 do_prune osdmap full prune enabled Feb 20 04:57:08 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e169 e169: 6 total, 6 up, 6 in Feb 20 04:57:08 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e169: 6 total, 6 up, 6 in Feb 20 04:57:08 localhost podman[322059]: 2026-02-20 09:57:08.195729905 +0000 UTC m=+0.067754151 container kill 22a96581c257fcc7b9eb00a6c01795023d06e91bff01a7f2a6d9385b15aec6e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-039b20b8-16a8-495e-968a-63fcd66a566c, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:57:08 localhost systemd[1]: tmp-crun.gjjVwG.mount: Deactivated successfully. 
Feb 20 04:57:08 localhost dnsmasq[322017]: read /var/lib/neutron/dhcp/039b20b8-16a8-495e-968a-63fcd66a566c/addn_hosts - 1 addresses Feb 20 04:57:08 localhost dnsmasq-dhcp[322017]: read /var/lib/neutron/dhcp/039b20b8-16a8-495e-968a-63fcd66a566c/host Feb 20 04:57:08 localhost dnsmasq-dhcp[322017]: read /var/lib/neutron/dhcp/039b20b8-16a8-495e-968a-63fcd66a566c/opts Feb 20 04:57:08 localhost nova_compute[280804]: 2026-02-20 09:57:08.312 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:57:08 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:57:08.417 263745 INFO neutron.agent.dhcp.agent [None req-d9146a54-5fa6-421e-90da-35f0ab20989f - - - - - -] DHCP configuration for ports {'141f1d4f-7cde-4b74-bda8-feba92b9c128'} is completed#033[00m Feb 20 04:57:08 localhost dnsmasq[322017]: exiting on receipt of SIGTERM Feb 20 04:57:08 localhost podman[322095]: 2026-02-20 09:57:08.641466991 +0000 UTC m=+0.072246551 container kill 22a96581c257fcc7b9eb00a6c01795023d06e91bff01a7f2a6d9385b15aec6e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-039b20b8-16a8-495e-968a-63fcd66a566c, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3) Feb 20 04:57:08 localhost systemd[1]: tmp-crun.0JyO7i.mount: Deactivated successfully. Feb 20 04:57:08 localhost systemd[1]: libpod-22a96581c257fcc7b9eb00a6c01795023d06e91bff01a7f2a6d9385b15aec6e4.scope: Deactivated successfully. 
Feb 20 04:57:08 localhost podman[322108]: 2026-02-20 09:57:08.710205518 +0000 UTC m=+0.055596008 container died 22a96581c257fcc7b9eb00a6c01795023d06e91bff01a7f2a6d9385b15aec6e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-039b20b8-16a8-495e-968a-63fcd66a566c, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Feb 20 04:57:08 localhost podman[322108]: 2026-02-20 09:57:08.746090192 +0000 UTC m=+0.091480632 container cleanup 22a96581c257fcc7b9eb00a6c01795023d06e91bff01a7f2a6d9385b15aec6e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-039b20b8-16a8-495e-968a-63fcd66a566c, tcib_managed=true, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Feb 20 04:57:08 localhost systemd[1]: libpod-conmon-22a96581c257fcc7b9eb00a6c01795023d06e91bff01a7f2a6d9385b15aec6e4.scope: Deactivated successfully. 
Feb 20 04:57:08 localhost podman[322110]: 2026-02-20 09:57:08.798830343 +0000 UTC m=+0.132295987 container remove 22a96581c257fcc7b9eb00a6c01795023d06e91bff01a7f2a6d9385b15aec6e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-039b20b8-16a8-495e-968a-63fcd66a566c, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:57:09 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "56e65ef4-1a61-4d5a-9d2e-c59d2323fd6c", "snap_name": "faa1b419-44e8-41e3-a207-23a7805bf4c9", "format": "json"}]: dispatch Feb 20 04:57:09 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:faa1b419-44e8-41e3-a207-23a7805bf4c9, sub_name:56e65ef4-1a61-4d5a-9d2e-c59d2323fd6c, vol_name:cephfs) < "" Feb 20 04:57:09 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:faa1b419-44e8-41e3-a207-23a7805bf4c9, sub_name:56e65ef4-1a61-4d5a-9d2e-c59d2323fd6c, vol_name:cephfs) < "" Feb 20 04:57:09 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "9754040c-efe1-4b31-bb60-2e1a242177cd", "snap_name": "877bc86e-3ae3-4788-a4cd-ecc7ec5a94e2_1280651e-fbca-4fea-b6ea-1b1c294293df", "force": true, "format": "json"}]: dispatch Feb 20 04:57:09 localhost ceph-mgr[286565]: [volumes INFO 
volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:877bc86e-3ae3-4788-a4cd-ecc7ec5a94e2_1280651e-fbca-4fea-b6ea-1b1c294293df, sub_name:9754040c-efe1-4b31-bb60-2e1a242177cd, vol_name:cephfs) < "" Feb 20 04:57:09 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v344: 177 pgs: 177 active+clean; 193 MiB data, 903 MiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 88 KiB/s wr, 42 op/s Feb 20 04:57:09 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9754040c-efe1-4b31-bb60-2e1a242177cd/.meta.tmp' Feb 20 04:57:09 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9754040c-efe1-4b31-bb60-2e1a242177cd/.meta.tmp' to config b'/volumes/_nogroup/9754040c-efe1-4b31-bb60-2e1a242177cd/.meta' Feb 20 04:57:09 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:877bc86e-3ae3-4788-a4cd-ecc7ec5a94e2_1280651e-fbca-4fea-b6ea-1b1c294293df, sub_name:9754040c-efe1-4b31-bb60-2e1a242177cd, vol_name:cephfs) < "" Feb 20 04:57:09 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "9754040c-efe1-4b31-bb60-2e1a242177cd", "snap_name": "877bc86e-3ae3-4788-a4cd-ecc7ec5a94e2", "force": true, "format": "json"}]: dispatch Feb 20 04:57:09 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:877bc86e-3ae3-4788-a4cd-ecc7ec5a94e2, sub_name:9754040c-efe1-4b31-bb60-2e1a242177cd, vol_name:cephfs) < "" Feb 20 04:57:09 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 
bytes to config b'/volumes/_nogroup/9754040c-efe1-4b31-bb60-2e1a242177cd/.meta.tmp' Feb 20 04:57:09 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9754040c-efe1-4b31-bb60-2e1a242177cd/.meta.tmp' to config b'/volumes/_nogroup/9754040c-efe1-4b31-bb60-2e1a242177cd/.meta' Feb 20 04:57:09 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:877bc86e-3ae3-4788-a4cd-ecc7ec5a94e2, sub_name:9754040c-efe1-4b31-bb60-2e1a242177cd, vol_name:cephfs) < "" Feb 20 04:57:09 localhost systemd[1]: var-lib-containers-storage-overlay-83dd873f8ebf602bf007d032221e8e152039c79ac12266d30a5ea151eb781554-merged.mount: Deactivated successfully. Feb 20 04:57:09 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-22a96581c257fcc7b9eb00a6c01795023d06e91bff01a7f2a6d9385b15aec6e4-userdata-shm.mount: Deactivated successfully. Feb 20 04:57:09 localhost neutron_sriov_agent[256551]: 2026-02-20 09:57:09.636 2 INFO neutron.agent.securitygroups_rpc [None req-d187057a-39e5-4c52-82a0-d1bcafd46a90 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['46d5d21d-63a5-4d3d-a013-7b21b89cdba7']#033[00m Feb 20 04:57:09 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 20 04:57:09 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3137678042' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 20 04:57:09 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 20 04:57:09 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/3137678042' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 20 04:57:10 localhost nova_compute[280804]: 2026-02-20 09:57:10.145 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:57:10 localhost podman[322185]: Feb 20 04:57:10 localhost podman[322185]: 2026-02-20 09:57:10.250332408 +0000 UTC m=+0.087233970 container create 10f0240a8d7ae45938ce2c5825dbaf92ab60b70a60e4931d6b25c152f35b1aae (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-039b20b8-16a8-495e-968a-63fcd66a566c, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:57:10 localhost systemd[1]: Started libpod-conmon-10f0240a8d7ae45938ce2c5825dbaf92ab60b70a60e4931d6b25c152f35b1aae.scope. Feb 20 04:57:10 localhost systemd[1]: Started libcrun container. 
Feb 20 04:57:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfce898854dcdff490ff54c951c70b36c8908f4aa65970f80dfe95103e7e6471/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 20 04:57:10 localhost podman[322185]: 2026-02-20 09:57:10.309150961 +0000 UTC m=+0.146052513 container init 10f0240a8d7ae45938ce2c5825dbaf92ab60b70a60e4931d6b25c152f35b1aae (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-039b20b8-16a8-495e-968a-63fcd66a566c, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0) Feb 20 04:57:10 localhost podman[322185]: 2026-02-20 09:57:10.211862766 +0000 UTC m=+0.048764388 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 20 04:57:10 localhost dnsmasq[322203]: started, version 2.85 cachesize 150 Feb 20 04:57:10 localhost dnsmasq[322203]: DNS service limited to local subnets Feb 20 04:57:10 localhost dnsmasq[322203]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 20 04:57:10 localhost dnsmasq[322203]: warning: no upstream servers configured Feb 20 04:57:10 localhost dnsmasq-dhcp[322203]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 20 04:57:10 localhost dnsmasq-dhcp[322203]: DHCP, static leases only on 10.100.0.16, lease time 1d Feb 20 04:57:10 localhost dnsmasq[322203]: read /var/lib/neutron/dhcp/039b20b8-16a8-495e-968a-63fcd66a566c/addn_hosts - 0 addresses Feb 20 04:57:10 localhost dnsmasq-dhcp[322203]: read /var/lib/neutron/dhcp/039b20b8-16a8-495e-968a-63fcd66a566c/host Feb 20 04:57:10 
localhost dnsmasq-dhcp[322203]: read /var/lib/neutron/dhcp/039b20b8-16a8-495e-968a-63fcd66a566c/opts Feb 20 04:57:10 localhost podman[322185]: 2026-02-20 09:57:10.320324818 +0000 UTC m=+0.157226400 container start 10f0240a8d7ae45938ce2c5825dbaf92ab60b70a60e4931d6b25c152f35b1aae (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-039b20b8-16a8-495e-968a-63fcd66a566c, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Feb 20 04:57:10 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:57:10.358 263745 INFO neutron.agent.dhcp.agent [None req-7b482a08-3e58-4c40-8ebc-54e92991eb4f - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:57:09Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=956b4edd-2866-461e-a334-e568ca3af226, ip_allocation=immediate, mac_address=fa:16:3e:54:73:cb, name=tempest-PortsTestJSON-1408687097, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:55:54Z, description=, dns_domain=, id=039b20b8-16a8-495e-968a-63fcd66a566c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsTestJSON-test-network-638300476, port_security_enabled=True, project_id=62dd1d7e0cf547678612304aba2895e2, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=57897, qos_policy_id=None, revision_number=5, router:external=False, shared=False, standard_attr_id=2202, 
status=ACTIVE, subnets=['a837e17d-2630-45ea-9393-3907d6261152', 'aa8d6600-5ab0-4470-a62b-2a9f175d21e0'], tags=[], tenant_id=62dd1d7e0cf547678612304aba2895e2, updated_at=2026-02-20T09:57:07Z, vlan_transparent=None, network_id=039b20b8-16a8-495e-968a-63fcd66a566c, port_security_enabled=True, project_id=62dd1d7e0cf547678612304aba2895e2, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['46d5d21d-63a5-4d3d-a013-7b21b89cdba7'], standard_attr_id=2501, status=DOWN, tags=[], tenant_id=62dd1d7e0cf547678612304aba2895e2, updated_at=2026-02-20T09:57:09Z on network 039b20b8-16a8-495e-968a-63fcd66a566c#033[00m Feb 20 04:57:10 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:57:10.587 263745 INFO neutron.agent.dhcp.agent [None req-babffd80-2fca-4a7c-8fc3-ae42dcb24508 - - - - - -] DHCP configuration for ports {'d4de62b2-b6cb-4b77-af79-1237680b2f9a', '5d8efbfc-f220-4440-9b71-5ab21a9bb5f9', '3568ed7b-9263-43af-b4fd-ae333afc9a3b'} is completed#033[00m Feb 20 04:57:10 localhost dnsmasq[322203]: read /var/lib/neutron/dhcp/039b20b8-16a8-495e-968a-63fcd66a566c/addn_hosts - 1 addresses Feb 20 04:57:10 localhost dnsmasq-dhcp[322203]: read /var/lib/neutron/dhcp/039b20b8-16a8-495e-968a-63fcd66a566c/host Feb 20 04:57:10 localhost podman[322221]: 2026-02-20 09:57:10.628410575 +0000 UTC m=+0.069387624 container kill 10f0240a8d7ae45938ce2c5825dbaf92ab60b70a60e4931d6b25c152f35b1aae (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-039b20b8-16a8-495e-968a-63fcd66a566c, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127) Feb 20 04:57:10 localhost dnsmasq-dhcp[322203]: read 
/var/lib/neutron/dhcp/039b20b8-16a8-495e-968a-63fcd66a566c/opts Feb 20 04:57:10 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:57:10.855 263745 INFO neutron.agent.dhcp.agent [None req-96e2fad3-f092-4aa8-b211-4e99a406af9d - - - - - -] DHCP configuration for ports {'956b4edd-2866-461e-a334-e568ca3af226'} is completed#033[00m Feb 20 04:57:10 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "397e3bc9-4e24-4148-9154-b1a0a65a1d98", "auth_id": "eve49", "tenant_id": "1401fb23701440858ed7175cc4dba63b", "access_level": "rw", "format": "json"}]: dispatch Feb 20 04:57:10 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve49, format:json, prefix:fs subvolume authorize, sub_name:397e3bc9-4e24-4148-9154-b1a0a65a1d98, tenant_id:1401fb23701440858ed7175cc4dba63b, vol_name:cephfs) < "" Feb 20 04:57:10 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.eve49", "format": "json"} v 0) Feb 20 04:57:10 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch Feb 20 04:57:10 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: Creating meta for ID eve49 with tenant 1401fb23701440858ed7175cc4dba63b Feb 20 04:57:10 localhost ovn_metadata_agent[161761]: 2026-02-20 09:57:10.958 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:6d:09 10.100.0.18 10.100.0.2 10.100.0.34'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], 
options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28 10.100.0.34/28', 'neutron:device_id': 'ovnmeta-039b20b8-16a8-495e-968a-63fcd66a566c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-039b20b8-16a8-495e-968a-63fcd66a566c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '62dd1d7e0cf547678612304aba2895e2', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ec4f10d3-21c0-4f32-97fb-37b6b004b601, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=3568ed7b-9263-43af-b4fd-ae333afc9a3b) old=Port_Binding(mac=['fa:16:3e:68:6d:09 10.100.0.18 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-039b20b8-16a8-495e-968a-63fcd66a566c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-039b20b8-16a8-495e-968a-63fcd66a566c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '62dd1d7e0cf547678612304aba2895e2', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:57:10 localhost ovn_metadata_agent[161761]: 2026-02-20 09:57:10.960 161766 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 3568ed7b-9263-43af-b4fd-ae333afc9a3b in datapath 039b20b8-16a8-495e-968a-63fcd66a566c updated#033[00m Feb 20 04:57:10 localhost ovn_metadata_agent[161761]: 2026-02-20 09:57:10.962 161766 DEBUG 
neutron.agent.ovn.metadata.agent [-] Port 1b8aed8a-4745-4b4e-915f-cd721fc6c778 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Feb 20 04:57:10 localhost ovn_metadata_agent[161761]: 2026-02-20 09:57:10.962 161766 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 039b20b8-16a8-495e-968a-63fcd66a566c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 20 04:57:10 localhost ovn_metadata_agent[161761]: 2026-02-20 09:57:10.963 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[55482e0b-9013-4913-b42f-c3c4361ebbea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:57:10 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/397e3bc9-4e24-4148-9154-b1a0a65a1d98/2bd5db79-8b70-47ad-aae0-ee8531062739", "osd", "allow rw pool=manila_data namespace=fsvolumens_397e3bc9-4e24-4148-9154-b1a0a65a1d98", "mon", "allow r"], "format": "json"} v 0) Feb 20 04:57:10 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/397e3bc9-4e24-4148-9154-b1a0a65a1d98/2bd5db79-8b70-47ad-aae0-ee8531062739", "osd", "allow rw pool=manila_data namespace=fsvolumens_397e3bc9-4e24-4148-9154-b1a0a65a1d98", "mon", "allow r"], "format": "json"} : dispatch Feb 20 04:57:10 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow 
rw path=/volumes/_nogroup/397e3bc9-4e24-4148-9154-b1a0a65a1d98/2bd5db79-8b70-47ad-aae0-ee8531062739", "osd", "allow rw pool=manila_data namespace=fsvolumens_397e3bc9-4e24-4148-9154-b1a0a65a1d98", "mon", "allow r"], "format": "json"}]': finished Feb 20 04:57:11 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve49, format:json, prefix:fs subvolume authorize, sub_name:397e3bc9-4e24-4148-9154-b1a0a65a1d98, tenant_id:1401fb23701440858ed7175cc4dba63b, vol_name:cephfs) < "" Feb 20 04:57:11 localhost dnsmasq[322203]: exiting on receipt of SIGTERM Feb 20 04:57:11 localhost systemd[1]: tmp-crun.6Ef0fO.mount: Deactivated successfully. Feb 20 04:57:11 localhost podman[322258]: 2026-02-20 09:57:11.07181898 +0000 UTC m=+0.076235937 container kill 10f0240a8d7ae45938ce2c5825dbaf92ab60b70a60e4931d6b25c152f35b1aae (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-039b20b8-16a8-495e-968a-63fcd66a566c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true) Feb 20 04:57:11 localhost systemd[1]: libpod-10f0240a8d7ae45938ce2c5825dbaf92ab60b70a60e4931d6b25c152f35b1aae.scope: Deactivated successfully. 
Feb 20 04:57:11 localhost podman[322272]: 2026-02-20 09:57:11.140437453 +0000 UTC m=+0.057930880 container died 10f0240a8d7ae45938ce2c5825dbaf92ab60b70a60e4931d6b25c152f35b1aae (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-039b20b8-16a8-495e-968a-63fcd66a566c, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:57:11 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch Feb 20 04:57:11 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/397e3bc9-4e24-4148-9154-b1a0a65a1d98/2bd5db79-8b70-47ad-aae0-ee8531062739", "osd", "allow rw pool=manila_data namespace=fsvolumens_397e3bc9-4e24-4148-9154-b1a0a65a1d98", "mon", "allow r"], "format": "json"} : dispatch Feb 20 04:57:11 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/397e3bc9-4e24-4148-9154-b1a0a65a1d98/2bd5db79-8b70-47ad-aae0-ee8531062739", "osd", "allow rw pool=manila_data namespace=fsvolumens_397e3bc9-4e24-4148-9154-b1a0a65a1d98", "mon", "allow r"], "format": "json"}]': finished Feb 20 04:57:11 localhost podman[322272]: 2026-02-20 09:57:11.170448031 +0000 UTC m=+0.087941418 container cleanup 10f0240a8d7ae45938ce2c5825dbaf92ab60b70a60e4931d6b25c152f35b1aae 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-039b20b8-16a8-495e-968a-63fcd66a566c, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3) Feb 20 04:57:11 localhost systemd[1]: libpod-conmon-10f0240a8d7ae45938ce2c5825dbaf92ab60b70a60e4931d6b25c152f35b1aae.scope: Deactivated successfully. Feb 20 04:57:11 localhost podman[322277]: 2026-02-20 09:57:11.220701196 +0000 UTC m=+0.127885009 container remove 10f0240a8d7ae45938ce2c5825dbaf92ab60b70a60e4931d6b25c152f35b1aae (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-039b20b8-16a8-495e-968a-63fcd66a566c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127) Feb 20 04:57:11 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v345: 177 pgs: 177 active+clean; 193 MiB data, 903 MiB used, 41 GiB / 42 GiB avail; 53 KiB/s rd, 72 KiB/s wr, 80 op/s Feb 20 04:57:11 localhost systemd[1]: var-lib-containers-storage-overlay-dfce898854dcdff490ff54c951c70b36c8908f4aa65970f80dfe95103e7e6471-merged.mount: Deactivated successfully. Feb 20 04:57:11 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-10f0240a8d7ae45938ce2c5825dbaf92ab60b70a60e4931d6b25c152f35b1aae-userdata-shm.mount: Deactivated successfully. 
Feb 20 04:57:12 localhost neutron_sriov_agent[256551]: 2026-02-20 09:57:12.027 2 INFO neutron.agent.securitygroups_rpc [None req-6ce6cc4b-c215-41b7-8af5-e062eb4d8872 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['b7f2b362-1261-45d0-afca-4d7d4dc43da1', '66935af6-6884-4649-9f3d-6c32279f86ee', '46d5d21d-63a5-4d3d-a013-7b21b89cdba7']#033[00m Feb 20 04:57:12 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e169 do_prune osdmap full prune enabled Feb 20 04:57:12 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e170 e170: 6 total, 6 up, 6 in Feb 20 04:57:12 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e170: 6 total, 6 up, 6 in Feb 20 04:57:12 localhost neutron_sriov_agent[256551]: 2026-02-20 09:57:12.597 2 INFO neutron.agent.securitygroups_rpc [None req-9d686e86-35e8-431c-8fc4-b6265d5fa0d0 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['b7f2b362-1261-45d0-afca-4d7d4dc43da1', '66935af6-6884-4649-9f3d-6c32279f86ee']#033[00m Feb 20 04:57:12 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "9754040c-efe1-4b31-bb60-2e1a242177cd", "snap_name": "8b7ef8b3-43af-41f4-8a24-ef7ee4fe0934_5c6fcb89-e7b9-4769-a40e-771ef85df0e8", "force": true, "format": "json"}]: dispatch Feb 20 04:57:12 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:8b7ef8b3-43af-41f4-8a24-ef7ee4fe0934_5c6fcb89-e7b9-4769-a40e-771ef85df0e8, sub_name:9754040c-efe1-4b31-bb60-2e1a242177cd, vol_name:cephfs) < "" Feb 20 04:57:12 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config 
b'/volumes/_nogroup/9754040c-efe1-4b31-bb60-2e1a242177cd/.meta.tmp' Feb 20 04:57:12 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9754040c-efe1-4b31-bb60-2e1a242177cd/.meta.tmp' to config b'/volumes/_nogroup/9754040c-efe1-4b31-bb60-2e1a242177cd/.meta' Feb 20 04:57:12 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:8b7ef8b3-43af-41f4-8a24-ef7ee4fe0934_5c6fcb89-e7b9-4769-a40e-771ef85df0e8, sub_name:9754040c-efe1-4b31-bb60-2e1a242177cd, vol_name:cephfs) < "" Feb 20 04:57:12 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "9754040c-efe1-4b31-bb60-2e1a242177cd", "snap_name": "8b7ef8b3-43af-41f4-8a24-ef7ee4fe0934", "force": true, "format": "json"}]: dispatch Feb 20 04:57:12 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:8b7ef8b3-43af-41f4-8a24-ef7ee4fe0934, sub_name:9754040c-efe1-4b31-bb60-2e1a242177cd, vol_name:cephfs) < "" Feb 20 04:57:12 localhost ceph-osd[31981]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1. 
Feb 20 04:57:12 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9754040c-efe1-4b31-bb60-2e1a242177cd/.meta.tmp' Feb 20 04:57:12 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9754040c-efe1-4b31-bb60-2e1a242177cd/.meta.tmp' to config b'/volumes/_nogroup/9754040c-efe1-4b31-bb60-2e1a242177cd/.meta' Feb 20 04:57:12 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:8b7ef8b3-43af-41f4-8a24-ef7ee4fe0934, sub_name:9754040c-efe1-4b31-bb60-2e1a242177cd, vol_name:cephfs) < "" Feb 20 04:57:12 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Feb 20 04:57:12 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e170 do_prune osdmap full prune enabled Feb 20 04:57:12 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e171 e171: 6 total, 6 up, 6 in Feb 20 04:57:12 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e171: 6 total, 6 up, 6 in Feb 20 04:57:13 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "56e65ef4-1a61-4d5a-9d2e-c59d2323fd6c", "snap_name": "faa1b419-44e8-41e3-a207-23a7805bf4c9_428b559d-5ff8-4ee5-af31-0b6d83df877e", "force": true, "format": "json"}]: dispatch Feb 20 04:57:13 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:faa1b419-44e8-41e3-a207-23a7805bf4c9_428b559d-5ff8-4ee5-af31-0b6d83df877e, sub_name:56e65ef4-1a61-4d5a-9d2e-c59d2323fd6c, vol_name:cephfs) < "" Feb 20 04:57:13 localhost 
ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/56e65ef4-1a61-4d5a-9d2e-c59d2323fd6c/.meta.tmp' Feb 20 04:57:13 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/56e65ef4-1a61-4d5a-9d2e-c59d2323fd6c/.meta.tmp' to config b'/volumes/_nogroup/56e65ef4-1a61-4d5a-9d2e-c59d2323fd6c/.meta' Feb 20 04:57:13 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:faa1b419-44e8-41e3-a207-23a7805bf4c9_428b559d-5ff8-4ee5-af31-0b6d83df877e, sub_name:56e65ef4-1a61-4d5a-9d2e-c59d2323fd6c, vol_name:cephfs) < "" Feb 20 04:57:13 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "56e65ef4-1a61-4d5a-9d2e-c59d2323fd6c", "snap_name": "faa1b419-44e8-41e3-a207-23a7805bf4c9", "force": true, "format": "json"}]: dispatch Feb 20 04:57:13 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:faa1b419-44e8-41e3-a207-23a7805bf4c9, sub_name:56e65ef4-1a61-4d5a-9d2e-c59d2323fd6c, vol_name:cephfs) < "" Feb 20 04:57:13 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/56e65ef4-1a61-4d5a-9d2e-c59d2323fd6c/.meta.tmp' Feb 20 04:57:13 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/56e65ef4-1a61-4d5a-9d2e-c59d2323fd6c/.meta.tmp' to config b'/volumes/_nogroup/56e65ef4-1a61-4d5a-9d2e-c59d2323fd6c/.meta' Feb 20 04:57:13 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs 
subvolume snapshot rm, snap_name:faa1b419-44e8-41e3-a207-23a7805bf4c9, sub_name:56e65ef4-1a61-4d5a-9d2e-c59d2323fd6c, vol_name:cephfs) < "" Feb 20 04:57:13 localhost nova_compute[280804]: 2026-02-20 09:57:13.315 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:57:13 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v348: 177 pgs: 177 active+clean; 193 MiB data, 903 MiB used, 41 GiB / 42 GiB avail; 42 KiB/s rd, 42 KiB/s wr, 64 op/s Feb 20 04:57:13 localhost podman[322350]: Feb 20 04:57:13 localhost podman[322350]: 2026-02-20 09:57:13.924355378 +0000 UTC m=+0.090266149 container create 40ad7c0a54d75c67d85eceaa18c92edf54ce12b35134a546ba36f74861049e18 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-039b20b8-16a8-495e-968a-63fcd66a566c, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3) Feb 20 04:57:13 localhost systemd[1]: Started libpod-conmon-40ad7c0a54d75c67d85eceaa18c92edf54ce12b35134a546ba36f74861049e18.scope. Feb 20 04:57:13 localhost systemd[1]: Started libcrun container. 
Feb 20 04:57:13 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8764e597461e7b32e4a9ad3f502baae1e09ca778663671596b6dcd6a501c4b8d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 20 04:57:13 localhost podman[322350]: 2026-02-20 09:57:13.881741436 +0000 UTC m=+0.047652247 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 20 04:57:13 localhost podman[322350]: 2026-02-20 09:57:13.988615936 +0000 UTC m=+0.154526707 container init 40ad7c0a54d75c67d85eceaa18c92edf54ce12b35134a546ba36f74861049e18 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-039b20b8-16a8-495e-968a-63fcd66a566c, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:57:13 localhost podman[322350]: 2026-02-20 09:57:13.998285773 +0000 UTC m=+0.164196544 container start 40ad7c0a54d75c67d85eceaa18c92edf54ce12b35134a546ba36f74861049e18 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-039b20b8-16a8-495e-968a-63fcd66a566c, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2) Feb 20 04:57:14 localhost dnsmasq[322368]: started, version 2.85 cachesize 150 Feb 20 04:57:14 localhost dnsmasq[322368]: DNS service limited to local subnets Feb 20 04:57:14 localhost dnsmasq[322368]: compile time options: IPv6 GNU-getopt DBus no-UBus 
no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 20 04:57:14 localhost dnsmasq[322368]: warning: no upstream servers configured Feb 20 04:57:14 localhost dnsmasq-dhcp[322368]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 20 04:57:14 localhost dnsmasq-dhcp[322368]: DHCP, static leases only on 10.100.0.16, lease time 1d Feb 20 04:57:14 localhost dnsmasq-dhcp[322368]: DHCP, static leases only on 10.100.0.32, lease time 1d Feb 20 04:57:14 localhost dnsmasq[322368]: read /var/lib/neutron/dhcp/039b20b8-16a8-495e-968a-63fcd66a566c/addn_hosts - 1 addresses Feb 20 04:57:14 localhost dnsmasq-dhcp[322368]: read /var/lib/neutron/dhcp/039b20b8-16a8-495e-968a-63fcd66a566c/host Feb 20 04:57:14 localhost dnsmasq-dhcp[322368]: read /var/lib/neutron/dhcp/039b20b8-16a8-495e-968a-63fcd66a566c/opts Feb 20 04:57:14 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:57:14.085 263745 INFO neutron.agent.dhcp.agent [None req-c3723995-ad83-40aa-b38d-419b82bb0a17 - - - - - -] Trigger reload_allocations for port admin_state_up=False, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:57:09Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=956b4edd-2866-461e-a334-e568ca3af226, ip_allocation=immediate, mac_address=fa:16:3e:54:73:cb, name=tempest-PortsTestJSON-209006605, network_id=039b20b8-16a8-495e-968a-63fcd66a566c, port_security_enabled=True, project_id=62dd1d7e0cf547678612304aba2895e2, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['66935af6-6884-4649-9f3d-6c32279f86ee', 'b7f2b362-1261-45d0-afca-4d7d4dc43da1'], standard_attr_id=2501, status=DOWN, tags=[], tenant_id=62dd1d7e0cf547678612304aba2895e2, updated_at=2026-02-20T09:57:11Z on network 
039b20b8-16a8-495e-968a-63fcd66a566c#033[00m Feb 20 04:57:14 localhost dnsmasq-dhcp[322368]: DHCPRELEASE(tapd4de62b2-b6) 10.100.0.9 fa:16:3e:54:73:cb Feb 20 04:57:14 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "397e3bc9-4e24-4148-9154-b1a0a65a1d98", "auth_id": "eve48", "tenant_id": "1401fb23701440858ed7175cc4dba63b", "access_level": "rw", "format": "json"}]: dispatch Feb 20 04:57:14 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve48, format:json, prefix:fs subvolume authorize, sub_name:397e3bc9-4e24-4148-9154-b1a0a65a1d98, tenant_id:1401fb23701440858ed7175cc4dba63b, vol_name:cephfs) < "" Feb 20 04:57:14 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.eve48", "format": "json"} v 0) Feb 20 04:57:14 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch Feb 20 04:57:14 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: Creating meta for ID eve48 with tenant 1401fb23701440858ed7175cc4dba63b Feb 20 04:57:14 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:57:14.277 263745 INFO neutron.agent.dhcp.agent [None req-8af99d61-385b-4c3f-a41d-e71ff64f8072 - - - - - -] DHCP configuration for ports {'d4de62b2-b6cb-4b77-af79-1237680b2f9a', '5d8efbfc-f220-4440-9b71-5ab21a9bb5f9', '956b4edd-2866-461e-a334-e568ca3af226', '3568ed7b-9263-43af-b4fd-ae333afc9a3b'} is completed#033[00m Feb 20 04:57:14 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw 
path=/volumes/_nogroup/397e3bc9-4e24-4148-9154-b1a0a65a1d98/2bd5db79-8b70-47ad-aae0-ee8531062739", "osd", "allow rw pool=manila_data namespace=fsvolumens_397e3bc9-4e24-4148-9154-b1a0a65a1d98", "mon", "allow r"], "format": "json"} v 0) Feb 20 04:57:14 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/397e3bc9-4e24-4148-9154-b1a0a65a1d98/2bd5db79-8b70-47ad-aae0-ee8531062739", "osd", "allow rw pool=manila_data namespace=fsvolumens_397e3bc9-4e24-4148-9154-b1a0a65a1d98", "mon", "allow r"], "format": "json"} : dispatch Feb 20 04:57:14 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/397e3bc9-4e24-4148-9154-b1a0a65a1d98/2bd5db79-8b70-47ad-aae0-ee8531062739", "osd", "allow rw pool=manila_data namespace=fsvolumens_397e3bc9-4e24-4148-9154-b1a0a65a1d98", "mon", "allow r"], "format": "json"}]': finished Feb 20 04:57:14 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve48, format:json, prefix:fs subvolume authorize, sub_name:397e3bc9-4e24-4148-9154-b1a0a65a1d98, tenant_id:1401fb23701440858ed7175cc4dba63b, vol_name:cephfs) < "" Feb 20 04:57:14 localhost dnsmasq[322368]: read /var/lib/neutron/dhcp/039b20b8-16a8-495e-968a-63fcd66a566c/addn_hosts - 1 addresses Feb 20 04:57:14 localhost dnsmasq-dhcp[322368]: read /var/lib/neutron/dhcp/039b20b8-16a8-495e-968a-63fcd66a566c/host Feb 20 04:57:14 localhost dnsmasq-dhcp[322368]: read /var/lib/neutron/dhcp/039b20b8-16a8-495e-968a-63fcd66a566c/opts Feb 20 04:57:14 localhost podman[322387]: 2026-02-20 09:57:14.74359305 +0000 UTC m=+0.073406692 container kill 
40ad7c0a54d75c67d85eceaa18c92edf54ce12b35134a546ba36f74861049e18 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-039b20b8-16a8-495e-968a-63fcd66a566c, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Feb 20 04:57:14 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 20 04:57:14 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1303196629' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 20 04:57:14 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 20 04:57:14 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1303196629' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 20 04:57:15 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:57:15.002 263745 INFO neutron.agent.dhcp.agent [None req-6ba996cd-100a-44fa-be68-7a645098de58 - - - - - -] DHCP configuration for ports {'956b4edd-2866-461e-a334-e568ca3af226'} is completed#033[00m Feb 20 04:57:15 localhost ovn_metadata_agent[161761]: 2026-02-20 09:57:15.046 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:6d:09'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-039b20b8-16a8-495e-968a-63fcd66a566c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-039b20b8-16a8-495e-968a-63fcd66a566c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '62dd1d7e0cf547678612304aba2895e2', 'neutron:revision_number': '9', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ec4f10d3-21c0-4f32-97fb-37b6b004b601, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=3568ed7b-9263-43af-b4fd-ae333afc9a3b) old=Port_Binding(mac=['fa:16:3e:68:6d:09 10.100.0.18 10.100.0.2 10.100.0.34'], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28 10.100.0.34/28', 'neutron:device_id': 'ovnmeta-039b20b8-16a8-495e-968a-63fcd66a566c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 
'neutron-039b20b8-16a8-495e-968a-63fcd66a566c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '62dd1d7e0cf547678612304aba2895e2', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:57:15 localhost ovn_metadata_agent[161761]: 2026-02-20 09:57:15.049 161766 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 3568ed7b-9263-43af-b4fd-ae333afc9a3b in datapath 039b20b8-16a8-495e-968a-63fcd66a566c updated#033[00m Feb 20 04:57:15 localhost ovn_metadata_agent[161761]: 2026-02-20 09:57:15.051 161766 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 039b20b8-16a8-495e-968a-63fcd66a566c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 20 04:57:15 localhost ovn_metadata_agent[161761]: 2026-02-20 09:57:15.053 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[629c7a76-9632-4928-acd0-c317ac72e8a1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:57:15 localhost nova_compute[280804]: 2026-02-20 09:57:15.146 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:57:15 localhost dnsmasq[322368]: exiting on receipt of SIGTERM Feb 20 04:57:15 localhost podman[322424]: 2026-02-20 09:57:15.191563035 +0000 UTC m=+0.062597874 container kill 40ad7c0a54d75c67d85eceaa18c92edf54ce12b35134a546ba36f74861049e18 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-039b20b8-16a8-495e-968a-63fcd66a566c, org.label-schema.license=GPLv2, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Feb 20 04:57:15 localhost systemd[1]: libpod-40ad7c0a54d75c67d85eceaa18c92edf54ce12b35134a546ba36f74861049e18.scope: Deactivated successfully. Feb 20 04:57:15 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch Feb 20 04:57:15 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/397e3bc9-4e24-4148-9154-b1a0a65a1d98/2bd5db79-8b70-47ad-aae0-ee8531062739", "osd", "allow rw pool=manila_data namespace=fsvolumens_397e3bc9-4e24-4148-9154-b1a0a65a1d98", "mon", "allow r"], "format": "json"} : dispatch Feb 20 04:57:15 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/397e3bc9-4e24-4148-9154-b1a0a65a1d98/2bd5db79-8b70-47ad-aae0-ee8531062739", "osd", "allow rw pool=manila_data namespace=fsvolumens_397e3bc9-4e24-4148-9154-b1a0a65a1d98", "mon", "allow r"], "format": "json"}]': finished Feb 20 04:57:15 localhost podman[322433]: 2026-02-20 09:57:15.246703311 +0000 UTC m=+0.044519195 container died 40ad7c0a54d75c67d85eceaa18c92edf54ce12b35134a546ba36f74861049e18 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-039b20b8-16a8-495e-968a-63fcd66a566c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127) Feb 20 04:57:15 localhost podman[322433]: 2026-02-20 09:57:15.329068719 +0000 UTC m=+0.126884613 container cleanup 40ad7c0a54d75c67d85eceaa18c92edf54ce12b35134a546ba36f74861049e18 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-039b20b8-16a8-495e-968a-63fcd66a566c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true) Feb 20 04:57:15 localhost systemd[1]: libpod-conmon-40ad7c0a54d75c67d85eceaa18c92edf54ce12b35134a546ba36f74861049e18.scope: Deactivated successfully. 
Feb 20 04:57:15 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e171 do_prune osdmap full prune enabled Feb 20 04:57:15 localhost podman[322440]: 2026-02-20 09:57:15.352809711 +0000 UTC m=+0.138912964 container remove 40ad7c0a54d75c67d85eceaa18c92edf54ce12b35134a546ba36f74861049e18 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-039b20b8-16a8-495e-968a-63fcd66a566c, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true) Feb 20 04:57:15 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e172 e172: 6 total, 6 up, 6 in Feb 20 04:57:15 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e172: 6 total, 6 up, 6 in Feb 20 04:57:15 localhost nova_compute[280804]: 2026-02-20 09:57:15.371 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:57:15 localhost ovn_controller[155916]: 2026-02-20T09:57:15Z|00233|binding|INFO|Releasing lport d4de62b2-b6cb-4b77-af79-1237680b2f9a from this chassis (sb_readonly=0) Feb 20 04:57:15 localhost ovn_controller[155916]: 2026-02-20T09:57:15Z|00234|binding|INFO|Setting lport d4de62b2-b6cb-4b77-af79-1237680b2f9a down in Southbound Feb 20 04:57:15 localhost kernel: device tapd4de62b2-b6 left promiscuous mode Feb 20 04:57:15 localhost ovn_metadata_agent[161761]: 2026-02-20 09:57:15.382 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], 
options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.19/28 10.100.0.3/28 10.100.0.36/28', 'neutron:device_id': 'dhcp77c820b4-9646-5dae-aabe-b13a62b01d06-039b20b8-16a8-495e-968a-63fcd66a566c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-039b20b8-16a8-495e-968a-63fcd66a566c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '62dd1d7e0cf547678612304aba2895e2', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625202.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ec4f10d3-21c0-4f32-97fb-37b6b004b601, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=d4de62b2-b6cb-4b77-af79-1237680b2f9a) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:57:15 localhost ovn_metadata_agent[161761]: 2026-02-20 09:57:15.384 161766 INFO neutron.agent.ovn.metadata.agent [-] Port d4de62b2-b6cb-4b77-af79-1237680b2f9a in datapath 039b20b8-16a8-495e-968a-63fcd66a566c unbound from our chassis#033[00m Feb 20 04:57:15 localhost ovn_metadata_agent[161761]: 2026-02-20 09:57:15.385 161766 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 039b20b8-16a8-495e-968a-63fcd66a566c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 20 04:57:15 localhost ovn_metadata_agent[161761]: 2026-02-20 09:57:15.386 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[b396656e-eeb2-4c51-854a-ec005272113a]: (4, False) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:57:15 localhost nova_compute[280804]: 2026-02-20 09:57:15.392 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:57:15 localhost nova_compute[280804]: 2026-02-20 09:57:15.394 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:57:15 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v350: 177 pgs: 177 active+clean; 194 MiB data, 940 MiB used, 41 GiB / 42 GiB avail; 92 KiB/s rd, 90 KiB/s wr, 140 op/s Feb 20 04:57:15 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:57:15.665 263745 INFO neutron.agent.dhcp.agent [None req-35fc73bc-c506-4457-a80a-8f6382581b8c - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:57:15 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:57:15.665 263745 INFO neutron.agent.dhcp.agent [None req-35fc73bc-c506-4457-a80a-8f6382581b8c - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:57:15 localhost neutron_sriov_agent[256551]: 2026-02-20 09:57:15.827 2 INFO neutron.agent.securitygroups_rpc [None req-e7815d36-a39d-42c8-a497-7fe4eae772f9 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['72ed92b6-af24-4274-854b-a52220405faf']#033[00m Feb 20 04:57:15 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:57:15.859 263745 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:57:15 localhost systemd[1]: var-lib-containers-storage-overlay-8764e597461e7b32e4a9ad3f502baae1e09ca778663671596b6dcd6a501c4b8d-merged.mount: Deactivated successfully. 
Feb 20 04:57:15 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-40ad7c0a54d75c67d85eceaa18c92edf54ce12b35134a546ba36f74861049e18-userdata-shm.mount: Deactivated successfully. Feb 20 04:57:15 localhost systemd[1]: run-netns-qdhcp\x2d039b20b8\x2d16a8\x2d495e\x2d968a\x2d63fcd66a566c.mount: Deactivated successfully. Feb 20 04:57:16 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9754040c-efe1-4b31-bb60-2e1a242177cd", "format": "json"}]: dispatch Feb 20 04:57:16 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:9754040c-efe1-4b31-bb60-2e1a242177cd, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 04:57:16 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:9754040c-efe1-4b31-bb60-2e1a242177cd, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 04:57:16 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:57:16.025+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9754040c-efe1-4b31-bb60-2e1a242177cd' of type subvolume Feb 20 04:57:16 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9754040c-efe1-4b31-bb60-2e1a242177cd' of type subvolume Feb 20 04:57:16 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9754040c-efe1-4b31-bb60-2e1a242177cd", "force": true, "format": "json"}]: dispatch Feb 20 04:57:16 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, 
sub_name:9754040c-efe1-4b31-bb60-2e1a242177cd, vol_name:cephfs) < "" Feb 20 04:57:16 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/9754040c-efe1-4b31-bb60-2e1a242177cd'' moved to trashcan Feb 20 04:57:16 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 04:57:16 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9754040c-efe1-4b31-bb60-2e1a242177cd, vol_name:cephfs) < "" Feb 20 04:57:16 localhost podman[241347]: time="2026-02-20T09:57:16Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 20 04:57:16 localhost podman[241347]: @ - - [20/Feb/2026:09:57:16 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157716 "" "Go-http-client/1.1" Feb 20 04:57:16 localhost podman[241347]: @ - - [20/Feb/2026:09:57:16 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18785 "" "Go-http-client/1.1" Feb 20 04:57:16 localhost neutron_sriov_agent[256551]: 2026-02-20 09:57:16.332 2 INFO neutron.agent.securitygroups_rpc [None req-3bc90bc6-4752-435d-941c-f0e75fc5d0a5 e8d99e5aba074cfb8aea01d99045d2af 8a08202c1391432d972dc0430612e0e0 - - default default] Security group member updated ['49b521a4-2cce-4f1a-b690-2fa2cab68db5']#033[00m Feb 20 04:57:16 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "56e65ef4-1a61-4d5a-9d2e-c59d2323fd6c", "format": "json"}]: dispatch Feb 20 04:57:16 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:56e65ef4-1a61-4d5a-9d2e-c59d2323fd6c, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 04:57:16 
localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e172 do_prune osdmap full prune enabled Feb 20 04:57:16 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:56e65ef4-1a61-4d5a-9d2e-c59d2323fd6c, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 04:57:16 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:57:16.355+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '56e65ef4-1a61-4d5a-9d2e-c59d2323fd6c' of type subvolume Feb 20 04:57:16 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '56e65ef4-1a61-4d5a-9d2e-c59d2323fd6c' of type subvolume Feb 20 04:57:16 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "56e65ef4-1a61-4d5a-9d2e-c59d2323fd6c", "force": true, "format": "json"}]: dispatch Feb 20 04:57:16 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:56e65ef4-1a61-4d5a-9d2e-c59d2323fd6c, vol_name:cephfs) < "" Feb 20 04:57:16 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e173 e173: 6 total, 6 up, 6 in Feb 20 04:57:16 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e173: 6 total, 6 up, 6 in Feb 20 04:57:16 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/56e65ef4-1a61-4d5a-9d2e-c59d2323fd6c'' moved to trashcan Feb 20 04:57:16 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 04:57:16 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume 
rm, sub_name:56e65ef4-1a61-4d5a-9d2e-c59d2323fd6c, vol_name:cephfs) < "" Feb 20 04:57:16 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:57:16.524 263745 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:57:16 localhost nova_compute[280804]: 2026-02-20 09:57:16.829 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:57:16 localhost neutron_sriov_agent[256551]: 2026-02-20 09:57:16.930 2 INFO neutron.agent.securitygroups_rpc [None req-4f303a3e-0093-4654-a2f8-5b0853d1acad 3fd5694d6e624148892ddc3041d2f0e1 4bc7f22347de4004b73776eab4064bd0 - - default default] Security group member updated ['c599d16d-0283-4cf2-8a39-4a506ff8f2f0']#033[00m Feb 20 04:57:17 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 20 04:57:17 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2731859964' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 20 04:57:17 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 20 04:57:17 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/2731859964' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 20 04:57:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:57:17.264 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:57:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:57:17.265 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:57:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:57:17.265 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:57:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:57:17.265 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:57:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:57:17.265 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:57:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:57:17.266 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:57:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:57:17.266 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:57:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:57:17.266 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:57:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:57:17.266 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:57:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:57:17.266 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:57:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:57:17.266 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:57:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:57:17.267 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:57:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:57:17.267 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:57:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:57:17.267 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:57:17 
localhost ceilometer_agent_compute[236653]: 2026-02-20 09:57:17.267 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:57:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:57:17.267 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:57:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:57:17.267 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:57:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:57:17.267 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:57:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:57:17.268 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:57:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:57:17.268 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:57:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:57:17.268 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:57:17 localhost ceilometer_agent_compute[236653]: 
2026-02-20 09:57:17.268 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:57:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:57:17.268 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:57:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:57:17.268 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:57:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:57:17.269 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 04:57:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 04:57:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. 
Feb 20 04:57:17 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e173 do_prune osdmap full prune enabled Feb 20 04:57:17 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e174 e174: 6 total, 6 up, 6 in Feb 20 04:57:17 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e174: 6 total, 6 up, 6 in Feb 20 04:57:17 localhost podman[322463]: 2026-02-20 09:57:17.444933909 +0000 UTC m=+0.083132721 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260127) Feb 20 04:57:17 localhost podman[322464]: 2026-02-20 09:57:17.532530537 +0000 UTC m=+0.167297008 
container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Feb 20 04:57:17 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v353: 177 pgs: 177 active+clean; 194 MiB data, 940 
MiB used, 41 GiB / 42 GiB avail; 64 KiB/s rd, 110 KiB/s wr, 102 op/s Feb 20 04:57:17 localhost podman[322463]: 2026-02-20 09:57:17.558057756 +0000 UTC m=+0.196256538 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Feb 20 04:57:17 localhost podman[322464]: 2026-02-20 09:57:17.56689804 +0000 UTC m=+0.201664521 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3) Feb 20 04:57:17 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. Feb 20 04:57:17 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. 
Feb 20 04:57:17 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "397e3bc9-4e24-4148-9154-b1a0a65a1d98", "auth_id": "eve48", "format": "json"}]: dispatch Feb 20 04:57:17 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve48, format:json, prefix:fs subvolume deauthorize, sub_name:397e3bc9-4e24-4148-9154-b1a0a65a1d98, vol_name:cephfs) < "" Feb 20 04:57:17 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.eve48", "format": "json"} v 0) Feb 20 04:57:17 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch Feb 20 04:57:17 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve48"} v 0) Feb 20 04:57:17 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.eve48"} : dispatch Feb 20 04:57:17 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.eve48"}]': finished Feb 20 04:57:17 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve48, format:json, prefix:fs subvolume deauthorize, sub_name:397e3bc9-4e24-4148-9154-b1a0a65a1d98, vol_name:cephfs) < "" Feb 20 04:57:17 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": 
"397e3bc9-4e24-4148-9154-b1a0a65a1d98", "auth_id": "eve48", "format": "json"}]: dispatch
Feb 20 04:57:17 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve48, format:json, prefix:fs subvolume evict, sub_name:397e3bc9-4e24-4148-9154-b1a0a65a1d98, vol_name:cephfs) < ""
Feb 20 04:57:17 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve48, client_metadata.root=/volumes/_nogroup/397e3bc9-4e24-4148-9154-b1a0a65a1d98/2bd5db79-8b70-47ad-aae0-ee8531062739
Feb 20 04:57:17 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb 20 04:57:17 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve48, format:json, prefix:fs subvolume evict, sub_name:397e3bc9-4e24-4148-9154-b1a0a65a1d98, vol_name:cephfs) < ""
Feb 20 04:57:17 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 04:57:17 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e174 do_prune osdmap full prune enabled
Feb 20 04:57:17 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e175 e175: 6 total, 6 up, 6 in
Feb 20 04:57:17 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e175: 6 total, 6 up, 6 in
Feb 20 04:57:18 localhost nova_compute[280804]: 2026-02-20 09:57:18.317 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 04:57:18 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch
Feb 20 04:57:18 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.eve48"} : dispatch
Feb 20 04:57:18 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.eve48"}]': finished
Feb 20 04:57:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.
Feb 20 04:57:19 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e175 do_prune osdmap full prune enabled
Feb 20 04:57:19 localhost podman[322509]: 2026-02-20 09:57:19.438916402 +0000 UTC m=+0.076837054 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 20 04:57:19 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e176 e176: 6 total, 6 up, 6 in
Feb 20 04:57:19 localhost podman[322509]: 2026-02-20 09:57:19.451773343 +0000 UTC m=+0.089694065 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi )
Feb 20 04:57:19 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e176: 6 total, 6 up, 6 in
Feb 20 04:57:19 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully.
Feb 20 04:57:19 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v356: 177 pgs: 177 active+clean; 194 MiB data, 958 MiB used, 41 GiB / 42 GiB avail; 2.5 KiB/s rd, 78 KiB/s wr, 11 op/s
Feb 20 04:57:20 localhost nova_compute[280804]: 2026-02-20 09:57:20.148 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 04:57:20 localhost neutron_sriov_agent[256551]: 2026-02-20 09:57:20.854 2 INFO neutron.agent.securitygroups_rpc [None req-41f9c3ce-b340-465f-aa72-7be8aab7d24c 3fd5694d6e624148892ddc3041d2f0e1 4bc7f22347de4004b73776eab4064bd0 - - default default] Security group member updated ['c599d16d-0283-4cf2-8a39-4a506ff8f2f0']#033[00m
Feb 20 04:57:21 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "397e3bc9-4e24-4148-9154-b1a0a65a1d98", "auth_id": "eve47", "tenant_id": "1401fb23701440858ed7175cc4dba63b", "access_level": "rw", "format": "json"}]: dispatch
Feb 20 04:57:21 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve47, format:json, prefix:fs subvolume authorize, sub_name:397e3bc9-4e24-4148-9154-b1a0a65a1d98, tenant_id:1401fb23701440858ed7175cc4dba63b, vol_name:cephfs) < ""
Feb 20 04:57:21 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.eve47", "format": "json"} v 0)
Feb 20 04:57:21 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch
Feb 20 04:57:21 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: Creating meta for ID eve47 with tenant 1401fb23701440858ed7175cc4dba63b
Feb 20 04:57:21 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/397e3bc9-4e24-4148-9154-b1a0a65a1d98/2bd5db79-8b70-47ad-aae0-ee8531062739", "osd", "allow rw pool=manila_data namespace=fsvolumens_397e3bc9-4e24-4148-9154-b1a0a65a1d98", "mon", "allow r"], "format": "json"} v 0)
Feb 20 04:57:21 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/397e3bc9-4e24-4148-9154-b1a0a65a1d98/2bd5db79-8b70-47ad-aae0-ee8531062739", "osd", "allow rw pool=manila_data namespace=fsvolumens_397e3bc9-4e24-4148-9154-b1a0a65a1d98", "mon", "allow r"], "format": "json"} : dispatch
Feb 20 04:57:21 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/397e3bc9-4e24-4148-9154-b1a0a65a1d98/2bd5db79-8b70-47ad-aae0-ee8531062739", "osd", "allow rw pool=manila_data namespace=fsvolumens_397e3bc9-4e24-4148-9154-b1a0a65a1d98", "mon", "allow r"], "format": "json"}]': finished
Feb 20 04:57:21 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve47, format:json, prefix:fs subvolume authorize, sub_name:397e3bc9-4e24-4148-9154-b1a0a65a1d98, tenant_id:1401fb23701440858ed7175cc4dba63b, vol_name:cephfs) < ""
Feb 20 04:57:21 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e176 do_prune osdmap full prune enabled
Feb 20 04:57:21 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch
Feb 20 04:57:21 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/397e3bc9-4e24-4148-9154-b1a0a65a1d98/2bd5db79-8b70-47ad-aae0-ee8531062739", "osd", "allow rw pool=manila_data namespace=fsvolumens_397e3bc9-4e24-4148-9154-b1a0a65a1d98", "mon", "allow r"], "format": "json"} : dispatch
Feb 20 04:57:21 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/397e3bc9-4e24-4148-9154-b1a0a65a1d98/2bd5db79-8b70-47ad-aae0-ee8531062739", "osd", "allow rw pool=manila_data namespace=fsvolumens_397e3bc9-4e24-4148-9154-b1a0a65a1d98", "mon", "allow r"], "format": "json"}]': finished
Feb 20 04:57:21 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e177 e177: 6 total, 6 up, 6 in
Feb 20 04:57:21 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e177: 6 total, 6 up, 6 in
Feb 20 04:57:21 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v358: 177 pgs: 177 active+clean; 194 MiB data, 947 MiB used, 41 GiB / 42 GiB avail; 5.1 MiB/s rd, 83 KiB/s wr, 175 op/s
Feb 20 04:57:21 localhost neutron_sriov_agent[256551]: 2026-02-20 09:57:21.741 2 INFO neutron.agent.securitygroups_rpc [None req-84019169-f531-4b37-ab25-b8fba57ed27f e8d99e5aba074cfb8aea01d99045d2af 8a08202c1391432d972dc0430612e0e0 - - default default] Security group member updated ['49b521a4-2cce-4f1a-b690-2fa2cab68db5']#033[00m
Feb 20 04:57:22 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 04:57:22 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e177 do_prune osdmap full prune enabled
Feb 20 04:57:22 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e178 e178: 6 total, 6 up, 6 in
Feb 20 04:57:22 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e178: 6 total, 6 up, 6 in
Feb 20 04:57:23 localhost nova_compute[280804]: 2026-02-20 09:57:23.320 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 04:57:23 localhost ceph-mgr[286565]: [balancer INFO root] Optimize plan auto_2026-02-20_09:57:23
Feb 20 04:57:23 localhost ceph-mgr[286565]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 20 04:57:23 localhost ceph-mgr[286565]: [balancer INFO root] do_upmap
Feb 20 04:57:23 localhost ceph-mgr[286565]: [balancer INFO root] pools ['manila_data', 'volumes', 'images', 'manila_metadata', '.mgr', 'backups', 'vms']
Feb 20 04:57:23 localhost ceph-mgr[286565]: [balancer INFO root] prepared 0/10 changes
Feb 20 04:57:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections..
Feb 20 04:57:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: []
Feb 20 04:57:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections..
Feb 20 04:57:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', ), ('cephfs', )]
Feb 20 04:57:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Feb 20 04:57:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections..
Feb 20 04:57:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: []
Feb 20 04:57:23 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v360: 177 pgs: 177 active+clean; 194 MiB data, 947 MiB used, 41 GiB / 42 GiB avail; 3.7 MiB/s rd, 60 KiB/s wr, 127 op/s
Feb 20 04:57:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Feb 20 04:57:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] _maybe_adjust
Feb 20 04:57:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Feb 20 04:57:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Feb 20 04:57:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Feb 20 04:57:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003325274375348967 of space, bias 1.0, pg target 0.6650548750697934 quantized to 32 (current 32)
Feb 20 04:57:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Feb 20 04:57:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32)
Feb 20 04:57:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Feb 20 04:57:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Feb 20 04:57:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Feb 20 04:57:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32)
Feb 20 04:57:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Feb 20 04:57:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00021701388888888888 quantized to 32 (current 32)
Feb 20 04:57:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Feb 20 04:57:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.00017802772543271914 of space, bias 4.0, pg target 0.14171006944444445 quantized to 16 (current 16)
Feb 20 04:57:23 localhost ceph-mgr[286565]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 20 04:57:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 20 04:57:23 localhost ceph-mgr[286565]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 20 04:57:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 20 04:57:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 20 04:57:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 20 04:57:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 20 04:57:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 20 04:57:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 20 04:57:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 20 04:57:23 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e178 do_prune osdmap full prune enabled
Feb 20 04:57:23 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e179 e179: 6 total, 6 up, 6 in
Feb 20 04:57:23 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e179: 6 total, 6 up, 6 in
Feb 20 04:57:24 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "397e3bc9-4e24-4148-9154-b1a0a65a1d98", "auth_id": "eve47", "format": "json"}]: dispatch
Feb 20 04:57:24 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve47, format:json, prefix:fs subvolume deauthorize, sub_name:397e3bc9-4e24-4148-9154-b1a0a65a1d98, vol_name:cephfs) < ""
Feb 20 04:57:24 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.eve47", "format": "json"} v 0)
Feb 20 04:57:24 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch
Feb 20 04:57:24 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve47"} v 0)
Feb 20 04:57:24 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.eve47"} : dispatch
Feb 20 04:57:24 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.eve47"}]': finished
Feb 20 04:57:24 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve47, format:json, prefix:fs subvolume deauthorize, sub_name:397e3bc9-4e24-4148-9154-b1a0a65a1d98, vol_name:cephfs) < ""
Feb 20 04:57:24 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "397e3bc9-4e24-4148-9154-b1a0a65a1d98", "auth_id": "eve47", "format": "json"}]: dispatch
Feb 20 04:57:24 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve47, format:json, prefix:fs subvolume evict, sub_name:397e3bc9-4e24-4148-9154-b1a0a65a1d98, vol_name:cephfs) < ""
Feb 20 04:57:24 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve47, client_metadata.root=/volumes/_nogroup/397e3bc9-4e24-4148-9154-b1a0a65a1d98/2bd5db79-8b70-47ad-aae0-ee8531062739
Feb 20 04:57:24 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb 20 04:57:24 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve47, format:json, prefix:fs subvolume evict, sub_name:397e3bc9-4e24-4148-9154-b1a0a65a1d98, vol_name:cephfs) < ""
Feb 20 04:57:24 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : mgrmap e50: np0005625202.arwxwo(active, since 9m), standbys: np0005625203.lonygy, np0005625204.exgrzx
Feb 20 04:57:24 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch
Feb 20 04:57:24 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.eve47"} : dispatch
Feb 20 04:57:24 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.eve47"}]': finished
Feb 20 04:57:25 localhost nova_compute[280804]: 2026-02-20 09:57:25.194 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 04:57:25 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v362: 177 pgs: 177 active+clean; 241 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 3.6 MiB/s rd, 3.6 MiB/s wr, 224 op/s
Feb 20 04:57:25 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e179 do_prune osdmap full prune enabled
Feb 20 04:57:25 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e180 e180: 6 total, 6 up, 6 in
Feb 20 04:57:25 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e180: 6 total, 6 up, 6 in
Feb 20 04:57:26 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 20 04:57:26 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1300248318' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 04:57:26 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 20 04:57:26 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1300248318' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 04:57:26 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e180 do_prune osdmap full prune enabled
Feb 20 04:57:26 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e181 e181: 6 total, 6 up, 6 in
Feb 20 04:57:27 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e181: 6 total, 6 up, 6 in
Feb 20 04:57:27 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v365: 177 pgs: 177 active+clean; 241 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 90 KiB/s rd, 4.6 MiB/s wr, 143 op/s
Feb 20 04:57:27 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 04:57:27 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e181 do_prune osdmap full prune enabled
Feb 20 04:57:27 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e182 e182: 6 total, 6 up, 6 in
Feb 20 04:57:27 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e182: 6 total, 6 up, 6 in
Feb 20 04:57:27 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 20 04:57:27 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3824275205' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 04:57:27 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 20 04:57:27 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3824275205' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 04:57:28 localhost openstack_network_exporter[243776]: ERROR 09:57:28 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 04:57:28 localhost openstack_network_exporter[243776]:
Feb 20 04:57:28 localhost openstack_network_exporter[243776]: ERROR 09:57:28 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 04:57:28 localhost openstack_network_exporter[243776]:
Feb 20 04:57:28 localhost nova_compute[280804]: 2026-02-20 09:57:28.321 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 04:57:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.
Feb 20 04:57:28 localhost podman[322534]: 2026-02-20 09:57:28.448131057 +0000 UTC m=+0.087203259 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 20 04:57:28 localhost podman[322534]: 2026-02-20 09:57:28.455865843 +0000 UTC m=+0.094938035 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 20 04:57:28 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully.
Feb 20 04:57:28 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "397e3bc9-4e24-4148-9154-b1a0a65a1d98", "auth_id": "eve49", "format": "json"}]: dispatch
Feb 20 04:57:28 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve49, format:json, prefix:fs subvolume deauthorize, sub_name:397e3bc9-4e24-4148-9154-b1a0a65a1d98, vol_name:cephfs) < ""
Feb 20 04:57:29 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.eve49", "format": "json"} v 0)
Feb 20 04:57:29 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch
Feb 20 04:57:29 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve49"} v 0)
Feb 20 04:57:29 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.eve49"} : dispatch
Feb 20 04:57:29 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.eve49"}]': finished
Feb 20 04:57:29 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve49, format:json, prefix:fs subvolume deauthorize, sub_name:397e3bc9-4e24-4148-9154-b1a0a65a1d98, vol_name:cephfs) < ""
Feb 20 04:57:29 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "397e3bc9-4e24-4148-9154-b1a0a65a1d98", "auth_id": "eve49", "format": "json"}]: dispatch
Feb 20 04:57:29 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve49, format:json, prefix:fs subvolume evict, sub_name:397e3bc9-4e24-4148-9154-b1a0a65a1d98, vol_name:cephfs) < ""
Feb 20 04:57:29 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve49, client_metadata.root=/volumes/_nogroup/397e3bc9-4e24-4148-9154-b1a0a65a1d98/2bd5db79-8b70-47ad-aae0-ee8531062739
Feb 20 04:57:29 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb 20 04:57:29 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve49, format:json, prefix:fs subvolume evict, sub_name:397e3bc9-4e24-4148-9154-b1a0a65a1d98, vol_name:cephfs) < ""
Feb 20 04:57:29 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "397e3bc9-4e24-4148-9154-b1a0a65a1d98", "format": "json"}]: dispatch
Feb 20 04:57:29 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:397e3bc9-4e24-4148-9154-b1a0a65a1d98, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 20 04:57:29 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:397e3bc9-4e24-4148-9154-b1a0a65a1d98, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 20 04:57:29 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '397e3bc9-4e24-4148-9154-b1a0a65a1d98' of type subvolume
Feb 20 04:57:29 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:57:29.313+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '397e3bc9-4e24-4148-9154-b1a0a65a1d98' of type subvolume
Feb 20 04:57:29 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "397e3bc9-4e24-4148-9154-b1a0a65a1d98", "force": true, "format": "json"}]: dispatch
Feb 20 04:57:29 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:397e3bc9-4e24-4148-9154-b1a0a65a1d98, vol_name:cephfs) < ""
Feb 20 04:57:29 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/397e3bc9-4e24-4148-9154-b1a0a65a1d98'' moved to trashcan
Feb 20 04:57:29 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 20 04:57:29 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:397e3bc9-4e24-4148-9154-b1a0a65a1d98, vol_name:cephfs) < ""
Feb 20 04:57:29 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v367: 177 pgs: 177 active+clean; 233 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 95 KiB/s rd, 3.9 MiB/s wr, 152 op/s
Feb 20 04:57:30 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch
Feb 20 04:57:30 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.eve49"} : dispatch
Feb 20 04:57:30 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.eve49"}]': finished
Feb 20 04:57:30 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e182 do_prune osdmap full prune enabled
Feb 20 04:57:30 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e183 e183: 6 total, 6 up, 6 in
Feb 20 04:57:30 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e183: 6 total, 6 up, 6 in
Feb 20 04:57:30 localhost nova_compute[280804]: 2026-02-20 09:57:30.198 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 04:57:31 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v369: 177 pgs: 177 active+clean; 195 MiB data, 976 MiB used, 41 GiB / 42 GiB avail; 103 KiB/s rd, 66 KiB/s wr, 148 op/s
Feb 20 04:57:32 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e183 do_prune osdmap full prune enabled
Feb 20 04:57:32 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e184 e184: 6 total, 6 up, 6 in
Feb 20 04:57:32 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e184: 6 total, 6 up, 6 in
Feb 20 04:57:32 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 04:57:32 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e184 do_prune osdmap full prune enabled
Feb 20 04:57:32 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e185 e185: 6 total, 6 up, 6 in
Feb 20 04:57:32 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e185: 6 total, 6 up, 6 in
Feb 20 04:57:33 localhost nova_compute[280804]: 2026-02-20 09:57:33.325 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 04:57:33 localhost neutron_sriov_agent[256551]: 2026-02-20 09:57:33.417 2 INFO neutron.agent.securitygroups_rpc [None req-203ffc30-16ac-4832-a92b-6d9503978c8f fd5cbccf037047f8a8d2b9488223d9dc 92c7b74ddebf4a4a82ffeb6b9b8a9111 - - default default] Security group member updated ['23f09a95-adc3-4f59-96fa-bbbacd5ff83a']#033[00m
Feb 20 04:57:33 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v372: 177 pgs: 177 active+clean; 195 MiB data, 976 MiB used, 41 GiB / 42 GiB avail; 102 KiB/s rd, 65 KiB/s wr, 146 op/s
Feb 20 04:57:33 localhost sshd[322558]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:57:33 localhost neutron_sriov_agent[256551]: 2026-02-20 09:57:33.803 2 INFO neutron.agent.securitygroups_rpc [None req-203ffc30-16ac-4832-a92b-6d9503978c8f fd5cbccf037047f8a8d2b9488223d9dc 92c7b74ddebf4a4a82ffeb6b9b8a9111 - - default default] Security group member updated ['23f09a95-adc3-4f59-96fa-bbbacd5ff83a']#033[00m
Feb 20 04:57:34 localhost sshd[322560]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:57:34 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e185 do_prune osdmap full prune enabled
Feb 20 04:57:34 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e186 e186: 6 total, 6 up, 6 in
Feb 20 04:57:34 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e186: 6 total, 6 up, 6 in
Feb 20 04:57:34 localhost neutron_sriov_agent[256551]: 2026-02-20 09:57:34.821 2 INFO neutron.agent.securitygroups_rpc [None req-3f70ffff-dbcf-4468-97f0-19b384f44318 fd5cbccf037047f8a8d2b9488223d9dc 92c7b74ddebf4a4a82ffeb6b9b8a9111 - - default default] Security group member updated ['23f09a95-adc3-4f59-96fa-bbbacd5ff83a']#033[00m
Feb 20 04:57:35 localhost nova_compute[280804]: 2026-02-20 09:57:35.201 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 04:57:35 localhost neutron_sriov_agent[256551]: 2026-02-20 09:57:35.479 2 INFO neutron.agent.securitygroups_rpc [None req-16cacc97-a725-4413-b643-7033155fe483 fd5cbccf037047f8a8d2b9488223d9dc 92c7b74ddebf4a4a82ffeb6b9b8a9111 - - default default] Security group member updated ['23f09a95-adc3-4f59-96fa-bbbacd5ff83a']#033[00m
Feb 20 04:57:35 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:57:35.504 263745 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Feb 20 04:57:35 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v374: 177 pgs: 177 active+clean; 195 MiB data, 961 MiB used, 41 GiB / 42 GiB avail; 65 KiB/s rd, 10 KiB/s wr, 91 op/s
Feb 20 04:57:36 localhost sshd[322562]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:57:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.
Feb 20 04:57:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.
Feb 20 04:57:36 localhost systemd[1]: tmp-crun.oxcYNP.mount: Deactivated successfully.
Feb 20 04:57:36 localhost podman[322564]: 2026-02-20 09:57:36.870582162 +0000 UTC m=+0.086968622 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack
Kubernetes Operator team) Feb 20 04:57:36 localhost podman[322564]: 2026-02-20 09:57:36.887598994 +0000 UTC m=+0.103985454 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Feb 20 04:57:36 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully. Feb 20 04:57:36 localhost podman[322582]: 2026-02-20 09:57:36.953729782 +0000 UTC m=+0.078610410 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9/ubi-minimal, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, version=9.7, build-date=2026-02-05T04:57:10Z, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, maintainer=Red Hat, Inc., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git) Feb 20 04:57:36 localhost podman[322582]: 2026-02-20 09:57:36.997307169 +0000 UTC m=+0.122187797 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.buildah.version=1.33.7, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-type=git, version=9.7, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, distribution-scope=public, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., name=ubi9/ubi-minimal, build-date=2026-02-05T04:57:10Z) Feb 20 04:57:37 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. 
Feb 20 04:57:37 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v375: 177 pgs: 177 active+clean; 195 MiB data, 961 MiB used, 41 GiB / 42 GiB avail; 59 KiB/s rd, 9.2 KiB/s wr, 83 op/s Feb 20 04:57:37 localhost sshd[322602]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:57:37 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 04:57:37 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e186 do_prune osdmap full prune enabled Feb 20 04:57:37 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e187 e187: 6 total, 6 up, 6 in Feb 20 04:57:37 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e187: 6 total, 6 up, 6 in Feb 20 04:57:38 localhost nova_compute[280804]: 2026-02-20 09:57:38.361 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:57:39 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v377: 177 pgs: 177 active+clean; 195 MiB data, 961 MiB used, 41 GiB / 42 GiB avail; 54 KiB/s rd, 16 KiB/s wr, 76 op/s Feb 20 04:57:40 localhost nova_compute[280804]: 2026-02-20 09:57:40.233 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:57:41 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v378: 177 pgs: 177 active+clean; 195 MiB data, 961 MiB used, 41 GiB / 42 GiB avail; 60 KiB/s rd, 14 KiB/s wr, 83 op/s Feb 20 04:57:42 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:57:42.385 263745 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:57:42 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:57:42 localhost 
ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 04:57:42 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 20 04:57:42 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 04:57:42 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 20 04:57:42 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:57:42 localhost ceph-mgr[286565]: [progress INFO root] update: starting ev 880d6afe-07a1-4908-837c-68da43272f56 (Updating node-proxy deployment (+3 -> 3)) Feb 20 04:57:42 localhost ceph-mgr[286565]: [progress INFO root] complete: finished ev 880d6afe-07a1-4908-837c-68da43272f56 (Updating node-proxy deployment (+3 -> 3)) Feb 20 04:57:42 localhost ceph-mgr[286565]: [progress INFO root] Completed event 880d6afe-07a1-4908-837c-68da43272f56 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Feb 20 04:57:42 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 20 04:57:42 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 20 04:57:42 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 04:57:42 localhost 
ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 04:57:42 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:57:43 localhost nova_compute[280804]: 2026-02-20 09:57:43.395 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:57:43 localhost sshd[322690]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:57:43 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v379: 177 pgs: 177 active+clean; 195 MiB data, 961 MiB used, 41 GiB / 42 GiB avail; 51 KiB/s rd, 12 KiB/s wr, 71 op/s Feb 20 04:57:43 localhost ceph-mgr[286565]: [progress INFO root] Writing back 50 completed events Feb 20 04:57:43 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 20 04:57:43 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:57:44 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:57:45 localhost nova_compute[280804]: 2026-02-20 09:57:45.235 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:57:45 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v380: 177 pgs: 177 active+clean; 195 MiB data, 961 MiB used, 41 GiB / 42 GiB avail; 12 KiB/s rd, 5.7 KiB/s wr, 16 op/s Feb 20 04:57:46 localhost podman[241347]: time="2026-02-20T09:57:46Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 20 04:57:46 localhost podman[241347]: @ - - [20/Feb/2026:09:57:46 +0000] "GET 
/v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157716 "" "Go-http-client/1.1" Feb 20 04:57:46 localhost podman[241347]: @ - - [20/Feb/2026:09:57:46 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18783 "" "Go-http-client/1.1" Feb 20 04:57:47 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v381: 177 pgs: 177 active+clean; 195 MiB data, 961 MiB used, 41 GiB / 42 GiB avail; 12 KiB/s rd, 5.7 KiB/s wr, 16 op/s Feb 20 04:57:47 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 04:57:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 04:57:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. 
Feb 20 04:57:48 localhost nova_compute[280804]: 2026-02-20 09:57:48.439 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:57:48 localhost podman[322692]: 2026-02-20 09:57:48.486837875 +0000 UTC m=+0.122314562 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Feb 20 04:57:48 localhost podman[322692]: 2026-02-20 09:57:48.529719225 +0000 UTC m=+0.165195952 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true) Feb 20 04:57:48 localhost systemd[1]: tmp-crun.qIN8xi.mount: Deactivated successfully. Feb 20 04:57:48 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. 
Feb 20 04:57:48 localhost podman[322693]: 2026-02-20 09:57:48.562416314 +0000 UTC m=+0.194305116 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible) Feb 20 04:57:48 localhost 
podman[322693]: 2026-02-20 09:57:48.594829805 +0000 UTC m=+0.226718627 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2) Feb 20 04:57:48 localhost systemd[1]: 
eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. Feb 20 04:57:49 localhost sshd[322735]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:57:49 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v382: 177 pgs: 177 active+clean; 195 MiB data, 961 MiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 4.9 KiB/s wr, 14 op/s Feb 20 04:57:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1. Feb 20 04:57:49 localhost podman[322737]: 2026-02-20 09:57:49.903326129 +0000 UTC m=+0.088380379 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 20 04:57:49 localhost podman[322737]: 2026-02-20 09:57:49.912007609 +0000 UTC m=+0.097061829 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'environment': 
{'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 20 04:57:49 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully. Feb 20 04:57:50 localhost nova_compute[280804]: 2026-02-20 09:57:50.235 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:57:50 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:57:50.777 263745 INFO neutron.agent.linux.ip_lib [None req-4c011aff-cc9d-4bcf-979c-ca723eea2e23 - - - - - -] Device tap0b2939b2-e4 cannot be used as it has no MAC address#033[00m Feb 20 04:57:50 localhost nova_compute[280804]: 2026-02-20 09:57:50.830 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:57:50 localhost kernel: device tap0b2939b2-e4 entered promiscuous mode Feb 20 04:57:50 localhost NetworkManager[5967]: [1771581470.8406] manager: (tap0b2939b2-e4): new Generic device (/org/freedesktop/NetworkManager/Devices/47) Feb 20 04:57:50 localhost nova_compute[280804]: 2026-02-20 09:57:50.840 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:57:50 localhost ovn_controller[155916]: 
2026-02-20T09:57:50Z|00235|binding|INFO|Claiming lport 0b2939b2-e441-4b67-9267-d403a108ff43 for this chassis. Feb 20 04:57:50 localhost ovn_controller[155916]: 2026-02-20T09:57:50Z|00236|binding|INFO|0b2939b2-e441-4b67-9267-d403a108ff43: Claiming unknown Feb 20 04:57:50 localhost systemd-udevd[322770]: Network interface NamePolicy= disabled on kernel command line. Feb 20 04:57:50 localhost ovn_metadata_agent[161761]: 2026-02-20 09:57:50.854 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.255.242/28', 'neutron:device_id': 'dhcp77c820b4-9646-5dae-aabe-b13a62b01d06-638a66c9-a568-47e2-b955-e5be1d4001c9', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-638a66c9-a568-47e2-b955-e5be1d4001c9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '80723b5de8af4075aa84c53e89b4d020', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2e615dd7-10c9-4b11-b8c2-d3bb2afa00db, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=0b2939b2-e441-4b67-9267-d403a108ff43) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:57:50 localhost ovn_metadata_agent[161761]: 2026-02-20 09:57:50.857 161766 INFO neutron.agent.ovn.metadata.agent [-] Port 0b2939b2-e441-4b67-9267-d403a108ff43 in datapath 
638a66c9-a568-47e2-b955-e5be1d4001c9 bound to our chassis#033[00m Feb 20 04:57:50 localhost ovn_metadata_agent[161761]: 2026-02-20 09:57:50.858 161766 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 638a66c9-a568-47e2-b955-e5be1d4001c9 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 20 04:57:50 localhost ovn_metadata_agent[161761]: 2026-02-20 09:57:50.859 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[0c1fd94a-7712-4edd-9a4f-0695ba3bd0d7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:57:50 localhost journal[229367]: ethtool ioctl error on tap0b2939b2-e4: No such device Feb 20 04:57:50 localhost nova_compute[280804]: 2026-02-20 09:57:50.875 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:57:50 localhost ovn_controller[155916]: 2026-02-20T09:57:50Z|00237|binding|INFO|Setting lport 0b2939b2-e441-4b67-9267-d403a108ff43 ovn-installed in OVS Feb 20 04:57:50 localhost ovn_controller[155916]: 2026-02-20T09:57:50Z|00238|binding|INFO|Setting lport 0b2939b2-e441-4b67-9267-d403a108ff43 up in Southbound Feb 20 04:57:50 localhost journal[229367]: ethtool ioctl error on tap0b2939b2-e4: No such device Feb 20 04:57:50 localhost nova_compute[280804]: 2026-02-20 09:57:50.883 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:57:50 localhost journal[229367]: ethtool ioctl error on tap0b2939b2-e4: No such device Feb 20 04:57:50 localhost journal[229367]: ethtool ioctl error on tap0b2939b2-e4: No such device Feb 20 04:57:50 localhost journal[229367]: ethtool ioctl error on tap0b2939b2-e4: No such device Feb 20 04:57:50 localhost journal[229367]: ethtool ioctl 
error on tap0b2939b2-e4: No such device Feb 20 04:57:50 localhost journal[229367]: ethtool ioctl error on tap0b2939b2-e4: No such device Feb 20 04:57:50 localhost journal[229367]: ethtool ioctl error on tap0b2939b2-e4: No such device Feb 20 04:57:50 localhost nova_compute[280804]: 2026-02-20 09:57:50.919 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:57:50 localhost nova_compute[280804]: 2026-02-20 09:57:50.944 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:57:51 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "92707d6b-2313-43a9-b4cc-78c6dfb385e9", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 04:57:51 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:92707d6b-2313-43a9-b4cc-78c6dfb385e9, vol_name:cephfs) < "" Feb 20 04:57:51 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/92707d6b-2313-43a9-b4cc-78c6dfb385e9/.meta.tmp' Feb 20 04:57:51 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/92707d6b-2313-43a9-b4cc-78c6dfb385e9/.meta.tmp' to config b'/volumes/_nogroup/92707d6b-2313-43a9-b4cc-78c6dfb385e9/.meta' Feb 20 04:57:51 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:92707d6b-2313-43a9-b4cc-78c6dfb385e9, vol_name:cephfs) < "" Feb 20 
04:57:51 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "92707d6b-2313-43a9-b4cc-78c6dfb385e9", "format": "json"}]: dispatch Feb 20 04:57:51 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:92707d6b-2313-43a9-b4cc-78c6dfb385e9, vol_name:cephfs) < "" Feb 20 04:57:51 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:92707d6b-2313-43a9-b4cc-78c6dfb385e9, vol_name:cephfs) < "" Feb 20 04:57:51 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v383: 177 pgs: 177 active+clean; 195 MiB data, 961 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s Feb 20 04:57:51 localhost podman[322841]: Feb 20 04:57:51 localhost podman[322841]: 2026-02-20 09:57:51.769430182 +0000 UTC m=+0.088281916 container create fa73ff61512f94b4e9fe2ee8fd50032bd09a18ae8cacb65cc8ef221911dc934d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-638a66c9-a568-47e2-b955-e5be1d4001c9, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Feb 20 04:57:51 localhost systemd[1]: Started libpod-conmon-fa73ff61512f94b4e9fe2ee8fd50032bd09a18ae8cacb65cc8ef221911dc934d.scope. Feb 20 04:57:51 localhost podman[322841]: 2026-02-20 09:57:51.726522952 +0000 UTC m=+0.045374726 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 20 04:57:51 localhost systemd[1]: Started libcrun container. 
Feb 20 04:57:51 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b8f30d7f893bcdc46619072b801c25e7f37a704ef45b14d4ceff4ecbba6ebac/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 20 04:57:51 localhost podman[322841]: 2026-02-20 09:57:51.845552296 +0000 UTC m=+0.164404040 container init fa73ff61512f94b4e9fe2ee8fd50032bd09a18ae8cacb65cc8ef221911dc934d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-638a66c9-a568-47e2-b955-e5be1d4001c9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_managed=true) Feb 20 04:57:51 localhost podman[322841]: 2026-02-20 09:57:51.858756356 +0000 UTC m=+0.177608090 container start fa73ff61512f94b4e9fe2ee8fd50032bd09a18ae8cacb65cc8ef221911dc934d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-638a66c9-a568-47e2-b955-e5be1d4001c9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:57:51 localhost dnsmasq[322860]: started, version 2.85 cachesize 150 Feb 20 04:57:51 localhost dnsmasq[322860]: DNS service limited to local subnets Feb 20 04:57:51 localhost dnsmasq[322860]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 20 04:57:51 localhost dnsmasq[322860]: warning: no upstream servers 
configured Feb 20 04:57:51 localhost dnsmasq-dhcp[322860]: DHCP, static leases only on 10.100.255.240, lease time 1d Feb 20 04:57:51 localhost dnsmasq[322860]: read /var/lib/neutron/dhcp/638a66c9-a568-47e2-b955-e5be1d4001c9/addn_hosts - 0 addresses Feb 20 04:57:51 localhost dnsmasq-dhcp[322860]: read /var/lib/neutron/dhcp/638a66c9-a568-47e2-b955-e5be1d4001c9/host Feb 20 04:57:51 localhost dnsmasq-dhcp[322860]: read /var/lib/neutron/dhcp/638a66c9-a568-47e2-b955-e5be1d4001c9/opts Feb 20 04:57:52 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:57:52.300 263745 INFO neutron.agent.dhcp.agent [None req-2e61d0eb-e9b9-4d06-97be-17cb4efa0ea5 - - - - - -] DHCP configuration for ports {'545d7366-6980-498a-9739-4645029c1679'} is completed#033[00m Feb 20 04:57:52 localhost systemd[1]: tmp-crun.PUJve5.mount: Deactivated successfully. Feb 20 04:57:52 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 04:57:52 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e187 do_prune osdmap full prune enabled Feb 20 04:57:52 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e188 e188: 6 total, 6 up, 6 in Feb 20 04:57:52 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e188: 6 total, 6 up, 6 in Feb 20 04:57:53 localhost nova_compute[280804]: 2026-02-20 09:57:53.443 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:57:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 04:57:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:57:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 20 04:57:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:57:53 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v385: 177 pgs: 177 active+clean; 195 MiB data, 961 MiB used, 41 GiB / 42 GiB avail Feb 20 04:57:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 04:57:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:57:54 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ad9afd43-9e16-42b2-8035-3a7a6ffc2a82", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 04:57:54 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ad9afd43-9e16-42b2-8035-3a7a6ffc2a82, vol_name:cephfs) < "" Feb 20 04:57:54 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ad9afd43-9e16-42b2-8035-3a7a6ffc2a82/.meta.tmp' Feb 20 04:57:54 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ad9afd43-9e16-42b2-8035-3a7a6ffc2a82/.meta.tmp' to config b'/volumes/_nogroup/ad9afd43-9e16-42b2-8035-3a7a6ffc2a82/.meta' Feb 20 04:57:54 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ad9afd43-9e16-42b2-8035-3a7a6ffc2a82, vol_name:cephfs) < "" Feb 20 04:57:54 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": 
"ad9afd43-9e16-42b2-8035-3a7a6ffc2a82", "format": "json"}]: dispatch Feb 20 04:57:54 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ad9afd43-9e16-42b2-8035-3a7a6ffc2a82, vol_name:cephfs) < "" Feb 20 04:57:54 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ad9afd43-9e16-42b2-8035-3a7a6ffc2a82, vol_name:cephfs) < "" Feb 20 04:57:55 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e188 do_prune osdmap full prune enabled Feb 20 04:57:55 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e189 e189: 6 total, 6 up, 6 in Feb 20 04:57:55 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e189: 6 total, 6 up, 6 in Feb 20 04:57:55 localhost nova_compute[280804]: 2026-02-20 09:57:55.288 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:57:55 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ef5178d4-d0bc-4d31-ba72-f530bb14424f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 04:57:55 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ef5178d4-d0bc-4d31-ba72-f530bb14424f, vol_name:cephfs) < "" Feb 20 04:57:55 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ef5178d4-d0bc-4d31-ba72-f530bb14424f/.meta.tmp' Feb 20 04:57:55 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed 
b'/volumes/_nogroup/ef5178d4-d0bc-4d31-ba72-f530bb14424f/.meta.tmp' to config b'/volumes/_nogroup/ef5178d4-d0bc-4d31-ba72-f530bb14424f/.meta' Feb 20 04:57:55 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ef5178d4-d0bc-4d31-ba72-f530bb14424f, vol_name:cephfs) < "" Feb 20 04:57:55 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ef5178d4-d0bc-4d31-ba72-f530bb14424f", "format": "json"}]: dispatch Feb 20 04:57:55 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ef5178d4-d0bc-4d31-ba72-f530bb14424f, vol_name:cephfs) < "" Feb 20 04:57:55 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v387: 177 pgs: 177 active+clean; 195 MiB data, 962 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 17 KiB/s wr, 25 op/s Feb 20 04:57:55 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ef5178d4-d0bc-4d31-ba72-f530bb14424f, vol_name:cephfs) < "" Feb 20 04:57:57 localhost ovn_metadata_agent[161761]: 2026-02-20 09:57:57.336 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'c2:c2:14', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '46:d0:69:88:97:47'}, ipsec=False) old=SB_Global(nb_cfg=14) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:57:57 localhost ovn_metadata_agent[161761]: 2026-02-20 09:57:57.337 161766 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Feb 20 04:57:57 localhost nova_compute[280804]: 2026-02-20 09:57:57.339 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:57:57 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v388: 177 pgs: 177 active+clean; 195 MiB data, 962 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 17 KiB/s wr, 25 op/s Feb 20 04:57:57 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 04:57:58 localhost openstack_network_exporter[243776]: ERROR 09:57:58 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 20 04:57:58 localhost openstack_network_exporter[243776]: Feb 20 04:57:58 localhost openstack_network_exporter[243776]: ERROR 09:57:58 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 20 04:57:58 localhost openstack_network_exporter[243776]: Feb 20 04:57:58 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ad9afd43-9e16-42b2-8035-3a7a6ffc2a82", "format": "json"}]: dispatch Feb 20 04:57:58 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ad9afd43-9e16-42b2-8035-3a7a6ffc2a82, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 04:57:58 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing 
_cmd_fs_clone_status(clone_name:ad9afd43-9e16-42b2-8035-3a7a6ffc2a82, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 04:57:58 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:57:58.209+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ad9afd43-9e16-42b2-8035-3a7a6ffc2a82' of type subvolume Feb 20 04:57:58 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ad9afd43-9e16-42b2-8035-3a7a6ffc2a82' of type subvolume Feb 20 04:57:58 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ad9afd43-9e16-42b2-8035-3a7a6ffc2a82", "force": true, "format": "json"}]: dispatch Feb 20 04:57:58 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ad9afd43-9e16-42b2-8035-3a7a6ffc2a82, vol_name:cephfs) < "" Feb 20 04:57:58 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ad9afd43-9e16-42b2-8035-3a7a6ffc2a82'' moved to trashcan Feb 20 04:57:58 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 04:57:58 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ad9afd43-9e16-42b2-8035-3a7a6ffc2a82, vol_name:cephfs) < "" Feb 20 04:57:58 localhost nova_compute[280804]: 2026-02-20 09:57:58.444 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:57:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e. Feb 20 04:57:59 localhost podman[322862]: 2026-02-20 09:57:59.462691648 +0000 UTC m=+0.095579371 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 20 04:57:59 localhost podman[322862]: 2026-02-20 09:57:59.501054287 +0000 UTC m=+0.133941990 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': 
['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 20 04:57:59 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully. 
Feb 20 04:57:59 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v389: 177 pgs: 177 active+clean; 195 MiB data, 962 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 48 KiB/s wr, 27 op/s Feb 20 04:58:00 localhost nova_compute[280804]: 2026-02-20 09:58:00.344 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:58:01 localhost nova_compute[280804]: 2026-02-20 09:58:01.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:58:01 localhost nova_compute[280804]: 2026-02-20 09:58:01.511 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 20 04:58:01 localhost nova_compute[280804]: 2026-02-20 09:58:01.511 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 20 04:58:01 localhost nova_compute[280804]: 2026-02-20 09:58:01.522 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 20 04:58:01 localhost nova_compute[280804]: 2026-02-20 09:58:01.523 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:58:01 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ff9e0809-0706-432a-8bd4-0d7cee6634c8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 04:58:01 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ff9e0809-0706-432a-8bd4-0d7cee6634c8, vol_name:cephfs) < "" Feb 20 04:58:01 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v390: 177 pgs: 177 active+clean; 195 MiB data, 963 MiB used, 41 GiB / 42 GiB avail; 31 KiB/s rd, 46 KiB/s wr, 46 op/s Feb 20 04:58:01 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ff9e0809-0706-432a-8bd4-0d7cee6634c8/.meta.tmp' Feb 20 04:58:01 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ff9e0809-0706-432a-8bd4-0d7cee6634c8/.meta.tmp' to config b'/volumes/_nogroup/ff9e0809-0706-432a-8bd4-0d7cee6634c8/.meta' Feb 20 04:58:01 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ff9e0809-0706-432a-8bd4-0d7cee6634c8, vol_name:cephfs) < "" Feb 20 04:58:01 localhost ceph-mgr[286565]: 
log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ff9e0809-0706-432a-8bd4-0d7cee6634c8", "format": "json"}]: dispatch Feb 20 04:58:01 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ff9e0809-0706-432a-8bd4-0d7cee6634c8, vol_name:cephfs) < "" Feb 20 04:58:01 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ff9e0809-0706-432a-8bd4-0d7cee6634c8, vol_name:cephfs) < "" Feb 20 04:58:01 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ef5178d4-d0bc-4d31-ba72-f530bb14424f", "format": "json"}]: dispatch Feb 20 04:58:01 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ef5178d4-d0bc-4d31-ba72-f530bb14424f, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 04:58:01 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ef5178d4-d0bc-4d31-ba72-f530bb14424f, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 04:58:01 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:58:01.890+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ef5178d4-d0bc-4d31-ba72-f530bb14424f' of type subvolume Feb 20 04:58:01 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ef5178d4-d0bc-4d31-ba72-f530bb14424f' of type subvolume Feb 20 04:58:01 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' 
cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ef5178d4-d0bc-4d31-ba72-f530bb14424f", "force": true, "format": "json"}]: dispatch Feb 20 04:58:01 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ef5178d4-d0bc-4d31-ba72-f530bb14424f, vol_name:cephfs) < "" Feb 20 04:58:01 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ef5178d4-d0bc-4d31-ba72-f530bb14424f'' moved to trashcan Feb 20 04:58:01 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 04:58:01 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ef5178d4-d0bc-4d31-ba72-f530bb14424f, vol_name:cephfs) < "" Feb 20 04:58:02 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 20 04:58:02 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4091018490' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 20 04:58:02 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 20 04:58:02 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/4091018490' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 20 04:58:02 localhost nova_compute[280804]: 2026-02-20 09:58:02.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:58:02 localhost nova_compute[280804]: 2026-02-20 09:58:02.511 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:58:02 localhost nova_compute[280804]: 2026-02-20 09:58:02.511 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:58:02 localhost nova_compute[280804]: 2026-02-20 09:58:02.511 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 20 04:58:02 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 04:58:02 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e189 do_prune osdmap full prune enabled Feb 20 04:58:02 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e190 e190: 6 total, 6 up, 6 in Feb 20 04:58:02 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e190: 6 total, 6 up, 6 in Feb 20 04:58:03 localhost nova_compute[280804]: 2026-02-20 09:58:03.449 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:58:03 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v392: 177 pgs: 177 active+clean; 195 MiB data, 963 MiB used, 41 GiB / 42 GiB avail; 16 KiB/s rd, 30 KiB/s wr, 22 op/s Feb 20 04:58:04 localhost nova_compute[280804]: 2026-02-20 09:58:04.506 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:58:04 localhost nova_compute[280804]: 2026-02-20 09:58:04.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:58:04 localhost nova_compute[280804]: 2026-02-20 09:58:04.533 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:58:04 localhost nova_compute[280804]: 2026-02-20 09:58:04.534 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:58:04 localhost nova_compute[280804]: 2026-02-20 09:58:04.534 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:58:04 localhost nova_compute[280804]: 2026-02-20 09:58:04.534 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Auditing locally available compute resources for np0005625202.localdomain (node: np0005625202.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 20 04:58:04 localhost nova_compute[280804]: 2026-02-20 09:58:04.534 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:58:04 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 20 04:58:04 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/3388708526' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 20 04:58:04 localhost nova_compute[280804]: 2026-02-20 09:58:04.990 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:58:05 localhost nova_compute[280804]: 2026-02-20 09:58:05.208 280808 WARNING nova.virt.libvirt.driver [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 20 04:58:05 localhost nova_compute[280804]: 2026-02-20 09:58:05.210 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Hypervisor/Node resource view: name=np0005625202.localdomain free_ram=11473MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": 
"1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 20 04:58:05 localhost nova_compute[280804]: 2026-02-20 09:58:05.210 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:58:05 localhost nova_compute[280804]: 2026-02-20 09:58:05.210 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:58:05 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:58:05.217 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port 
admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:58:04Z, description=, device_id=7557495a-ace7-48ee-949b-56260afaa059, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=9647051b-2388-4629-a19f-96710df7d7d6, ip_allocation=immediate, mac_address=fa:16:3e:af:84:0e, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T08:22:50Z, description=, dns_domain=, id=84efa4de-646c-469c-b16c-6ab7c3e948cf, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=91bce661d685472eb3e7cacab17bf52a, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['38eee90c-2cdd-4118-ba55-5401f7e61e1e'], tags=[], tenant_id=91bce661d685472eb3e7cacab17bf52a, updated_at=2026-02-20T08:22:56Z, vlan_transparent=None, network_id=84efa4de-646c-469c-b16c-6ab7c3e948cf, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2869, status=DOWN, tags=[], tenant_id=, updated_at=2026-02-20T09:58:04Z on network 84efa4de-646c-469c-b16c-6ab7c3e948cf#033[00m Feb 20 04:58:05 localhost nova_compute[280804]: 2026-02-20 09:58:05.259 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 20 04:58:05 localhost nova_compute[280804]: 2026-02-20 09:58:05.259 280808 DEBUG nova.compute.resource_tracker [None 
req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Final resource view: name=np0005625202.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 20 04:58:05 localhost nova_compute[280804]: 2026-02-20 09:58:05.277 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:58:05 localhost nova_compute[280804]: 2026-02-20 09:58:05.349 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:58:05 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 2 addresses Feb 20 04:58:05 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:58:05 localhost podman[322925]: 2026-02-20 09:58:05.438013827 +0000 UTC m=+0.058685800 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Feb 20 04:58:05 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:58:05 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' 
entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ff9e0809-0706-432a-8bd4-0d7cee6634c8", "format": "json"}]: dispatch Feb 20 04:58:05 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ff9e0809-0706-432a-8bd4-0d7cee6634c8, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 04:58:05 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ff9e0809-0706-432a-8bd4-0d7cee6634c8, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 04:58:05 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ff9e0809-0706-432a-8bd4-0d7cee6634c8' of type subvolume Feb 20 04:58:05 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:58:05.445+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ff9e0809-0706-432a-8bd4-0d7cee6634c8' of type subvolume Feb 20 04:58:05 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ff9e0809-0706-432a-8bd4-0d7cee6634c8", "force": true, "format": "json"}]: dispatch Feb 20 04:58:05 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ff9e0809-0706-432a-8bd4-0d7cee6634c8, vol_name:cephfs) < "" Feb 20 04:58:05 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ff9e0809-0706-432a-8bd4-0d7cee6634c8'' moved to trashcan Feb 20 04:58:05 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 04:58:05 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ff9e0809-0706-432a-8bd4-0d7cee6634c8, vol_name:cephfs) < "" Feb 20 04:58:05 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v393: 177 pgs: 177 active+clean; 195 MiB data, 963 MiB used, 41 GiB / 42 GiB avail; 38 KiB/s rd, 43 KiB/s wr, 55 op/s Feb 20 04:58:05 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:58:05.736 263745 INFO neutron.agent.dhcp.agent [None req-a4083764-8aab-4d56-8ad8-236ce26f5a05 - - - - - -] DHCP configuration for ports {'9647051b-2388-4629-a19f-96710df7d7d6'} is completed#033[00m Feb 20 04:58:05 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 20 04:58:05 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/3015932775' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 20 04:58:05 localhost nova_compute[280804]: 2026-02-20 09:58:05.760 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:58:05 localhost nova_compute[280804]: 2026-02-20 09:58:05.765 280808 DEBUG nova.compute.provider_tree [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 20 04:58:05 localhost nova_compute[280804]: 2026-02-20 09:58:05.780 280808 DEBUG nova.scheduler.client.report [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Inventory has not changed for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 
'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 20 04:58:05 localhost nova_compute[280804]: 2026-02-20 09:58:05.781 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Compute_service record updated for np0005625202.localdomain:np0005625202.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 20 04:58:05 localhost nova_compute[280804]: 2026-02-20 09:58:05.782 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.571s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:58:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:58:05.922 161766 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:58:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:58:05.923 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:58:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:58:05.923 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by 
"neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:58:06 localhost nova_compute[280804]: 2026-02-20 09:58:06.633 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:58:06 localhost nova_compute[280804]: 2026-02-20 09:58:06.778 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:58:07 localhost ovn_metadata_agent[161761]: 2026-02-20 09:58:07.339 161766 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0a83b6be-9fe2-42ef-8768-88847d97b165, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 20 04:58:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c. Feb 20 04:58:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217. Feb 20 04:58:07 localhost systemd[1]: tmp-crun.AvDuLv.mount: Deactivated successfully. 
Feb 20 04:58:07 localhost podman[322968]: 2026-02-20 09:58:07.471865119 +0000 UTC m=+0.102314691 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, maintainer=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1770267347, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, name=ubi9/ubi-minimal, build-date=2026-02-05T04:57:10Z, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, org.opencontainers.image.created=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., config_id=openstack_network_exporter) Feb 20 04:58:07 localhost podman[322968]: 2026-02-20 09:58:07.515101548 +0000 UTC m=+0.145551150 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base 
Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, distribution-scope=public, org.opencontainers.image.created=2026-02-05T04:57:10Z, architecture=x86_64, build-date=2026-02-05T04:57:10Z, release=1770267347, version=9.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Feb 20 04:58:07 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. Feb 20 04:58:07 localhost podman[322969]: 2026-02-20 09:58:07.57048252 +0000 UTC m=+0.196884444 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', 
'/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS) Feb 20 04:58:07 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v394: 177 pgs: 177 active+clean; 195 MiB data, 963 MiB used, 41 GiB / 42 GiB avail; 38 KiB/s rd, 43 KiB/s wr, 55 op/s Feb 20 04:58:07 localhost podman[322969]: 2026-02-20 09:58:07.607947625 +0000 UTC m=+0.234349559 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck 
compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:58:07 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully. 
Feb 20 04:58:07 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 04:58:08 localhost nova_compute[280804]: 2026-02-20 09:58:08.470 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:58:08 localhost nova_compute[280804]: 2026-02-20 09:58:08.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:58:08 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "134e598c-37ed-480d-a639-35f631513b30", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 04:58:08 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:134e598c-37ed-480d-a639-35f631513b30, vol_name:cephfs) < "" Feb 20 04:58:08 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/134e598c-37ed-480d-a639-35f631513b30/.meta.tmp' Feb 20 04:58:08 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/134e598c-37ed-480d-a639-35f631513b30/.meta.tmp' to config b'/volumes/_nogroup/134e598c-37ed-480d-a639-35f631513b30/.meta' Feb 20 04:58:08 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, 
sub_name:134e598c-37ed-480d-a639-35f631513b30, vol_name:cephfs) < "" Feb 20 04:58:08 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "134e598c-37ed-480d-a639-35f631513b30", "format": "json"}]: dispatch Feb 20 04:58:08 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:134e598c-37ed-480d-a639-35f631513b30, vol_name:cephfs) < "" Feb 20 04:58:08 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:134e598c-37ed-480d-a639-35f631513b30, vol_name:cephfs) < "" Feb 20 04:58:09 localhost nova_compute[280804]: 2026-02-20 09:58:09.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:58:09 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v395: 177 pgs: 177 active+clean; 195 MiB data, 963 MiB used, 41 GiB / 42 GiB avail; 38 KiB/s rd, 42 KiB/s wr, 55 op/s Feb 20 04:58:10 localhost nova_compute[280804]: 2026-02-20 09:58:10.407 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:58:10 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e190 do_prune osdmap full prune enabled Feb 20 04:58:10 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e191 e191: 6 total, 6 up, 6 in Feb 20 04:58:10 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e191: 6 total, 6 up, 6 in Feb 20 04:58:11 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v397: 177 pgs: 177 active+clean; 195 MiB data, 964 MiB used, 41 GiB 
/ 42 GiB avail; 44 KiB/s rd, 50 KiB/s wr, 67 op/s Feb 20 04:58:11 localhost nova_compute[280804]: 2026-02-20 09:58:11.637 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:58:11 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e191 do_prune osdmap full prune enabled Feb 20 04:58:12 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e192 e192: 6 total, 6 up, 6 in Feb 20 04:58:12 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e192: 6 total, 6 up, 6 in Feb 20 04:58:12 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "134e598c-37ed-480d-a639-35f631513b30", "format": "json"}]: dispatch Feb 20 04:58:12 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:134e598c-37ed-480d-a639-35f631513b30, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 04:58:12 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:134e598c-37ed-480d-a639-35f631513b30, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 04:58:12 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:58:12.121+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '134e598c-37ed-480d-a639-35f631513b30' of type subvolume Feb 20 04:58:12 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '134e598c-37ed-480d-a639-35f631513b30' of type subvolume Feb 20 04:58:12 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": 
"134e598c-37ed-480d-a639-35f631513b30", "force": true, "format": "json"}]: dispatch Feb 20 04:58:12 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:134e598c-37ed-480d-a639-35f631513b30, vol_name:cephfs) < "" Feb 20 04:58:12 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/134e598c-37ed-480d-a639-35f631513b30'' moved to trashcan Feb 20 04:58:12 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 04:58:12 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:134e598c-37ed-480d-a639-35f631513b30, vol_name:cephfs) < "" Feb 20 04:58:12 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 04:58:13 localhost nova_compute[280804]: 2026-02-20 09:58:13.472 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:58:13 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v399: 177 pgs: 177 active+clean; 195 MiB data, 964 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 32 KiB/s wr, 27 op/s Feb 20 04:58:14 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e192 do_prune osdmap full prune enabled Feb 20 04:58:14 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e193 e193: 6 total, 6 up, 6 in Feb 20 04:58:14 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e193: 6 total, 6 up, 6 in Feb 20 04:58:15 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e193 do_prune osdmap full prune enabled Feb 20 04:58:15 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e194 e194: 6 total, 6 up, 6 in Feb 20 
04:58:15 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e194: 6 total, 6 up, 6 in Feb 20 04:58:15 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f29b2b87-a99b-49d8-b340-b09eb6239fb8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 04:58:15 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f29b2b87-a99b-49d8-b340-b09eb6239fb8, vol_name:cephfs) < "" Feb 20 04:58:15 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f29b2b87-a99b-49d8-b340-b09eb6239fb8/.meta.tmp' Feb 20 04:58:15 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f29b2b87-a99b-49d8-b340-b09eb6239fb8/.meta.tmp' to config b'/volumes/_nogroup/f29b2b87-a99b-49d8-b340-b09eb6239fb8/.meta' Feb 20 04:58:15 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f29b2b87-a99b-49d8-b340-b09eb6239fb8, vol_name:cephfs) < "" Feb 20 04:58:15 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f29b2b87-a99b-49d8-b340-b09eb6239fb8", "format": "json"}]: dispatch Feb 20 04:58:15 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f29b2b87-a99b-49d8-b340-b09eb6239fb8, vol_name:cephfs) < "" Feb 20 04:58:15 localhost ceph-mgr[286565]: [volumes INFO volumes.module] 
Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f29b2b87-a99b-49d8-b340-b09eb6239fb8, vol_name:cephfs) < "" Feb 20 04:58:15 localhost nova_compute[280804]: 2026-02-20 09:58:15.409 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:58:15 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:58:15.541 263745 INFO neutron.agent.linux.ip_lib [None req-c46130e9-cd37-4e77-baef-3cf913ccf403 - - - - - -] Device tapfa9247c7-4d cannot be used as it has no MAC address#033[00m Feb 20 04:58:15 localhost nova_compute[280804]: 2026-02-20 09:58:15.564 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:58:15 localhost kernel: device tapfa9247c7-4d entered promiscuous mode Feb 20 04:58:15 localhost NetworkManager[5967]: [1771581495.5755] manager: (tapfa9247c7-4d): new Generic device (/org/freedesktop/NetworkManager/Devices/48) Feb 20 04:58:15 localhost nova_compute[280804]: 2026-02-20 09:58:15.577 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:58:15 localhost systemd-udevd[323016]: Network interface NamePolicy= disabled on kernel command line. Feb 20 04:58:15 localhost ovn_controller[155916]: 2026-02-20T09:58:15Z|00239|binding|INFO|Claiming lport fa9247c7-4dd4-4be0-89b9-1383d7c52286 for this chassis. 
Feb 20 04:58:15 localhost ovn_controller[155916]: 2026-02-20T09:58:15Z|00240|binding|INFO|fa9247c7-4dd4-4be0-89b9-1383d7c52286: Claiming unknown Feb 20 04:58:15 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v402: 177 pgs: 177 active+clean; 196 MiB data, 968 MiB used, 41 GiB / 42 GiB avail; 63 KiB/s rd, 29 KiB/s wr, 88 op/s Feb 20 04:58:15 localhost ovn_metadata_agent[161761]: 2026-02-20 09:58:15.590 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp77c820b4-9646-5dae-aabe-b13a62b01d06-e2aad3d8-1c6f-4b34-a0b8-e5dbec28572b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e2aad3d8-1c6f-4b34-a0b8-e5dbec28572b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2124592a7614499d8c7693cd6ba353de', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=28b11d60-f827-42f7-aa93-3b723a6711dd, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=fa9247c7-4dd4-4be0-89b9-1383d7c52286) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:58:15 localhost ovn_metadata_agent[161761]: 2026-02-20 09:58:15.592 161766 INFO neutron.agent.ovn.metadata.agent [-] Port fa9247c7-4dd4-4be0-89b9-1383d7c52286 in datapath e2aad3d8-1c6f-4b34-a0b8-e5dbec28572b 
bound to our chassis#033[00m Feb 20 04:58:15 localhost ovn_metadata_agent[161761]: 2026-02-20 09:58:15.594 161766 DEBUG neutron.agent.ovn.metadata.agent [-] Port 10c1e97a-5615-4a75-8a3f-0ba852041cd8 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Feb 20 04:58:15 localhost ovn_metadata_agent[161761]: 2026-02-20 09:58:15.594 161766 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e2aad3d8-1c6f-4b34-a0b8-e5dbec28572b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 20 04:58:15 localhost ovn_metadata_agent[161761]: 2026-02-20 09:58:15.596 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[3a244e83-19ee-4903-adb8-e03361a7441d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:58:15 localhost journal[229367]: ethtool ioctl error on tapfa9247c7-4d: No such device Feb 20 04:58:15 localhost journal[229367]: ethtool ioctl error on tapfa9247c7-4d: No such device Feb 20 04:58:15 localhost ovn_controller[155916]: 2026-02-20T09:58:15Z|00241|binding|INFO|Setting lport fa9247c7-4dd4-4be0-89b9-1383d7c52286 ovn-installed in OVS Feb 20 04:58:15 localhost ovn_controller[155916]: 2026-02-20T09:58:15Z|00242|binding|INFO|Setting lport fa9247c7-4dd4-4be0-89b9-1383d7c52286 up in Southbound Feb 20 04:58:15 localhost journal[229367]: ethtool ioctl error on tapfa9247c7-4d: No such device Feb 20 04:58:15 localhost nova_compute[280804]: 2026-02-20 09:58:15.620 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:58:15 localhost journal[229367]: ethtool ioctl error on tapfa9247c7-4d: No such device Feb 20 04:58:15 localhost journal[229367]: ethtool ioctl error on tapfa9247c7-4d: No such device Feb 
20 04:58:15 localhost journal[229367]: ethtool ioctl error on tapfa9247c7-4d: No such device Feb 20 04:58:15 localhost journal[229367]: ethtool ioctl error on tapfa9247c7-4d: No such device Feb 20 04:58:15 localhost journal[229367]: ethtool ioctl error on tapfa9247c7-4d: No such device Feb 20 04:58:15 localhost nova_compute[280804]: 2026-02-20 09:58:15.658 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:58:15 localhost nova_compute[280804]: 2026-02-20 09:58:15.687 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:58:15 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:58:15.697 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:58:15Z, description=, device_id=6c5c0d67-180d-4a72-8894-58efce22a7d0, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=e230e818-2aad-42da-bf5b-064537041e34, ip_allocation=immediate, mac_address=fa:16:3e:ee:cd:ec, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T08:22:50Z, description=, dns_domain=, id=84efa4de-646c-469c-b16c-6ab7c3e948cf, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=91bce661d685472eb3e7cacab17bf52a, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['38eee90c-2cdd-4118-ba55-5401f7e61e1e'], tags=[], 
tenant_id=91bce661d685472eb3e7cacab17bf52a, updated_at=2026-02-20T08:22:56Z, vlan_transparent=None, network_id=84efa4de-646c-469c-b16c-6ab7c3e948cf, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2929, status=DOWN, tags=[], tenant_id=, updated_at=2026-02-20T09:58:15Z on network 84efa4de-646c-469c-b16c-6ab7c3e948cf#033[00m Feb 20 04:58:15 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 3 addresses Feb 20 04:58:15 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:58:15 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:58:15 localhost podman[323064]: 2026-02-20 09:58:15.926998051 +0000 UTC m=+0.065438660 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Feb 20 04:58:16 localhost podman[241347]: time="2026-02-20T09:58:16Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 20 04:58:16 localhost podman[241347]: @ - - [20/Feb/2026:09:58:16 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 159544 "" "Go-http-client/1.1" Feb 20 04:58:16 localhost sshd[323097]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:58:16 localhost podman[241347]: @ - - 
[20/Feb/2026:09:58:16 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19252 "" "Go-http-client/1.1" Feb 20 04:58:16 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:58:16.283 263745 INFO neutron.agent.dhcp.agent [None req-3f785288-dd39-41b9-adf4-5789e74d26be - - - - - -] DHCP configuration for ports {'e230e818-2aad-42da-bf5b-064537041e34'} is completed#033[00m Feb 20 04:58:16 localhost podman[323124]: Feb 20 04:58:16 localhost podman[323124]: 2026-02-20 09:58:16.654115355 +0000 UTC m=+0.095120009 container create 05473327795b5b7451b22b0c2f54b83d7e65022492692875be7219753eedb2c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e2aad3d8-1c6f-4b34-a0b8-e5dbec28572b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Feb 20 04:58:16 localhost podman[323124]: 2026-02-20 09:58:16.606424527 +0000 UTC m=+0.047429201 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 20 04:58:16 localhost systemd[1]: Started libpod-conmon-05473327795b5b7451b22b0c2f54b83d7e65022492692875be7219753eedb2c6.scope. Feb 20 04:58:16 localhost systemd[1]: Started libcrun container. 
Feb 20 04:58:16 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a43e55f210744c2d1999585b3c57e47628ce1b2756f8fa25eceede4d66aa335/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 20 04:58:16 localhost podman[323124]: 2026-02-20 09:58:16.744463516 +0000 UTC m=+0.185468170 container init 05473327795b5b7451b22b0c2f54b83d7e65022492692875be7219753eedb2c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e2aad3d8-1c6f-4b34-a0b8-e5dbec28572b, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Feb 20 04:58:16 localhost podman[323124]: 2026-02-20 09:58:16.754199265 +0000 UTC m=+0.195203919 container start 05473327795b5b7451b22b0c2f54b83d7e65022492692875be7219753eedb2c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e2aad3d8-1c6f-4b34-a0b8-e5dbec28572b, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127) Feb 20 04:58:16 localhost dnsmasq[323142]: started, version 2.85 cachesize 150 Feb 20 04:58:16 localhost dnsmasq[323142]: DNS service limited to local subnets Feb 20 04:58:16 localhost dnsmasq[323142]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 20 04:58:16 localhost dnsmasq[323142]: warning: no upstream servers 
configured Feb 20 04:58:16 localhost dnsmasq-dhcp[323142]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 20 04:58:16 localhost dnsmasq[323142]: read /var/lib/neutron/dhcp/e2aad3d8-1c6f-4b34-a0b8-e5dbec28572b/addn_hosts - 0 addresses Feb 20 04:58:16 localhost dnsmasq-dhcp[323142]: read /var/lib/neutron/dhcp/e2aad3d8-1c6f-4b34-a0b8-e5dbec28572b/host Feb 20 04:58:16 localhost dnsmasq-dhcp[323142]: read /var/lib/neutron/dhcp/e2aad3d8-1c6f-4b34-a0b8-e5dbec28572b/opts Feb 20 04:58:16 localhost nova_compute[280804]: 2026-02-20 09:58:16.826 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:58:16 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:58:16.923 263745 INFO neutron.agent.dhcp.agent [None req-3f809687-b3cb-43ab-b879-420afbcf8bd0 - - - - - -] DHCP configuration for ports {'01f6f71e-88fa-4637-8505-84e304397e78'} is completed#033[00m Feb 20 04:58:17 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v403: 177 pgs: 177 active+clean; 196 MiB data, 968 MiB used, 41 GiB / 42 GiB avail; 49 KiB/s rd, 22 KiB/s wr, 67 op/s Feb 20 04:58:17 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:58:17.789 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:58:17Z, description=, device_id=6c5c0d67-180d-4a72-8894-58efce22a7d0, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=e1458954-9192-4daf-9f25-2dc4e84a32fc, ip_allocation=immediate, mac_address=fa:16:3e:27:ab:d9, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:58:12Z, description=, dns_domain=, id=e2aad3d8-1c6f-4b34-a0b8-e5dbec28572b, ipv4_address_scope=None, 
ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-VolumesActionsTest-1932164864-network, port_security_enabled=True, project_id=2124592a7614499d8c7693cd6ba353de, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=25489, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2912, status=ACTIVE, subnets=['7cb8716a-852e-4260-8541-ad00af7522ea'], tags=[], tenant_id=2124592a7614499d8c7693cd6ba353de, updated_at=2026-02-20T09:58:13Z, vlan_transparent=None, network_id=e2aad3d8-1c6f-4b34-a0b8-e5dbec28572b, port_security_enabled=False, project_id=2124592a7614499d8c7693cd6ba353de, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2933, status=DOWN, tags=[], tenant_id=2124592a7614499d8c7693cd6ba353de, updated_at=2026-02-20T09:58:17Z on network e2aad3d8-1c6f-4b34-a0b8-e5dbec28572b#033[00m Feb 20 04:58:17 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 04:58:17 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e194 do_prune osdmap full prune enabled Feb 20 04:58:17 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e195 e195: 6 total, 6 up, 6 in Feb 20 04:58:17 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e195: 6 total, 6 up, 6 in Feb 20 04:58:17 localhost dnsmasq[323142]: read /var/lib/neutron/dhcp/e2aad3d8-1c6f-4b34-a0b8-e5dbec28572b/addn_hosts - 1 addresses Feb 20 04:58:17 localhost dnsmasq-dhcp[323142]: read /var/lib/neutron/dhcp/e2aad3d8-1c6f-4b34-a0b8-e5dbec28572b/host Feb 20 04:58:17 localhost dnsmasq-dhcp[323142]: read /var/lib/neutron/dhcp/e2aad3d8-1c6f-4b34-a0b8-e5dbec28572b/opts Feb 20 04:58:17 localhost podman[323161]: 2026-02-20 09:58:17.997519687 +0000 UTC m=+0.073567226 container kill 
05473327795b5b7451b22b0c2f54b83d7e65022492692875be7219753eedb2c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e2aad3d8-1c6f-4b34-a0b8-e5dbec28572b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:58:18 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:58:18.266 263745 INFO neutron.agent.dhcp.agent [None req-cf84afb4-8343-4da4-a3a1-2129c07d20cb - - - - - -] DHCP configuration for ports {'e1458954-9192-4daf-9f25-2dc4e84a32fc'} is completed#033[00m Feb 20 04:58:18 localhost nova_compute[280804]: 2026-02-20 09:58:18.474 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:58:18 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 20 04:58:18 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/11949244' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 20 04:58:18 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 20 04:58:18 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/11949244' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 20 04:58:18 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:58:18.698 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:58:17Z, description=, device_id=6c5c0d67-180d-4a72-8894-58efce22a7d0, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=e1458954-9192-4daf-9f25-2dc4e84a32fc, ip_allocation=immediate, mac_address=fa:16:3e:27:ab:d9, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:58:12Z, description=, dns_domain=, id=e2aad3d8-1c6f-4b34-a0b8-e5dbec28572b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-VolumesActionsTest-1932164864-network, port_security_enabled=True, project_id=2124592a7614499d8c7693cd6ba353de, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=25489, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2912, status=ACTIVE, subnets=['7cb8716a-852e-4260-8541-ad00af7522ea'], tags=[], tenant_id=2124592a7614499d8c7693cd6ba353de, updated_at=2026-02-20T09:58:13Z, vlan_transparent=None, network_id=e2aad3d8-1c6f-4b34-a0b8-e5dbec28572b, port_security_enabled=False, project_id=2124592a7614499d8c7693cd6ba353de, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2933, status=DOWN, tags=[], tenant_id=2124592a7614499d8c7693cd6ba353de, updated_at=2026-02-20T09:58:17Z on network e2aad3d8-1c6f-4b34-a0b8-e5dbec28572b#033[00m Feb 20 04:58:18 localhost dnsmasq[323142]: read 
/var/lib/neutron/dhcp/e2aad3d8-1c6f-4b34-a0b8-e5dbec28572b/addn_hosts - 1 addresses Feb 20 04:58:18 localhost dnsmasq-dhcp[323142]: read /var/lib/neutron/dhcp/e2aad3d8-1c6f-4b34-a0b8-e5dbec28572b/host Feb 20 04:58:18 localhost dnsmasq-dhcp[323142]: read /var/lib/neutron/dhcp/e2aad3d8-1c6f-4b34-a0b8-e5dbec28572b/opts Feb 20 04:58:18 localhost podman[323198]: 2026-02-20 09:58:18.996497516 +0000 UTC m=+0.057489829 container kill 05473327795b5b7451b22b0c2f54b83d7e65022492692875be7219753eedb2c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e2aad3d8-1c6f-4b34-a0b8-e5dbec28572b, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Feb 20 04:58:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 04:58:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. 
Feb 20 04:58:19 localhost podman[323211]: 2026-02-20 09:58:19.111468892 +0000 UTC m=+0.082611707 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3) Feb 20 04:58:19 localhost podman[323213]: 2026-02-20 09:58:19.123011248 +0000 UTC m=+0.087642280 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent) Feb 20 04:58:19 localhost podman[323213]: 2026-02-20 09:58:19.158656005 +0000 UTC m=+0.123287057 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 
'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Feb 20 04:58:19 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. 
Feb 20 04:58:19 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 04:58:19 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 04:58:19 localhost podman[323211]: 2026-02-20 09:58:19.224696051 +0000 UTC m=+0.195838816 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, 
managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:58:19 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. Feb 20 04:58:19 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/.meta.tmp' Feb 20 04:58:19 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/.meta.tmp' to config b'/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/.meta' Feb 20 04:58:19 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 04:58:19 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "format": "json"}]: dispatch Feb 20 04:58:19 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 04:58:19 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 04:58:19 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:58:19.302 263745 INFO neutron.agent.dhcp.agent [None req-63144ecf-8cac-4346-b013-9a0cd9cfcdba - - - - - -] DHCP configuration for ports {'e1458954-9192-4daf-9f25-2dc4e84a32fc'} is 
completed#033[00m Feb 20 04:58:19 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f29b2b87-a99b-49d8-b340-b09eb6239fb8", "format": "json"}]: dispatch Feb 20 04:58:19 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f29b2b87-a99b-49d8-b340-b09eb6239fb8, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 04:58:19 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f29b2b87-a99b-49d8-b340-b09eb6239fb8, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 04:58:19 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:58:19.523+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f29b2b87-a99b-49d8-b340-b09eb6239fb8' of type subvolume Feb 20 04:58:19 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f29b2b87-a99b-49d8-b340-b09eb6239fb8' of type subvolume Feb 20 04:58:19 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f29b2b87-a99b-49d8-b340-b09eb6239fb8", "force": true, "format": "json"}]: dispatch Feb 20 04:58:19 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f29b2b87-a99b-49d8-b340-b09eb6239fb8, vol_name:cephfs) < "" Feb 20 04:58:19 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f29b2b87-a99b-49d8-b340-b09eb6239fb8'' moved to trashcan Feb 20 04:58:19 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] 
queuing job for volume 'cephfs' Feb 20 04:58:19 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f29b2b87-a99b-49d8-b340-b09eb6239fb8, vol_name:cephfs) < "" Feb 20 04:58:19 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v405: 177 pgs: 177 active+clean; 196 MiB data, 990 MiB used, 41 GiB / 42 GiB avail; 65 KiB/s rd, 34 KiB/s wr, 89 op/s Feb 20 04:58:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1. Feb 20 04:58:20 localhost nova_compute[280804]: 2026-02-20 09:58:20.444 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:58:20 localhost systemd[1]: tmp-crun.MEFMpv.mount: Deactivated successfully. Feb 20 04:58:20 localhost podman[323259]: 2026-02-20 09:58:20.460915634 +0000 UTC m=+0.101239292 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 20 04:58:20 
localhost podman[323259]: 2026-02-20 09:58:20.469774969 +0000 UTC m=+0.110098627 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 20 04:58:20 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully. 
Feb 20 04:58:21 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v406: 177 pgs: 177 active+clean; 196 MiB data, 991 MiB used, 41 GiB / 42 GiB avail; 96 KiB/s rd, 54 KiB/s wr, 134 op/s Feb 20 04:58:22 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice", "tenant_id": "8e04bc360fa14db4a793bc5de7a0a299", "access_level": "rw", "format": "json"}]: dispatch Feb 20 04:58:22 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, tenant_id:8e04bc360fa14db4a793bc5de7a0a299, vol_name:cephfs) < "" Feb 20 04:58:22 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Feb 20 04:58:22 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 20 04:58:22 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: Creating meta for ID alice with tenant 8e04bc360fa14db4a793bc5de7a0a299 Feb 20 04:58:22 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} v 0) Feb 20 04:58:22 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' 
entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch Feb 20 04:58:22 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished Feb 20 04:58:22 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, tenant_id:8e04bc360fa14db4a793bc5de7a0a299, vol_name:cephfs) < "" Feb 20 04:58:22 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e195 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 04:58:22 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e195 do_prune osdmap full prune enabled Feb 20 04:58:22 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e196 e196: 6 total, 6 up, 6 in Feb 20 04:58:22 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e196: 6 total, 6 up, 6 in Feb 20 04:58:22 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 20 04:58:22 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch Feb 20 04:58:22 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished Feb 20 04:58:23 localhost ceph-mgr[286565]: [balancer INFO root] Optimize plan auto_2026-02-20_09:58:23 Feb 20 04:58:23 localhost ceph-mgr[286565]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Feb 20 04:58:23 localhost ceph-mgr[286565]: [balancer INFO root] do_upmap Feb 20 04:58:23 localhost ceph-mgr[286565]: [balancer INFO root] pools ['backups', 'manila_metadata', 'volumes', 'vms', 'manila_data', '.mgr', 'images'] Feb 20 04:58:23 localhost ceph-mgr[286565]: [balancer INFO root] prepared 0/10 changes Feb 20 04:58:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 20 04:58:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:58:23 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ec39e814-8000-487e-8406-15e7edd35489", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 04:58:23 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ec39e814-8000-487e-8406-15e7edd35489, vol_name:cephfs) < "" Feb 20 04:58:23 localhost nova_compute[280804]: 2026-02-20 09:58:23.506 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:58:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 20 04:58:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:58:23 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ec39e814-8000-487e-8406-15e7edd35489/.meta.tmp' Feb 20 04:58:23 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ec39e814-8000-487e-8406-15e7edd35489/.meta.tmp' to config b'/volumes/_nogroup/ec39e814-8000-487e-8406-15e7edd35489/.meta' Feb 20 04:58:23 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ec39e814-8000-487e-8406-15e7edd35489, vol_name:cephfs) < "" Feb 20 04:58:23 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ec39e814-8000-487e-8406-15e7edd35489", "format": "json"}]: dispatch Feb 20 04:58:23 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ec39e814-8000-487e-8406-15e7edd35489, vol_name:cephfs) < "" Feb 20 04:58:23 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ec39e814-8000-487e-8406-15e7edd35489, vol_name:cephfs) < "" Feb 20 04:58:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 20 04:58:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:58:23 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v408: 177 pgs: 177 active+clean; 196 MiB data, 991 MiB used, 41 GiB / 42 GiB avail; 54 KiB/s rd, 35 KiB/s wr, 75 op/s Feb 20 04:58:23 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:58:23.636 263745 INFO neutron.agent.linux.ip_lib [None req-79638e27-6310-488d-a6af-d48114fd8453 - - - - - -] Device tap43856041-b8 cannot be used as it has no MAC address#033[00m Feb 20 04:58:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] _maybe_adjust Feb 20 04:58:23 localhost nova_compute[280804]: 2026-02-20 09:58:23.662 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:58:23 localhost ceph-mgr[286565]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Feb 20 04:58:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 20 04:58:23 localhost ceph-mgr[286565]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Feb 20 04:58:23 localhost kernel: device tap43856041-b8 entered promiscuous mode Feb 20 04:58:23 localhost NetworkManager[5967]: [1771581503.6724] manager: (tap43856041-b8): new Generic device (/org/freedesktop/NetworkManager/Devices/49) Feb 20 04:58:23 localhost ovn_controller[155916]: 2026-02-20T09:58:23Z|00243|binding|INFO|Claiming lport 43856041-b884-4c28-ae01-3616ccde3672 for this chassis. 
Feb 20 04:58:23 localhost ovn_controller[155916]: 2026-02-20T09:58:23Z|00244|binding|INFO|43856041-b884-4c28-ae01-3616ccde3672: Claiming unknown Feb 20 04:58:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 20 04:58:23 localhost nova_compute[280804]: 2026-02-20 09:58:23.672 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:58:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 20 04:58:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:58:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Feb 20 04:58:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:58:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003325274375348967 of space, bias 1.0, pg target 0.6650548750697934 quantized to 32 (current 32) Feb 20 04:58:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:58:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014866541910943606 of space, bias 1.0, pg target 0.2968352868218407 quantized to 32 (current 32) Feb 20 04:58:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:58:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Feb 20 04:58:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:58:23 localhost ceph-mgr[286565]: [pg_autoscaler 
INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Feb 20 04:58:23 localhost systemd-udevd[323291]: Network interface NamePolicy= disabled on kernel command line. Feb 20 04:58:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:58:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 8.17891541038526e-07 of space, bias 1.0, pg target 0.00016276041666666666 quantized to 32 (current 32) Feb 20 04:58:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:58:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0002933504327191513 of space, bias 4.0, pg target 0.23350694444444445 quantized to 16 (current 16) Feb 20 04:58:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 20 04:58:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: images, start_after= Feb 20 04:58:23 localhost ovn_metadata_agent[161761]: 2026-02-20 09:58:23.686 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp77c820b4-9646-5dae-aabe-b13a62b01d06-58ba6ee5-8c2f-436a-a2a4-a024363b92a2', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58ba6ee5-8c2f-436a-a2a4-a024363b92a2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 
'80723b5de8af4075aa84c53e89b4d020', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fd98e812-903d-42ba-a266-0f71d74a031c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=43856041-b884-4c28-ae01-3616ccde3672) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:58:23 localhost ovn_metadata_agent[161761]: 2026-02-20 09:58:23.688 161766 INFO neutron.agent.ovn.metadata.agent [-] Port 43856041-b884-4c28-ae01-3616ccde3672 in datapath 58ba6ee5-8c2f-436a-a2a4-a024363b92a2 bound to our chassis#033[00m Feb 20 04:58:23 localhost ovn_metadata_agent[161761]: 2026-02-20 09:58:23.689 161766 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 58ba6ee5-8c2f-436a-a2a4-a024363b92a2 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 20 04:58:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: images, start_after= Feb 20 04:58:23 localhost ovn_metadata_agent[161761]: 2026-02-20 09:58:23.691 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[6eef20fb-9cda-4d2a-8867-934c3b6e7475]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:58:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 20 04:58:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 20 04:58:23 localhost journal[229367]: ethtool ioctl error on tap43856041-b8: No such device Feb 20 04:58:23 localhost journal[229367]: ethtool ioctl error on tap43856041-b8: No such device Feb 20 04:58:23 
localhost ovn_controller[155916]: 2026-02-20T09:58:23Z|00245|binding|INFO|Setting lport 43856041-b884-4c28-ae01-3616ccde3672 ovn-installed in OVS Feb 20 04:58:23 localhost ovn_controller[155916]: 2026-02-20T09:58:23Z|00246|binding|INFO|Setting lport 43856041-b884-4c28-ae01-3616ccde3672 up in Southbound Feb 20 04:58:23 localhost nova_compute[280804]: 2026-02-20 09:58:23.716 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:58:23 localhost journal[229367]: ethtool ioctl error on tap43856041-b8: No such device Feb 20 04:58:23 localhost nova_compute[280804]: 2026-02-20 09:58:23.718 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:58:23 localhost journal[229367]: ethtool ioctl error on tap43856041-b8: No such device Feb 20 04:58:23 localhost journal[229367]: ethtool ioctl error on tap43856041-b8: No such device Feb 20 04:58:23 localhost journal[229367]: ethtool ioctl error on tap43856041-b8: No such device Feb 20 04:58:23 localhost journal[229367]: ethtool ioctl error on tap43856041-b8: No such device Feb 20 04:58:23 localhost journal[229367]: ethtool ioctl error on tap43856041-b8: No such device Feb 20 04:58:23 localhost nova_compute[280804]: 2026-02-20 09:58:23.756 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:58:23 localhost nova_compute[280804]: 2026-02-20 09:58:23.785 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:58:24 localhost ovn_controller[155916]: 2026-02-20T09:58:24Z|00247|binding|INFO|Removing iface tap43856041-b8 ovn-installed in OVS Feb 20 04:58:24 localhost ovn_metadata_agent[161761]: 2026-02-20 09:58:24.353 161766 WARNING neutron.agent.ovn.metadata.agent 
[-] Removing non-external type port 410828b5-e904-4126-87d7-57351a48b4c9 with type ""#033[00m Feb 20 04:58:24 localhost ovn_controller[155916]: 2026-02-20T09:58:24Z|00248|binding|INFO|Removing lport 43856041-b884-4c28-ae01-3616ccde3672 ovn-installed in OVS Feb 20 04:58:24 localhost nova_compute[280804]: 2026-02-20 09:58:24.355 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:58:24 localhost ovn_metadata_agent[161761]: 2026-02-20 09:58:24.356 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcp77c820b4-9646-5dae-aabe-b13a62b01d06-58ba6ee5-8c2f-436a-a2a4-a024363b92a2', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58ba6ee5-8c2f-436a-a2a4-a024363b92a2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '80723b5de8af4075aa84c53e89b4d020', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625202.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fd98e812-903d-42ba-a266-0f71d74a031c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=43856041-b884-4c28-ae01-3616ccde3672) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:58:24 localhost ovn_metadata_agent[161761]: 2026-02-20 09:58:24.358 161766 INFO 
neutron.agent.ovn.metadata.agent [-] Port 43856041-b884-4c28-ae01-3616ccde3672 in datapath 58ba6ee5-8c2f-436a-a2a4-a024363b92a2 unbound from our chassis#033[00m Feb 20 04:58:24 localhost ovn_metadata_agent[161761]: 2026-02-20 09:58:24.359 161766 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 58ba6ee5-8c2f-436a-a2a4-a024363b92a2 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 20 04:58:24 localhost ovn_metadata_agent[161761]: 2026-02-20 09:58:24.360 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[726b58d9-fdc3-4ef5-b054-6c8c67656fcd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:58:24 localhost nova_compute[280804]: 2026-02-20 09:58:24.364 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:58:24 localhost nova_compute[280804]: 2026-02-20 09:58:24.511 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:58:24 localhost podman[323361]: Feb 20 04:58:24 localhost podman[323361]: 2026-02-20 09:58:24.613469303 +0000 UTC m=+0.077632925 container create c40941dcab33dfad46dfdc63014ea330dd8d66f3696d4c81666139f33b3ed4ac (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-58ba6ee5-8c2f-436a-a2a4-a024363b92a2, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2) Feb 20 04:58:24 localhost systemd[1]: Started 
libpod-conmon-c40941dcab33dfad46dfdc63014ea330dd8d66f3696d4c81666139f33b3ed4ac.scope. Feb 20 04:58:24 localhost podman[323361]: 2026-02-20 09:58:24.570656364 +0000 UTC m=+0.034820006 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 20 04:58:24 localhost systemd[1]: Started libcrun container. Feb 20 04:58:24 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbfbfc5d8176452979cc4f2c2c6c0ef7f47a8f01108adea0046f42df27eb8dfa/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 20 04:58:24 localhost podman[323361]: 2026-02-20 09:58:24.689196446 +0000 UTC m=+0.153360078 container init c40941dcab33dfad46dfdc63014ea330dd8d66f3696d4c81666139f33b3ed4ac (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-58ba6ee5-8c2f-436a-a2a4-a024363b92a2, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3) Feb 20 04:58:24 localhost podman[323361]: 2026-02-20 09:58:24.698613835 +0000 UTC m=+0.162777457 container start c40941dcab33dfad46dfdc63014ea330dd8d66f3696d4c81666139f33b3ed4ac (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-58ba6ee5-8c2f-436a-a2a4-a024363b92a2, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:58:24 localhost dnsmasq[323380]: started, version 2.85 cachesize 150 Feb 20 04:58:24 
localhost dnsmasq[323380]: DNS service limited to local subnets Feb 20 04:58:24 localhost dnsmasq[323380]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 20 04:58:24 localhost dnsmasq[323380]: warning: no upstream servers configured Feb 20 04:58:24 localhost dnsmasq-dhcp[323380]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 20 04:58:24 localhost dnsmasq[323380]: read /var/lib/neutron/dhcp/58ba6ee5-8c2f-436a-a2a4-a024363b92a2/addn_hosts - 0 addresses Feb 20 04:58:24 localhost dnsmasq-dhcp[323380]: read /var/lib/neutron/dhcp/58ba6ee5-8c2f-436a-a2a4-a024363b92a2/host Feb 20 04:58:24 localhost dnsmasq-dhcp[323380]: read /var/lib/neutron/dhcp/58ba6ee5-8c2f-436a-a2a4-a024363b92a2/opts Feb 20 04:58:24 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:58:24.860 263745 INFO neutron.agent.dhcp.agent [None req-81d71ad6-85d3-4785-b5b6-f7c6c6b7d7d3 - - - - - -] DHCP configuration for ports {'f791374c-719e-4159-959a-6c1f973c38a4'} is completed#033[00m Feb 20 04:58:24 localhost dnsmasq[323380]: exiting on receipt of SIGTERM Feb 20 04:58:24 localhost podman[323398]: 2026-02-20 09:58:24.957874106 +0000 UTC m=+0.062907113 container kill c40941dcab33dfad46dfdc63014ea330dd8d66f3696d4c81666139f33b3ed4ac (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-58ba6ee5-8c2f-436a-a2a4-a024363b92a2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Feb 20 04:58:24 localhost systemd[1]: libpod-c40941dcab33dfad46dfdc63014ea330dd8d66f3696d4c81666139f33b3ed4ac.scope: Deactivated successfully. 
Feb 20 04:58:25 localhost podman[323410]: 2026-02-20 09:58:25.041152758 +0000 UTC m=+0.064140875 container died c40941dcab33dfad46dfdc63014ea330dd8d66f3696d4c81666139f33b3ed4ac (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-58ba6ee5-8c2f-436a-a2a4-a024363b92a2, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2) Feb 20 04:58:25 localhost podman[323410]: 2026-02-20 09:58:25.073126619 +0000 UTC m=+0.096114696 container cleanup c40941dcab33dfad46dfdc63014ea330dd8d66f3696d4c81666139f33b3ed4ac (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-58ba6ee5-8c2f-436a-a2a4-a024363b92a2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0) Feb 20 04:58:25 localhost systemd[1]: libpod-conmon-c40941dcab33dfad46dfdc63014ea330dd8d66f3696d4c81666139f33b3ed4ac.scope: Deactivated successfully. 
Feb 20 04:58:25 localhost podman[323412]: 2026-02-20 09:58:25.1122889 +0000 UTC m=+0.125684572 container remove c40941dcab33dfad46dfdc63014ea330dd8d66f3696d4c81666139f33b3ed4ac (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-58ba6ee5-8c2f-436a-a2a4-a024363b92a2, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 20 04:58:25 localhost nova_compute[280804]: 2026-02-20 09:58:25.171 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 04:58:25 localhost kernel: device tap43856041-b8 left promiscuous mode
Feb 20 04:58:25 localhost nova_compute[280804]: 2026-02-20 09:58:25.190 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 04:58:25 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 20 04:58:25 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3466105401' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 04:58:25 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 20 04:58:25 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3466105401' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 04:58:25 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:58:25.231 263745 INFO neutron.agent.dhcp.agent [None req-321acce1-6fb0-49b3-b0a4-3118edf14073 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Feb 20 04:58:25 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:58:25.232 263745 INFO neutron.agent.dhcp.agent [None req-321acce1-6fb0-49b3-b0a4-3118edf14073 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Feb 20 04:58:25 localhost nova_compute[280804]: 2026-02-20 09:58:25.446 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 04:58:25 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v409: 177 pgs: 177 active+clean; 196 MiB data, 992 MiB used, 41 GiB / 42 GiB avail; 91 KiB/s rd, 57 KiB/s wr, 130 op/s
Feb 20 04:58:25 localhost systemd[1]: var-lib-containers-storage-overlay-fbfbfc5d8176452979cc4f2c2c6c0ef7f47a8f01108adea0046f42df27eb8dfa-merged.mount: Deactivated successfully.
Feb 20 04:58:25 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c40941dcab33dfad46dfdc63014ea330dd8d66f3696d4c81666139f33b3ed4ac-userdata-shm.mount: Deactivated successfully.
Feb 20 04:58:25 localhost systemd[1]: run-netns-qdhcp\x2d58ba6ee5\x2d8c2f\x2d436a\x2da2a4\x2da024363b92a2.mount: Deactivated successfully.
Feb 20 04:58:25 localhost sshd[323439]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:58:25 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice", "format": "json"}]: dispatch
Feb 20 04:58:25 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < ""
Feb 20 04:58:25 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Feb 20 04:58:25 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 20 04:58:25 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Feb 20 04:58:25 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Feb 20 04:58:25 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Feb 20 04:58:25 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < ""
Feb 20 04:58:25 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice", "format": "json"}]: dispatch
Feb 20 04:58:25 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < ""
Feb 20 04:58:25 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb
Feb 20 04:58:25 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb 20 04:58:25 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < ""
Feb 20 04:58:26 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e196 do_prune osdmap full prune enabled
Feb 20 04:58:26 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e197 e197: 6 total, 6 up, 6 in
Feb 20 04:58:26 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e197: 6 total, 6 up, 6 in
Feb 20 04:58:26 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 20 04:58:26 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Feb 20 04:58:26 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Feb 20 04:58:26 localhost ceph-mgr[286565]: [devicehealth INFO root] Check health
Feb 20 04:58:26 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 20 04:58:26 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2603382454' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 04:58:26 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 20 04:58:26 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2603382454' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 04:58:27 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v411: 177 pgs: 177 active+clean; 196 MiB data, 992 MiB used, 41 GiB / 42 GiB avail; 79 KiB/s rd, 48 KiB/s wr, 114 op/s
Feb 20 04:58:27 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 20 04:58:27 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3400407797' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 04:58:27 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 20 04:58:27 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3400407797' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 04:58:27 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ec39e814-8000-487e-8406-15e7edd35489", "format": "json"}]: dispatch
Feb 20 04:58:27 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ec39e814-8000-487e-8406-15e7edd35489, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 20 04:58:27 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ec39e814-8000-487e-8406-15e7edd35489, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 20 04:58:27 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:58:27.856+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ec39e814-8000-487e-8406-15e7edd35489' of type subvolume
Feb 20 04:58:27 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ec39e814-8000-487e-8406-15e7edd35489' of type subvolume
Feb 20 04:58:27 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ec39e814-8000-487e-8406-15e7edd35489", "force": true, "format": "json"}]: dispatch
Feb 20 04:58:27 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ec39e814-8000-487e-8406-15e7edd35489, vol_name:cephfs) < ""
Feb 20 04:58:27 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ec39e814-8000-487e-8406-15e7edd35489'' moved to trashcan
Feb 20 04:58:27 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 20 04:58:27 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ec39e814-8000-487e-8406-15e7edd35489, vol_name:cephfs) < ""
Feb 20 04:58:27 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 04:58:28 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 20 04:58:28 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1678028411' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 04:58:28 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 20 04:58:28 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1678028411' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 04:58:28 localhost openstack_network_exporter[243776]: ERROR 09:58:28 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 04:58:28 localhost openstack_network_exporter[243776]:
Feb 20 04:58:28 localhost openstack_network_exporter[243776]: ERROR 09:58:28 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 04:58:28 localhost openstack_network_exporter[243776]:
Feb 20 04:58:28 localhost nova_compute[280804]: 2026-02-20 09:58:28.508 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 04:58:29 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice", "tenant_id": "8e04bc360fa14db4a793bc5de7a0a299", "access_level": "r", "format": "json"}]: dispatch
Feb 20 04:58:29 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, tenant_id:8e04bc360fa14db4a793bc5de7a0a299, vol_name:cephfs) < ""
Feb 20 04:58:29 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Feb 20 04:58:29 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 20 04:58:29 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: Creating meta for ID alice with tenant 8e04bc360fa14db4a793bc5de7a0a299
Feb 20 04:58:29 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 20 04:58:29 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} v 0)
Feb 20 04:58:29 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch
Feb 20 04:58:29 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished
Feb 20 04:58:29 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, tenant_id:8e04bc360fa14db4a793bc5de7a0a299, vol_name:cephfs) < ""
Feb 20 04:58:29 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v412: 177 pgs: 177 active+clean; 196 MiB data, 992 MiB used, 41 GiB / 42 GiB avail; 38 KiB/s rd, 23 KiB/s wr, 58 op/s
Feb 20 04:58:30 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch
Feb 20 04:58:30 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished
Feb 20 04:58:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.
Feb 20 04:58:30 localhost nova_compute[280804]: 2026-02-20 09:58:30.451 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 04:58:30 localhost podman[323442]: 2026-02-20 09:58:30.458992661 +0000 UTC m=+0.094798820 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Feb 20 04:58:30 localhost podman[323442]: 2026-02-20 09:58:30.496858228 +0000 UTC m=+0.132664337 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 20 04:58:30 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully.
Feb 20 04:58:31 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v413: 177 pgs: 177 active+clean; 196 MiB data, 997 MiB used, 41 GiB / 42 GiB avail; 104 KiB/s rd, 70 KiB/s wr, 148 op/s
Feb 20 04:58:32 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:58:32.280 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:58:32Z, description=, device_id=6e190587-ab34-4300-b8f0-29e369b79c71, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=844dfcda-0bf1-4353-9ff9-3c5d7fa283da, ip_allocation=immediate, mac_address=fa:16:3e:d7:e5:05, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T08:22:50Z, description=, dns_domain=, id=84efa4de-646c-469c-b16c-6ab7c3e948cf, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=91bce661d685472eb3e7cacab17bf52a, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['38eee90c-2cdd-4118-ba55-5401f7e61e1e'], tags=[], tenant_id=91bce661d685472eb3e7cacab17bf52a, updated_at=2026-02-20T08:22:56Z, vlan_transparent=None, network_id=84efa4de-646c-469c-b16c-6ab7c3e948cf, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2992, status=DOWN, tags=[], tenant_id=, updated_at=2026-02-20T09:58:32Z on network 84efa4de-646c-469c-b16c-6ab7c3e948cf#033[00m
Feb 20 04:58:32 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice", "format": "json"}]: dispatch
Feb 20 04:58:32 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < ""
Feb 20 04:58:32 localhost systemd[1]: tmp-crun.WuRynH.mount: Deactivated successfully.
Feb 20 04:58:32 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 4 addresses
Feb 20 04:58:32 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host
Feb 20 04:58:32 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts
Feb 20 04:58:32 localhost podman[323482]: 2026-02-20 09:58:32.547359062 +0000 UTC m=+0.082309059 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 04:58:32 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Feb 20 04:58:32 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 20 04:58:32 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Feb 20 04:58:32 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Feb 20 04:58:32 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Feb 20 04:58:32 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < ""
Feb 20 04:58:32 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice", "format": "json"}]: dispatch
Feb 20 04:58:32 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < ""
Feb 20 04:58:32 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb
Feb 20 04:58:32 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb 20 04:58:32 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < ""
Feb 20 04:58:32 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "92707d6b-2313-43a9-b4cc-78c6dfb385e9", "format": "json"}]: dispatch
Feb 20 04:58:32 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:92707d6b-2313-43a9-b4cc-78c6dfb385e9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 20 04:58:32 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:92707d6b-2313-43a9-b4cc-78c6dfb385e9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 20 04:58:32 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:58:32.708+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '92707d6b-2313-43a9-b4cc-78c6dfb385e9' of type subvolume
Feb 20 04:58:32 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '92707d6b-2313-43a9-b4cc-78c6dfb385e9' of type subvolume
Feb 20 04:58:32 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "92707d6b-2313-43a9-b4cc-78c6dfb385e9", "force": true, "format": "json"}]: dispatch
Feb 20 04:58:32 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:92707d6b-2313-43a9-b4cc-78c6dfb385e9, vol_name:cephfs) < ""
Feb 20 04:58:32 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/92707d6b-2313-43a9-b4cc-78c6dfb385e9'' moved to trashcan
Feb 20 04:58:32 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 20 04:58:32 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:92707d6b-2313-43a9-b4cc-78c6dfb385e9, vol_name:cephfs) < ""
Feb 20 04:58:32 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:58:32.895 263745 INFO neutron.agent.dhcp.agent [None req-9afe423e-25dd-4bdb-a3d5-5f539b35182c - - - - - -] DHCP configuration for ports {'844dfcda-0bf1-4353-9ff9-3c5d7fa283da'} is completed#033[00m
Feb 20 04:58:32 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 04:58:32 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e197 do_prune osdmap full prune enabled
Feb 20 04:58:32 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e198 e198: 6 total, 6 up, 6 in
Feb 20 04:58:32 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e198: 6 total, 6 up, 6 in
Feb 20 04:58:33 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 20 04:58:33 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Feb 20 04:58:33 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Feb 20 04:58:33 localhost nova_compute[280804]: 2026-02-20 09:58:33.548 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 04:58:33 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v415: 177 pgs: 177 active+clean; 196 MiB data, 997 MiB used, 41 GiB / 42 GiB avail; 75 KiB/s rd, 53 KiB/s wr, 105 op/s
Feb 20 04:58:33 localhost nova_compute[280804]: 2026-02-20 09:58:33.963 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 04:58:35 localhost nova_compute[280804]: 2026-02-20 09:58:35.455 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 04:58:35 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v416: 177 pgs: 177 active+clean; 197 MiB data, 997 MiB used, 41 GiB / 42 GiB avail; 85 KiB/s rd, 63 KiB/s wr, 122 op/s
Feb 20 04:58:35 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice_bob", "tenant_id": "8e04bc360fa14db4a793bc5de7a0a299", "access_level": "rw", "format": "json"}]: dispatch
Feb 20 04:58:35 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, tenant_id:8e04bc360fa14db4a793bc5de7a0a299, vol_name:cephfs) < ""
Feb 20 04:58:35 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Feb 20 04:58:35 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 20 04:58:35 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: Creating meta for ID alice_bob with tenant 8e04bc360fa14db4a793bc5de7a0a299
Feb 20 04:58:35 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} v 0)
Feb 20 04:58:35 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch
Feb 20 04:58:35 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished
Feb 20 04:58:35 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, tenant_id:8e04bc360fa14db4a793bc5de7a0a299, vol_name:cephfs) < ""
Feb 20 04:58:36 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 20 04:58:36 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch
Feb 20 04:58:36 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished
Feb 20 04:58:37 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v417: 177 pgs: 177 active+clean; 197 MiB data, 997 MiB used, 41 GiB / 42 GiB avail; 81 KiB/s rd, 60 KiB/s wr, 116 op/s
Feb 20 04:58:37 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 04:58:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.
Feb 20 04:58:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.
Feb 20 04:58:38 localhost podman[323505]: 2026-02-20 09:58:38.447829492 +0000 UTC m=+0.080578543 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager.
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, build-date=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, release=1770267347, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.created=2026-02-05T04:57:10Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the 
latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, maintainer=Red Hat, Inc., distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.7, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter) Feb 20 04:58:38 localhost podman[323505]: 2026-02-20 09:58:38.45981431 +0000 UTC m=+0.092563451 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, name=ubi9/ubi-minimal, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', 
'/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=openstack_network_exporter, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1770267347, version=9.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, container_name=openstack_network_exporter) Feb 20 04:58:38 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. Feb 20 04:58:38 localhost systemd[1]: tmp-crun.UZb5sS.mount: Deactivated successfully. 
Feb 20 04:58:38 localhost podman[323506]: 2026-02-20 09:58:38.505373832 +0000 UTC m=+0.134321511 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, 
org.label-schema.schema-version=1.0) Feb 20 04:58:38 localhost podman[323506]: 2026-02-20 09:58:38.539738915 +0000 UTC m=+0.168686614 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, 
org.label-schema.build-date=20260127, managed_by=edpm_ansible) Feb 20 04:58:38 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully. Feb 20 04:58:38 localhost nova_compute[280804]: 2026-02-20 09:58:38.595 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:58:38 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice_bob", "format": "json"}]: dispatch Feb 20 04:58:38 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 04:58:39 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Feb 20 04:58:39 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 20 04:58:39 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Feb 20 04:58:39 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 20 04:58:39 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Feb 20 04:58:39 localhost 
ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 04:58:39 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice_bob", "format": "json"}]: dispatch Feb 20 04:58:39 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 04:58:39 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb Feb 20 04:58:39 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 20 04:58:39 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 04:58:39 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:58:39.205 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:58:38Z, description=, device_id=7b11a9b1-00cf-4de3-addd-57993750a1cc, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=b0d6c0a3-3cdb-4723-9454-ff2df4261057, ip_allocation=immediate, mac_address=fa:16:3e:24:fd:3d, name=, 
network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T08:22:50Z, description=, dns_domain=, id=84efa4de-646c-469c-b16c-6ab7c3e948cf, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=91bce661d685472eb3e7cacab17bf52a, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['38eee90c-2cdd-4118-ba55-5401f7e61e1e'], tags=[], tenant_id=91bce661d685472eb3e7cacab17bf52a, updated_at=2026-02-20T08:22:56Z, vlan_transparent=None, network_id=84efa4de-646c-469c-b16c-6ab7c3e948cf, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3011, status=DOWN, tags=[], tenant_id=, updated_at=2026-02-20T09:58:38Z on network 84efa4de-646c-469c-b16c-6ab7c3e948cf#033[00m Feb 20 04:58:39 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 20 04:58:39 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 20 04:58:39 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Feb 20 04:58:39 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 5 addresses Feb 20 04:58:39 localhost podman[323561]: 2026-02-20 09:58:39.434578405 +0000 UTC m=+0.067371921 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Feb 20 04:58:39 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:58:39 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:58:39 localhost systemd[1]: tmp-crun.XxmBFO.mount: Deactivated successfully. Feb 20 04:58:39 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v418: 177 pgs: 177 active+clean; 197 MiB data, 997 MiB used, 41 GiB / 42 GiB avail; 80 KiB/s rd, 60 KiB/s wr, 113 op/s Feb 20 04:58:39 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:58:39.684 263745 INFO neutron.agent.dhcp.agent [None req-429451b1-e680-4656-a43e-e85f4bff3fa9 - - - - - -] DHCP configuration for ports {'b0d6c0a3-3cdb-4723-9454-ff2df4261057'} is completed#033[00m Feb 20 04:58:40 localhost nova_compute[280804]: 2026-02-20 09:58:40.458 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:58:40 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 4 addresses Feb 20 04:58:40 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:58:40 localhost podman[323597]: 2026-02-20 09:58:40.970676209 +0000 UTC m=+0.064846735 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3) Feb 20 04:58:40 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:58:41 localhost nova_compute[280804]: 2026-02-20 09:58:41.295 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:58:41 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v419: 177 pgs: 177 active+clean; 197 MiB data, 998 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 49 KiB/s wr, 36 op/s Feb 20 04:58:41 localhost nova_compute[280804]: 2026-02-20 09:58:41.639 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:58:42 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice_bob", "tenant_id": "8e04bc360fa14db4a793bc5de7a0a299", "access_level": "r", "format": "json"}]: dispatch Feb 20 04:58:42 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, tenant_id:8e04bc360fa14db4a793bc5de7a0a299, vol_name:cephfs) < "" Feb 20 04:58:42 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Feb 20 04:58:42 
localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 20 04:58:42 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: Creating meta for ID alice_bob with tenant 8e04bc360fa14db4a793bc5de7a0a299 Feb 20 04:58:42 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 20 04:58:42 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} v 0) Feb 20 04:58:42 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch Feb 20 04:58:42 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished 
Feb 20 04:58:42 localhost nova_compute[280804]: 2026-02-20 09:58:42.408 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:58:42 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, tenant_id:8e04bc360fa14db4a793bc5de7a0a299, vol_name:cephfs) < "" Feb 20 04:58:42 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 04:58:43 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v420: 177 pgs: 177 active+clean; 197 MiB data, 998 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 46 KiB/s wr, 34 op/s Feb 20 04:58:43 localhost nova_compute[280804]: 2026-02-20 09:58:43.640 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:58:43 localhost dnsmasq[322860]: exiting on receipt of SIGTERM Feb 20 04:58:43 localhost podman[323706]: 2026-02-20 09:58:43.84444435 +0000 UTC m=+0.068987283 container kill fa73ff61512f94b4e9fe2ee8fd50032bd09a18ae8cacb65cc8ef221911dc934d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-638a66c9-a568-47e2-b955-e5be1d4001c9, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Feb 20 04:58:43 localhost systemd[1]: libpod-fa73ff61512f94b4e9fe2ee8fd50032bd09a18ae8cacb65cc8ef221911dc934d.scope: Deactivated successfully. 
Feb 20 04:58:43 localhost podman[323727]: 2026-02-20 09:58:43.930251781 +0000 UTC m=+0.068331726 container died fa73ff61512f94b4e9fe2ee8fd50032bd09a18ae8cacb65cc8ef221911dc934d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-638a66c9-a568-47e2-b955-e5be1d4001c9, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Feb 20 04:58:43 localhost podman[323727]: 2026-02-20 09:58:43.965746605 +0000 UTC m=+0.103826520 container cleanup fa73ff61512f94b4e9fe2ee8fd50032bd09a18ae8cacb65cc8ef221911dc934d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-638a66c9-a568-47e2-b955-e5be1d4001c9, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Feb 20 04:58:43 localhost systemd[1]: libpod-conmon-fa73ff61512f94b4e9fe2ee8fd50032bd09a18ae8cacb65cc8ef221911dc934d.scope: Deactivated successfully. 
Feb 20 04:58:44 localhost podman[323734]: 2026-02-20 09:58:44.004905665 +0000 UTC m=+0.133688524 container remove fa73ff61512f94b4e9fe2ee8fd50032bd09a18ae8cacb65cc8ef221911dc934d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-638a66c9-a568-47e2-b955-e5be1d4001c9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127) Feb 20 04:58:44 localhost kernel: device tap0b2939b2-e4 left promiscuous mode Feb 20 04:58:44 localhost nova_compute[280804]: 2026-02-20 09:58:44.017 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:58:44 localhost ovn_controller[155916]: 2026-02-20T09:58:44Z|00249|binding|INFO|Releasing lport 0b2939b2-e441-4b67-9267-d403a108ff43 from this chassis (sb_readonly=0) Feb 20 04:58:44 localhost ovn_controller[155916]: 2026-02-20T09:58:44Z|00250|binding|INFO|Setting lport 0b2939b2-e441-4b67-9267-d403a108ff43 down in Southbound Feb 20 04:58:44 localhost ovn_metadata_agent[161761]: 2026-02-20 09:58:44.027 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.255.242/28', 'neutron:device_id': 'dhcp77c820b4-9646-5dae-aabe-b13a62b01d06-638a66c9-a568-47e2-b955-e5be1d4001c9', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': 
'', 'neutron:network_name': 'neutron-638a66c9-a568-47e2-b955-e5be1d4001c9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '80723b5de8af4075aa84c53e89b4d020', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625202.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2e615dd7-10c9-4b11-b8c2-d3bb2afa00db, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=0b2939b2-e441-4b67-9267-d403a108ff43) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:58:44 localhost ovn_metadata_agent[161761]: 2026-02-20 09:58:44.029 161766 INFO neutron.agent.ovn.metadata.agent [-] Port 0b2939b2-e441-4b67-9267-d403a108ff43 in datapath 638a66c9-a568-47e2-b955-e5be1d4001c9 unbound from our chassis#033[00m Feb 20 04:58:44 localhost ovn_metadata_agent[161761]: 2026-02-20 09:58:44.032 161766 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 638a66c9-a568-47e2-b955-e5be1d4001c9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 20 04:58:44 localhost ovn_metadata_agent[161761]: 2026-02-20 09:58:44.033 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[8394ea2e-d2de-4a72-acb6-a8e08d212fc1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:58:44 localhost nova_compute[280804]: 2026-02-20 09:58:44.037 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:58:44 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth 
get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch Feb 20 04:58:44 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished Feb 20 04:58:44 localhost podman[323776]: 2026-02-20 09:58:44.163290274 +0000 UTC m=+0.108309019 container exec 17bcd3d9a6436ea04160ed4c4e04d839e51c8678402dc4e38301f79a98b3dc30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, vendor=Red Hat, Inc., GIT_BRANCH=main, CEPH_POINT_RELEASE=, GIT_CLEAN=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, distribution-scope=public, version=7, io.k8s.description=Red Hat Ceph Storage 7, release=1770267347, vcs-type=git, build-date=2026-02-09T10:25:24Z, ceph=True, io.openshift.expose-services=, 
architecture=x86_64, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2) Feb 20 04:58:44 localhost podman[323776]: 2026-02-20 09:58:44.277831358 +0000 UTC m=+0.222850113 container exec_died 17bcd3d9a6436ea04160ed4c4e04d839e51c8678402dc4e38301f79a98b3dc30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625202, com.redhat.component=rhceph-container, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-09T10:25:24Z, release=1770267347, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, ceph=True, io.buildah.version=1.42.2, vcs-type=git, version=7, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7) Feb 20 04:58:44 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:58:44.328 263745 INFO neutron.agent.dhcp.agent [None req-4ce68a64-386a-4eb1-9f7e-7fa6232f15f2 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:58:44 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:58:44.412 263745 INFO 
neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:58:44 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:58:44.836 263745 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:58:44 localhost systemd[1]: var-lib-containers-storage-overlay-4b8f30d7f893bcdc46619072b801c25e7f37a704ef45b14d4ceff4ecbba6ebac-merged.mount: Deactivated successfully. Feb 20 04:58:44 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fa73ff61512f94b4e9fe2ee8fd50032bd09a18ae8cacb65cc8ef221911dc934d-userdata-shm.mount: Deactivated successfully. Feb 20 04:58:44 localhost systemd[1]: run-netns-qdhcp\x2d638a66c9\x2da568\x2d47e2\x2db955\x2de5be1d4001c9.mount: Deactivated successfully. Feb 20 04:58:44 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0) Feb 20 04:58:44 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:58:44 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0) Feb 20 04:58:44 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:58:44 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0) Feb 20 04:58:44 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:58:44 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0) Feb 
20 04:58:45 localhost podman[323914]: 2026-02-20 09:58:45.010138951 +0000 UTC m=+0.064631050 container kill 05473327795b5b7451b22b0c2f54b83d7e65022492692875be7219753eedb2c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e2aad3d8-1c6f-4b34-a0b8-e5dbec28572b, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3) Feb 20 04:58:45 localhost dnsmasq[323142]: read /var/lib/neutron/dhcp/e2aad3d8-1c6f-4b34-a0b8-e5dbec28572b/addn_hosts - 0 addresses Feb 20 04:58:45 localhost dnsmasq-dhcp[323142]: read /var/lib/neutron/dhcp/e2aad3d8-1c6f-4b34-a0b8-e5dbec28572b/host Feb 20 04:58:45 localhost dnsmasq-dhcp[323142]: read /var/lib/neutron/dhcp/e2aad3d8-1c6f-4b34-a0b8-e5dbec28572b/opts Feb 20 04:58:45 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:58:45 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0) Feb 20 04:58:45 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:58:45 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0) Feb 20 04:58:45 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:58:45 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:58:45 localhost 
ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:58:45 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:58:45 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:58:45 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:58:45 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:58:45 localhost kernel: device tapfa9247c7-4d left promiscuous mode Feb 20 04:58:45 localhost nova_compute[280804]: 2026-02-20 09:58:45.217 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:58:45 localhost ovn_controller[155916]: 2026-02-20T09:58:45Z|00251|binding|INFO|Releasing lport fa9247c7-4dd4-4be0-89b9-1383d7c52286 from this chassis (sb_readonly=0) Feb 20 04:58:45 localhost ovn_controller[155916]: 2026-02-20T09:58:45Z|00252|binding|INFO|Setting lport fa9247c7-4dd4-4be0-89b9-1383d7c52286 down in Southbound Feb 20 04:58:45 localhost nova_compute[280804]: 2026-02-20 09:58:45.245 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:58:45 localhost ovn_metadata_agent[161761]: 2026-02-20 09:58:45.307 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 
'dhcp77c820b4-9646-5dae-aabe-b13a62b01d06-e2aad3d8-1c6f-4b34-a0b8-e5dbec28572b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e2aad3d8-1c6f-4b34-a0b8-e5dbec28572b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2124592a7614499d8c7693cd6ba353de', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625202.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=28b11d60-f827-42f7-aa93-3b723a6711dd, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=fa9247c7-4dd4-4be0-89b9-1383d7c52286) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:58:45 localhost ovn_metadata_agent[161761]: 2026-02-20 09:58:45.309 161766 INFO neutron.agent.ovn.metadata.agent [-] Port fa9247c7-4dd4-4be0-89b9-1383d7c52286 in datapath e2aad3d8-1c6f-4b34-a0b8-e5dbec28572b unbound from our chassis#033[00m Feb 20 04:58:45 localhost ovn_metadata_agent[161761]: 2026-02-20 09:58:45.311 161766 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e2aad3d8-1c6f-4b34-a0b8-e5dbec28572b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 20 04:58:45 localhost ovn_metadata_agent[161761]: 2026-02-20 09:58:45.312 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[3a09c07c-79f0-4d61-b560-b6a02a4618eb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:58:45 localhost nova_compute[280804]: 2026-02-20 09:58:45.388 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 
04:58:45 localhost nova_compute[280804]: 2026-02-20 09:58:45.460 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:58:45 localhost sshd[323988]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:58:45 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v421: 177 pgs: 177 active+clean; 197 MiB data, 999 MiB used, 41 GiB / 42 GiB avail; 18 KiB/s rd, 54 KiB/s wr, 33 op/s Feb 20 04:58:45 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice_bob", "format": "json"}]: dispatch Feb 20 04:58:45 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 04:58:45 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) Feb 20 04:58:45 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Feb 20 04:58:45 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) Feb 20 04:58:45 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Feb 20 04:58:45 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.4", "name": 
"osd_memory_target"} v 0) Feb 20 04:58:45 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Feb 20 04:58:45 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} v 0) Feb 20 04:58:45 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Feb 20 04:58:45 localhost ceph-mgr[286565]: [cephadm INFO root] Adjusting osd_memory_target on np0005625203.localdomain to 836.6M Feb 20 04:58:45 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005625203.localdomain to 836.6M Feb 20 04:58:45 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Feb 20 04:58:45 localhost ceph-mgr[286565]: [cephadm INFO root] Adjusting osd_memory_target on np0005625202.localdomain to 836.6M Feb 20 04:58:45 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005625202.localdomain to 836.6M Feb 20 04:58:45 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Feb 20 04:58:45 localhost ceph-mgr[286565]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005625203.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 20 04:58:45 localhost ceph-mgr[286565]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005625203.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 20 04:58:45 localhost 
ceph-mgr[286565]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005625202.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 20 04:58:45 localhost ceph-mgr[286565]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005625202.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 20 04:58:45 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Feb 20 04:58:45 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 20 04:58:45 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Feb 20 04:58:45 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 20 04:58:45 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Feb 20 04:58:46 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 04:58:46 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice_bob", "format": "json"}]: dispatch Feb 20 04:58:46 localhost ceph-mgr[286565]: 
[volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 04:58:46 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) Feb 20 04:58:46 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Feb 20 04:58:46 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} v 0) Feb 20 04:58:46 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Feb 20 04:58:46 localhost ceph-mgr[286565]: [cephadm INFO root] Adjusting osd_memory_target on np0005625204.localdomain to 836.6M Feb 20 04:58:46 localhost ceph-mgr[286565]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005625204.localdomain to 836.6M Feb 20 04:58:46 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Feb 20 04:58:46 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb Feb 20 04:58:46 localhost ceph-mgr[286565]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005625204.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 20 04:58:46 localhost ceph-mgr[286565]: log_channel(cephadm) log [WRN] : Unable to 
set osd_memory_target on np0005625204.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 20 04:58:46 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:58:46 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 04:58:46 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 20 04:58:46 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 04:58:46 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 20 04:58:46 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 04:58:46 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 20 04:58:46 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:58:46 localhost ceph-mgr[286565]: [progress INFO root] update: starting ev 1614a4aa-fdd7-421e-864e-789037b7aee6 (Updating node-proxy deployment (+3 -> 3)) Feb 20 04:58:46 localhost ceph-mgr[286565]: [progress INFO root] complete: finished ev 1614a4aa-fdd7-421e-864e-789037b7aee6 (Updating node-proxy deployment (+3 -> 3)) Feb 20 04:58:46 localhost ceph-mgr[286565]: [progress INFO root] Completed event 1614a4aa-fdd7-421e-864e-789037b7aee6 (Updating 
node-proxy deployment (+3 -> 3)) in 0 seconds Feb 20 04:58:46 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 20 04:58:46 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 20 04:58:46 localhost podman[241347]: time="2026-02-20T09:58:46Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 20 04:58:46 localhost podman[241347]: @ - - [20/Feb/2026:09:58:46 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 159540 "" "Go-http-client/1.1" Feb 20 04:58:46 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Feb 20 04:58:46 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Feb 20 04:58:46 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Feb 20 04:58:46 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Feb 20 04:58:46 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 20 04:58:46 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", 
"entity": "client.alice_bob"} : dispatch Feb 20 04:58:46 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Feb 20 04:58:46 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Feb 20 04:58:46 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Feb 20 04:58:46 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 04:58:46 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:58:46 localhost podman[241347]: @ - - [20/Feb/2026:09:58:46 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19255 "" "Go-http-client/1.1" Feb 20 04:58:46 localhost sshd[324026]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:58:47 localhost ceph-mon[292786]: Adjusting osd_memory_target on np0005625203.localdomain to 836.6M Feb 20 04:58:47 localhost ceph-mon[292786]: Adjusting osd_memory_target on np0005625202.localdomain to 836.6M Feb 20 04:58:47 localhost ceph-mon[292786]: Unable to set osd_memory_target on np0005625203.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 20 04:58:47 localhost ceph-mon[292786]: Unable to set osd_memory_target on np0005625202.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 20 04:58:47 localhost ceph-mon[292786]: Adjusting osd_memory_target on np0005625204.localdomain to 836.6M Feb 20 04:58:47 localhost ceph-mon[292786]: Unable to set osd_memory_target on 
np0005625204.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Feb 20 04:58:47 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v422: 177 pgs: 177 active+clean; 197 MiB data, 999 MiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 39 KiB/s wr, 6 op/s Feb 20 04:58:47 localhost nova_compute[280804]: 2026-02-20 09:58:47.615 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:58:47 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 04:58:47 localhost nova_compute[280804]: 2026-02-20 09:58:47.982 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:58:47 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 3 addresses Feb 20 04:58:47 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:58:47 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:58:47 localhost podman[324043]: 2026-02-20 09:58:47.990474075 +0000 UTC m=+0.075041105 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20260127) Feb 20 04:58:48 localhost dnsmasq[323142]: exiting on 
receipt of SIGTERM Feb 20 04:58:48 localhost podman[324078]: 2026-02-20 09:58:48.588885568 +0000 UTC m=+0.065933182 container kill 05473327795b5b7451b22b0c2f54b83d7e65022492692875be7219753eedb2c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e2aad3d8-1c6f-4b34-a0b8-e5dbec28572b, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127) Feb 20 04:58:48 localhost systemd[1]: libpod-05473327795b5b7451b22b0c2f54b83d7e65022492692875be7219753eedb2c6.scope: Deactivated successfully. Feb 20 04:58:48 localhost nova_compute[280804]: 2026-02-20 09:58:48.676 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:58:48 localhost ceph-mgr[286565]: [progress INFO root] Writing back 50 completed events Feb 20 04:58:48 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 20 04:58:48 localhost podman[324091]: 2026-02-20 09:58:48.683659837 +0000 UTC m=+0.079950546 container died 05473327795b5b7451b22b0c2f54b83d7e65022492692875be7219753eedb2c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e2aad3d8-1c6f-4b34-a0b8-e5dbec28572b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Feb 20 04:58:48 localhost ceph-mon[292786]: 
log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:58:48 localhost podman[324091]: 2026-02-20 09:58:48.72967631 +0000 UTC m=+0.125966949 container cleanup 05473327795b5b7451b22b0c2f54b83d7e65022492692875be7219753eedb2c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e2aad3d8-1c6f-4b34-a0b8-e5dbec28572b, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 04:58:48 localhost systemd[1]: libpod-conmon-05473327795b5b7451b22b0c2f54b83d7e65022492692875be7219753eedb2c6.scope: Deactivated successfully. Feb 20 04:58:48 localhost podman[324092]: 2026-02-20 09:58:48.770721371 +0000 UTC m=+0.160421254 container remove 05473327795b5b7451b22b0c2f54b83d7e65022492692875be7219753eedb2c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e2aad3d8-1c6f-4b34-a0b8-e5dbec28572b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS) Feb 20 04:58:48 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:58:48.844 263745 INFO neutron.agent.dhcp.agent [None req-ec979e6e-fe30-49e8-95d6-0e58b5065016 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 04:58:48 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:58:48.857 263745 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, 
action_kwargs: {}#033[00m Feb 20 04:58:48 localhost systemd[1]: var-lib-containers-storage-overlay-6a43e55f210744c2d1999585b3c57e47628ce1b2756f8fa25eceede4d66aa335-merged.mount: Deactivated successfully. Feb 20 04:58:48 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-05473327795b5b7451b22b0c2f54b83d7e65022492692875be7219753eedb2c6-userdata-shm.mount: Deactivated successfully. Feb 20 04:58:48 localhost systemd[1]: run-netns-qdhcp\x2de2aad3d8\x2d1c6f\x2d4b34\x2da0b8\x2de5dbec28572b.mount: Deactivated successfully. Feb 20 04:58:49 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice bob", "tenant_id": "8e04bc360fa14db4a793bc5de7a0a299", "access_level": "rw", "format": "json"}]: dispatch Feb 20 04:58:49 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, tenant_id:8e04bc360fa14db4a793bc5de7a0a299, vol_name:cephfs) < "" Feb 20 04:58:49 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Feb 20 04:58:49 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 20 04:58:49 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: Creating meta for ID alice bob with tenant 8e04bc360fa14db4a793bc5de7a0a299 Feb 20 04:58:49 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw 
path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} v 0) Feb 20 04:58:49 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch Feb 20 04:58:49 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished Feb 20 04:58:49 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:58:49 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 20 04:58:49 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch Feb 20 
04:58:49 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished Feb 20 04:58:49 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, tenant_id:8e04bc360fa14db4a793bc5de7a0a299, vol_name:cephfs) < "" Feb 20 04:58:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 04:58:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. 
Feb 20 04:58:49 localhost podman[324119]: 2026-02-20 09:58:49.466205064 +0000 UTC m=+0.095670834 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Feb 20 04:58:49 localhost podman[324119]: 2026-02-20 09:58:49.510822839 +0000 UTC m=+0.140288679 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 
'6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true) Feb 20 04:58:49 localhost systemd[1]: tmp-crun.zMy4vJ.mount: Deactivated successfully. 
Feb 20 04:58:49 localhost podman[324120]: 2026-02-20 09:58:49.522841458 +0000 UTC m=+0.150276974 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127) Feb 20 04:58:49 localhost systemd[1]: 
76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. Feb 20 04:58:49 localhost podman[324120]: 2026-02-20 09:58:49.534806597 +0000 UTC m=+0.162242113 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, 
org.label-schema.schema-version=1.0) Feb 20 04:58:49 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. Feb 20 04:58:49 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v423: 177 pgs: 177 active+clean; 197 MiB data, 1007 MiB used, 41 GiB / 42 GiB avail; 597 B/s rd, 39 KiB/s wr, 7 op/s Feb 20 04:58:50 localhost nova_compute[280804]: 2026-02-20 09:58:50.464 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:58:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1. Feb 20 04:58:51 localhost podman[324163]: 2026-02-20 09:58:51.449218834 +0000 UTC m=+0.087302541 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 20 04:58:51 localhost podman[324163]: 2026-02-20 09:58:51.459153488 +0000 UTC m=+0.097237185 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 
(image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 20 04:58:51 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully. 
Feb 20 04:58:51 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v424: 177 pgs: 177 active+clean; 253 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 4.7 MiB/s wr, 37 op/s Feb 20 04:58:52 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice bob", "format": "json"}]: dispatch Feb 20 04:58:52 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 04:58:52 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Feb 20 04:58:52 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 20 04:58:52 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Feb 20 04:58:52 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 20 04:58:52 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Feb 20 04:58:52 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) 
< "" Feb 20 04:58:52 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice bob", "format": "json"}]: dispatch Feb 20 04:58:52 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 04:58:52 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb Feb 20 04:58:52 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 20 04:58:52 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 04:58:52 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "bc980eec-f143-463a-b3c7-dd1de3420de8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 04:58:52 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bc980eec-f143-463a-b3c7-dd1de3420de8, vol_name:cephfs) < "" Feb 20 04:58:52 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bc980eec-f143-463a-b3c7-dd1de3420de8/.meta.tmp' Feb 20 04:58:52 localhost ceph-mgr[286565]: [volumes INFO 
volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bc980eec-f143-463a-b3c7-dd1de3420de8/.meta.tmp' to config b'/volumes/_nogroup/bc980eec-f143-463a-b3c7-dd1de3420de8/.meta' Feb 20 04:58:52 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bc980eec-f143-463a-b3c7-dd1de3420de8, vol_name:cephfs) < "" Feb 20 04:58:52 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "bc980eec-f143-463a-b3c7-dd1de3420de8", "format": "json"}]: dispatch Feb 20 04:58:52 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bc980eec-f143-463a-b3c7-dd1de3420de8, vol_name:cephfs) < "" Feb 20 04:58:52 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bc980eec-f143-463a-b3c7-dd1de3420de8, vol_name:cephfs) < "" Feb 20 04:58:52 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 04:58:53 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 20 04:58:53 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 20 04:58:53 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Feb 20 04:58:53 localhost ceph-mgr[286565]: [volumes 
INFO mgr_util] scanning for idle connections.. Feb 20 04:58:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:58:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 04:58:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:58:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 04:58:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:58:53 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v425: 177 pgs: 177 active+clean; 253 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 4.7 MiB/s wr, 34 op/s Feb 20 04:58:53 localhost nova_compute[280804]: 2026-02-20 09:58:53.716 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:58:55 localhost nova_compute[280804]: 2026-02-20 09:58:55.467 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:58:55 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v426: 177 pgs: 177 active+clean; 436 MiB data, 1.6 GiB used, 40 GiB / 42 GiB avail; 1.8 MiB/s rd, 18 MiB/s wr, 85 op/s Feb 20 04:58:55 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice bob", "tenant_id": "8e04bc360fa14db4a793bc5de7a0a299", "access_level": "r", "format": "json"}]: dispatch Feb 20 04:58:55 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, 
tenant_id:8e04bc360fa14db4a793bc5de7a0a299, vol_name:cephfs) < "" Feb 20 04:58:55 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Feb 20 04:58:55 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 20 04:58:55 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: Creating meta for ID alice bob with tenant 8e04bc360fa14db4a793bc5de7a0a299 Feb 20 04:58:56 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} v 0) Feb 20 04:58:56 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch Feb 20 04:58:56 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data 
namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished Feb 20 04:58:56 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, tenant_id:8e04bc360fa14db4a793bc5de7a0a299, vol_name:cephfs) < "" Feb 20 04:58:56 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Feb 20 04:58:56 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1695766244' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Feb 20 04:58:56 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 20 04:58:56 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch Feb 20 04:58:56 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished Feb 20 04:58:56 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' 
cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e9bd40a1-418b-4133-bfcb-60a8a2bd8eed", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 04:58:56 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e9bd40a1-418b-4133-bfcb-60a8a2bd8eed, vol_name:cephfs) < "" Feb 20 04:58:56 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e9bd40a1-418b-4133-bfcb-60a8a2bd8eed/.meta.tmp' Feb 20 04:58:56 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e9bd40a1-418b-4133-bfcb-60a8a2bd8eed/.meta.tmp' to config b'/volumes/_nogroup/e9bd40a1-418b-4133-bfcb-60a8a2bd8eed/.meta' Feb 20 04:58:56 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e9bd40a1-418b-4133-bfcb-60a8a2bd8eed, vol_name:cephfs) < "" Feb 20 04:58:56 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e9bd40a1-418b-4133-bfcb-60a8a2bd8eed", "format": "json"}]: dispatch Feb 20 04:58:56 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e9bd40a1-418b-4133-bfcb-60a8a2bd8eed, vol_name:cephfs) < "" Feb 20 04:58:56 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e9bd40a1-418b-4133-bfcb-60a8a2bd8eed, vol_name:cephfs) < "" Feb 20 04:58:57 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e198 
do_prune osdmap full prune enabled Feb 20 04:58:57 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e199 e199: 6 total, 6 up, 6 in Feb 20 04:58:57 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e199: 6 total, 6 up, 6 in Feb 20 04:58:57 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v428: 177 pgs: 177 active+clean; 436 MiB data, 1.6 GiB used, 40 GiB / 42 GiB avail; 2.1 MiB/s rd, 21 MiB/s wr, 99 op/s Feb 20 04:58:57 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 04:58:58 localhost openstack_network_exporter[243776]: ERROR 09:58:58 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 20 04:58:58 localhost openstack_network_exporter[243776]: Feb 20 04:58:58 localhost openstack_network_exporter[243776]: ERROR 09:58:58 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 20 04:58:58 localhost openstack_network_exporter[243776]: Feb 20 04:58:58 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e199 do_prune osdmap full prune enabled Feb 20 04:58:58 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e200 e200: 6 total, 6 up, 6 in Feb 20 04:58:58 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e200: 6 total, 6 up, 6 in Feb 20 04:58:58 localhost sshd[324187]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:58:58 localhost nova_compute[280804]: 2026-02-20 09:58:58.756 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:58:59 localhost ovn_metadata_agent[161761]: 2026-02-20 09:58:59.060 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, 
nb_cfg=16, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'c2:c2:14', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '46:d0:69:88:97:47'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:58:59 localhost nova_compute[280804]: 2026-02-20 09:58:59.061 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:58:59 localhost ovn_metadata_agent[161761]: 2026-02-20 09:58:59.061 161766 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Feb 20 04:58:59 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice bob", "format": "json"}]: dispatch Feb 20 04:58:59 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 04:58:59 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Feb 20 04:58:59 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 20 04:58:59 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 
0) Feb 20 04:58:59 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 20 04:58:59 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Feb 20 04:58:59 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 04:58:59 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice bob", "format": "json"}]: dispatch Feb 20 04:58:59 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 04:58:59 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb Feb 20 04:58:59 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 20 04:58:59 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 04:58:59 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v430: 177 pgs: 177 active+clean; 469 MiB data, 1.6 GiB used, 40 GiB / 42 GiB avail; 878 
KiB/s rd, 24 MiB/s wr, 80 op/s Feb 20 04:58:59 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e9bd40a1-418b-4133-bfcb-60a8a2bd8eed", "format": "json"}]: dispatch Feb 20 04:58:59 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:e9bd40a1-418b-4133-bfcb-60a8a2bd8eed, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 04:58:59 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:e9bd40a1-418b-4133-bfcb-60a8a2bd8eed, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 04:58:59 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:58:59.839+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e9bd40a1-418b-4133-bfcb-60a8a2bd8eed' of type subvolume Feb 20 04:58:59 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e9bd40a1-418b-4133-bfcb-60a8a2bd8eed' of type subvolume Feb 20 04:58:59 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e9bd40a1-418b-4133-bfcb-60a8a2bd8eed", "force": true, "format": "json"}]: dispatch Feb 20 04:58:59 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e9bd40a1-418b-4133-bfcb-60a8a2bd8eed, vol_name:cephfs) < "" Feb 20 04:58:59 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/e9bd40a1-418b-4133-bfcb-60a8a2bd8eed'' moved to trashcan Feb 20 04:58:59 localhost ceph-mgr[286565]: [volumes INFO 
volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 04:58:59 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e9bd40a1-418b-4133-bfcb-60a8a2bd8eed, vol_name:cephfs) < "" Feb 20 04:59:00 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:59:00.284 263745 INFO neutron.agent.linux.ip_lib [None req-86929d08-6eb0-4834-b086-623d70f35558 - - - - - -] Device tap5871c06f-3c cannot be used as it has no MAC address#033[00m Feb 20 04:59:00 localhost nova_compute[280804]: 2026-02-20 09:59:00.310 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:59:00 localhost kernel: device tap5871c06f-3c entered promiscuous mode Feb 20 04:59:00 localhost nova_compute[280804]: 2026-02-20 09:59:00.318 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:59:00 localhost NetworkManager[5967]: [1771581540.3214] manager: (tap5871c06f-3c): new Generic device (/org/freedesktop/NetworkManager/Devices/50) Feb 20 04:59:00 localhost ovn_controller[155916]: 2026-02-20T09:59:00Z|00253|binding|INFO|Claiming lport 5871c06f-3c3e-4e74-ac99-58a2a68de443 for this chassis. Feb 20 04:59:00 localhost ovn_controller[155916]: 2026-02-20T09:59:00Z|00254|binding|INFO|5871c06f-3c3e-4e74-ac99-58a2a68de443: Claiming unknown Feb 20 04:59:00 localhost systemd-udevd[324200]: Network interface NamePolicy= disabled on kernel command line. 
Feb 20 04:59:00 localhost ovn_metadata_agent[161761]: 2026-02-20 09:59:00.332 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.101.0.2/28', 'neutron:device_id': 'dhcp77c820b4-9646-5dae-aabe-b13a62b01d06-938f3422-8b96-48d1-9534-9457642bcd81', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-938f3422-8b96-48d1-9534-9457642bcd81', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db4c2cde6adc4016a4bb7c41aa8e59c8', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4718fadb-b43d-48b2-a22e-2d63028e4364, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=5871c06f-3c3e-4e74-ac99-58a2a68de443) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:59:00 localhost ovn_metadata_agent[161761]: 2026-02-20 09:59:00.335 161766 INFO neutron.agent.ovn.metadata.agent [-] Port 5871c06f-3c3e-4e74-ac99-58a2a68de443 in datapath 938f3422-8b96-48d1-9534-9457642bcd81 bound to our chassis#033[00m Feb 20 04:59:00 localhost ovn_metadata_agent[161761]: 2026-02-20 09:59:00.337 161766 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 938f3422-8b96-48d1-9534-9457642bcd81 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Feb 20 04:59:00 localhost ovn_metadata_agent[161761]: 2026-02-20 09:59:00.338 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[aa8426d2-faec-4ec1-993c-4da14d6b4c54]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:59:00 localhost journal[229367]: ethtool ioctl error on tap5871c06f-3c: No such device Feb 20 04:59:00 localhost ovn_controller[155916]: 2026-02-20T09:59:00Z|00255|binding|INFO|Setting lport 5871c06f-3c3e-4e74-ac99-58a2a68de443 ovn-installed in OVS Feb 20 04:59:00 localhost ovn_controller[155916]: 2026-02-20T09:59:00Z|00256|binding|INFO|Setting lport 5871c06f-3c3e-4e74-ac99-58a2a68de443 up in Southbound Feb 20 04:59:00 localhost nova_compute[280804]: 2026-02-20 09:59:00.359 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:59:00 localhost journal[229367]: ethtool ioctl error on tap5871c06f-3c: No such device Feb 20 04:59:00 localhost journal[229367]: ethtool ioctl error on tap5871c06f-3c: No such device Feb 20 04:59:00 localhost journal[229367]: ethtool ioctl error on tap5871c06f-3c: No such device Feb 20 04:59:00 localhost journal[229367]: ethtool ioctl error on tap5871c06f-3c: No such device Feb 20 04:59:00 localhost journal[229367]: ethtool ioctl error on tap5871c06f-3c: No such device Feb 20 04:59:00 localhost journal[229367]: ethtool ioctl error on tap5871c06f-3c: No such device Feb 20 04:59:00 localhost journal[229367]: ethtool ioctl error on tap5871c06f-3c: No such device Feb 20 04:59:00 localhost nova_compute[280804]: 2026-02-20 09:59:00.395 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:59:00 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth 
get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 20 04:59:00 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 20 04:59:00 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Feb 20 04:59:00 localhost nova_compute[280804]: 2026-02-20 09:59:00.428 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:59:00 localhost ceph-mon[292786]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #55. Immutable memtables: 0. Feb 20 04:59:00 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:59:00.438809) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Feb 20 04:59:00 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 55 Feb 20 04:59:00 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581540438887, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2565, "num_deletes": 266, "total_data_size": 2819553, "memory_usage": 2868280, "flush_reason": "Manual Compaction"} Feb 20 04:59:00 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #56: started Feb 20 04:59:00 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581540456022, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 56, "file_size": 2755961, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30216, "largest_seqno": 32780, 
"table_properties": {"data_size": 2744524, "index_size": 7302, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3077, "raw_key_size": 27174, "raw_average_key_size": 22, "raw_value_size": 2720547, "raw_average_value_size": 2257, "num_data_blocks": 307, "num_entries": 1205, "num_filter_entries": 1205, "num_deletions": 266, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771581414, "oldest_key_time": 1771581414, "file_creation_time": 1771581540, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a5b0c71e-1a28-4ac7-8b68-08edb74002f2", "db_session_id": "54EDA52XUT1SDV7DF7Y7", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}} Feb 20 04:59:00 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 17251 microseconds, and 10595 cpu microseconds. Feb 20 04:59:00 localhost ceph-mon[292786]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Feb 20 04:59:00 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:59:00.456070) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #56: 2755961 bytes OK Feb 20 04:59:00 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:59:00.456093) [db/memtable_list.cc:519] [default] Level-0 commit table #56 started Feb 20 04:59:00 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:59:00.458205) [db/memtable_list.cc:722] [default] Level-0 commit table #56: memtable #1 done Feb 20 04:59:00 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:59:00.458220) EVENT_LOG_v1 {"time_micros": 1771581540458216, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Feb 20 04:59:00 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:59:00.458243) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Feb 20 04:59:00 localhost ceph-mon[292786]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 2808136, prev total WAL file size 2808136, number of live WAL files 2. Feb 20 04:59:00 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000052.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 20 04:59:00 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:59:00.458996) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132303438' seq:72057594037927935, type:22 .. 
'7061786F73003132333030' seq:0, type:0; will stop at (end) Feb 20 04:59:00 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00 Feb 20 04:59:00 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [56(2691KB)], [54(17MB)] Feb 20 04:59:00 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581540459058, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [56], "files_L6": [54], "score": -1, "input_data_size": 20730333, "oldest_snapshot_seqno": -1} Feb 20 04:59:00 localhost nova_compute[280804]: 2026-02-20 09:59:00.469 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:59:00 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #57: 13382 keys, 19529701 bytes, temperature: kUnknown Feb 20 04:59:00 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581540586889, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 57, "file_size": 19529701, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 19449658, "index_size": 45510, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 33477, "raw_key_size": 356146, "raw_average_key_size": 26, "raw_value_size": 19218511, "raw_average_value_size": 1436, "num_data_blocks": 1741, "num_entries": 13382, "num_filter_entries": 13382, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": 
"leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580572, "oldest_key_time": 0, "file_creation_time": 1771581540, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a5b0c71e-1a28-4ac7-8b68-08edb74002f2", "db_session_id": "54EDA52XUT1SDV7DF7Y7", "orig_file_number": 57, "seqno_to_time_mapping": "N/A"}} Feb 20 04:59:00 localhost ceph-mon[292786]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Feb 20 04:59:00 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:59:00.587271) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 19529701 bytes Feb 20 04:59:00 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:59:00.591329) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 162.0 rd, 152.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 17.1 +0.0 blob) out(18.6 +0.0 blob), read-write-amplify(14.6) write-amplify(7.1) OK, records in: 13935, records dropped: 553 output_compression: NoCompression Feb 20 04:59:00 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:59:00.591362) EVENT_LOG_v1 {"time_micros": 1771581540591347, "job": 32, "event": "compaction_finished", "compaction_time_micros": 127949, "compaction_time_cpu_micros": 43034, "output_level": 6, "num_output_files": 1, "total_output_size": 19529701, "num_input_records": 13935, "num_output_records": 13382, "num_subcompactions": 1, "output_compression": "NoCompression", 
"num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Feb 20 04:59:00 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 20 04:59:00 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581540591959, "job": 32, "event": "table_file_deletion", "file_number": 56} Feb 20 04:59:00 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000054.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 20 04:59:00 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581540594969, "job": 32, "event": "table_file_deletion", "file_number": 54} Feb 20 04:59:00 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:59:00.458930) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 04:59:00 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:59:00.595044) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 04:59:00 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:59:00.595054) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 04:59:00 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:59:00.595058) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 04:59:00 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:59:00.595061) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 04:59:00 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:59:00.595064) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting 
Feb 20 04:59:01 localhost ovn_metadata_agent[161761]: 2026-02-20 09:59:01.063 161766 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0a83b6be-9fe2-42ef-8768-88847d97b165, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 20 04:59:01 localhost podman[324271]: Feb 20 04:59:01 localhost podman[324271]: 2026-02-20 09:59:01.290080833 +0000 UTC m=+0.095874958 container create eb05783ed3a986d962ea49e14561ab37f6acb50838d37dcc71f650234bf290b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-938f3422-8b96-48d1-9534-9457642bcd81, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:59:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e. Feb 20 04:59:01 localhost systemd[1]: Started libpod-conmon-eb05783ed3a986d962ea49e14561ab37f6acb50838d37dcc71f650234bf290b9.scope. Feb 20 04:59:01 localhost podman[324271]: 2026-02-20 09:59:01.240831574 +0000 UTC m=+0.046625729 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 20 04:59:01 localhost systemd[1]: Started libcrun container. 
Feb 20 04:59:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/924de2053094ad4f3fca0f69cfd3f035725317378636eb69bca17ac2692cc619/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 20 04:59:01 localhost podman[324271]: 2026-02-20 09:59:01.379087019 +0000 UTC m=+0.184881154 container init eb05783ed3a986d962ea49e14561ab37f6acb50838d37dcc71f650234bf290b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-938f3422-8b96-48d1-9534-9457642bcd81, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Feb 20 04:59:01 localhost podman[324271]: 2026-02-20 09:59:01.39116958 +0000 UTC m=+0.196963715 container start eb05783ed3a986d962ea49e14561ab37f6acb50838d37dcc71f650234bf290b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-938f3422-8b96-48d1-9534-9457642bcd81, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:59:01 localhost dnsmasq[324303]: started, version 2.85 cachesize 150 Feb 20 04:59:01 localhost dnsmasq[324303]: DNS service limited to local subnets Feb 20 04:59:01 localhost dnsmasq[324303]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 20 04:59:01 localhost dnsmasq[324303]: warning: no upstream servers 
configured Feb 20 04:59:01 localhost dnsmasq-dhcp[324303]: DHCP, static leases only on 10.101.0.0, lease time 1d Feb 20 04:59:01 localhost dnsmasq[324303]: read /var/lib/neutron/dhcp/938f3422-8b96-48d1-9534-9457642bcd81/addn_hosts - 0 addresses Feb 20 04:59:01 localhost dnsmasq-dhcp[324303]: read /var/lib/neutron/dhcp/938f3422-8b96-48d1-9534-9457642bcd81/host Feb 20 04:59:01 localhost dnsmasq-dhcp[324303]: read /var/lib/neutron/dhcp/938f3422-8b96-48d1-9534-9457642bcd81/opts Feb 20 04:59:01 localhost podman[324286]: 2026-02-20 09:59:01.467341864 +0000 UTC m=+0.135501902 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, 
container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 20 04:59:01 localhost podman[324286]: 2026-02-20 09:59:01.509939366 +0000 UTC m=+0.178099384 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 20 04:59:01 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully. 
Feb 20 04:59:01 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:59:01.570 263745 INFO neutron.agent.dhcp.agent [None req-9d46306c-372b-416b-bc4c-6409c3e74dbc - - - - - -] DHCP configuration for ports {'d365b269-510b-4a4b-a276-5f3b445fbd2f'} is completed#033[00m Feb 20 04:59:01 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v431: 177 pgs: 177 active+clean; 602 MiB data, 2.0 GiB used, 40 GiB / 42 GiB avail; 2.8 MiB/s rd, 37 MiB/s wr, 170 op/s Feb 20 04:59:02 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 20 04:59:02 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3305751398' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 20 04:59:02 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 20 04:59:02 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/3305751398' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 20 04:59:02 localhost nova_compute[280804]: 2026-02-20 09:59:02.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:59:02 localhost nova_compute[280804]: 2026-02-20 09:59:02.511 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 20 04:59:02 localhost nova_compute[280804]: 2026-02-20 09:59:02.511 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 20 04:59:02 localhost nova_compute[280804]: 2026-02-20 09:59:02.685 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 20 04:59:02 localhost nova_compute[280804]: 2026-02-20 09:59:02.686 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:59:02 localhost nova_compute[280804]: 2026-02-20 09:59:02.686 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:59:02 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e200 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 04:59:02 localhost ceph-mon[292786]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #58. Immutable memtables: 0. 
Feb 20 04:59:02 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:59:02.998810) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 20 04:59:02 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 58
Feb 20 04:59:02 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581542998895, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 292, "num_deletes": 256, "total_data_size": 53658, "memory_usage": 60872, "flush_reason": "Manual Compaction"}
Feb 20 04:59:02 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #59: started
Feb 20 04:59:03 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581543002258, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 59, "file_size": 53441, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32781, "largest_seqno": 33072, "table_properties": {"data_size": 51528, "index_size": 152, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4865, "raw_average_key_size": 17, "raw_value_size": 47694, "raw_average_value_size": 172, "num_data_blocks": 7, "num_entries": 277, "num_filter_entries": 277, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771581540, "oldest_key_time": 1771581540, "file_creation_time": 1771581542, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a5b0c71e-1a28-4ac7-8b68-08edb74002f2", "db_session_id": "54EDA52XUT1SDV7DF7Y7", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Feb 20 04:59:03 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 3485 microseconds, and 1275 cpu microseconds.
Feb 20 04:59:03 localhost ceph-mon[292786]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 20 04:59:03 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:59:03.002306) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #59: 53441 bytes OK
Feb 20 04:59:03 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:59:03.002330) [db/memtable_list.cc:519] [default] Level-0 commit table #59 started
Feb 20 04:59:03 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:59:03.009200) [db/memtable_list.cc:722] [default] Level-0 commit table #59: memtable #1 done
Feb 20 04:59:03 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:59:03.009224) EVENT_LOG_v1 {"time_micros": 1771581543009218, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 20 04:59:03 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:59:03.009261) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 20 04:59:03 localhost ceph-mon[292786]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 51478, prev total WAL file size 51802, number of live WAL files 2.
Feb 20 04:59:03 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000055.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 04:59:03 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:59:03.010955) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034303231' seq:72057594037927935, type:22 .. '6C6F676D0034323733' seq:0, type:0; will stop at (end)
Feb 20 04:59:03 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 20 04:59:03 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [59(52KB)], [57(18MB)]
Feb 20 04:59:03 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581543011058, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [59], "files_L6": [57], "score": -1, "input_data_size": 19583142, "oldest_snapshot_seqno": -1}
Feb 20 04:59:03 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #60: 13140 keys, 18931226 bytes, temperature: kUnknown
Feb 20 04:59:03 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581543111673, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 60, "file_size": 18931226, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18854018, "index_size": 43251, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32901, "raw_key_size": 352075, "raw_average_key_size": 26, "raw_value_size": 18628327, "raw_average_value_size": 1417, "num_data_blocks": 1636, "num_entries": 13140, "num_filter_entries": 13140, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580572, "oldest_key_time": 0, "file_creation_time": 1771581543, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a5b0c71e-1a28-4ac7-8b68-08edb74002f2", "db_session_id": "54EDA52XUT1SDV7DF7Y7", "orig_file_number": 60, "seqno_to_time_mapping": "N/A"}}
Feb 20 04:59:03 localhost ceph-mon[292786]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 20 04:59:03 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:59:03.112082) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 18931226 bytes
Feb 20 04:59:03 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:59:03.115487) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 194.4 rd, 187.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 18.6 +0.0 blob) out(18.1 +0.0 blob), read-write-amplify(720.7) write-amplify(354.2) OK, records in: 13659, records dropped: 519 output_compression: NoCompression
Feb 20 04:59:03 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:59:03.115521) EVENT_LOG_v1 {"time_micros": 1771581543115504, "job": 34, "event": "compaction_finished", "compaction_time_micros": 100748, "compaction_time_cpu_micros": 57517, "output_level": 6, "num_output_files": 1, "total_output_size": 18931226, "num_input_records": 13659, "num_output_records": 13140, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 20 04:59:03 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 04:59:03 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581543115723, "job": 34, "event": "table_file_deletion", "file_number": 59}
Feb 20 04:59:03 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000057.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 04:59:03 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581543118273, "job": 34, "event": "table_file_deletion", "file_number": 57}
Feb 20 04:59:03 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:59:03.010797) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 04:59:03 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:59:03.118310) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 04:59:03 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:59:03.118317) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 04:59:03 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:59:03.118320) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 04:59:03 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:59:03.118323) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 04:59:03 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-09:59:03.118326) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 04:59:03 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice", "tenant_id": "8e04bc360fa14db4a793bc5de7a0a299", "access_level": "rw", "format": "json"}]: dispatch
Feb 20 04:59:03 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, tenant_id:8e04bc360fa14db4a793bc5de7a0a299, vol_name:cephfs) < ""
Feb 20 04:59:03 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Feb 20 04:59:03 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 20 04:59:03 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: Creating meta for ID alice with tenant 8e04bc360fa14db4a793bc5de7a0a299
Feb 20 04:59:03 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} v 0)
Feb 20 04:59:03 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch
Feb 20 04:59:03 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished
Feb 20 04:59:03 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, tenant_id:8e04bc360fa14db4a793bc5de7a0a299, vol_name:cephfs) < ""
Feb 20 04:59:03 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "bc980eec-f143-463a-b3c7-dd1de3420de8", "format": "json"}]: dispatch
Feb 20 04:59:03 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:bc980eec-f143-463a-b3c7-dd1de3420de8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 20 04:59:03 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:bc980eec-f143-463a-b3c7-dd1de3420de8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 20 04:59:03 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:59:03.472+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bc980eec-f143-463a-b3c7-dd1de3420de8' of type subvolume
Feb 20 04:59:03 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bc980eec-f143-463a-b3c7-dd1de3420de8' of type subvolume
Feb 20 04:59:03 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "bc980eec-f143-463a-b3c7-dd1de3420de8", "force": true, "format": "json"}]: dispatch
Feb 20 04:59:03 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bc980eec-f143-463a-b3c7-dd1de3420de8, vol_name:cephfs) < ""
Feb 20 04:59:03 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/bc980eec-f143-463a-b3c7-dd1de3420de8'' moved to trashcan
Feb 20 04:59:03 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 20 04:59:03 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bc980eec-f143-463a-b3c7-dd1de3420de8, vol_name:cephfs) < ""
Feb 20 04:59:03 localhost nova_compute[280804]: 2026-02-20 09:59:03.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 20 04:59:03 localhost nova_compute[280804]: 2026-02-20 09:59:03.511 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 20 04:59:03 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v432: 177 pgs: 177 active+clean; 602 MiB data, 2.0 GiB used, 40 GiB / 42 GiB avail; 2.7 MiB/s rd, 18 MiB/s wr, 92 op/s
Feb 20 04:59:03 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 20 04:59:03 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch
Feb 20 04:59:03 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished
Feb 20 04:59:03 localhost nova_compute[280804]: 2026-02-20 09:59:03.795 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 04:59:04 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e200 do_prune osdmap full prune enabled
Feb 20 04:59:04 localhost nova_compute[280804]: 2026-02-20 09:59:04.507 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 20 04:59:04 localhost nova_compute[280804]: 2026-02-20 09:59:04.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 20 04:59:04 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e201 e201: 6 total, 6 up, 6 in
Feb 20 04:59:04 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e201: 6 total, 6 up, 6 in
Feb 20 04:59:05 localhost nova_compute[280804]: 2026-02-20 09:59:05.473 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 04:59:05 localhost dnsmasq[324303]: exiting on receipt of SIGTERM
Feb 20 04:59:05 localhost podman[324332]: 2026-02-20 09:59:05.507479944 +0000 UTC m=+0.062970664 container kill eb05783ed3a986d962ea49e14561ab37f6acb50838d37dcc71f650234bf290b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-938f3422-8b96-48d1-9534-9457642bcd81, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb 20 04:59:05 localhost systemd[1]: libpod-eb05783ed3a986d962ea49e14561ab37f6acb50838d37dcc71f650234bf290b9.scope: Deactivated successfully.
Feb 20 04:59:05 localhost nova_compute[280804]: 2026-02-20 09:59:05.509 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 20 04:59:05 localhost nova_compute[280804]: 2026-02-20 09:59:05.530 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 20 04:59:05 localhost nova_compute[280804]: 2026-02-20 09:59:05.530 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 20 04:59:05 localhost nova_compute[280804]: 2026-02-20 09:59:05.531 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 20 04:59:05 localhost nova_compute[280804]: 2026-02-20 09:59:05.531 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Auditing locally available compute resources for np0005625202.localdomain (node: np0005625202.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 20 04:59:05 localhost nova_compute[280804]: 2026-02-20 09:59:05.532 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 20 04:59:05 localhost podman[324348]: 2026-02-20 09:59:05.587223564 +0000 UTC m=+0.057795438 container died eb05783ed3a986d962ea49e14561ab37f6acb50838d37dcc71f650234bf290b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-938f3422-8b96-48d1-9534-9457642bcd81, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 20 04:59:05 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-eb05783ed3a986d962ea49e14561ab37f6acb50838d37dcc71f650234bf290b9-userdata-shm.mount: Deactivated successfully.
Feb 20 04:59:05 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v434: 177 pgs: 177 active+clean; 769 MiB data, 2.4 GiB used, 40 GiB / 42 GiB avail; 5.4 MiB/s rd, 35 MiB/s wr, 174 op/s
Feb 20 04:59:05 localhost systemd[1]: var-lib-containers-storage-overlay-924de2053094ad4f3fca0f69cfd3f035725317378636eb69bca17ac2692cc619-merged.mount: Deactivated successfully.
Feb 20 04:59:05 localhost ovn_controller[155916]: 2026-02-20T09:59:05Z|00257|binding|INFO|Removing iface tap5871c06f-3c ovn-installed in OVS
Feb 20 04:59:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:59:05.636 161766 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 86008f18-0f3e-49b3-802d-1a863347fa20 with type ""#033[00m
Feb 20 04:59:05 localhost ovn_controller[155916]: 2026-02-20T09:59:05Z|00258|binding|INFO|Removing lport 5871c06f-3c3e-4e74-ac99-58a2a68de443 ovn-installed in OVS
Feb 20 04:59:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:59:05.638 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.101.0.2/28', 'neutron:device_id': 'dhcp77c820b4-9646-5dae-aabe-b13a62b01d06-938f3422-8b96-48d1-9534-9457642bcd81', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-938f3422-8b96-48d1-9534-9457642bcd81', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db4c2cde6adc4016a4bb7c41aa8e59c8', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625202.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4718fadb-b43d-48b2-a22e-2d63028e4364, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=5871c06f-3c3e-4e74-ac99-58a2a68de443) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 20 04:59:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:59:05.640 161766 INFO neutron.agent.ovn.metadata.agent [-] Port 5871c06f-3c3e-4e74-ac99-58a2a68de443 in datapath 938f3422-8b96-48d1-9534-9457642bcd81 unbound from our chassis#033[00m
Feb 20 04:59:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:59:05.643 161766 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 938f3422-8b96-48d1-9534-9457642bcd81, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 20 04:59:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:59:05.644 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[139b235e-ba73-4684-8ef6-0fe858b99930]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 20 04:59:05 localhost nova_compute[280804]: 2026-02-20 09:59:05.647 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 04:59:05 localhost podman[324348]: 2026-02-20 09:59:05.657196103 +0000 UTC m=+0.127767917 container remove eb05783ed3a986d962ea49e14561ab37f6acb50838d37dcc71f650234bf290b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-938f3422-8b96-48d1-9534-9457642bcd81, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, io.buildah.version=1.41.3)
Feb 20 04:59:05 localhost nova_compute[280804]: 2026-02-20 09:59:05.670 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 04:59:05 localhost kernel: device tap5871c06f-3c left promiscuous mode
Feb 20 04:59:05 localhost systemd[1]: libpod-conmon-eb05783ed3a986d962ea49e14561ab37f6acb50838d37dcc71f650234bf290b9.scope: Deactivated successfully.
Feb 20 04:59:05 localhost nova_compute[280804]: 2026-02-20 09:59:05.683 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 04:59:05 localhost systemd[1]: run-netns-qdhcp\x2d938f3422\x2d8b96\x2d48d1\x2d9534\x2d9457642bcd81.mount: Deactivated successfully.
Feb 20 04:59:05 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:59:05.712 263745 INFO neutron.agent.dhcp.agent [None req-f5d51b05-7c60-4075-abf1-df9991161ba8 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Feb 20 04:59:05 localhost neutron_dhcp_agent[263741]: 2026-02-20 09:59:05.768 263745 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Feb 20 04:59:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:59:05.923 161766 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 20 04:59:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:59:05.924 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 20 04:59:05 localhost ovn_metadata_agent[161761]: 2026-02-20 09:59:05.924 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 20 04:59:05 localhost nova_compute[280804]: 2026-02-20 09:59:05.964 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 04:59:05 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 04:59:05 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/1770859695' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 04:59:06 localhost nova_compute[280804]: 2026-02-20 09:59:06.011 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 20 04:59:06 localhost nova_compute[280804]: 2026-02-20 09:59:06.197 280808 WARNING nova.virt.libvirt.driver [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 20 04:59:06 localhost nova_compute[280804]: 2026-02-20 09:59:06.199 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Hypervisor/Node resource view: name=np0005625202.localdomain free_ram=11440MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 20 04:59:06 localhost nova_compute[280804]: 2026-02-20 09:59:06.199 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 20 04:59:06 localhost nova_compute[280804]: 2026-02-20 09:59:06.199 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 20 04:59:06 localhost nova_compute[280804]: 2026-02-20 09:59:06.249 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 20 04:59:06 localhost nova_compute[280804]: 2026-02-20 09:59:06.249 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Final resource view: name=np0005625202.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 20 04:59:06 localhost nova_compute[280804]: 2026-02-20 09:59:06.273 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 20 04:59:06 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9e3e5801-af35-415e-a8cf-27831a052e85", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 04:59:06 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9e3e5801-af35-415e-a8cf-27831a052e85, vol_name:cephfs) < ""
Feb 20 04:59:06 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9e3e5801-af35-415e-a8cf-27831a052e85/.meta.tmp'
Feb 20 04:59:06 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9e3e5801-af35-415e-a8cf-27831a052e85/.meta.tmp' to config b'/volumes/_nogroup/9e3e5801-af35-415e-a8cf-27831a052e85/.meta'
Feb 20 04:59:06 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9e3e5801-af35-415e-a8cf-27831a052e85, vol_name:cephfs) < ""
Feb 20 04:59:06 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9e3e5801-af35-415e-a8cf-27831a052e85", "format": "json"}]: dispatch
Feb 20 04:59:06 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9e3e5801-af35-415e-a8cf-27831a052e85, vol_name:cephfs) < ""
Feb 20 04:59:06 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9e3e5801-af35-415e-a8cf-27831a052e85, vol_name:cephfs) < ""
Feb 20 04:59:06 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice", "format": "json"}]: dispatch
Feb 20 04:59:06 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < ""
Feb 20 04:59:06 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 04:59:06 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/812428714' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 04:59:06 localhost nova_compute[280804]: 2026-02-20 09:59:06.742 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 20 04:59:06 localhost nova_compute[280804]: 2026-02-20 09:59:06.750 280808 DEBUG nova.compute.provider_tree [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 20 04:59:06 localhost nova_compute[280804]: 2026-02-20 09:59:06.773 280808 DEBUG nova.scheduler.client.report [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Inventory has not changed for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 20 04:59:06 localhost nova_compute[280804]: 2026-02-20 09:59:06.775 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Compute_service record updated for np0005625202.localdomain:np0005625202.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 20 04:59:06 localhost nova_compute[280804]: 2026-02-20 09:59:06.776 280808 DEBUG
oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:59:06 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Feb 20 04:59:06 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 20 04:59:06 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Feb 20 04:59:06 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Feb 20 04:59:06 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Feb 20 04:59:06 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 04:59:06 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice", "format": "json"}]: dispatch Feb 20 04:59:06 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, 
sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 04:59:06 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb Feb 20 04:59:06 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 20 04:59:06 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 04:59:07 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v435: 177 pgs: 177 active+clean; 769 MiB data, 2.4 GiB used, 40 GiB / 42 GiB avail; 4.7 MiB/s rd, 31 MiB/s wr, 151 op/s Feb 20 04:59:07 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 04:59:08 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 20 04:59:08 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Feb 20 04:59:08 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Feb 20 04:59:08 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "29f88fa1-d324-41a4-9d70-2be9681e5831", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 04:59:08 localhost 
ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:29f88fa1-d324-41a4-9d70-2be9681e5831, vol_name:cephfs) < "" Feb 20 04:59:08 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/29f88fa1-d324-41a4-9d70-2be9681e5831/.meta.tmp' Feb 20 04:59:08 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/29f88fa1-d324-41a4-9d70-2be9681e5831/.meta.tmp' to config b'/volumes/_nogroup/29f88fa1-d324-41a4-9d70-2be9681e5831/.meta' Feb 20 04:59:08 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:29f88fa1-d324-41a4-9d70-2be9681e5831, vol_name:cephfs) < "" Feb 20 04:59:08 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "29f88fa1-d324-41a4-9d70-2be9681e5831", "format": "json"}]: dispatch Feb 20 04:59:08 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:29f88fa1-d324-41a4-9d70-2be9681e5831, vol_name:cephfs) < "" Feb 20 04:59:08 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:29f88fa1-d324-41a4-9d70-2be9681e5831, vol_name:cephfs) < "" Feb 20 04:59:08 localhost nova_compute[280804]: 2026-02-20 09:59:08.823 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:59:09 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e201 do_prune osdmap full prune 
enabled Feb 20 04:59:09 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e202 e202: 6 total, 6 up, 6 in Feb 20 04:59:09 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e202: 6 total, 6 up, 6 in Feb 20 04:59:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c. Feb 20 04:59:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217. Feb 20 04:59:09 localhost systemd[1]: tmp-crun.CeT1fi.mount: Deactivated successfully. Feb 20 04:59:09 localhost podman[324420]: 2026-02-20 09:59:09.474889602 +0000 UTC m=+0.103394159 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true) Feb 20 04:59:09 localhost podman[324420]: 2026-02-20 09:59:09.517174776 +0000 UTC m=+0.145679353 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Feb 20 04:59:09 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully. Feb 20 04:59:09 localhost systemd[1]: tmp-crun.N4jpWq.mount: Deactivated successfully. Feb 20 04:59:09 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v437: 177 pgs: 1 active+clean+snaptrim, 176 active+clean; 801 MiB data, 2.4 GiB used, 40 GiB / 42 GiB avail; 2.7 MiB/s rd, 22 MiB/s wr, 91 op/s Feb 20 04:59:09 localhost podman[324419]: 2026-02-20 09:59:09.618335344 +0000 UTC m=+0.250630382 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, release=1770267347, vcs-type=git, io.openshift.expose-services=, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, maintainer=Red Hat, Inc., build-date=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.buildah.version=1.33.7, 
container_name=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.created=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.7, io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc.) Feb 20 04:59:09 localhost podman[324419]: 2026-02-20 09:59:09.65921258 +0000 UTC m=+0.291507558 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, name=ubi9/ubi-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1770267347, architecture=x86_64, build-date=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, config_id=openstack_network_exporter, version=9.7, managed_by=edpm_ansible, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public) Feb 20 04:59:09 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. 
Feb 20 04:59:09 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9e3e5801-af35-415e-a8cf-27831a052e85", "format": "json"}]: dispatch Feb 20 04:59:09 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:9e3e5801-af35-415e-a8cf-27831a052e85, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 04:59:09 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:9e3e5801-af35-415e-a8cf-27831a052e85, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 04:59:09 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:59:09.856+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9e3e5801-af35-415e-a8cf-27831a052e85' of type subvolume Feb 20 04:59:09 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9e3e5801-af35-415e-a8cf-27831a052e85' of type subvolume Feb 20 04:59:09 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9e3e5801-af35-415e-a8cf-27831a052e85", "force": true, "format": "json"}]: dispatch Feb 20 04:59:09 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9e3e5801-af35-415e-a8cf-27831a052e85, vol_name:cephfs) < "" Feb 20 04:59:09 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/9e3e5801-af35-415e-a8cf-27831a052e85'' moved to trashcan Feb 20 04:59:09 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for 
volume 'cephfs' Feb 20 04:59:09 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9e3e5801-af35-415e-a8cf-27831a052e85, vol_name:cephfs) < "" Feb 20 04:59:09 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice", "tenant_id": "8e04bc360fa14db4a793bc5de7a0a299", "access_level": "r", "format": "json"}]: dispatch Feb 20 04:59:09 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, tenant_id:8e04bc360fa14db4a793bc5de7a0a299, vol_name:cephfs) < "" Feb 20 04:59:09 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Feb 20 04:59:09 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 20 04:59:09 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: Creating meta for ID alice with tenant 8e04bc360fa14db4a793bc5de7a0a299 Feb 20 04:59:10 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e202 do_prune osdmap full prune enabled Feb 20 04:59:10 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 20 04:59:10 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e203 e203: 6 total, 6 up, 6 in Feb 20 04:59:10 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e203: 6 total, 6 
up, 6 in Feb 20 04:59:10 localhost nova_compute[280804]: 2026-02-20 09:59:10.474 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:59:10 localhost nova_compute[280804]: 2026-02-20 09:59:10.778 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:59:10 localhost nova_compute[280804]: 2026-02-20 09:59:10.778 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 04:59:11 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} v 0) Feb 20 04:59:11 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch Feb 20 04:59:11 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v439: 177 pgs: 1 active+clean+snaptrim, 176 active+clean; 857 MiB data, 2.7 GiB used, 39 GiB / 42 GiB avail; 2.3 MiB/s rd, 27 MiB/s wr, 159 op/s 
Feb 20 04:59:11 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished Feb 20 04:59:12 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, tenant_id:8e04bc360fa14db4a793bc5de7a0a299, vol_name:cephfs) < "" Feb 20 04:59:12 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "93d1cbc6-d6ab-4443-b83a-ea63f0f8018d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 04:59:12 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:93d1cbc6-d6ab-4443-b83a-ea63f0f8018d, vol_name:cephfs) < "" Feb 20 04:59:12 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch Feb 20 04:59:12 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": 
"client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished Feb 20 04:59:12 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/93d1cbc6-d6ab-4443-b83a-ea63f0f8018d/.meta.tmp' Feb 20 04:59:12 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/93d1cbc6-d6ab-4443-b83a-ea63f0f8018d/.meta.tmp' to config b'/volumes/_nogroup/93d1cbc6-d6ab-4443-b83a-ea63f0f8018d/.meta' Feb 20 04:59:12 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:93d1cbc6-d6ab-4443-b83a-ea63f0f8018d, vol_name:cephfs) < "" Feb 20 04:59:12 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "93d1cbc6-d6ab-4443-b83a-ea63f0f8018d", "format": "json"}]: dispatch Feb 20 04:59:12 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:93d1cbc6-d6ab-4443-b83a-ea63f0f8018d, vol_name:cephfs) < "" Feb 20 04:59:12 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:93d1cbc6-d6ab-4443-b83a-ea63f0f8018d, vol_name:cephfs) < "" Feb 20 04:59:12 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 20 04:59:12 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/2887261055' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 04:59:12 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 20 04:59:12 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2887261055' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 04:59:12 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 04:59:12 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e203 do_prune osdmap full prune enabled
Feb 20 04:59:13 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 20 04:59:13 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2616686135' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 04:59:13 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 20 04:59:13 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2616686135' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 04:59:13 localhost neutron_sriov_agent[256551]: 2026-02-20 09:59:13.064 2 INFO neutron.agent.securitygroups_rpc [None req-02b2e18c-e6bb-49a4-a8f1-2084c77a3d21 b8c5cd85f3954e32bc8ce4cc39694a8d 25c20dca96c143a09e2486b79027be5d - - default default] Security group rule updated ['aead394c-a7d3-40bc-acee-c30aa527c351']
Feb 20 04:59:13 localhost neutron_sriov_agent[256551]: 2026-02-20 09:59:13.180 2 INFO neutron.agent.securitygroups_rpc [None req-056c38b9-a9d3-4d30-8e17-1a44ab4fc9c9 b8c5cd85f3954e32bc8ce4cc39694a8d 25c20dca96c143a09e2486b79027be5d - - default default] Security group rule updated ['aead394c-a7d3-40bc-acee-c30aa527c351']
Feb 20 04:59:13 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e204 e204: 6 total, 6 up, 6 in
Feb 20 04:59:13 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e204: 6 total, 6 up, 6 in
Feb 20 04:59:13 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v441: 177 pgs: 1 active+clean+snaptrim, 176 active+clean; 857 MiB data, 2.7 GiB used, 39 GiB / 42 GiB avail; 93 KiB/s rd, 15 MiB/s wr, 131 op/s
Feb 20 04:59:13 localhost neutron_sriov_agent[256551]: 2026-02-20 09:59:13.621 2 INFO neutron.agent.securitygroups_rpc [None req-5619c222-5cd3-438e-b875-1e00ee8b5a9d b8c5cd85f3954e32bc8ce4cc39694a8d 25c20dca96c143a09e2486b79027be5d - - default default] Security group rule updated ['f624598f-524d-4cc1-8bc8-0fb402ed46a6']
Feb 20 04:59:13 localhost neutron_sriov_agent[256551]: 2026-02-20 09:59:13.793 2 INFO neutron.agent.securitygroups_rpc [None req-76a11935-2f93-444b-98b0-ed592d92678c b8c5cd85f3954e32bc8ce4cc39694a8d 25c20dca96c143a09e2486b79027be5d - - default default] Security group rule updated ['f624598f-524d-4cc1-8bc8-0fb402ed46a6']
Feb 20 04:59:13 localhost nova_compute[280804]: 2026-02-20 09:59:13.826 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 04:59:13 localhost neutron_sriov_agent[256551]: 2026-02-20 09:59:13.954 2 INFO neutron.agent.securitygroups_rpc [None req-92dae041-433a-447b-819c-ca016de78f58 b8c5cd85f3954e32bc8ce4cc39694a8d 25c20dca96c143a09e2486b79027be5d - - default default] Security group rule updated ['f624598f-524d-4cc1-8bc8-0fb402ed46a6']
Feb 20 04:59:14 localhost neutron_sriov_agent[256551]: 2026-02-20 09:59:14.379 2 INFO neutron.agent.securitygroups_rpc [None req-2cd7aa12-02dc-45d2-aacd-015fd7ca5faf b8c5cd85f3954e32bc8ce4cc39694a8d 25c20dca96c143a09e2486b79027be5d - - default default] Security group rule updated ['f624598f-524d-4cc1-8bc8-0fb402ed46a6']
Feb 20 04:59:14 localhost sshd[324457]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 04:59:14 localhost neutron_sriov_agent[256551]: 2026-02-20 09:59:14.499 2 INFO neutron.agent.securitygroups_rpc [None req-8704f1e9-0786-45ef-9124-4ff6c69c9edf b8c5cd85f3954e32bc8ce4cc39694a8d 25c20dca96c143a09e2486b79027be5d - - default default] Security group rule updated ['f624598f-524d-4cc1-8bc8-0fb402ed46a6']
Feb 20 04:59:14 localhost neutron_sriov_agent[256551]: 2026-02-20 09:59:14.620 2 INFO neutron.agent.securitygroups_rpc [None req-82bfdf30-c718-45ba-8302-76a78964efac b8c5cd85f3954e32bc8ce4cc39694a8d 25c20dca96c143a09e2486b79027be5d - - default default] Security group rule updated ['f624598f-524d-4cc1-8bc8-0fb402ed46a6']
Feb 20 04:59:14 localhost neutron_sriov_agent[256551]: 2026-02-20 09:59:14.759 2 INFO neutron.agent.securitygroups_rpc [None req-6f3202e0-dd36-49f7-90de-4aa05c7d3120 b8c5cd85f3954e32bc8ce4cc39694a8d 25c20dca96c143a09e2486b79027be5d - - default default] Security group rule updated ['f624598f-524d-4cc1-8bc8-0fb402ed46a6']
Feb 20 04:59:14 localhost neutron_sriov_agent[256551]: 2026-02-20 09:59:14.895 2 INFO neutron.agent.securitygroups_rpc [None req-348b4a4e-d300-45af-acd6-5c08e553ddf3 b8c5cd85f3954e32bc8ce4cc39694a8d 25c20dca96c143a09e2486b79027be5d - - default default] Security group rule updated ['f624598f-524d-4cc1-8bc8-0fb402ed46a6']
Feb 20 04:59:15 localhost neutron_sriov_agent[256551]: 2026-02-20 09:59:15.076 2 INFO neutron.agent.securitygroups_rpc [None req-9fbdc85e-428a-4317-ba9b-cd92888d9cb2 b8c5cd85f3954e32bc8ce4cc39694a8d 25c20dca96c143a09e2486b79027be5d - - default default] Security group rule updated ['f624598f-524d-4cc1-8bc8-0fb402ed46a6']
Feb 20 04:59:15 localhost neutron_sriov_agent[256551]: 2026-02-20 09:59:15.221 2 INFO neutron.agent.securitygroups_rpc [None req-db51d6cf-e253-422e-b04f-9b8629f57782 b8c5cd85f3954e32bc8ce4cc39694a8d 25c20dca96c143a09e2486b79027be5d - - default default] Security group rule updated ['f624598f-524d-4cc1-8bc8-0fb402ed46a6']
Feb 20 04:59:15 localhost nova_compute[280804]: 2026-02-20 09:59:15.477 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 04:59:15 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ec89c339-8feb-4567-a8fe-cb38fb00b60f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 04:59:15 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ec89c339-8feb-4567-a8fe-cb38fb00b60f, vol_name:cephfs) < ""
Feb 20 04:59:15 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v442: 177 pgs: 177 active+clean; 835 MiB data, 2.7 GiB used, 39 GiB / 42 GiB avail; 126 KiB/s rd, 17 MiB/s wr, 184 op/s
Feb 20 04:59:15 localhost neutron_sriov_agent[256551]: 2026-02-20 09:59:15.856 2 INFO neutron.agent.securitygroups_rpc [None req-af481eae-6194-40c9-88e4-bb7253323390 b8c5cd85f3954e32bc8ce4cc39694a8d 25c20dca96c143a09e2486b79027be5d - - default default] Security group rule updated ['16efbbcf-ddc6-4434-9318-5d841ffddaef']
Feb 20 04:59:16 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ec89c339-8feb-4567-a8fe-cb38fb00b60f/.meta.tmp'
Feb 20 04:59:16 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ec89c339-8feb-4567-a8fe-cb38fb00b60f/.meta.tmp' to config b'/volumes/_nogroup/ec89c339-8feb-4567-a8fe-cb38fb00b60f/.meta'
Feb 20 04:59:16 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ec89c339-8feb-4567-a8fe-cb38fb00b60f, vol_name:cephfs) < ""
Feb 20 04:59:16 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ec89c339-8feb-4567-a8fe-cb38fb00b60f", "format": "json"}]: dispatch
Feb 20 04:59:16 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ec89c339-8feb-4567-a8fe-cb38fb00b60f, vol_name:cephfs) < ""
Feb 20 04:59:16 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ec89c339-8feb-4567-a8fe-cb38fb00b60f, vol_name:cephfs) < ""
Feb 20 04:59:16 localhost podman[241347]: time="2026-02-20T09:59:16Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 04:59:16 localhost podman[241347]: @ - - [20/Feb/2026:09:59:16 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157716 "" "Go-http-client/1.1"
Feb 20 04:59:16 localhost podman[241347]: @ - - [20/Feb/2026:09:59:16 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18786 "" "Go-http-client/1.1"
Feb 20 04:59:16 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice", "format": "json"}]: dispatch
Feb 20 04:59:16 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < ""
Feb 20 04:59:16 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Feb 20 04:59:16 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 20 04:59:16 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Feb 20 04:59:16 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Feb 20 04:59:16 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Feb 20 04:59:16 localhost neutron_sriov_agent[256551]: 2026-02-20 09:59:16.609 2 INFO neutron.agent.securitygroups_rpc [None req-8397ac5f-4c0e-48b4-864b-bbce3e3a32e8 b8c5cd85f3954e32bc8ce4cc39694a8d 25c20dca96c143a09e2486b79027be5d - - default default] Security group rule updated ['868259ee-6cd3-44fa-b964-b511ba69ce8b']
Feb 20 04:59:16 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < ""
Feb 20 04:59:16 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice", "format": "json"}]: dispatch
Feb 20 04:59:16 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < ""
Feb 20 04:59:16 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb
Feb 20 04:59:16 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb 20 04:59:16 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < ""
Feb 20 04:59:16 localhost neutron_sriov_agent[256551]: 2026-02-20 09:59:16.744 2 INFO neutron.agent.securitygroups_rpc [None req-248bf9b5-6ff0-42de-8583-69a922702068 b8c5cd85f3954e32bc8ce4cc39694a8d 25c20dca96c143a09e2486b79027be5d - - default default] Security group rule updated ['868259ee-6cd3-44fa-b964-b511ba69ce8b']
Feb 20 04:59:16 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e204 do_prune osdmap full prune enabled
Feb 20 04:59:16 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e205 e205: 6 total, 6 up, 6 in
Feb 20 04:59:16 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e205: 6 total, 6 up, 6 in
Feb 20 04:59:17 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 20 04:59:17 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Feb 20 04:59:17 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Feb 20 04:59:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:59:17.265 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:59:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:59:17.265 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:59:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:59:17.265 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:59:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:59:17.266 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:59:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:59:17.266 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:59:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:59:17.266 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:59:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:59:17.266 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:59:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:59:17.266 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:59:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:59:17.266 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:59:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:59:17.267 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:59:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:59:17.267 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:59:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:59:17.267 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:59:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:59:17.267 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:59:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:59:17.267 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:59:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:59:17.267 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:59:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:59:17.267 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:59:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:59:17.268 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:59:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:59:17.268 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:59:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:59:17.268 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:59:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:59:17.268 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:59:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:59:17.268 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:59:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:59:17.268 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:59:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:59:17.268 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:59:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:59:17.269 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:59:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 09:59:17.269 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 04:59:17 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v444: 177 pgs: 177 active+clean; 835 MiB data, 2.7 GiB used, 39 GiB / 42 GiB avail; 36 KiB/s rd, 3.3 MiB/s wr, 56 op/s
Feb 20 04:59:17 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 04:59:17 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e205 do_prune osdmap full prune enabled
Feb 20 04:59:18 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e206 e206: 6 total, 6 up, 6 in
Feb 20 04:59:18 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e206: 6 total, 6 up, 6 in
Feb 20 04:59:18 localhost neutron_sriov_agent[256551]: 2026-02-20 09:59:18.753 2 INFO neutron.agent.securitygroups_rpc [None req-00ebe7d1-26f1-436c-a8d3-18ae30d4ceca b8c5cd85f3954e32bc8ce4cc39694a8d 25c20dca96c143a09e2486b79027be5d - - default default] Security group rule updated ['350e41a6-6799-4255-abb2-bda7d280e893']
Feb 20 04:59:18 localhost nova_compute[280804]: 2026-02-20 09:59:18.858 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 04:59:18 localhost neutron_sriov_agent[256551]: 2026-02-20 09:59:18.940 2 INFO neutron.agent.securitygroups_rpc [None req-ab9099b6-173a-4528-9f76-ddc0c1b400ee b8c5cd85f3954e32bc8ce4cc39694a8d 25c20dca96c143a09e2486b79027be5d - - default default] Security group rule updated ['350e41a6-6799-4255-abb2-bda7d280e893']
Feb 20 04:59:18 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "17a556c8-9ef9-44cc-8e06-318686688991", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 04:59:18 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:17a556c8-9ef9-44cc-8e06-318686688991, vol_name:cephfs) < ""
Feb 20 04:59:19 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/17a556c8-9ef9-44cc-8e06-318686688991/.meta.tmp'
Feb 20 04:59:19 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/17a556c8-9ef9-44cc-8e06-318686688991/.meta.tmp' to config b'/volumes/_nogroup/17a556c8-9ef9-44cc-8e06-318686688991/.meta'
Feb 20 04:59:19 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:17a556c8-9ef9-44cc-8e06-318686688991, vol_name:cephfs) < ""
Feb 20 04:59:19 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "17a556c8-9ef9-44cc-8e06-318686688991", "format": "json"}]: dispatch
Feb 20 04:59:19 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:17a556c8-9ef9-44cc-8e06-318686688991, vol_name:cephfs) < ""
Feb 20 04:59:19 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:17a556c8-9ef9-44cc-8e06-318686688991, vol_name:cephfs) < ""
Feb 20 04:59:19 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice_bob", "tenant_id": "8e04bc360fa14db4a793bc5de7a0a299", "access_level": "rw", "format": "json"}]: dispatch
Feb 20 04:59:19 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, tenant_id:8e04bc360fa14db4a793bc5de7a0a299, vol_name:cephfs) < ""
Feb 20 04:59:19 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Feb 20 04:59:19 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 20 04:59:19 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: Creating meta for ID alice_bob with tenant 8e04bc360fa14db4a793bc5de7a0a299
Feb 20 04:59:19 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e206 do_prune osdmap full prune enabled
Feb 20 04:59:19 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 20 04:59:19 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e207 e207: 6 total, 6 up, 6 in
Feb 20 04:59:19 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e207: 6 total, 6 up, 6 in
Feb 20 04:59:19 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} v 0)
Feb 20 04:59:19 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch
Feb 20 04:59:19 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished
Feb 20 04:59:19 localhost neutron_sriov_agent[256551]: 2026-02-20 09:59:19.471 2 INFO neutron.agent.securitygroups_rpc [None req-483e27bf-9a6d-411a-b87b-b6f37447f4e8 b8c5cd85f3954e32bc8ce4cc39694a8d 25c20dca96c143a09e2486b79027be5d - - default default] Security group rule updated ['0ead8c40-caeb-4988-8a84-cd8090714d6e']
Feb 20 04:59:19 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, tenant_id:8e04bc360fa14db4a793bc5de7a0a299, vol_name:cephfs) < ""
Feb 20 04:59:19 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v447: 177 pgs: 177 active+clean; 850 MiB data, 2.7 GiB used, 39 GiB / 42 GiB avail; 46 KiB/s rd, 6.7 MiB/s wr, 73 op/s
Feb 20 04:59:19 localhost neutron_sriov_agent[256551]: 2026-02-20 09:59:19.686 2 INFO neutron.agent.securitygroups_rpc [None req-ac337b7c-a535-40e3-b3bd-b5580b0e941d b8c5cd85f3954e32bc8ce4cc39694a8d 25c20dca96c143a09e2486b79027be5d - - default default] Security group rule updated ['0ead8c40-caeb-4988-8a84-cd8090714d6e']
Feb 20 04:59:19 localhost neutron_sriov_agent[256551]: 2026-02-20 09:59:19.824 2 INFO neutron.agent.securitygroups_rpc [None req-c5f864ad-b631-4012-870f-280605d80045 b8c5cd85f3954e32bc8ce4cc39694a8d 25c20dca96c143a09e2486b79027be5d - - default default] Security group rule updated ['0ead8c40-caeb-4988-8a84-cd8090714d6e']
Feb 20 04:59:19 localhost neutron_sriov_agent[256551]: 2026-02-20 09:59:19.992 2 INFO neutron.agent.securitygroups_rpc [None req-ac1e61bd-c67f-4839-a794-1523a2080faa b8c5cd85f3954e32bc8ce4cc39694a8d 25c20dca96c143a09e2486b79027be5d - - default default] Security group rule updated ['0ead8c40-caeb-4988-8a84-cd8090714d6e']
Feb 20 04:59:20 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 20 04:59:20 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1875354009' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 04:59:20 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 20 04:59:20 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1875354009' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 04:59:20 localhost neutron_sriov_agent[256551]: 2026-02-20 09:59:20.200 2 INFO neutron.agent.securitygroups_rpc [None req-7e4bce2e-fb70-442e-b47b-26c11122b51c b8c5cd85f3954e32bc8ce4cc39694a8d 25c20dca96c143a09e2486b79027be5d - - default default] Security group rule updated ['0ead8c40-caeb-4988-8a84-cd8090714d6e']
Feb 20 04:59:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.
Feb 20 04:59:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.
Feb 20 04:59:20 localhost neutron_sriov_agent[256551]: 2026-02-20 09:59:20.363 2 INFO neutron.agent.securitygroups_rpc [None req-9c6d9773-e163-46df-a8b7-894bf61ef867 b8c5cd85f3954e32bc8ce4cc39694a8d 25c20dca96c143a09e2486b79027be5d - - default default] Security group rule updated ['0ead8c40-caeb-4988-8a84-cd8090714d6e']
Feb 20 04:59:20 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch
Feb 20 04:59:20 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished
Feb 20 04:59:20 localhost podman[324461]: 2026-02-20 09:59:20.478577233 +0000 UTC m=+0.105848534 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 04:59:20 localhost nova_compute[280804]: 2026-02-20 09:59:20.501 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 04:59:20 localhost podman[324460]: 2026-02-20 09:59:20.538853774 +0000 UTC m=+0.168564400 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 20 04:59:20 localhost podman[324461]: 2026-02-20 09:59:20.559037511 +0000 UTC m=+0.186308802 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 20 04:59:20 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully.
Feb 20 04:59:20 localhost podman[324460]: 2026-02-20 09:59:20.574535383 +0000 UTC m=+0.204245989 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 04:59:20 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully.
Feb 20 04:59:20 localhost neutron_sriov_agent[256551]: 2026-02-20 09:59:20.783 2 INFO neutron.agent.securitygroups_rpc [None req-48000ec6-91b4-434a-ab1c-3ae5eaf7b735 b8c5cd85f3954e32bc8ce4cc39694a8d 25c20dca96c143a09e2486b79027be5d - - default default] Security group rule updated ['cefa71e1-4cfe-4451-bb5c-ca133ddcf1fd']#033[00m Feb 20 04:59:21 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e207 do_prune osdmap full prune enabled Feb 20 04:59:21 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e208 e208: 6 total, 6 up, 6 in Feb 20 04:59:21 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e208: 6 total, 6 up, 6 in Feb 20 04:59:21 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v449: 177 pgs: 177 active+clean; 886 MiB data, 2.9 GiB used, 39 GiB / 42 GiB avail; 111 KiB/s rd, 23 MiB/s wr, 173 op/s Feb 20 04:59:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1. 
Feb 20 04:59:22 localhost podman[324501]: 2026-02-20 09:59:22.460105873 +0000 UTC m=+0.093669620 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 20 04:59:22 localhost podman[324501]: 2026-02-20 09:59:22.473056108 +0000 UTC m=+0.106619845 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 20 04:59:22 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully. Feb 20 04:59:22 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e208 do_prune osdmap full prune enabled Feb 20 04:59:22 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e209 e209: 6 total, 6 up, 6 in Feb 20 04:59:22 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e209: 6 total, 6 up, 6 in Feb 20 04:59:22 localhost neutron_sriov_agent[256551]: 2026-02-20 09:59:22.581 2 INFO neutron.agent.securitygroups_rpc [None req-56dcf14a-a69d-4366-adc0-f7e0579b7cd8 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Security group rule updated ['9d889f17-f220-427e-bd61-2fb67b868596']#033[00m Feb 20 04:59:22 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice_bob", "format": "json"}]: dispatch Feb 20 04:59:22 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 04:59:22 localhost neutron_sriov_agent[256551]: 2026-02-20 09:59:22.700 2 INFO neutron.agent.securitygroups_rpc [None req-87bed7c3-5c32-49ad-acc0-0a3642727263 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Security group rule updated ['9d889f17-f220-427e-bd61-2fb67b868596']#033[00m Feb 20 04:59:22 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": 
"client.alice_bob", "format": "json"} v 0) Feb 20 04:59:22 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 20 04:59:22 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Feb 20 04:59:22 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 20 04:59:22 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Feb 20 04:59:22 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 04:59:22 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice_bob", "format": "json"}]: dispatch Feb 20 04:59:22 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 04:59:22 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb Feb 20 04:59:22 localhost ceph-mgr[286565]: [volumes INFO 
volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 20 04:59:22 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 04:59:22 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "17a556c8-9ef9-44cc-8e06-318686688991", "format": "json"}]: dispatch Feb 20 04:59:22 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:17a556c8-9ef9-44cc-8e06-318686688991, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 04:59:22 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:17a556c8-9ef9-44cc-8e06-318686688991, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 04:59:22 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '17a556c8-9ef9-44cc-8e06-318686688991' of type subvolume Feb 20 04:59:22 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:59:22.914+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '17a556c8-9ef9-44cc-8e06-318686688991' of type subvolume Feb 20 04:59:22 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "17a556c8-9ef9-44cc-8e06-318686688991", "force": true, "format": "json"}]: dispatch Feb 20 04:59:22 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:17a556c8-9ef9-44cc-8e06-318686688991, vol_name:cephfs) < "" 
Feb 20 04:59:22 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/17a556c8-9ef9-44cc-8e06-318686688991'' moved to trashcan Feb 20 04:59:22 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 04:59:22 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:17a556c8-9ef9-44cc-8e06-318686688991, vol_name:cephfs) < "" Feb 20 04:59:22 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 04:59:22 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e209 do_prune osdmap full prune enabled Feb 20 04:59:23 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e210 e210: 6 total, 6 up, 6 in Feb 20 04:59:23 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e210: 6 total, 6 up, 6 in Feb 20 04:59:23 localhost ceph-mgr[286565]: [balancer INFO root] Optimize plan auto_2026-02-20_09:59:23 Feb 20 04:59:23 localhost ceph-mgr[286565]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Feb 20 04:59:23 localhost ceph-mgr[286565]: [balancer INFO root] do_upmap Feb 20 04:59:23 localhost ceph-mgr[286565]: [balancer INFO root] pools ['images', 'vms', 'manila_data', 'volumes', 'backups', '.mgr', 'manila_metadata'] Feb 20 04:59:23 localhost ceph-mgr[286565]: [balancer INFO root] prepared 0/10 changes Feb 20 04:59:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 04:59:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:59:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 20 04:59:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:59:23 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 20 04:59:23 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 20 04:59:23 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Feb 20 04:59:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 04:59:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:59:23 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v452: 177 pgs: 177 active+clean; 886 MiB data, 2.9 GiB used, 39 GiB / 42 GiB avail; 121 KiB/s rd, 25 MiB/s wr, 190 op/s Feb 20 04:59:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] _maybe_adjust Feb 20 04:59:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:59:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Feb 20 04:59:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:59:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003325274375348967 of space, bias 1.0, pg target 0.6650548750697934 quantized to 32 (current 32) Feb 20 04:59:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:59:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 
using 0.0024887530825423413 of space, bias 1.0, pg target 0.4969210321476208 quantized to 32 (current 32) Feb 20 04:59:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:59:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.04915382758700912 of space, bias 1.0, pg target 9.797996299010483 quantized to 32 (current 32) Feb 20 04:59:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:59:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.170892076121348e-05 quantized to 32 (current 32) Feb 20 04:59:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:59:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 1.9084135957565606e-06 of space, bias 1.0, pg target 0.0003619624453284943 quantized to 32 (current 32) Feb 20 04:59:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 04:59:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0005081832774986041 of space, bias 4.0, pg target 0.38554171319560765 quantized to 16 (current 16) Feb 20 04:59:23 localhost ceph-mgr[286565]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Feb 20 04:59:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 20 04:59:23 localhost ceph-mgr[286565]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Feb 20 04:59:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 20 04:59:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 20 04:59:23 localhost ceph-mgr[286565]: [rbd_support INFO root] 
load_schedules: volumes, start_after= Feb 20 04:59:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: images, start_after= Feb 20 04:59:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: images, start_after= Feb 20 04:59:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 20 04:59:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 20 04:59:23 localhost nova_compute[280804]: 2026-02-20 09:59:23.863 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:59:24 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e210 do_prune osdmap full prune enabled Feb 20 04:59:24 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e211 e211: 6 total, 6 up, 6 in Feb 20 04:59:24 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e211: 6 total, 6 up, 6 in Feb 20 04:59:24 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "2f097eba-f8b3-4922-a9b2-f188b7fb5c09", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 04:59:24 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:2f097eba-f8b3-4922-a9b2-f188b7fb5c09, vol_name:cephfs) < "" Feb 20 04:59:24 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/2f097eba-f8b3-4922-a9b2-f188b7fb5c09/.meta.tmp' Feb 20 04:59:24 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2f097eba-f8b3-4922-a9b2-f188b7fb5c09/.meta.tmp' to config 
b'/volumes/_nogroup/2f097eba-f8b3-4922-a9b2-f188b7fb5c09/.meta' Feb 20 04:59:24 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:2f097eba-f8b3-4922-a9b2-f188b7fb5c09, vol_name:cephfs) < "" Feb 20 04:59:24 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "2f097eba-f8b3-4922-a9b2-f188b7fb5c09", "format": "json"}]: dispatch Feb 20 04:59:24 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2f097eba-f8b3-4922-a9b2-f188b7fb5c09, vol_name:cephfs) < "" Feb 20 04:59:24 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2f097eba-f8b3-4922-a9b2-f188b7fb5c09, vol_name:cephfs) < "" Feb 20 04:59:25 localhost nova_compute[280804]: 2026-02-20 09:59:25.504 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:59:25 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v454: 177 pgs: 177 active+clean; 975 MiB data, 3.3 GiB used, 39 GiB / 42 GiB avail; 101 KiB/s rd, 31 MiB/s wr, 160 op/s Feb 20 04:59:25 localhost nova_compute[280804]: 2026-02-20 09:59:25.770 280808 DEBUG oslo_concurrency.lockutils [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Acquiring lock "25d7d566-3a21-4292-a6ad-96dca2d2ec79" by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:59:25 localhost nova_compute[280804]: 2026-02-20 
09:59:25.771 280808 DEBUG oslo_concurrency.lockutils [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Lock "25d7d566-3a21-4292-a6ad-96dca2d2ec79" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:59:25 localhost nova_compute[280804]: 2026-02-20 09:59:25.788 280808 DEBUG nova.compute.manager [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m Feb 20 04:59:25 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice_bob", "tenant_id": "8e04bc360fa14db4a793bc5de7a0a299", "access_level": "r", "format": "json"}]: dispatch Feb 20 04:59:25 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, tenant_id:8e04bc360fa14db4a793bc5de7a0a299, vol_name:cephfs) < "" Feb 20 04:59:25 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Feb 20 04:59:25 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 20 04:59:25 localhost 
ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: Creating meta for ID alice_bob with tenant 8e04bc360fa14db4a793bc5de7a0a299 Feb 20 04:59:25 localhost nova_compute[280804]: 2026-02-20 09:59:25.870 280808 DEBUG oslo_concurrency.lockutils [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:59:25 localhost nova_compute[280804]: 2026-02-20 09:59:25.870 280808 DEBUG oslo_concurrency.lockutils [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:59:25 localhost nova_compute[280804]: 2026-02-20 09:59:25.877 280808 DEBUG nova.virt.hardware [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Require both a host and instance NUMA topology to fit instance on host. 
numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m Feb 20 04:59:25 localhost nova_compute[280804]: 2026-02-20 09:59:25.877 280808 INFO nova.compute.claims [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Claim successful on node np0005625202.localdomain#033[00m Feb 20 04:59:25 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} v 0) Feb 20 04:59:25 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch Feb 20 04:59:25 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished Feb 20 04:59:25 localhost nova_compute[280804]: 2026-02-20 09:59:25.982 280808 DEBUG oslo_concurrency.processutils [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 
2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:59:26 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, tenant_id:8e04bc360fa14db4a793bc5de7a0a299, vol_name:cephfs) < "" Feb 20 04:59:26 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e211 do_prune osdmap full prune enabled Feb 20 04:59:26 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e212 e212: 6 total, 6 up, 6 in Feb 20 04:59:26 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e212: 6 total, 6 up, 6 in Feb 20 04:59:26 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 20 04:59:26 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch Feb 20 04:59:26 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": 
"json"}]': finished Feb 20 04:59:26 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ec89c339-8feb-4567-a8fe-cb38fb00b60f", "format": "json"}]: dispatch Feb 20 04:59:26 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ec89c339-8feb-4567-a8fe-cb38fb00b60f, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 04:59:26 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ec89c339-8feb-4567-a8fe-cb38fb00b60f, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 04:59:26 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ec89c339-8feb-4567-a8fe-cb38fb00b60f' of type subvolume Feb 20 04:59:26 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:59:26.147+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ec89c339-8feb-4567-a8fe-cb38fb00b60f' of type subvolume Feb 20 04:59:26 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ec89c339-8feb-4567-a8fe-cb38fb00b60f", "force": true, "format": "json"}]: dispatch Feb 20 04:59:26 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ec89c339-8feb-4567-a8fe-cb38fb00b60f, vol_name:cephfs) < "" Feb 20 04:59:26 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ec89c339-8feb-4567-a8fe-cb38fb00b60f'' moved to trashcan Feb 20 04:59:26 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] 
queuing job for volume 'cephfs' Feb 20 04:59:26 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ec89c339-8feb-4567-a8fe-cb38fb00b60f, vol_name:cephfs) < "" Feb 20 04:59:26 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 20 04:59:26 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/3674888499' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 20 04:59:26 localhost nova_compute[280804]: 2026-02-20 09:59:26.466 280808 DEBUG oslo_concurrency.processutils [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:59:26 localhost nova_compute[280804]: 2026-02-20 09:59:26.471 280808 DEBUG nova.compute.provider_tree [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Inventory has not changed in ProviderTree for provider: 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 20 04:59:26 localhost nova_compute[280804]: 2026-02-20 09:59:26.485 280808 DEBUG nova.scheduler.client.report [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Inventory has not changed for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 
'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 20 04:59:26 localhost nova_compute[280804]: 2026-02-20 09:59:26.503 280808 DEBUG oslo_concurrency.lockutils [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:59:26 localhost nova_compute[280804]: 2026-02-20 09:59:26.503 280808 DEBUG nova.compute.manager [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m Feb 20 04:59:26 localhost nova_compute[280804]: 2026-02-20 09:59:26.547 280808 DEBUG nova.compute.manager [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Allocating IP information in the background. 
_allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m Feb 20 04:59:26 localhost nova_compute[280804]: 2026-02-20 09:59:26.547 280808 DEBUG nova.network.neutron [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m Feb 20 04:59:26 localhost nova_compute[280804]: 2026-02-20 09:59:26.559 280808 INFO nova.virt.libvirt.driver [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m Feb 20 04:59:26 localhost nova_compute[280804]: 2026-02-20 09:59:26.576 280808 DEBUG nova.compute.manager [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m Feb 20 04:59:26 localhost nova_compute[280804]: 2026-02-20 09:59:26.659 280808 DEBUG nova.compute.manager [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Start spawning the instance on the hypervisor. 
_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m Feb 20 04:59:26 localhost nova_compute[280804]: 2026-02-20 09:59:26.662 280808 DEBUG nova.virt.libvirt.driver [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m Feb 20 04:59:26 localhost nova_compute[280804]: 2026-02-20 09:59:26.662 280808 INFO nova.virt.libvirt.driver [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Creating image(s)#033[00m Feb 20 04:59:26 localhost nova_compute[280804]: 2026-02-20 09:59:26.804 280808 DEBUG nova.storage.rbd_utils [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] rbd image 25d7d566-3a21-4292-a6ad-96dca2d2ec79_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 20 04:59:26 localhost nova_compute[280804]: 2026-02-20 09:59:26.861 280808 DEBUG nova.storage.rbd_utils [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] rbd image 25d7d566-3a21-4292-a6ad-96dca2d2ec79_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 20 04:59:26 localhost nova_compute[280804]: 2026-02-20 09:59:26.893 280808 DEBUG nova.storage.rbd_utils [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] rbd image 25d7d566-3a21-4292-a6ad-96dca2d2ec79_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 20 
04:59:26 localhost nova_compute[280804]: 2026-02-20 09:59:26.897 280808 DEBUG oslo_concurrency.processutils [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3692da63af034f7d594aac7c4b8eda10742f09b0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:59:26 localhost nova_compute[280804]: 2026-02-20 09:59:26.918 280808 DEBUG nova.policy [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2ba1a8d771344f0a918e0a8bed2efd06', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9fdf2c09b98d48c0bc67cc1c7702a8f4', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m Feb 20 04:59:26 localhost nova_compute[280804]: 2026-02-20 09:59:26.972 280808 DEBUG oslo_concurrency.processutils [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3692da63af034f7d594aac7c4b8eda10742f09b0 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:59:26 localhost nova_compute[280804]: 2026-02-20 09:59:26.973 280808 
DEBUG oslo_concurrency.lockutils [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Acquiring lock "3692da63af034f7d594aac7c4b8eda10742f09b0" by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:59:26 localhost nova_compute[280804]: 2026-02-20 09:59:26.974 280808 DEBUG oslo_concurrency.lockutils [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Lock "3692da63af034f7d594aac7c4b8eda10742f09b0" acquired by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:59:26 localhost nova_compute[280804]: 2026-02-20 09:59:26.974 280808 DEBUG oslo_concurrency.lockutils [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Lock "3692da63af034f7d594aac7c4b8eda10742f09b0" "released" by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:59:27 localhost nova_compute[280804]: 2026-02-20 09:59:27.004 280808 DEBUG nova.storage.rbd_utils [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] rbd image 25d7d566-3a21-4292-a6ad-96dca2d2ec79_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 20 04:59:27 localhost nova_compute[280804]: 2026-02-20 09:59:27.008 280808 DEBUG oslo_concurrency.processutils [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Running cmd (subprocess): rbd import --pool vms 
/var/lib/nova/instances/_base/3692da63af034f7d594aac7c4b8eda10742f09b0 25d7d566-3a21-4292-a6ad-96dca2d2ec79_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:59:27 localhost neutron_sriov_agent[256551]: 2026-02-20 09:59:27.224 2 INFO neutron.agent.securitygroups_rpc [req-f264314a-f5fb-4167-9b9a-7fac156c481a req-f4a185a8-c20a-4c61-b6ac-a21285bd72eb 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Security group member updated ['9d889f17-f220-427e-bd61-2fb67b868596']#033[00m Feb 20 04:59:27 localhost nova_compute[280804]: 2026-02-20 09:59:27.579 280808 DEBUG nova.network.neutron [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Successfully created port: 3cc99a44-cc7e-4f81-bce6-8e63dc92e267 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m Feb 20 04:59:27 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v456: 177 pgs: 177 active+clean; 975 MiB data, 3.3 GiB used, 39 GiB / 42 GiB avail; 82 KiB/s rd, 25 MiB/s wr, 129 op/s Feb 20 04:59:27 localhost nova_compute[280804]: 2026-02-20 09:59:27.629 280808 DEBUG oslo_concurrency.processutils [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/3692da63af034f7d594aac7c4b8eda10742f09b0 25d7d566-3a21-4292-a6ad-96dca2d2ec79_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:59:27 localhost nova_compute[280804]: 2026-02-20 09:59:27.727 280808 DEBUG nova.storage.rbd_utils [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 
2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] resizing rbd image 25d7d566-3a21-4292-a6ad-96dca2d2ec79_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m Feb 20 04:59:27 localhost nova_compute[280804]: 2026-02-20 09:59:27.983 280808 DEBUG nova.objects.instance [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Lazy-loading 'migration_context' on Instance uuid 25d7d566-3a21-4292-a6ad-96dca2d2ec79 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Feb 20 04:59:28 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e212 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 04:59:28 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e212 do_prune osdmap full prune enabled Feb 20 04:59:28 localhost nova_compute[280804]: 2026-02-20 09:59:28.011 280808 DEBUG nova.virt.libvirt.driver [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m Feb 20 04:59:28 localhost nova_compute[280804]: 2026-02-20 09:59:28.011 280808 DEBUG nova.virt.libvirt.driver [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Ensure instance console log exists: /var/lib/nova/instances/25d7d566-3a21-4292-a6ad-96dca2d2ec79/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m Feb 20 04:59:28 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e213 e213: 6 total, 6 up, 6 in Feb 20 04:59:28 localhost 
nova_compute[280804]: 2026-02-20 09:59:28.012 280808 DEBUG oslo_concurrency.lockutils [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:59:28 localhost nova_compute[280804]: 2026-02-20 09:59:28.013 280808 DEBUG oslo_concurrency.lockutils [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:59:28 localhost nova_compute[280804]: 2026-02-20 09:59:28.013 280808 DEBUG oslo_concurrency.lockutils [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:59:28 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e213: 6 total, 6 up, 6 in Feb 20 04:59:28 localhost openstack_network_exporter[243776]: ERROR 09:59:28 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 20 04:59:28 localhost openstack_network_exporter[243776]: Feb 20 04:59:28 localhost openstack_network_exporter[243776]: ERROR 09:59:28 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 20 04:59:28 localhost openstack_network_exporter[243776]: Feb 20 04:59:28 localhost nova_compute[280804]: 2026-02-20 09:59:28.442 280808 DEBUG nova.network.neutron [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 
2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Successfully updated port: 3cc99a44-cc7e-4f81-bce6-8e63dc92e267 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m Feb 20 04:59:28 localhost nova_compute[280804]: 2026-02-20 09:59:28.463 280808 DEBUG oslo_concurrency.lockutils [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Acquiring lock "refresh_cache-25d7d566-3a21-4292-a6ad-96dca2d2ec79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Feb 20 04:59:28 localhost nova_compute[280804]: 2026-02-20 09:59:28.463 280808 DEBUG oslo_concurrency.lockutils [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Acquired lock "refresh_cache-25d7d566-3a21-4292-a6ad-96dca2d2ec79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Feb 20 04:59:28 localhost nova_compute[280804]: 2026-02-20 09:59:28.463 280808 DEBUG nova.network.neutron [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Feb 20 04:59:28 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2f097eba-f8b3-4922-a9b2-f188b7fb5c09", "format": "json"}]: dispatch Feb 20 04:59:28 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:2f097eba-f8b3-4922-a9b2-f188b7fb5c09, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 04:59:28 localhost 
nova_compute[280804]: 2026-02-20 09:59:28.523 280808 DEBUG nova.network.neutron [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m Feb 20 04:59:28 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:2f097eba-f8b3-4922-a9b2-f188b7fb5c09, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 04:59:28 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2f097eba-f8b3-4922-a9b2-f188b7fb5c09' of type subvolume Feb 20 04:59:28 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:59:28.549+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2f097eba-f8b3-4922-a9b2-f188b7fb5c09' of type subvolume Feb 20 04:59:28 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "2f097eba-f8b3-4922-a9b2-f188b7fb5c09", "force": true, "format": "json"}]: dispatch Feb 20 04:59:28 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2f097eba-f8b3-4922-a9b2-f188b7fb5c09, vol_name:cephfs) < "" Feb 20 04:59:28 localhost nova_compute[280804]: 2026-02-20 09:59:28.592 280808 DEBUG nova.compute.manager [req-7aa57457-3da5-46ee-9b21-27d091c7664a req-be1cc5d4-9730-4abd-a686-6a47397d179a d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Received event 
network-changed-3cc99a44-cc7e-4f81-bce6-8e63dc92e267 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Feb 20 04:59:28 localhost nova_compute[280804]: 2026-02-20 09:59:28.593 280808 DEBUG nova.compute.manager [req-7aa57457-3da5-46ee-9b21-27d091c7664a req-be1cc5d4-9730-4abd-a686-6a47397d179a d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Refreshing instance network info cache due to event network-changed-3cc99a44-cc7e-4f81-bce6-8e63dc92e267. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m Feb 20 04:59:28 localhost nova_compute[280804]: 2026-02-20 09:59:28.594 280808 DEBUG oslo_concurrency.lockutils [req-7aa57457-3da5-46ee-9b21-27d091c7664a req-be1cc5d4-9730-4abd-a686-6a47397d179a d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Acquiring lock "refresh_cache-25d7d566-3a21-4292-a6ad-96dca2d2ec79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Feb 20 04:59:28 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/2f097eba-f8b3-4922-a9b2-f188b7fb5c09'' moved to trashcan Feb 20 04:59:28 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 04:59:28 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2f097eba-f8b3-4922-a9b2-f188b7fb5c09, vol_name:cephfs) < "" Feb 20 04:59:28 localhost nova_compute[280804]: 2026-02-20 09:59:28.855 280808 DEBUG nova.network.neutron [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Updating instance_info_cache with network_info: [{"id": 
"3cc99a44-cc7e-4f81-bce6-8e63dc92e267", "address": "fa:16:3e:b4:f9:fa", "network": {"id": "d612a55c-b2aa-4665-bf00-3e649d762c79", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-589446165-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "9fdf2c09b98d48c0bc67cc1c7702a8f4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cc99a44-cc", "ovs_interfaceid": "3cc99a44-cc7e-4f81-bce6-8e63dc92e267", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Feb 20 04:59:28 localhost nova_compute[280804]: 2026-02-20 09:59:28.865 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:59:28 localhost nova_compute[280804]: 2026-02-20 09:59:28.875 280808 DEBUG oslo_concurrency.lockutils [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Releasing lock "refresh_cache-25d7d566-3a21-4292-a6ad-96dca2d2ec79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Feb 20 04:59:28 localhost nova_compute[280804]: 2026-02-20 09:59:28.875 280808 DEBUG nova.compute.manager [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - 
default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Instance network_info: |[{"id": "3cc99a44-cc7e-4f81-bce6-8e63dc92e267", "address": "fa:16:3e:b4:f9:fa", "network": {"id": "d612a55c-b2aa-4665-bf00-3e649d762c79", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-589446165-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "9fdf2c09b98d48c0bc67cc1c7702a8f4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cc99a44-cc", "ovs_interfaceid": "3cc99a44-cc7e-4f81-bce6-8e63dc92e267", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m Feb 20 04:59:28 localhost nova_compute[280804]: 2026-02-20 09:59:28.876 280808 DEBUG oslo_concurrency.lockutils [req-7aa57457-3da5-46ee-9b21-27d091c7664a req-be1cc5d4-9730-4abd-a686-6a47397d179a d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Acquired lock "refresh_cache-25d7d566-3a21-4292-a6ad-96dca2d2ec79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Feb 20 04:59:28 localhost nova_compute[280804]: 2026-02-20 09:59:28.877 280808 DEBUG nova.network.neutron [req-7aa57457-3da5-46ee-9b21-27d091c7664a req-be1cc5d4-9730-4abd-a686-6a47397d179a d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: 
25d7d566-3a21-4292-a6ad-96dca2d2ec79] Refreshing network info cache for port 3cc99a44-cc7e-4f81-bce6-8e63dc92e267 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m Feb 20 04:59:28 localhost nova_compute[280804]: 2026-02-20 09:59:28.882 280808 DEBUG nova.virt.libvirt.driver [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Start _get_guest_xml network_info=[{"id": "3cc99a44-cc7e-4f81-bce6-8e63dc92e267", "address": "fa:16:3e:b4:f9:fa", "network": {"id": "d612a55c-b2aa-4665-bf00-3e649d762c79", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-589446165-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "9fdf2c09b98d48c0bc67cc1c7702a8f4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cc99a44-cc", "ovs_interfaceid": "3cc99a44-cc7e-4f81-bce6-8e63dc92e267", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-20T09:49:57Z,direct_url=,disk_format='qcow2',id=06bd71fd-c415-45d9-b669-46209b7ca2f4,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='91bce661d685472eb3e7cacab17bf52a',properties=ImageMetaProps,protected=,size=21430272,status='active',tags=,updated_at=2026-02-20T09:49:59Z,virtual_size=,visibility=) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '06bd71fd-c415-45d9-b669-46209b7ca2f4'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m Feb 20 04:59:28 localhost nova_compute[280804]: 2026-02-20 09:59:28.889 280808 WARNING nova.virt.libvirt.driver [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 20 04:59:28 localhost nova_compute[280804]: 2026-02-20 09:59:28.892 280808 DEBUG nova.virt.libvirt.host [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Searching host: 'np0005625202.localdomain' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m Feb 20 04:59:28 localhost nova_compute[280804]: 2026-02-20 09:59:28.893 280808 DEBUG nova.virt.libvirt.host [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] CPU controller missing on host. 
_has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m Feb 20 04:59:28 localhost nova_compute[280804]: 2026-02-20 09:59:28.895 280808 DEBUG nova.virt.libvirt.host [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Searching host: 'np0005625202.localdomain' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m Feb 20 04:59:28 localhost nova_compute[280804]: 2026-02-20 09:59:28.896 280808 DEBUG nova.virt.libvirt.host [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m Feb 20 04:59:28 localhost nova_compute[280804]: 2026-02-20 09:59:28.896 280808 DEBUG nova.virt.libvirt.driver [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m Feb 20 04:59:28 localhost nova_compute[280804]: 2026-02-20 09:59:28.897 280808 DEBUG nova.virt.hardware [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-20T09:49:56Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='40a6f41a-8891-4900-942e-688a656af142',id=5,is_public=True,memory_mb=128,name='m1.nano',projects=,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta 
ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-20T09:49:57Z,direct_url=,disk_format='qcow2',id=06bd71fd-c415-45d9-b669-46209b7ca2f4,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='91bce661d685472eb3e7cacab17bf52a',properties=ImageMetaProps,protected=,size=21430272,status='active',tags=,updated_at=2026-02-20T09:49:59Z,virtual_size=,visibility=), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m Feb 20 04:59:28 localhost nova_compute[280804]: 2026-02-20 09:59:28.898 280808 DEBUG nova.virt.hardware [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m Feb 20 04:59:28 localhost nova_compute[280804]: 2026-02-20 09:59:28.898 280808 DEBUG nova.virt.hardware [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m Feb 20 04:59:28 localhost nova_compute[280804]: 2026-02-20 09:59:28.899 280808 DEBUG nova.virt.hardware [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m Feb 20 04:59:28 localhost nova_compute[280804]: 2026-02-20 09:59:28.899 280808 DEBUG nova.virt.hardware [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m Feb 20 04:59:28 localhost nova_compute[280804]: 2026-02-20 
09:59:28.900 280808 DEBUG nova.virt.hardware [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m Feb 20 04:59:28 localhost nova_compute[280804]: 2026-02-20 09:59:28.900 280808 DEBUG nova.virt.hardware [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m Feb 20 04:59:28 localhost nova_compute[280804]: 2026-02-20 09:59:28.901 280808 DEBUG nova.virt.hardware [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m Feb 20 04:59:28 localhost nova_compute[280804]: 2026-02-20 09:59:28.901 280808 DEBUG nova.virt.hardware [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m Feb 20 04:59:28 localhost nova_compute[280804]: 2026-02-20 09:59:28.902 280808 DEBUG nova.virt.hardware [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m Feb 20 04:59:28 localhost 
nova_compute[280804]: 2026-02-20 09:59:28.902 280808 DEBUG nova.virt.hardware [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m Feb 20 04:59:28 localhost nova_compute[280804]: 2026-02-20 09:59:28.907 280808 DEBUG oslo_concurrency.processutils [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:59:29 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "93d1cbc6-d6ab-4443-b83a-ea63f0f8018d", "format": "json"}]: dispatch Feb 20 04:59:29 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:93d1cbc6-d6ab-4443-b83a-ea63f0f8018d, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 04:59:29 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:93d1cbc6-d6ab-4443-b83a-ea63f0f8018d, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 04:59:29 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:59:29.196+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '93d1cbc6-d6ab-4443-b83a-ea63f0f8018d' of type subvolume Feb 20 04:59:29 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '93d1cbc6-d6ab-4443-b83a-ea63f0f8018d' of 
type subvolume Feb 20 04:59:29 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "93d1cbc6-d6ab-4443-b83a-ea63f0f8018d", "force": true, "format": "json"}]: dispatch Feb 20 04:59:29 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:93d1cbc6-d6ab-4443-b83a-ea63f0f8018d, vol_name:cephfs) < "" Feb 20 04:59:29 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/93d1cbc6-d6ab-4443-b83a-ea63f0f8018d'' moved to trashcan Feb 20 04:59:29 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 04:59:29 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:93d1cbc6-d6ab-4443-b83a-ea63f0f8018d, vol_name:cephfs) < "" Feb 20 04:59:29 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Feb 20 04:59:29 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/751048439' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Feb 20 04:59:29 localhost nova_compute[280804]: 2026-02-20 09:59:29.376 280808 DEBUG nova.network.neutron [req-7aa57457-3da5-46ee-9b21-27d091c7664a req-be1cc5d4-9730-4abd-a686-6a47397d179a d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Updated VIF entry in instance network info cache for port 3cc99a44-cc7e-4f81-bce6-8e63dc92e267. 
_build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m Feb 20 04:59:29 localhost nova_compute[280804]: 2026-02-20 09:59:29.377 280808 DEBUG nova.network.neutron [req-7aa57457-3da5-46ee-9b21-27d091c7664a req-be1cc5d4-9730-4abd-a686-6a47397d179a d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Updating instance_info_cache with network_info: [{"id": "3cc99a44-cc7e-4f81-bce6-8e63dc92e267", "address": "fa:16:3e:b4:f9:fa", "network": {"id": "d612a55c-b2aa-4665-bf00-3e649d762c79", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-589446165-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "9fdf2c09b98d48c0bc67cc1c7702a8f4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cc99a44-cc", "ovs_interfaceid": "3cc99a44-cc7e-4f81-bce6-8e63dc92e267", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Feb 20 04:59:29 localhost nova_compute[280804]: 2026-02-20 09:59:29.379 280808 DEBUG oslo_concurrency.processutils [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s 
execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:59:29 localhost nova_compute[280804]: 2026-02-20 09:59:29.409 280808 DEBUG nova.storage.rbd_utils [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] rbd image 25d7d566-3a21-4292-a6ad-96dca2d2ec79_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 20 04:59:29 localhost nova_compute[280804]: 2026-02-20 09:59:29.417 280808 DEBUG oslo_concurrency.processutils [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:59:29 localhost nova_compute[280804]: 2026-02-20 09:59:29.441 280808 DEBUG oslo_concurrency.lockutils [req-7aa57457-3da5-46ee-9b21-27d091c7664a req-be1cc5d4-9730-4abd-a686-6a47397d179a d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Releasing lock "refresh_cache-25d7d566-3a21-4292-a6ad-96dca2d2ec79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Feb 20 04:59:29 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice_bob", "format": "json"}]: dispatch Feb 20 04:59:29 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 04:59:29 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": 
"auth get", "entity": "client.alice_bob", "format": "json"} v 0) Feb 20 04:59:29 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 20 04:59:29 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Feb 20 04:59:29 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 20 04:59:29 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Feb 20 04:59:29 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v458: 177 pgs: 177 active+clean; 983 MiB data, 3.3 GiB used, 39 GiB / 42 GiB avail; 96 KiB/s rd, 23 MiB/s wr, 143 op/s Feb 20 04:59:29 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 04:59:29 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice_bob", "format": "json"}]: dispatch Feb 20 04:59:29 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 04:59:29 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with 
auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb Feb 20 04:59:29 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 20 04:59:29 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 04:59:29 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Feb 20 04:59:29 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/2224840685' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Feb 20 04:59:29 localhost nova_compute[280804]: 2026-02-20 09:59:29.860 280808 DEBUG oslo_concurrency.processutils [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:59:29 localhost nova_compute[280804]: 2026-02-20 09:59:29.862 280808 DEBUG nova.virt.libvirt.vif [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] vif_type=ovs 
instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-20T09:59:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-1173654775',display_name='tempest-VolumesBackupsTest-instance-1173654775',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005625202.localdomain',hostname='tempest-volumesbackupstest-instance-1173654775',id=11,image_ref='06bd71fd-c415-45d9-b669-46209b7ca2f4',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMe72HZuIWc3tJD/X0j6gM/UNjaY+DXAi4jpSGVDBcd7BWjTM/ZKsoIkdVrBAeSKOKKSJillg9arx8p4E5NVUhjj/f9aUDVTht6SVx/DyPFCVBF/6pDNRKFf5AIK9I1lpg==',key_name='tempest-keypair-39980210',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='np0005625202.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005625202.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9fdf2c09b98d48c0bc67cc1c7702a8f4',ramdisk_id='',reservation_id='r-psszcajx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='06bd71fd-c415-45d9-b669-46209b7ca2f4',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-768842871',owner_user_name='tempest-VolumesBackupsTest-768842871-project-member'},tags=TagList,task_state='spawning',termi
nated_at=None,trusted_certs=None,updated_at=2026-02-20T09:59:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2ba1a8d771344f0a918e0a8bed2efd06',uuid=25d7d566-3a21-4292-a6ad-96dca2d2ec79,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3cc99a44-cc7e-4f81-bce6-8e63dc92e267", "address": "fa:16:3e:b4:f9:fa", "network": {"id": "d612a55c-b2aa-4665-bf00-3e649d762c79", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-589446165-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "9fdf2c09b98d48c0bc67cc1c7702a8f4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cc99a44-cc", "ovs_interfaceid": "3cc99a44-cc7e-4f81-bce6-8e63dc92e267", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m Feb 20 04:59:29 localhost nova_compute[280804]: 2026-02-20 09:59:29.862 280808 DEBUG nova.network.os_vif_util [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Converting VIF {"id": "3cc99a44-cc7e-4f81-bce6-8e63dc92e267", "address": "fa:16:3e:b4:f9:fa", "network": {"id": "d612a55c-b2aa-4665-bf00-3e649d762c79", "bridge": "br-int", "label": 
"tempest-VolumesBackupsTest-589446165-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "9fdf2c09b98d48c0bc67cc1c7702a8f4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cc99a44-cc", "ovs_interfaceid": "3cc99a44-cc7e-4f81-bce6-8e63dc92e267", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Feb 20 04:59:29 localhost nova_compute[280804]: 2026-02-20 09:59:29.864 280808 DEBUG nova.network.os_vif_util [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:f9:fa,bridge_name='br-int',has_traffic_filtering=True,id=3cc99a44-cc7e-4f81-bce6-8e63dc92e267,network=Network(d612a55c-b2aa-4665-bf00-3e649d762c79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cc99a44-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Feb 20 04:59:29 localhost nova_compute[280804]: 2026-02-20 09:59:29.865 280808 DEBUG nova.objects.instance [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 25d7d566-3a21-4292-a6ad-96dca2d2ec79 obj_load_attr 
/usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Feb 20 04:59:29 localhost nova_compute[280804]: 2026-02-20 09:59:29.881 280808 DEBUG nova.virt.libvirt.driver [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] End _get_guest_xml xml=
[guest domain XML elided: the element markup was stripped in this capture, leaving only bare "Feb 20 04:59:29 localhost nova_compute[280804]:" prefixes; surviving fragments: uuid 25d7d566-3a21-4292-a6ad-96dca2d2ec79, name instance-0000000b, memory 131072, 1 vCPU, creation time 2026-02-20 09:59:28, nova metadata name tempest-VolumesBackupsTest-instance-1173654775, owner tempest-VolumesBackupsTest-768842871 / tempest-VolumesBackupsTest-768842871-project-member, sysinfo RDO OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9, os type hvm, rng backend /dev/urandom]
_get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m Feb 20 04:59:29 localhost nova_compute[280804]: 2026-02-20 09:59:29.883 280808 DEBUG nova.compute.manager [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Preparing to wait for external event network-vif-plugged-3cc99a44-cc7e-4f81-bce6-8e63dc92e267 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m Feb 20 04:59:29 localhost nova_compute[280804]: 2026-02-20 09:59:29.883 280808 DEBUG oslo_concurrency.lockutils [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Acquiring lock "25d7d566-3a21-4292-a6ad-96dca2d2ec79-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.._create_or_get_event" inner
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:59:29 localhost nova_compute[280804]: 2026-02-20 09:59:29.884 280808 DEBUG oslo_concurrency.lockutils [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Lock "25d7d566-3a21-4292-a6ad-96dca2d2ec79-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:59:29 localhost nova_compute[280804]: 2026-02-20 09:59:29.884 280808 DEBUG oslo_concurrency.lockutils [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Lock "25d7d566-3a21-4292-a6ad-96dca2d2ec79-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:59:29 localhost nova_compute[280804]: 2026-02-20 09:59:29.885 280808 DEBUG nova.virt.libvirt.vif [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] vif_type=ovs 
instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-20T09:59:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-1173654775',display_name='tempest-VolumesBackupsTest-instance-1173654775',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005625202.localdomain',hostname='tempest-volumesbackupstest-instance-1173654775',id=11,image_ref='06bd71fd-c415-45d9-b669-46209b7ca2f4',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMe72HZuIWc3tJD/X0j6gM/UNjaY+DXAi4jpSGVDBcd7BWjTM/ZKsoIkdVrBAeSKOKKSJillg9arx8p4E5NVUhjj/f9aUDVTht6SVx/DyPFCVBF/6pDNRKFf5AIK9I1lpg==',key_name='tempest-keypair-39980210',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='np0005625202.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005625202.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9fdf2c09b98d48c0bc67cc1c7702a8f4',ramdisk_id='',reservation_id='r-psszcajx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='06bd71fd-c415-45d9-b669-46209b7ca2f4',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-768842871',owner_user_name='tempest-VolumesBackupsTest-768842871-project-member'},tags=TagList,task_state='sp
awning',terminated_at=None,trusted_certs=None,updated_at=2026-02-20T09:59:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2ba1a8d771344f0a918e0a8bed2efd06',uuid=25d7d566-3a21-4292-a6ad-96dca2d2ec79,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3cc99a44-cc7e-4f81-bce6-8e63dc92e267", "address": "fa:16:3e:b4:f9:fa", "network": {"id": "d612a55c-b2aa-4665-bf00-3e649d762c79", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-589446165-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "9fdf2c09b98d48c0bc67cc1c7702a8f4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cc99a44-cc", "ovs_interfaceid": "3cc99a44-cc7e-4f81-bce6-8e63dc92e267", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m Feb 20 04:59:29 localhost nova_compute[280804]: 2026-02-20 09:59:29.886 280808 DEBUG nova.network.os_vif_util [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Converting VIF {"id": "3cc99a44-cc7e-4f81-bce6-8e63dc92e267", "address": "fa:16:3e:b4:f9:fa", "network": {"id": "d612a55c-b2aa-4665-bf00-3e649d762c79", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-589446165-network", 
"subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "9fdf2c09b98d48c0bc67cc1c7702a8f4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cc99a44-cc", "ovs_interfaceid": "3cc99a44-cc7e-4f81-bce6-8e63dc92e267", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Feb 20 04:59:29 localhost nova_compute[280804]: 2026-02-20 09:59:29.887 280808 DEBUG nova.network.os_vif_util [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:f9:fa,bridge_name='br-int',has_traffic_filtering=True,id=3cc99a44-cc7e-4f81-bce6-8e63dc92e267,network=Network(d612a55c-b2aa-4665-bf00-3e649d762c79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cc99a44-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Feb 20 04:59:29 localhost nova_compute[280804]: 2026-02-20 09:59:29.887 280808 DEBUG os_vif [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Plugging vif 
VIFOpenVSwitch(active=False,address=fa:16:3e:b4:f9:fa,bridge_name='br-int',has_traffic_filtering=True,id=3cc99a44-cc7e-4f81-bce6-8e63dc92e267,network=Network(d612a55c-b2aa-4665-bf00-3e649d762c79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cc99a44-cc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m Feb 20 04:59:29 localhost nova_compute[280804]: 2026-02-20 09:59:29.888 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:59:29 localhost nova_compute[280804]: 2026-02-20 09:59:29.889 280808 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 20 04:59:29 localhost nova_compute[280804]: 2026-02-20 09:59:29.890 280808 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Feb 20 04:59:29 localhost nova_compute[280804]: 2026-02-20 09:59:29.894 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:59:29 localhost nova_compute[280804]: 2026-02-20 09:59:29.894 280808 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3cc99a44-cc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 20 04:59:29 localhost nova_compute[280804]: 2026-02-20 09:59:29.895 280808 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3cc99a44-cc, col_values=(('external_ids', {'iface-id': 
'3cc99a44-cc7e-4f81-bce6-8e63dc92e267', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b4:f9:fa', 'vm-uuid': '25d7d566-3a21-4292-a6ad-96dca2d2ec79'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 20 04:59:29 localhost nova_compute[280804]: 2026-02-20 09:59:29.898 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:59:29 localhost nova_compute[280804]: 2026-02-20 09:59:29.900 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 20 04:59:29 localhost nova_compute[280804]: 2026-02-20 09:59:29.908 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:59:29 localhost nova_compute[280804]: 2026-02-20 09:59:29.909 280808 INFO os_vif [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:f9:fa,bridge_name='br-int',has_traffic_filtering=True,id=3cc99a44-cc7e-4f81-bce6-8e63dc92e267,network=Network(d612a55c-b2aa-4665-bf00-3e649d762c79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cc99a44-cc')#033[00m Feb 20 04:59:29 localhost nova_compute[280804]: 2026-02-20 09:59:29.959 280808 DEBUG nova.virt.libvirt.driver [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] No BDM found with device name vda, not building metadata. 
_build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Feb 20 04:59:29 localhost nova_compute[280804]: 2026-02-20 09:59:29.960 280808 DEBUG nova.virt.libvirt.driver [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Feb 20 04:59:29 localhost nova_compute[280804]: 2026-02-20 09:59:29.960 280808 DEBUG nova.virt.libvirt.driver [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] No VIF found with MAC fa:16:3e:b4:f9:fa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m Feb 20 04:59:29 localhost nova_compute[280804]: 2026-02-20 09:59:29.961 280808 INFO nova.virt.libvirt.driver [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Using config drive#033[00m Feb 20 04:59:29 localhost nova_compute[280804]: 2026-02-20 09:59:29.995 280808 DEBUG nova.storage.rbd_utils [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] rbd image 25d7d566-3a21-4292-a6ad-96dca2d2ec79_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 20 04:59:30 localhost nova_compute[280804]: 2026-02-20 09:59:30.096 280808 INFO nova.virt.libvirt.driver [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Creating config drive at 
/var/lib/nova/instances/25d7d566-3a21-4292-a6ad-96dca2d2ec79/disk.config#033[00m Feb 20 04:59:30 localhost nova_compute[280804]: 2026-02-20 09:59:30.103 280808 DEBUG oslo_concurrency.processutils [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/25d7d566-3a21-4292-a6ad-96dca2d2ec79/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpmm_uqt6s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:59:30 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 20 04:59:30 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 20 04:59:30 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Feb 20 04:59:30 localhost nova_compute[280804]: 2026-02-20 09:59:30.235 280808 DEBUG oslo_concurrency.processutils [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/25d7d566-3a21-4292-a6ad-96dca2d2ec79/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpmm_uqt6s" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:59:30 localhost nova_compute[280804]: 2026-02-20 09:59:30.275 280808 DEBUG nova.storage.rbd_utils [None 
req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] rbd image 25d7d566-3a21-4292-a6ad-96dca2d2ec79_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Feb 20 04:59:30 localhost nova_compute[280804]: 2026-02-20 09:59:30.280 280808 DEBUG oslo_concurrency.processutils [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/25d7d566-3a21-4292-a6ad-96dca2d2ec79/disk.config 25d7d566-3a21-4292-a6ad-96dca2d2ec79_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 04:59:30 localhost nova_compute[280804]: 2026-02-20 09:59:30.506 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:59:30 localhost nova_compute[280804]: 2026-02-20 09:59:30.609 280808 DEBUG oslo_concurrency.processutils [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/25d7d566-3a21-4292-a6ad-96dca2d2ec79/disk.config 25d7d566-3a21-4292-a6ad-96dca2d2ec79_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.329s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 04:59:30 localhost nova_compute[280804]: 2026-02-20 09:59:30.610 280808 INFO nova.virt.libvirt.driver [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Deleting local config drive 
/var/lib/nova/instances/25d7d566-3a21-4292-a6ad-96dca2d2ec79/disk.config because it was imported into RBD.#033[00m Feb 20 04:59:30 localhost systemd[1]: Started libvirt secret daemon. Feb 20 04:59:30 localhost kernel: device tap3cc99a44-cc entered promiscuous mode Feb 20 04:59:30 localhost NetworkManager[5967]: [1771581570.7157] manager: (tap3cc99a44-cc): new Tun device (/org/freedesktop/NetworkManager/Devices/51) Feb 20 04:59:30 localhost nova_compute[280804]: 2026-02-20 09:59:30.716 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:59:30 localhost ovn_controller[155916]: 2026-02-20T09:59:30Z|00259|binding|INFO|Claiming lport 3cc99a44-cc7e-4f81-bce6-8e63dc92e267 for this chassis. Feb 20 04:59:30 localhost ovn_controller[155916]: 2026-02-20T09:59:30Z|00260|binding|INFO|3cc99a44-cc7e-4f81-bce6-8e63dc92e267: Claiming fa:16:3e:b4:f9:fa 10.100.0.7 Feb 20 04:59:30 localhost systemd-udevd[324864]: Network interface NamePolicy= disabled on kernel command line. 
Feb 20 04:59:30 localhost ovn_metadata_agent[161761]: 2026-02-20 09:59:30.727 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:f9:fa 10.100.0.7'], port_security=['fa:16:3e:b4:f9:fa 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '25d7d566-3a21-4292-a6ad-96dca2d2ec79', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d612a55c-b2aa-4665-bf00-3e649d762c79', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9fdf2c09b98d48c0bc67cc1c7702a8f4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9d889f17-f220-427e-bd61-2fb67b868596', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0faef055-9745-4ab8-b295-a6260661d3dc, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=3cc99a44-cc7e-4f81-bce6-8e63dc92e267) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:59:30 localhost ovn_metadata_agent[161761]: 2026-02-20 09:59:30.729 161766 INFO neutron.agent.ovn.metadata.agent [-] Port 3cc99a44-cc7e-4f81-bce6-8e63dc92e267 in datapath d612a55c-b2aa-4665-bf00-3e649d762c79 bound to our chassis#033[00m Feb 20 04:59:30 localhost ovn_metadata_agent[161761]: 2026-02-20 09:59:30.734 161766 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d612a55c-b2aa-4665-bf00-3e649d762c79#033[00m Feb 20 04:59:30 localhost NetworkManager[5967]: [1771581570.7368] 
device (tap3cc99a44-cc): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external') Feb 20 04:59:30 localhost NetworkManager[5967]: [1771581570.7376] device (tap3cc99a44-cc): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'external') Feb 20 04:59:30 localhost ovn_controller[155916]: 2026-02-20T09:59:30Z|00261|binding|INFO|Setting lport 3cc99a44-cc7e-4f81-bce6-8e63dc92e267 up in Southbound Feb 20 04:59:30 localhost ovn_controller[155916]: 2026-02-20T09:59:30Z|00262|binding|INFO|Setting lport 3cc99a44-cc7e-4f81-bce6-8e63dc92e267 ovn-installed in OVS Feb 20 04:59:30 localhost nova_compute[280804]: 2026-02-20 09:59:30.740 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:59:30 localhost nova_compute[280804]: 2026-02-20 09:59:30.744 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:59:30 localhost ovn_metadata_agent[161761]: 2026-02-20 09:59:30.744 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[7121cd5e-c472-4c50-8a69-a94e0e6fd555]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:59:30 localhost ovn_metadata_agent[161761]: 2026-02-20 09:59:30.745 161766 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd612a55c-b1 in ovnmeta-d612a55c-b2aa-4665-bf00-3e649d762c79 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m Feb 20 04:59:30 localhost ovn_metadata_agent[161761]: 2026-02-20 09:59:30.747 263903 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd612a55c-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m Feb 20 04:59:30 localhost ovn_metadata_agent[161761]: 2026-02-20 09:59:30.747 263903 DEBUG 
oslo.privsep.daemon [-] privsep: reply[ae5e2fed-1517-4add-a179-4f14c390167a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:59:30 localhost ovn_metadata_agent[161761]: 2026-02-20 09:59:30.749 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[f3532d67-d684-4d85-98f1-626291ce5182]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:59:30 localhost systemd-machined[205856]: New machine qemu-6-instance-0000000b. Feb 20 04:59:30 localhost ovn_metadata_agent[161761]: 2026-02-20 09:59:30.759 161893 DEBUG oslo.privsep.daemon [-] privsep: reply[05927511-6a98-4b24-b2c3-741b56f619e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:59:30 localhost systemd[1]: Started Virtual Machine qemu-6-instance-0000000b. Feb 20 04:59:30 localhost ovn_metadata_agent[161761]: 2026-02-20 09:59:30.785 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[ef227129-48b4-4f58-b20f-b3a11c61b31c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:59:30 localhost ovn_metadata_agent[161761]: 2026-02-20 09:59:30.817 310825 DEBUG oslo.privsep.daemon [-] privsep: reply[6e6727f3-19dd-46b6-995c-5713dfbc28ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:59:30 localhost ovn_metadata_agent[161761]: 2026-02-20 09:59:30.824 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[69d25da7-ac17-4ed7-9edc-8dfb30b12801]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:59:30 localhost systemd-udevd[324867]: Network interface NamePolicy= disabled on kernel command line. 
Feb 20 04:59:30 localhost NetworkManager[5967]: [1771581570.8279] manager: (tapd612a55c-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/52) Feb 20 04:59:30 localhost ovn_metadata_agent[161761]: 2026-02-20 09:59:30.857 310825 DEBUG oslo.privsep.daemon [-] privsep: reply[f120f3e6-320d-4f67-af91-a41cf9ee87b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:59:30 localhost ovn_metadata_agent[161761]: 2026-02-20 09:59:30.861 310825 DEBUG oslo.privsep.daemon [-] privsep: reply[570c83ce-3a03-4325-ada0-994085c36321]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:59:30 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tapd612a55c-b1: link becomes ready Feb 20 04:59:30 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tapd612a55c-b0: link becomes ready Feb 20 04:59:30 localhost NetworkManager[5967]: [1771581570.8836] device (tapd612a55c-b0): carrier: link connected Feb 20 04:59:30 localhost ovn_metadata_agent[161761]: 2026-02-20 09:59:30.889 310825 DEBUG oslo.privsep.daemon [-] privsep: reply[6ab05687-81f6-43d1-8691-fe623ef293f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:59:30 localhost ovn_metadata_agent[161761]: 2026-02-20 09:59:30.908 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[a113f714-6b27-4beb-a86e-6bcb5a93b10f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd612a55c-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], 
['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:70:8e:4d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 
'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1211706, 'reachable_time': 31181, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324900, 'error': None, 'target': 'ovnmeta-d612a55c-b2aa-4665-bf00-3e649d762c79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) 
_call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:59:30 localhost ovn_metadata_agent[161761]: 2026-02-20 09:59:30.927 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[c69d990b-e459-47e1-86ed-3b9d2c41fa80]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe70:8e4d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1211706, 'tstamp': 1211706}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324908, 'error': None, 'target': 'ovnmeta-d612a55c-b2aa-4665-bf00-3e649d762c79', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:59:30 localhost ovn_metadata_agent[161761]: 2026-02-20 09:59:30.946 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[afb92528-04bb-4a6d-a3e4-480184c34acc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd612a55c-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:70:8e:4d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 
'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1211706, 'reachable_time': 31181, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 
1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 324917, 'error': None, 'target': 'ovnmeta-d612a55c-b2aa-4665-bf00-3e649d762c79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:59:30 localhost ovn_metadata_agent[161761]: 2026-02-20 09:59:30.982 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[fed3adbe-90fb-49ae-af3d-71a2cc26aa5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:59:31 localhost ovn_metadata_agent[161761]: 2026-02-20 09:59:31.045 263903 DEBUG oslo.privsep.daemon [-] privsep: 
reply[76c70094-bef2-4dc2-b0e8-7fb1e87329cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:59:31 localhost ovn_metadata_agent[161761]: 2026-02-20 09:59:31.047 161766 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd612a55c-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 20 04:59:31 localhost ovn_metadata_agent[161761]: 2026-02-20 09:59:31.047 161766 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Feb 20 04:59:31 localhost ovn_metadata_agent[161761]: 2026-02-20 09:59:31.048 161766 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd612a55c-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 20 04:59:31 localhost kernel: device tapd612a55c-b0 entered promiscuous mode Feb 20 04:59:31 localhost nova_compute[280804]: 2026-02-20 09:59:31.096 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:59:31 localhost nova_compute[280804]: 2026-02-20 09:59:31.100 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:59:31 localhost ovn_metadata_agent[161761]: 2026-02-20 09:59:31.106 161766 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd612a55c-b0, col_values=(('external_ids', {'iface-id': '082dea75-c58b-4458-a355-a40b55af6a87'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 20 04:59:31 
localhost ovn_controller[155916]: 2026-02-20T09:59:31Z|00263|binding|INFO|Releasing lport 082dea75-c58b-4458-a355-a40b55af6a87 from this chassis (sb_readonly=0) Feb 20 04:59:31 localhost nova_compute[280804]: 2026-02-20 09:59:31.108 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:59:31 localhost ovn_metadata_agent[161761]: 2026-02-20 09:59:31.109 161766 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d612a55c-b2aa-4665-bf00-3e649d762c79.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d612a55c-b2aa-4665-bf00-3e649d762c79.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m Feb 20 04:59:31 localhost ovn_metadata_agent[161761]: 2026-02-20 09:59:31.111 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[1f136560-2e83-42ed-a8ad-456d899bedf5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 04:59:31 localhost ovn_metadata_agent[161761]: 2026-02-20 09:59:31.111 161766 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = Feb 20 04:59:31 localhost ovn_metadata_agent[161761]: global Feb 20 04:59:31 localhost ovn_metadata_agent[161761]: log /dev/log local0 debug Feb 20 04:59:31 localhost ovn_metadata_agent[161761]: log-tag haproxy-metadata-proxy-d612a55c-b2aa-4665-bf00-3e649d762c79 Feb 20 04:59:31 localhost ovn_metadata_agent[161761]: user root Feb 20 04:59:31 localhost ovn_metadata_agent[161761]: group root Feb 20 04:59:31 localhost ovn_metadata_agent[161761]: maxconn 1024 Feb 20 04:59:31 localhost ovn_metadata_agent[161761]: pidfile /var/lib/neutron/external/pids/d612a55c-b2aa-4665-bf00-3e649d762c79.pid.haproxy Feb 20 04:59:31 localhost ovn_metadata_agent[161761]: daemon Feb 20 04:59:31 localhost ovn_metadata_agent[161761]: Feb 20 04:59:31 localhost ovn_metadata_agent[161761]: defaults Feb 20 
04:59:31 localhost ovn_metadata_agent[161761]: log global Feb 20 04:59:31 localhost ovn_metadata_agent[161761]: mode http Feb 20 04:59:31 localhost ovn_metadata_agent[161761]: option httplog Feb 20 04:59:31 localhost ovn_metadata_agent[161761]: option dontlognull Feb 20 04:59:31 localhost ovn_metadata_agent[161761]: option http-server-close Feb 20 04:59:31 localhost ovn_metadata_agent[161761]: option forwardfor Feb 20 04:59:31 localhost ovn_metadata_agent[161761]: retries 3 Feb 20 04:59:31 localhost ovn_metadata_agent[161761]: timeout http-request 30s Feb 20 04:59:31 localhost ovn_metadata_agent[161761]: timeout connect 30s Feb 20 04:59:31 localhost ovn_metadata_agent[161761]: timeout client 32s Feb 20 04:59:31 localhost ovn_metadata_agent[161761]: timeout server 32s Feb 20 04:59:31 localhost ovn_metadata_agent[161761]: timeout http-keep-alive 30s Feb 20 04:59:31 localhost ovn_metadata_agent[161761]: Feb 20 04:59:31 localhost ovn_metadata_agent[161761]: Feb 20 04:59:31 localhost ovn_metadata_agent[161761]: listen listener Feb 20 04:59:31 localhost ovn_metadata_agent[161761]: bind 169.254.169.254:80 Feb 20 04:59:31 localhost ovn_metadata_agent[161761]: server metadata /var/lib/neutron/metadata_proxy Feb 20 04:59:31 localhost ovn_metadata_agent[161761]: http-request add-header X-OVN-Network-ID d612a55c-b2aa-4665-bf00-3e649d762c79 Feb 20 04:59:31 localhost ovn_metadata_agent[161761]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m Feb 20 04:59:31 localhost ovn_metadata_agent[161761]: 2026-02-20 09:59:31.112 161766 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d612a55c-b2aa-4665-bf00-3e649d762c79', 'env', 'PROCESS_TAG=haproxy-d612a55c-b2aa-4665-bf00-3e649d762c79', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d612a55c-b2aa-4665-bf00-3e649d762c79.conf'] create_process 
/usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m Feb 20 04:59:31 localhost nova_compute[280804]: 2026-02-20 09:59:31.119 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:59:31 localhost nova_compute[280804]: 2026-02-20 09:59:31.148 280808 DEBUG nova.compute.manager [req-b15575c2-34f0-42f0-a3a6-3b2720323ed2 req-1161bc0d-ddca-45cb-9cf4-8324af50802c d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Received event network-vif-plugged-3cc99a44-cc7e-4f81-bce6-8e63dc92e267 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Feb 20 04:59:31 localhost nova_compute[280804]: 2026-02-20 09:59:31.149 280808 DEBUG oslo_concurrency.lockutils [req-b15575c2-34f0-42f0-a3a6-3b2720323ed2 req-1161bc0d-ddca-45cb-9cf4-8324af50802c d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Acquiring lock "25d7d566-3a21-4292-a6ad-96dca2d2ec79-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:59:31 localhost nova_compute[280804]: 2026-02-20 09:59:31.149 280808 DEBUG oslo_concurrency.lockutils [req-b15575c2-34f0-42f0-a3a6-3b2720323ed2 req-1161bc0d-ddca-45cb-9cf4-8324af50802c d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Lock "25d7d566-3a21-4292-a6ad-96dca2d2ec79-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:59:31 localhost nova_compute[280804]: 2026-02-20 09:59:31.150 280808 DEBUG oslo_concurrency.lockutils [req-b15575c2-34f0-42f0-a3a6-3b2720323ed2 req-1161bc0d-ddca-45cb-9cf4-8324af50802c 
d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Lock "25d7d566-3a21-4292-a6ad-96dca2d2ec79-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:59:31 localhost nova_compute[280804]: 2026-02-20 09:59:31.150 280808 DEBUG nova.compute.manager [req-b15575c2-34f0-42f0-a3a6-3b2720323ed2 req-1161bc0d-ddca-45cb-9cf4-8324af50802c d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Processing event network-vif-plugged-3cc99a44-cc7e-4f81-bce6-8e63dc92e267 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m Feb 20 04:59:31 localhost nova_compute[280804]: 2026-02-20 09:59:31.221 280808 DEBUG nova.virt.driver [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] Emitting event Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Feb 20 04:59:31 localhost nova_compute[280804]: 2026-02-20 09:59:31.222 280808 INFO nova.compute.manager [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] VM Started (Lifecycle Event)#033[00m Feb 20 04:59:31 localhost nova_compute[280804]: 2026-02-20 09:59:31.224 280808 DEBUG nova.compute.manager [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m Feb 20 04:59:31 localhost nova_compute[280804]: 2026-02-20 09:59:31.228 280808 DEBUG nova.virt.libvirt.driver [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - 
- default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m Feb 20 04:59:31 localhost nova_compute[280804]: 2026-02-20 09:59:31.235 280808 INFO nova.virt.libvirt.driver [-] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Instance spawned successfully.#033[00m Feb 20 04:59:31 localhost nova_compute[280804]: 2026-02-20 09:59:31.235 280808 DEBUG nova.virt.libvirt.driver [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m Feb 20 04:59:31 localhost nova_compute[280804]: 2026-02-20 09:59:31.257 280808 DEBUG nova.compute.manager [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 20 04:59:31 localhost nova_compute[280804]: 2026-02-20 09:59:31.268 280808 DEBUG nova.compute.manager [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Feb 20 04:59:31 localhost nova_compute[280804]: 2026-02-20 09:59:31.288 280808 DEBUG nova.virt.libvirt.driver [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] 
[instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Feb 20 04:59:31 localhost nova_compute[280804]: 2026-02-20 09:59:31.289 280808 DEBUG nova.virt.libvirt.driver [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Feb 20 04:59:31 localhost nova_compute[280804]: 2026-02-20 09:59:31.290 280808 DEBUG nova.virt.libvirt.driver [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Feb 20 04:59:31 localhost nova_compute[280804]: 2026-02-20 09:59:31.296 280808 DEBUG nova.virt.libvirt.driver [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Feb 20 04:59:31 localhost nova_compute[280804]: 2026-02-20 09:59:31.296 280808 DEBUG nova.virt.libvirt.driver [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Feb 20 04:59:31 
localhost nova_compute[280804]: 2026-02-20 09:59:31.297 280808 DEBUG nova.virt.libvirt.driver [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Feb 20 04:59:31 localhost nova_compute[280804]: 2026-02-20 09:59:31.303 280808 INFO nova.compute.manager [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m Feb 20 04:59:31 localhost nova_compute[280804]: 2026-02-20 09:59:31.304 280808 DEBUG nova.virt.driver [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] Emitting event Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Feb 20 04:59:31 localhost nova_compute[280804]: 2026-02-20 09:59:31.304 280808 INFO nova.compute.manager [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] VM Paused (Lifecycle Event)#033[00m Feb 20 04:59:31 localhost nova_compute[280804]: 2026-02-20 09:59:31.341 280808 DEBUG nova.compute.manager [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 20 04:59:31 localhost nova_compute[280804]: 2026-02-20 09:59:31.344 280808 DEBUG nova.virt.driver [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] Emitting event Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Feb 20 04:59:31 localhost nova_compute[280804]: 2026-02-20 09:59:31.344 280808 INFO nova.compute.manager [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] [instance: 
25d7d566-3a21-4292-a6ad-96dca2d2ec79] VM Resumed (Lifecycle Event)#033[00m Feb 20 04:59:31 localhost nova_compute[280804]: 2026-02-20 09:59:31.357 280808 INFO nova.compute.manager [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Took 4.70 seconds to spawn the instance on the hypervisor.#033[00m Feb 20 04:59:31 localhost nova_compute[280804]: 2026-02-20 09:59:31.357 280808 DEBUG nova.compute.manager [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 20 04:59:31 localhost nova_compute[280804]: 2026-02-20 09:59:31.362 280808 DEBUG nova.compute.manager [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Feb 20 04:59:31 localhost nova_compute[280804]: 2026-02-20 09:59:31.367 280808 DEBUG nova.compute.manager [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Feb 20 04:59:31 localhost nova_compute[280804]: 2026-02-20 09:59:31.391 280808 INFO nova.compute.manager [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] During sync_power_state the instance has a pending task (spawning). 
Skip.#033[00m Feb 20 04:59:31 localhost nova_compute[280804]: 2026-02-20 09:59:31.419 280808 INFO nova.compute.manager [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Took 5.59 seconds to build instance.#033[00m Feb 20 04:59:31 localhost nova_compute[280804]: 2026-02-20 09:59:31.441 280808 DEBUG oslo_concurrency.lockutils [None req-f264314a-f5fb-4167-9b9a-7fac156c481a 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Lock "25d7d566-3a21-4292-a6ad-96dca2d2ec79" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" :: held 5.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:59:31 localhost podman[324977]: Feb 20 04:59:31 localhost podman[324977]: 2026-02-20 09:59:31.511515773 +0000 UTC m=+0.090211739 container create 820eef34bee8ae9fa0e11bdf86877cc08ed6da57debb70859cd9f5770801d573 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d612a55c-b2aa-4665-bf00-3e649d762c79, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Feb 20 04:59:31 localhost systemd[1]: Started libpod-conmon-820eef34bee8ae9fa0e11bdf86877cc08ed6da57debb70859cd9f5770801d573.scope. Feb 20 04:59:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e. 
Feb 20 04:59:31 localhost podman[324977]: 2026-02-20 09:59:31.472610868 +0000 UTC m=+0.051306804 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Feb 20 04:59:31 localhost systemd[1]: tmp-crun.Jgrwxg.mount: Deactivated successfully. Feb 20 04:59:31 localhost systemd[1]: Started libcrun container. Feb 20 04:59:31 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52249c3207360cc4eeda2cb4f57894fbd15d6e76153a8562529e7964440a16a9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 20 04:59:31 localhost podman[324977]: 2026-02-20 09:59:31.619877743 +0000 UTC m=+0.198573699 container init 820eef34bee8ae9fa0e11bdf86877cc08ed6da57debb70859cd9f5770801d573 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d612a55c-b2aa-4665-bf00-3e649d762c79, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127) Feb 20 04:59:31 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v459: 177 pgs: 177 active+clean; 1.1 GiB data, 3.6 GiB used, 38 GiB / 42 GiB avail; 157 KiB/s rd, 35 MiB/s wr, 241 op/s Feb 20 04:59:31 localhost podman[324977]: 2026-02-20 09:59:31.635966349 +0000 UTC m=+0.214662315 container start 820eef34bee8ae9fa0e11bdf86877cc08ed6da57debb70859cd9f5770801d573 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d612a55c-b2aa-4665-bf00-3e649d762c79, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true) Feb 20 04:59:31 localhost neutron-haproxy-ovnmeta-d612a55c-b2aa-4665-bf00-3e649d762c79[324993]: [NOTICE] (325008) : New worker (325010) forked Feb 20 04:59:31 localhost neutron-haproxy-ovnmeta-d612a55c-b2aa-4665-bf00-3e649d762c79[324993]: [NOTICE] (325008) : Loading success. Feb 20 04:59:31 localhost podman[324992]: 2026-02-20 09:59:31.716700495 +0000 UTC m=+0.138445510 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) 
Feb 20 04:59:31 localhost podman[324992]: 2026-02-20 09:59:31.726075635 +0000 UTC m=+0.147820670 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 20 04:59:31 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully. 
Feb 20 04:59:32 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "29f88fa1-d324-41a4-9d70-2be9681e5831", "format": "json"}]: dispatch Feb 20 04:59:32 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:29f88fa1-d324-41a4-9d70-2be9681e5831, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 04:59:32 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:29f88fa1-d324-41a4-9d70-2be9681e5831, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 04:59:32 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '29f88fa1-d324-41a4-9d70-2be9681e5831' of type subvolume Feb 20 04:59:32 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:59:32.315+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '29f88fa1-d324-41a4-9d70-2be9681e5831' of type subvolume Feb 20 04:59:32 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "29f88fa1-d324-41a4-9d70-2be9681e5831", "force": true, "format": "json"}]: dispatch Feb 20 04:59:32 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:29f88fa1-d324-41a4-9d70-2be9681e5831, vol_name:cephfs) < "" Feb 20 04:59:32 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/29f88fa1-d324-41a4-9d70-2be9681e5831'' moved to trashcan Feb 20 04:59:32 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for 
volume 'cephfs' Feb 20 04:59:32 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:29f88fa1-d324-41a4-9d70-2be9681e5831, vol_name:cephfs) < "" Feb 20 04:59:32 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice bob", "tenant_id": "8e04bc360fa14db4a793bc5de7a0a299", "access_level": "rw", "format": "json"}]: dispatch Feb 20 04:59:32 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, tenant_id:8e04bc360fa14db4a793bc5de7a0a299, vol_name:cephfs) < "" Feb 20 04:59:32 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Feb 20 04:59:32 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 20 04:59:32 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: Creating meta for ID alice bob with tenant 8e04bc360fa14db4a793bc5de7a0a299 Feb 20 04:59:32 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} v 0) Feb 20 04:59:32 localhost ceph-mon[292786]: log_channel(audit) log [INF] : 
from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch Feb 20 04:59:32 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished Feb 20 04:59:32 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, tenant_id:8e04bc360fa14db4a793bc5de7a0a299, vol_name:cephfs) < "" Feb 20 04:59:33 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 04:59:33 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e213 do_prune osdmap full prune enabled Feb 20 04:59:33 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e214 e214: 6 total, 6 up, 6 in Feb 20 04:59:33 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e214: 6 total, 6 up, 6 in Feb 20 04:59:33 localhost nova_compute[280804]: 2026-02-20 09:59:33.195 280808 DEBUG nova.compute.manager [req-450b9fe0-633e-4733-b6c3-f29cec379147 req-71fd4f71-e8de-47dc-9457-ade2d995d1cb d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] 
[instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Received event network-vif-plugged-3cc99a44-cc7e-4f81-bce6-8e63dc92e267 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Feb 20 04:59:33 localhost nova_compute[280804]: 2026-02-20 09:59:33.195 280808 DEBUG oslo_concurrency.lockutils [req-450b9fe0-633e-4733-b6c3-f29cec379147 req-71fd4f71-e8de-47dc-9457-ade2d995d1cb d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Acquiring lock "25d7d566-3a21-4292-a6ad-96dca2d2ec79-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 04:59:33 localhost nova_compute[280804]: 2026-02-20 09:59:33.196 280808 DEBUG oslo_concurrency.lockutils [req-450b9fe0-633e-4733-b6c3-f29cec379147 req-71fd4f71-e8de-47dc-9457-ade2d995d1cb d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Lock "25d7d566-3a21-4292-a6ad-96dca2d2ec79-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 04:59:33 localhost nova_compute[280804]: 2026-02-20 09:59:33.196 280808 DEBUG oslo_concurrency.lockutils [req-450b9fe0-633e-4733-b6c3-f29cec379147 req-71fd4f71-e8de-47dc-9457-ade2d995d1cb d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Lock "25d7d566-3a21-4292-a6ad-96dca2d2ec79-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 04:59:33 localhost nova_compute[280804]: 2026-02-20 09:59:33.196 280808 DEBUG nova.compute.manager [req-450b9fe0-633e-4733-b6c3-f29cec379147 req-71fd4f71-e8de-47dc-9457-ade2d995d1cb d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - 
default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] No waiting events found dispatching network-vif-plugged-3cc99a44-cc7e-4f81-bce6-8e63dc92e267 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Feb 20 04:59:33 localhost nova_compute[280804]: 2026-02-20 09:59:33.197 280808 WARNING nova.compute.manager [req-450b9fe0-633e-4733-b6c3-f29cec379147 req-71fd4f71-e8de-47dc-9457-ade2d995d1cb d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Received unexpected event network-vif-plugged-3cc99a44-cc7e-4f81-bce6-8e63dc92e267 for instance with vm_state active and task_state None.#033[00m Feb 20 04:59:33 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 20 04:59:33 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch Feb 20 04:59:33 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished Feb 20 04:59:33 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v461: 177 pgs: 177 active+clean; 1.1 GiB data, 3.6 GiB used, 38 GiB / 42 GiB 
avail; 102 KiB/s rd, 18 MiB/s wr, 154 op/s Feb 20 04:59:34 localhost nova_compute[280804]: 2026-02-20 09:59:34.898 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:59:35 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e214 do_prune osdmap full prune enabled Feb 20 04:59:35 localhost nova_compute[280804]: 2026-02-20 09:59:35.246 280808 DEBUG nova.compute.manager [req-8e4f0e5c-78d3-4439-9770-4155645f1591 req-c43974fb-b098-4683-a880-cbf7a5b6db00 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Received event network-changed-3cc99a44-cc7e-4f81-bce6-8e63dc92e267 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Feb 20 04:59:35 localhost nova_compute[280804]: 2026-02-20 09:59:35.247 280808 DEBUG nova.compute.manager [req-8e4f0e5c-78d3-4439-9770-4155645f1591 req-c43974fb-b098-4683-a880-cbf7a5b6db00 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Refreshing instance network info cache due to event network-changed-3cc99a44-cc7e-4f81-bce6-8e63dc92e267. 
external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m Feb 20 04:59:35 localhost nova_compute[280804]: 2026-02-20 09:59:35.248 280808 DEBUG oslo_concurrency.lockutils [req-8e4f0e5c-78d3-4439-9770-4155645f1591 req-c43974fb-b098-4683-a880-cbf7a5b6db00 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Acquiring lock "refresh_cache-25d7d566-3a21-4292-a6ad-96dca2d2ec79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Feb 20 04:59:35 localhost nova_compute[280804]: 2026-02-20 09:59:35.248 280808 DEBUG oslo_concurrency.lockutils [req-8e4f0e5c-78d3-4439-9770-4155645f1591 req-c43974fb-b098-4683-a880-cbf7a5b6db00 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Acquired lock "refresh_cache-25d7d566-3a21-4292-a6ad-96dca2d2ec79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Feb 20 04:59:35 localhost nova_compute[280804]: 2026-02-20 09:59:35.249 280808 DEBUG nova.network.neutron [req-8e4f0e5c-78d3-4439-9770-4155645f1591 req-c43974fb-b098-4683-a880-cbf7a5b6db00 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Refreshing network info cache for port 3cc99a44-cc7e-4f81-bce6-8e63dc92e267 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m Feb 20 04:59:35 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e215 e215: 6 total, 6 up, 6 in Feb 20 04:59:35 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e215: 6 total, 6 up, 6 in Feb 20 04:59:35 localhost nova_compute[280804]: 2026-02-20 09:59:35.552 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:59:35 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v463: 177 pgs: 177 active+clean; 1.2 GiB 
data, 4.0 GiB used, 38 GiB / 42 GiB avail; 3.1 MiB/s rd, 36 MiB/s wr, 314 op/s Feb 20 04:59:35 localhost nova_compute[280804]: 2026-02-20 09:59:35.838 280808 DEBUG nova.network.neutron [req-8e4f0e5c-78d3-4439-9770-4155645f1591 req-c43974fb-b098-4683-a880-cbf7a5b6db00 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Updated VIF entry in instance network info cache for port 3cc99a44-cc7e-4f81-bce6-8e63dc92e267. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m Feb 20 04:59:35 localhost nova_compute[280804]: 2026-02-20 09:59:35.839 280808 DEBUG nova.network.neutron [req-8e4f0e5c-78d3-4439-9770-4155645f1591 req-c43974fb-b098-4683-a880-cbf7a5b6db00 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Updating instance_info_cache with network_info: [{"id": "3cc99a44-cc7e-4f81-bce6-8e63dc92e267", "address": "fa:16:3e:b4:f9:fa", "network": {"id": "d612a55c-b2aa-4665-bf00-3e649d762c79", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-589446165-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "9fdf2c09b98d48c0bc67cc1c7702a8f4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cc99a44-cc", "ovs_interfaceid": "3cc99a44-cc7e-4f81-bce6-8e63dc92e267", "qbh_params": null, "qbg_params": null, 
"active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Feb 20 04:59:35 localhost nova_compute[280804]: 2026-02-20 09:59:35.869 280808 DEBUG oslo_concurrency.lockutils [req-8e4f0e5c-78d3-4439-9770-4155645f1591 req-c43974fb-b098-4683-a880-cbf7a5b6db00 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Releasing lock "refresh_cache-25d7d566-3a21-4292-a6ad-96dca2d2ec79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Feb 20 04:59:36 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice bob", "format": "json"}]: dispatch Feb 20 04:59:36 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 04:59:36 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Feb 20 04:59:36 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 20 04:59:36 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Feb 20 04:59:36 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 20 
04:59:36 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Feb 20 04:59:36 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 04:59:36 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice bob", "format": "json"}]: dispatch Feb 20 04:59:36 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 04:59:36 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb Feb 20 04:59:36 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 20 04:59:36 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 04:59:36 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 20 04:59:36 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 20 
04:59:36 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Feb 20 04:59:37 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e215 do_prune osdmap full prune enabled Feb 20 04:59:37 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e216 e216: 6 total, 6 up, 6 in Feb 20 04:59:37 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e216: 6 total, 6 up, 6 in Feb 20 04:59:37 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v465: 177 pgs: 177 active+clean; 1.2 GiB data, 4.0 GiB used, 38 GiB / 42 GiB avail; 3.9 MiB/s rd, 23 MiB/s wr, 204 op/s Feb 20 04:59:38 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 04:59:38 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 20 04:59:38 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/359541798' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 20 04:59:38 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 20 04:59:38 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/359541798' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 20 04:59:39 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7f87993f-62fd-4706-b657-9586f12f2a62", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 04:59:39 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7f87993f-62fd-4706-b657-9586f12f2a62, vol_name:cephfs) < "" Feb 20 04:59:39 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7f87993f-62fd-4706-b657-9586f12f2a62/.meta.tmp' Feb 20 04:59:39 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7f87993f-62fd-4706-b657-9586f12f2a62/.meta.tmp' to config b'/volumes/_nogroup/7f87993f-62fd-4706-b657-9586f12f2a62/.meta' Feb 20 04:59:39 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7f87993f-62fd-4706-b657-9586f12f2a62, vol_name:cephfs) < "" Feb 20 04:59:39 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7f87993f-62fd-4706-b657-9586f12f2a62", "format": "json"}]: dispatch Feb 20 04:59:39 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7f87993f-62fd-4706-b657-9586f12f2a62, vol_name:cephfs) < "" Feb 20 04:59:39 localhost ceph-mgr[286565]: 
[volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7f87993f-62fd-4706-b657-9586f12f2a62, vol_name:cephfs) < "" Feb 20 04:59:39 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice bob", "tenant_id": "8e04bc360fa14db4a793bc5de7a0a299", "access_level": "r", "format": "json"}]: dispatch Feb 20 04:59:39 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, tenant_id:8e04bc360fa14db4a793bc5de7a0a299, vol_name:cephfs) < "" Feb 20 04:59:39 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Feb 20 04:59:39 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 20 04:59:39 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: Creating meta for ID alice bob with tenant 8e04bc360fa14db4a793bc5de7a0a299 Feb 20 04:59:39 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} v 0) Feb 20 04:59:39 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' 
entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch Feb 20 04:59:39 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished Feb 20 04:59:39 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 20 04:59:39 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1664593204' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 20 04:59:39 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 20 04:59:39 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1664593204' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 20 04:59:39 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 20 04:59:39 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch Feb 20 04:59:39 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished Feb 20 04:59:39 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, tenant_id:8e04bc360fa14db4a793bc5de7a0a299, vol_name:cephfs) < "" Feb 20 04:59:39 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v466: 177 pgs: 177 active+clean; 1.0 GiB data, 3.6 GiB used, 38 GiB / 42 GiB avail; 3.5 MiB/s rd, 21 MiB/s wr, 209 op/s Feb 20 04:59:39 localhost nova_compute[280804]: 2026-02-20 09:59:39.901 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:59:40 localhost 
systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c. Feb 20 04:59:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217. Feb 20 04:59:40 localhost podman[325030]: 2026-02-20 09:59:40.451434648 +0000 UTC m=+0.083951462 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, container_name=openstack_network_exporter, distribution-scope=public, vcs-type=git, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, config_id=openstack_network_exporter, version=9.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', 
'/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, build-date=2026-02-05T04:57:10Z, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vendor=Red Hat, Inc.) 
Feb 20 04:59:40 localhost podman[325031]: 2026-02-20 09:59:40.508241899 +0000 UTC m=+0.138707588 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, 
org.label-schema.vendor=CentOS) Feb 20 04:59:40 localhost podman[325030]: 2026-02-20 09:59:40.530744107 +0000 UTC m=+0.163260901 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, io.openshift.expose-services=, name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, container_name=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, release=1770267347, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_id=openstack_network_exporter, version=9.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c) Feb 20 04:59:40 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. 
Feb 20 04:59:40 localhost nova_compute[280804]: 2026-02-20 09:59:40.587 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:59:40 localhost podman[325031]: 2026-02-20 09:59:40.592033145 +0000 UTC m=+0.222498764 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, 
tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Feb 20 04:59:40 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully. Feb 20 04:59:41 localhost sshd[325068]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:59:41 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v467: 177 pgs: 177 active+clean; 247 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 3.0 MiB/s rd, 17 MiB/s wr, 274 op/s Feb 20 04:59:42 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 2 addresses Feb 20 04:59:42 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 04:59:42 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 04:59:42 localhost podman[325087]: 2026-02-20 09:59:42.046064557 +0000 UTC m=+0.052135916 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team) Feb 20 04:59:42 localhost ovn_controller[155916]: 2026-02-20T09:59:42Z|00264|binding|INFO|Releasing lport 082dea75-c58b-4458-a355-a40b55af6a87 from this chassis (sb_readonly=0) Feb 20 04:59:42 localhost nova_compute[280804]: 2026-02-20 09:59:42.318 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:59:42 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "52918c2e-6ed5-45c2-9872-88b3bd77010f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 04:59:42 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:52918c2e-6ed5-45c2-9872-88b3bd77010f, vol_name:cephfs) < "" Feb 20 04:59:42 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/52918c2e-6ed5-45c2-9872-88b3bd77010f/.meta.tmp' Feb 20 04:59:42 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/52918c2e-6ed5-45c2-9872-88b3bd77010f/.meta.tmp' to config b'/volumes/_nogroup/52918c2e-6ed5-45c2-9872-88b3bd77010f/.meta' Feb 20 04:59:42 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:52918c2e-6ed5-45c2-9872-88b3bd77010f, vol_name:cephfs) < "" Feb 20 04:59:42 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "52918c2e-6ed5-45c2-9872-88b3bd77010f", "format": "json"}]: dispatch Feb 20 04:59:42 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:52918c2e-6ed5-45c2-9872-88b3bd77010f, vol_name:cephfs) < "" Feb 20 04:59:42 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, 
prefix:fs subvolume getpath, sub_name:52918c2e-6ed5-45c2-9872-88b3bd77010f, vol_name:cephfs) < "" Feb 20 04:59:42 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice bob", "format": "json"}]: dispatch Feb 20 04:59:42 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 04:59:42 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Feb 20 04:59:42 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 20 04:59:42 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Feb 20 04:59:42 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 20 04:59:42 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Feb 20 04:59:42 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 04:59:42 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' 
entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice bob", "format": "json"}]: dispatch Feb 20 04:59:42 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 04:59:42 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb Feb 20 04:59:42 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 20 04:59:42 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 04:59:43 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 04:59:43 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e216 do_prune osdmap full prune enabled Feb 20 04:59:43 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e217 e217: 6 total, 6 up, 6 in Feb 20 04:59:43 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e217: 6 total, 6 up, 6 in Feb 20 04:59:43 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v469: 177 pgs: 177 active+clean; 247 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 73 KiB/s rd, 46 KiB/s wr, 121 op/s Feb 20 04:59:43 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 20 04:59:43 localhost 
ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 20 04:59:43 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Feb 20 04:59:44 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 20 04:59:44 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3502069534' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 20 04:59:44 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 20 04:59:44 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3502069534' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 20 04:59:44 localhost nova_compute[280804]: 2026-02-20 09:59:44.904 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:59:44 localhost ovn_controller[155916]: 2026-02-20T09:59:44Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b4:f9:fa 10.100.0.7 Feb 20 04:59:44 localhost ovn_controller[155916]: 2026-02-20T09:59:44Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b4:f9:fa 10.100.0.7 Feb 20 04:59:45 localhost nova_compute[280804]: 2026-02-20 09:59:45.590 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:59:45 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v470: 177 pgs: 177 active+clean; 280 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 567 KiB/s rd, 3.1 MiB/s wr, 233 op/s Feb 20 
04:59:45 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "52918c2e-6ed5-45c2-9872-88b3bd77010f", "auth_id": "Joe", "tenant_id": "8fac2513a3ab4162a13f560c6301f671", "access_level": "rw", "format": "json"}]: dispatch Feb 20 04:59:45 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:52918c2e-6ed5-45c2-9872-88b3bd77010f, tenant_id:8fac2513a3ab4162a13f560c6301f671, vol_name:cephfs) < "" Feb 20 04:59:45 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0) Feb 20 04:59:45 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch Feb 20 04:59:45 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: Creating meta for ID Joe with tenant 8fac2513a3ab4162a13f560c6301f671 Feb 20 04:59:46 localhost podman[241347]: time="2026-02-20T09:59:46Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 20 04:59:46 localhost podman[241347]: @ - - [20/Feb/2026:09:59:46 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 158903 "" "Go-http-client/1.1" Feb 20 04:59:46 localhost podman[241347]: @ - - [20/Feb/2026:09:59:46 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19280 "" "Go-http-client/1.1" Feb 20 04:59:46 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch Feb 20 
04:59:46 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/52918c2e-6ed5-45c2-9872-88b3bd77010f/6bdf5028-e9f4-4a0d-812d-96892d0e92d0", "osd", "allow rw pool=manila_data namespace=fsvolumens_52918c2e-6ed5-45c2-9872-88b3bd77010f", "mon", "allow r"], "format": "json"} v 0) Feb 20 04:59:46 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/52918c2e-6ed5-45c2-9872-88b3bd77010f/6bdf5028-e9f4-4a0d-812d-96892d0e92d0", "osd", "allow rw pool=manila_data namespace=fsvolumens_52918c2e-6ed5-45c2-9872-88b3bd77010f", "mon", "allow r"], "format": "json"} : dispatch Feb 20 04:59:46 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/52918c2e-6ed5-45c2-9872-88b3bd77010f/6bdf5028-e9f4-4a0d-812d-96892d0e92d0", "osd", "allow rw pool=manila_data namespace=fsvolumens_52918c2e-6ed5-45c2-9872-88b3bd77010f", "mon", "allow r"], "format": "json"}]': finished Feb 20 04:59:46 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:52918c2e-6ed5-45c2-9872-88b3bd77010f, tenant_id:8fac2513a3ab4162a13f560c6301f671, vol_name:cephfs) < "" Feb 20 04:59:46 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice", "tenant_id": "8e04bc360fa14db4a793bc5de7a0a299", "access_level": 
"rw", "format": "json"}]: dispatch Feb 20 04:59:46 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, tenant_id:8e04bc360fa14db4a793bc5de7a0a299, vol_name:cephfs) < "" Feb 20 04:59:46 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Feb 20 04:59:46 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 20 04:59:46 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: Creating meta for ID alice with tenant 8e04bc360fa14db4a793bc5de7a0a299 Feb 20 04:59:46 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} v 0) Feb 20 04:59:46 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch Feb 20 04:59:46 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth 
get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished Feb 20 04:59:46 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, tenant_id:8e04bc360fa14db4a793bc5de7a0a299, vol_name:cephfs) < "" Feb 20 04:59:47 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 20 04:59:47 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/257417761' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 20 04:59:47 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 20 04:59:47 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/257417761' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 20 04:59:47 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 04:59:47 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 04:59:47 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 20 04:59:47 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 04:59:47 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 20 04:59:47 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:59:47 localhost ceph-mgr[286565]: [progress INFO root] update: starting ev e5b3b8c9-db55-4d2c-8d6d-77ce73e59a4d (Updating node-proxy deployment (+3 -> 3)) Feb 20 04:59:47 localhost ceph-mgr[286565]: [progress INFO root] complete: finished ev e5b3b8c9-db55-4d2c-8d6d-77ce73e59a4d (Updating node-proxy deployment (+3 -> 3)) Feb 20 04:59:47 localhost ceph-mgr[286565]: [progress INFO root] Completed event e5b3b8c9-db55-4d2c-8d6d-77ce73e59a4d (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Feb 20 04:59:47 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 20 04:59:47 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.44375 172.18.0.106:0/1082098019' 
entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 20 04:59:47 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/52918c2e-6ed5-45c2-9872-88b3bd77010f/6bdf5028-e9f4-4a0d-812d-96892d0e92d0", "osd", "allow rw pool=manila_data namespace=fsvolumens_52918c2e-6ed5-45c2-9872-88b3bd77010f", "mon", "allow r"], "format": "json"} : dispatch Feb 20 04:59:47 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/52918c2e-6ed5-45c2-9872-88b3bd77010f/6bdf5028-e9f4-4a0d-812d-96892d0e92d0", "osd", "allow rw pool=manila_data namespace=fsvolumens_52918c2e-6ed5-45c2-9872-88b3bd77010f", "mon", "allow r"], "format": "json"}]': finished Feb 20 04:59:47 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 20 04:59:47 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch Feb 20 04:59:47 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw 
pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished Feb 20 04:59:47 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 04:59:47 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:59:47 localhost snmpd[69161]: empty variable list in _query Feb 20 04:59:47 localhost snmpd[69161]: empty variable list in _query Feb 20 04:59:47 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v471: 177 pgs: 177 active+clean; 280 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 471 KiB/s rd, 2.6 MiB/s wr, 193 op/s Feb 20 04:59:47 localhost ceph-osd[31981]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 20 04:59:47 localhost ceph-osd[31981]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 8400.1 total, 600.0 interval#012Cumulative writes: 17K writes, 65K keys, 17K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.01 MB/s#012Cumulative WAL: 17K writes, 5744 syncs, 2.97 writes per sync, written: 0.06 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 11K writes, 41K keys, 11K commit groups, 1.0 writes per commit group, ingest: 39.97 MB, 0.07 MB/s#012Interval WAL: 11K writes, 5036 syncs, 2.35 writes per sync, written: 0.04 GB, 0.07 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Feb 20 04:59:48 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e217 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 04:59:48 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 20 04:59:48 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/3243055867' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 20 04:59:48 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 20 04:59:48 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3243055867' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 20 04:59:48 localhost ceph-mgr[286565]: [progress INFO root] Writing back 50 completed events Feb 20 04:59:48 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 20 04:59:48 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:59:49 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 04:59:49 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice", "format": "json"}]: dispatch Feb 20 04:59:49 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 04:59:49 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Feb 20 04:59:49 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 20 04:59:49 localhost 
ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Feb 20 04:59:49 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Feb 20 04:59:49 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Feb 20 04:59:49 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 04:59:49 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice", "format": "json"}]: dispatch Feb 20 04:59:49 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 04:59:49 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb Feb 20 04:59:49 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 20 04:59:49 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 04:59:49 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : 
from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "2e5c48d9-dcb2-469b-91b0-c7f808d95c49", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 04:59:49 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:2e5c48d9-dcb2-469b-91b0-c7f808d95c49, vol_name:cephfs) < "" Feb 20 04:59:49 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/2e5c48d9-dcb2-469b-91b0-c7f808d95c49/.meta.tmp' Feb 20 04:59:49 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2e5c48d9-dcb2-469b-91b0-c7f808d95c49/.meta.tmp' to config b'/volumes/_nogroup/2e5c48d9-dcb2-469b-91b0-c7f808d95c49/.meta' Feb 20 04:59:49 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:2e5c48d9-dcb2-469b-91b0-c7f808d95c49, vol_name:cephfs) < "" Feb 20 04:59:49 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "2e5c48d9-dcb2-469b-91b0-c7f808d95c49", "format": "json"}]: dispatch Feb 20 04:59:49 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2e5c48d9-dcb2-469b-91b0-c7f808d95c49, vol_name:cephfs) < "" Feb 20 04:59:49 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2e5c48d9-dcb2-469b-91b0-c7f808d95c49, vol_name:cephfs) < "" Feb 20 04:59:49 localhost 
ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v472: 177 pgs: 177 active+clean; 280 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 462 KiB/s rd, 2.6 MiB/s wr, 182 op/s Feb 20 04:59:49 localhost nova_compute[280804]: 2026-02-20 09:59:49.907 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:59:50 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 20 04:59:50 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Feb 20 04:59:50 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Feb 20 04:59:50 localhost nova_compute[280804]: 2026-02-20 09:59:50.592 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:59:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 04:59:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. 
Feb 20 04:59:51 localhost podman[325197]: 2026-02-20 09:59:51.46151922 +0000 UTC m=+0.094535593 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Feb 20 04:59:51 localhost podman[325198]: 2026-02-20 09:59:51.514607081 +0000 UTC m=+0.146531815 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Feb 20 04:59:51 localhost podman[325197]: 2026-02-20 09:59:51.525780388 +0000 UTC m=+0.158796751 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 04:59:51 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. 
Feb 20 04:59:51 localhost podman[325198]: 2026-02-20 09:59:51.550002582 +0000 UTC m=+0.181927336 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Feb 20 04:59:51 localhost sshd[325240]: main: sshd: ssh-rsa 
algorithm is disabled Feb 20 04:59:51 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. Feb 20 04:59:51 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v473: 177 pgs: 177 active+clean; 280 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 441 KiB/s rd, 2.6 MiB/s wr, 141 op/s Feb 20 04:59:51 localhost ceph-osd[32921]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 20 04:59:51 localhost ceph-osd[32921]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 8400.1 total, 600.0 interval#012Cumulative writes: 17K writes, 65K keys, 17K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.01 MB/s#012Cumulative WAL: 17K writes, 5950 syncs, 3.00 writes per sync, written: 0.04 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 12K writes, 40K keys, 12K commit groups, 1.0 writes per commit group, ingest: 23.97 MB, 0.04 MB/s#012Interval WAL: 12K writes, 5071 syncs, 2.38 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Feb 20 04:59:52 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b9bf80c2-3fd5-47b7-9223-4a556ce7ef61", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 04:59:52 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b9bf80c2-3fd5-47b7-9223-4a556ce7ef61, vol_name:cephfs) < "" Feb 20 04:59:52 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b9bf80c2-3fd5-47b7-9223-4a556ce7ef61/.meta.tmp' Feb 20 04:59:52 localhost ceph-mgr[286565]: [volumes 
INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b9bf80c2-3fd5-47b7-9223-4a556ce7ef61/.meta.tmp' to config b'/volumes/_nogroup/b9bf80c2-3fd5-47b7-9223-4a556ce7ef61/.meta' Feb 20 04:59:52 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b9bf80c2-3fd5-47b7-9223-4a556ce7ef61, vol_name:cephfs) < "" Feb 20 04:59:52 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b9bf80c2-3fd5-47b7-9223-4a556ce7ef61", "format": "json"}]: dispatch Feb 20 04:59:52 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b9bf80c2-3fd5-47b7-9223-4a556ce7ef61, vol_name:cephfs) < "" Feb 20 04:59:52 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b9bf80c2-3fd5-47b7-9223-4a556ce7ef61, vol_name:cephfs) < "" Feb 20 04:59:52 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice", "tenant_id": "8e04bc360fa14db4a793bc5de7a0a299", "access_level": "r", "format": "json"}]: dispatch Feb 20 04:59:52 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, tenant_id:8e04bc360fa14db4a793bc5de7a0a299, vol_name:cephfs) < "" Feb 20 04:59:52 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", 
"format": "json"} v 0) Feb 20 04:59:52 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 20 04:59:52 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: Creating meta for ID alice with tenant 8e04bc360fa14db4a793bc5de7a0a299 Feb 20 04:59:52 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} v 0) Feb 20 04:59:52 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch Feb 20 04:59:52 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished Feb 20 04:59:52 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, 
sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, tenant_id:8e04bc360fa14db4a793bc5de7a0a299, vol_name:cephfs) < "" Feb 20 04:59:53 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e217 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 04:59:53 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "2e5c48d9-dcb2-469b-91b0-c7f808d95c49", "auth_id": "Joe", "tenant_id": "f656f9df86ae4c53b02f471da5bd5ad7", "access_level": "rw", "format": "json"}]: dispatch Feb 20 04:59:53 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:2e5c48d9-dcb2-469b-91b0-c7f808d95c49, tenant_id:f656f9df86ae4c53b02f471da5bd5ad7, vol_name:cephfs) < "" Feb 20 04:59:53 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0) Feb 20 04:59:53 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch Feb 20 04:59:53 localhost ceph-mgr[286565]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: Joe is already in use Feb 20 04:59:53 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:2e5c48d9-dcb2-469b-91b0-c7f808d95c49, tenant_id:f656f9df86ae4c53b02f471da5bd5ad7, vol_name:cephfs) < "" Feb 20 04:59:53 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T09:59:53.107+0000 7f74524d4640 -1 mgr.server reply reply (1) Operation not permitted auth ID: 
Joe is already in use Feb 20 04:59:53 localhost ceph-mgr[286565]: mgr.server reply reply (1) Operation not permitted auth ID: Joe is already in use Feb 20 04:59:53 localhost systemd-journald[48906]: Data hash table of /run/log/journal/01f46965e72fd8a157841feaa66c8d52/system.journal has a fill level at 75.0 (53724 of 71630 items, 25165824 file size, 468 bytes per hash table item), suggesting rotation. Feb 20 04:59:53 localhost systemd-journald[48906]: /run/log/journal/01f46965e72fd8a157841feaa66c8d52/system.journal: Journal header limits reached or header out-of-date, rotating. Feb 20 04:59:53 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Feb 20 04:59:53 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 20 04:59:53 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch Feb 20 04:59:53 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished Feb 20 04:59:53 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.Joe", 
"format": "json"} : dispatch Feb 20 04:59:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1. Feb 20 04:59:53 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Feb 20 04:59:53 localhost podman[325246]: 2026-02-20 09:59:53.457457344 +0000 UTC m=+0.087013163 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 20 04:59:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 20 04:59:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:59:53 localhost podman[325246]: 2026-02-20 09:59:53.473882751 +0000 UTC m=+0.103438580 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 20 04:59:53 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully. Feb 20 04:59:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 04:59:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:59:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 20 04:59:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 04:59:53 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v474: 177 pgs: 177 active+clean; 280 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 416 KiB/s rd, 2.5 MiB/s wr, 133 op/s Feb 20 04:59:54 localhost nova_compute[280804]: 2026-02-20 09:59:54.956 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:59:55 localhost nova_compute[280804]: 2026-02-20 09:59:55.598 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:59:55 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v475: 177 pgs: 177 active+clean; 280 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 398 KiB/s rd, 2.2 MiB/s wr, 161 op/s Feb 20 04:59:55 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice", "format": "json"}]: dispatch Feb 20 04:59:55 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 04:59:55 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Feb 20 04:59:55 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 20 04:59:55 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": 
"client.alice"} v 0) Feb 20 04:59:55 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Feb 20 04:59:55 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Feb 20 04:59:55 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 04:59:55 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice", "format": "json"}]: dispatch Feb 20 04:59:55 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 04:59:55 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb Feb 20 04:59:55 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 20 04:59:55 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 04:59:55 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": 
"cephfs", "sub_name": "2e5c48d9-dcb2-469b-91b0-c7f808d95c49", "auth_id": "tempest-cephx-id-622295165", "tenant_id": "f656f9df86ae4c53b02f471da5bd5ad7", "access_level": "rw", "format": "json"}]: dispatch Feb 20 04:59:55 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-622295165, format:json, prefix:fs subvolume authorize, sub_name:2e5c48d9-dcb2-469b-91b0-c7f808d95c49, tenant_id:f656f9df86ae4c53b02f471da5bd5ad7, vol_name:cephfs) < "" Feb 20 04:59:56 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-622295165", "format": "json"} v 0) Feb 20 04:59:56 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-622295165", "format": "json"} : dispatch Feb 20 04:59:56 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: Creating meta for ID tempest-cephx-id-622295165 with tenant f656f9df86ae4c53b02f471da5bd5ad7 Feb 20 04:59:56 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-622295165", "caps": ["mds", "allow rw path=/volumes/_nogroup/2e5c48d9-dcb2-469b-91b0-c7f808d95c49/17cc3f20-7a80-42d5-908d-fb1e4bba5d82", "osd", "allow rw pool=manila_data namespace=fsvolumens_2e5c48d9-dcb2-469b-91b0-c7f808d95c49", "mon", "allow r"], "format": "json"} v 0) Feb 20 04:59:56 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-622295165", "caps": ["mds", "allow rw path=/volumes/_nogroup/2e5c48d9-dcb2-469b-91b0-c7f808d95c49/17cc3f20-7a80-42d5-908d-fb1e4bba5d82", "osd", "allow rw 
pool=manila_data namespace=fsvolumens_2e5c48d9-dcb2-469b-91b0-c7f808d95c49", "mon", "allow r"], "format": "json"} : dispatch Feb 20 04:59:56 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-622295165", "caps": ["mds", "allow rw path=/volumes/_nogroup/2e5c48d9-dcb2-469b-91b0-c7f808d95c49/17cc3f20-7a80-42d5-908d-fb1e4bba5d82", "osd", "allow rw pool=manila_data namespace=fsvolumens_2e5c48d9-dcb2-469b-91b0-c7f808d95c49", "mon", "allow r"], "format": "json"}]': finished Feb 20 04:59:56 localhost ceph-osd[31981]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2. Feb 20 04:59:56 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-622295165, format:json, prefix:fs subvolume authorize, sub_name:2e5c48d9-dcb2-469b-91b0-c7f808d95c49, tenant_id:f656f9df86ae4c53b02f471da5bd5ad7, vol_name:cephfs) < "" Feb 20 04:59:56 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c382485c-3d04-46af-88c4-8eb360a0c45a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 04:59:56 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c382485c-3d04-46af-88c4-8eb360a0c45a, vol_name:cephfs) < "" Feb 20 04:59:56 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 20 04:59:56 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' 
entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Feb 20 04:59:56 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Feb 20 04:59:56 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-622295165", "format": "json"} : dispatch Feb 20 04:59:56 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-622295165", "caps": ["mds", "allow rw path=/volumes/_nogroup/2e5c48d9-dcb2-469b-91b0-c7f808d95c49/17cc3f20-7a80-42d5-908d-fb1e4bba5d82", "osd", "allow rw pool=manila_data namespace=fsvolumens_2e5c48d9-dcb2-469b-91b0-c7f808d95c49", "mon", "allow r"], "format": "json"} : dispatch Feb 20 04:59:56 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-622295165", "caps": ["mds", "allow rw path=/volumes/_nogroup/2e5c48d9-dcb2-469b-91b0-c7f808d95c49/17cc3f20-7a80-42d5-908d-fb1e4bba5d82", "osd", "allow rw pool=manila_data namespace=fsvolumens_2e5c48d9-dcb2-469b-91b0-c7f808d95c49", "mon", "allow r"], "format": "json"}]': finished Feb 20 04:59:56 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c382485c-3d04-46af-88c4-8eb360a0c45a/.meta.tmp' Feb 20 04:59:56 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c382485c-3d04-46af-88c4-8eb360a0c45a/.meta.tmp' to config b'/volumes/_nogroup/c382485c-3d04-46af-88c4-8eb360a0c45a/.meta' Feb 20 04:59:56 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c382485c-3d04-46af-88c4-8eb360a0c45a, vol_name:cephfs) < "" Feb 20 04:59:56 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c382485c-3d04-46af-88c4-8eb360a0c45a", "format": "json"}]: dispatch Feb 20 04:59:56 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c382485c-3d04-46af-88c4-8eb360a0c45a, vol_name:cephfs) < "" Feb 20 04:59:56 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c382485c-3d04-46af-88c4-8eb360a0c45a, vol_name:cephfs) < "" Feb 20 04:59:56 localhost sshd[325273]: main: sshd: ssh-rsa algorithm is disabled Feb 20 04:59:57 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v476: 177 pgs: 177 active+clean; 280 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 54 KiB/s rd, 76 KiB/s wr, 80 op/s Feb 20 04:59:58 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e217 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 04:59:58 localhost openstack_network_exporter[243776]: ERROR 09:59:58 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 20 04:59:58 localhost openstack_network_exporter[243776]: Feb 20 04:59:58 localhost openstack_network_exporter[243776]: ERROR 09:59:58 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 20 04:59:58 localhost openstack_network_exporter[243776]: Feb 20 04:59:58 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", 
"sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice_bob", "tenant_id": "8e04bc360fa14db4a793bc5de7a0a299", "access_level": "rw", "format": "json"}]: dispatch Feb 20 04:59:58 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, tenant_id:8e04bc360fa14db4a793bc5de7a0a299, vol_name:cephfs) < "" Feb 20 04:59:58 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Feb 20 04:59:58 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 20 04:59:58 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: Creating meta for ID alice_bob with tenant 8e04bc360fa14db4a793bc5de7a0a299 Feb 20 04:59:58 localhost ovn_metadata_agent[161761]: 2026-02-20 09:59:58.948 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'c2:c2:14', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '46:d0:69:88:97:47'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 04:59:58 localhost ovn_metadata_agent[161761]: 2026-02-20 09:59:58.950 161766 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Feb 20 04:59:58 localhost nova_compute[280804]: 2026-02-20 09:59:58.949 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 04:59:59 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} v 0) Feb 20 04:59:59 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch Feb 20 04:59:59 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished Feb 20 04:59:59 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, tenant_id:8e04bc360fa14db4a793bc5de7a0a299, vol_name:cephfs) < "" Feb 
20 04:59:59 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "2e5c48d9-dcb2-469b-91b0-c7f808d95c49", "auth_id": "Joe", "format": "json"}]: dispatch Feb 20 04:59:59 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:2e5c48d9-dcb2-469b-91b0-c7f808d95c49, vol_name:cephfs) < "" Feb 20 04:59:59 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 20 04:59:59 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch Feb 20 04:59:59 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished Feb 20 04:59:59 localhost ceph-mgr[286565]: [volumes WARNING volumes.fs.operations.versions.subvolume_v1] deauthorized called for already-removed authID 'Joe' for subvolume '2e5c48d9-dcb2-469b-91b0-c7f808d95c49' Feb 20 04:59:59 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, 
prefix:fs subvolume deauthorize, sub_name:2e5c48d9-dcb2-469b-91b0-c7f808d95c49, vol_name:cephfs) < "" Feb 20 04:59:59 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "2e5c48d9-dcb2-469b-91b0-c7f808d95c49", "auth_id": "Joe", "format": "json"}]: dispatch Feb 20 04:59:59 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:2e5c48d9-dcb2-469b-91b0-c7f808d95c49, vol_name:cephfs) < "" Feb 20 04:59:59 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=Joe, client_metadata.root=/volumes/_nogroup/2e5c48d9-dcb2-469b-91b0-c7f808d95c49/17cc3f20-7a80-42d5-908d-fb1e4bba5d82 Feb 20 04:59:59 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 20 04:59:59 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:2e5c48d9-dcb2-469b-91b0-c7f808d95c49, vol_name:cephfs) < "" Feb 20 04:59:59 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "c382485c-3d04-46af-88c4-8eb360a0c45a", "snap_name": "25515e00-8c31-4609-a265-84e19f94da1a", "format": "json"}]: dispatch Feb 20 04:59:59 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:25515e00-8c31-4609-a265-84e19f94da1a, sub_name:c382485c-3d04-46af-88c4-8eb360a0c45a, vol_name:cephfs) < "" Feb 20 04:59:59 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, 
snap_name:25515e00-8c31-4609-a265-84e19f94da1a, sub_name:c382485c-3d04-46af-88c4-8eb360a0c45a, vol_name:cephfs) < "" Feb 20 04:59:59 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v477: 177 pgs: 177 active+clean; 280 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 54 KiB/s rd, 76 KiB/s wr, 81 op/s Feb 20 04:59:59 localhost nova_compute[280804]: 2026-02-20 09:59:59.959 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:00:00 localhost ceph-mon[292786]: log_channel(cluster) log [INF] : overall HEALTH_OK Feb 20 05:00:00 localhost ceph-mon[292786]: overall HEALTH_OK Feb 20 05:00:00 localhost nova_compute[280804]: 2026-02-20 10:00:00.598 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:00:00 localhost neutron_dhcp_agent[263741]: 2026-02-20 10:00:00.946 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T10:00:00Z, description=, device_id=06de8864-90cd-41d9-8a7d-9e83a5e36d4c, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=8fa14738-5172-470b-956f-5976fc473b82, ip_allocation=immediate, mac_address=fa:16:3e:58:d8:85, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T08:22:50Z, description=, dns_domain=, id=84efa4de-646c-469c-b16c-6ab7c3e948cf, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=91bce661d685472eb3e7cacab17bf52a, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, 
qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['38eee90c-2cdd-4118-ba55-5401f7e61e1e'], tags=[], tenant_id=91bce661d685472eb3e7cacab17bf52a, updated_at=2026-02-20T08:22:56Z, vlan_transparent=None, network_id=84efa4de-646c-469c-b16c-6ab7c3e948cf, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3278, status=DOWN, tags=[], tenant_id=, updated_at=2026-02-20T10:00:00Z on network 84efa4de-646c-469c-b16c-6ab7c3e948cf#033[00m Feb 20 05:00:00 localhost ovn_metadata_agent[161761]: 2026-02-20 10:00:00.953 161766 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0a83b6be-9fe2-42ef-8768-88847d97b165, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 20 05:00:01 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 3 addresses Feb 20 05:00:01 localhost podman[325291]: 2026-02-20 10:00:01.249152905 +0000 UTC m=+0.104161519 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0) Feb 20 05:00:01 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 05:00:01 localhost 
dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 05:00:01 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f56cf5ff-1882-43b3-8a1d-c3448d693134", "size": 4294967296, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 05:00:01 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:4294967296, sub_name:f56cf5ff-1882-43b3-8a1d-c3448d693134, vol_name:cephfs) < "" Feb 20 05:00:01 localhost neutron_dhcp_agent[263741]: 2026-02-20 10:00:01.548 263745 INFO neutron.agent.dhcp.agent [None req-f0903e8c-bc86-4a83-adfd-dc1c2911ebf2 - - - - - -] DHCP configuration for ports {'8fa14738-5172-470b-956f-5976fc473b82'} is completed#033[00m Feb 20 05:00:01 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f56cf5ff-1882-43b3-8a1d-c3448d693134/.meta.tmp' Feb 20 05:00:01 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f56cf5ff-1882-43b3-8a1d-c3448d693134/.meta.tmp' to config b'/volumes/_nogroup/f56cf5ff-1882-43b3-8a1d-c3448d693134/.meta' Feb 20 05:00:01 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:4294967296, sub_name:f56cf5ff-1882-43b3-8a1d-c3448d693134, vol_name:cephfs) < "" Feb 20 05:00:01 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f56cf5ff-1882-43b3-8a1d-c3448d693134", "format": "json"}]: dispatch Feb 20 05:00:01 localhost ceph-mgr[286565]: 
[volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f56cf5ff-1882-43b3-8a1d-c3448d693134, vol_name:cephfs) < "" Feb 20 05:00:01 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f56cf5ff-1882-43b3-8a1d-c3448d693134, vol_name:cephfs) < "" Feb 20 05:00:01 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v478: 177 pgs: 177 active+clean; 281 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 53 KiB/s rd, 115 KiB/s wr, 83 op/s Feb 20 05:00:01 localhost nova_compute[280804]: 2026-02-20 10:00:01.759 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:00:02 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice_bob", "format": "json"}]: dispatch Feb 20 05:00:02 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 05:00:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e. 
Feb 20 05:00:02 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Feb 20 05:00:02 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 20 05:00:02 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Feb 20 05:00:02 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 20 05:00:02 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Feb 20 05:00:02 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 05:00:02 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice_bob", "format": "json"}]: dispatch Feb 20 05:00:02 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 05:00:02 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, 
client_metadata.root=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb Feb 20 05:00:02 localhost podman[325313]: 2026-02-20 10:00:02.304207144 +0000 UTC m=+0.101415686 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 20 05:00:02 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 20 05:00:02 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, 
sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 05:00:02 localhost podman[325313]: 2026-02-20 10:00:02.318999017 +0000 UTC m=+0.116207519 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 20 05:00:02 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully. 
Feb 20 05:00:02 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 20 05:00:02 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 20 05:00:02 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Feb 20 05:00:02 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "2e5c48d9-dcb2-469b-91b0-c7f808d95c49", "auth_id": "tempest-cephx-id-622295165", "format": "json"}]: dispatch Feb 20 05:00:02 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-622295165, format:json, prefix:fs subvolume deauthorize, sub_name:2e5c48d9-dcb2-469b-91b0-c7f808d95c49, vol_name:cephfs) < "" Feb 20 05:00:02 localhost nova_compute[280804]: 2026-02-20 10:00:02.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 05:00:02 localhost nova_compute[280804]: 2026-02-20 10:00:02.511 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 20 05:00:02 localhost nova_compute[280804]: 2026-02-20 10:00:02.511 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Rebuilding the list of instances to heal 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 20 05:00:02 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-622295165", "format": "json"} v 0) Feb 20 05:00:02 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-622295165", "format": "json"} : dispatch Feb 20 05:00:02 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-622295165"} v 0) Feb 20 05:00:02 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-622295165"} : dispatch Feb 20 05:00:02 localhost nova_compute[280804]: 2026-02-20 10:00:02.575 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Acquiring lock "refresh_cache-25d7d566-3a21-4292-a6ad-96dca2d2ec79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Feb 20 05:00:02 localhost nova_compute[280804]: 2026-02-20 10:00:02.576 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Acquired lock "refresh_cache-25d7d566-3a21-4292-a6ad-96dca2d2ec79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Feb 20 05:00:02 localhost nova_compute[280804]: 2026-02-20 10:00:02.576 280808 DEBUG nova.network.neutron [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Feb 20 05:00:02 localhost 
nova_compute[280804]: 2026-02-20 10:00:02.577 280808 DEBUG nova.objects.instance [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 25d7d566-3a21-4292-a6ad-96dca2d2ec79 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Feb 20 05:00:02 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-622295165"}]': finished Feb 20 05:00:02 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-622295165, format:json, prefix:fs subvolume deauthorize, sub_name:2e5c48d9-dcb2-469b-91b0-c7f808d95c49, vol_name:cephfs) < "" Feb 20 05:00:02 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "2e5c48d9-dcb2-469b-91b0-c7f808d95c49", "auth_id": "tempest-cephx-id-622295165", "format": "json"}]: dispatch Feb 20 05:00:02 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-622295165, format:json, prefix:fs subvolume evict, sub_name:2e5c48d9-dcb2-469b-91b0-c7f808d95c49, vol_name:cephfs) < "" Feb 20 05:00:02 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-622295165, client_metadata.root=/volumes/_nogroup/2e5c48d9-dcb2-469b-91b0-c7f808d95c49/17cc3f20-7a80-42d5-908d-fb1e4bba5d82 Feb 20 05:00:02 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 20 05:00:02 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-622295165, format:json, prefix:fs subvolume evict, sub_name:2e5c48d9-dcb2-469b-91b0-c7f808d95c49, 
vol_name:cephfs) < "" Feb 20 05:00:03 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e217 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:00:03 localhost nova_compute[280804]: 2026-02-20 10:00:03.049 280808 DEBUG nova.network.neutron [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Updating instance_info_cache with network_info: [{"id": "3cc99a44-cc7e-4f81-bce6-8e63dc92e267", "address": "fa:16:3e:b4:f9:fa", "network": {"id": "d612a55c-b2aa-4665-bf00-3e649d762c79", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-589446165-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "9fdf2c09b98d48c0bc67cc1c7702a8f4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cc99a44-cc", "ovs_interfaceid": "3cc99a44-cc7e-4f81-bce6-8e63dc92e267", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Feb 20 05:00:03 localhost nova_compute[280804]: 2026-02-20 10:00:03.070 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Releasing lock "refresh_cache-25d7d566-3a21-4292-a6ad-96dca2d2ec79" lock 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Feb 20 05:00:03 localhost nova_compute[280804]: 2026-02-20 10:00:03.070 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Feb 20 05:00:03 localhost nova_compute[280804]: 2026-02-20 10:00:03.071 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 05:00:03 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-622295165", "format": "json"} : dispatch Feb 20 05:00:03 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-622295165"} : dispatch Feb 20 05:00:03 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-622295165"}]': finished Feb 20 05:00:03 localhost sshd[325339]: main: sshd: ssh-rsa algorithm is disabled Feb 20 05:00:03 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v479: 177 pgs: 177 active+clean; 281 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 30 KiB/s rd, 73 KiB/s wr, 48 op/s Feb 20 05:00:04 localhost nova_compute[280804]: 2026-02-20 10:00:04.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m 
Feb 20 05:00:04 localhost nova_compute[280804]: 2026-02-20 10:00:04.962 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:00:05 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "34a9640c-9d47-4ee3-b8f2-22b14cf6ac88", "size": 3221225472, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 05:00:05 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:3221225472, sub_name:34a9640c-9d47-4ee3-b8f2-22b14cf6ac88, vol_name:cephfs) < "" Feb 20 05:00:05 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/34a9640c-9d47-4ee3-b8f2-22b14cf6ac88/.meta.tmp' Feb 20 05:00:05 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/34a9640c-9d47-4ee3-b8f2-22b14cf6ac88/.meta.tmp' to config b'/volumes/_nogroup/34a9640c-9d47-4ee3-b8f2-22b14cf6ac88/.meta' Feb 20 05:00:05 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:3221225472, sub_name:34a9640c-9d47-4ee3-b8f2-22b14cf6ac88, vol_name:cephfs) < "" Feb 20 05:00:05 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "34a9640c-9d47-4ee3-b8f2-22b14cf6ac88", "format": "json"}]: dispatch Feb 20 05:00:05 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, 
sub_name:34a9640c-9d47-4ee3-b8f2-22b14cf6ac88, vol_name:cephfs) < "" Feb 20 05:00:05 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:34a9640c-9d47-4ee3-b8f2-22b14cf6ac88, vol_name:cephfs) < "" Feb 20 05:00:05 localhost nova_compute[280804]: 2026-02-20 10:00:05.475 280808 DEBUG oslo_concurrency.lockutils [None req-c4bbe14f-1e8c-462d-a639-f8d54428afbf 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Acquiring lock "25d7d566-3a21-4292-a6ad-96dca2d2ec79" by "nova.compute.manager.ComputeManager.reserve_block_device_name..do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 05:00:05 localhost nova_compute[280804]: 2026-02-20 10:00:05.476 280808 DEBUG oslo_concurrency.lockutils [None req-c4bbe14f-1e8c-462d-a639-f8d54428afbf 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Lock "25d7d566-3a21-4292-a6ad-96dca2d2ec79" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name..do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 05:00:05 localhost nova_compute[280804]: 2026-02-20 10:00:05.490 280808 DEBUG nova.objects.instance [None req-c4bbe14f-1e8c-462d-a639-f8d54428afbf 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Lazy-loading 'flavor' on Instance uuid 25d7d566-3a21-4292-a6ad-96dca2d2ec79 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Feb 20 05:00:05 localhost nova_compute[280804]: 2026-02-20 10:00:05.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 05:00:05 localhost 
nova_compute[280804]: 2026-02-20 10:00:05.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 05:00:05 localhost nova_compute[280804]: 2026-02-20 10:00:05.511 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 20 05:00:05 localhost nova_compute[280804]: 2026-02-20 10:00:05.534 280808 INFO nova.virt.libvirt.driver [None req-c4bbe14f-1e8c-462d-a639-f8d54428afbf 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Ignoring supplied device name: /dev/vdb#033[00m Feb 20 05:00:05 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice_bob", "tenant_id": "8e04bc360fa14db4a793bc5de7a0a299", "access_level": "r", "format": "json"}]: dispatch Feb 20 05:00:05 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, tenant_id:8e04bc360fa14db4a793bc5de7a0a299, vol_name:cephfs) < "" Feb 20 05:00:05 localhost nova_compute[280804]: 2026-02-20 10:00:05.551 280808 DEBUG oslo_concurrency.lockutils [None req-c4bbe14f-1e8c-462d-a639-f8d54428afbf 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Lock "25d7d566-3a21-4292-a6ad-96dca2d2ec79" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name..do_reserve" :: 
held 0.075s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 20 05:00:05 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Feb 20 05:00:05 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 20 05:00:05 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: Creating meta for ID alice_bob with tenant 8e04bc360fa14db4a793bc5de7a0a299
Feb 20 05:00:05 localhost nova_compute[280804]: 2026-02-20 10:00:05.602 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 05:00:05 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} v 0)
Feb 20 05:00:05 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch
Feb 20 05:00:05 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished
Feb 20 05:00:05 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v480: 177 pgs: 177 active+clean; 281 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 30 KiB/s rd, 104 KiB/s wr, 52 op/s
Feb 20 05:00:05 localhost nova_compute[280804]: 2026-02-20 10:00:05.672 280808 DEBUG oslo_concurrency.lockutils [None req-c4bbe14f-1e8c-462d-a639-f8d54428afbf 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Acquiring lock "25d7d566-3a21-4292-a6ad-96dca2d2ec79" by "nova.compute.manager.ComputeManager.attach_volume..do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 20 05:00:05 localhost nova_compute[280804]: 2026-02-20 10:00:05.673 280808 DEBUG oslo_concurrency.lockutils [None req-c4bbe14f-1e8c-462d-a639-f8d54428afbf 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Lock "25d7d566-3a21-4292-a6ad-96dca2d2ec79" acquired by "nova.compute.manager.ComputeManager.attach_volume..do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 20 05:00:05 localhost nova_compute[280804]: 2026-02-20 10:00:05.673 280808 INFO nova.compute.manager [None req-c4bbe14f-1e8c-462d-a639-f8d54428afbf 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Attaching volume 8eeb869d-f1f3-4733-a610-c567aaf12a0c to /dev/vdb#033[00m
Feb 20 05:00:05 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, tenant_id:8e04bc360fa14db4a793bc5de7a0a299, vol_name:cephfs) < ""
Feb 20 05:00:05 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "52918c2e-6ed5-45c2-9872-88b3bd77010f", "auth_id": "Joe", "format": "json"}]: dispatch
Feb 20 05:00:05 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:52918c2e-6ed5-45c2-9872-88b3bd77010f, vol_name:cephfs) < ""
Feb 20 05:00:05 localhost nova_compute[280804]: 2026-02-20 10:00:05.801 280808 DEBUG os_brick.utils [None req-c4bbe14f-1e8c-462d-a639-f8d54428afbf 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.106', 'multipath': True, 'enforce_multipath': True, 'host': 'np0005625202.localdomain', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Feb 20 05:00:05 localhost nova_compute[280804]: 2026-02-20 10:00:05.803 280808 INFO oslo.privsep.daemon [None req-c4bbe14f-1e8c-462d-a639-f8d54428afbf 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmpklmzn4zi/privsep.sock']#033[00m
Feb 20 05:00:05 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0)
Feb 20 05:00:05 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Feb 20 05:00:05 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.Joe"} v 0)
Feb 20 05:00:05 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.Joe"} : dispatch
Feb 20 05:00:05 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.Joe"}]': finished
Feb 20 05:00:05 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:52918c2e-6ed5-45c2-9872-88b3bd77010f, vol_name:cephfs) < ""
Feb 20 05:00:05 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "52918c2e-6ed5-45c2-9872-88b3bd77010f", "auth_id": "Joe", "format": "json"}]: dispatch
Feb 20 05:00:05 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:52918c2e-6ed5-45c2-9872-88b3bd77010f, vol_name:cephfs) < ""
Feb 20 05:00:05 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=Joe, client_metadata.root=/volumes/_nogroup/52918c2e-6ed5-45c2-9872-88b3bd77010f/6bdf5028-e9f4-4a0d-812d-96892d0e92d0
Feb 20 05:00:05 localhost ovn_metadata_agent[161761]: 2026-02-20 10:00:05.924 161766 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 20 05:00:05 localhost ovn_metadata_agent[161761]: 2026-02-20 10:00:05.924 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 20 05:00:05 localhost ovn_metadata_agent[161761]: 2026-02-20 10:00:05.925 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 20 05:00:05 localhost nova_compute[280804]: 2026-02-20 10:00:05.929 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 05:00:05 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb 20 05:00:05 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:52918c2e-6ed5-45c2-9872-88b3bd77010f, vol_name:cephfs) < ""
Feb 20 05:00:06 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c382485c-3d04-46af-88c4-8eb360a0c45a", "snap_name": "25515e00-8c31-4609-a265-84e19f94da1a_bb185c5e-7bce-4b96-b50a-2749adcb4cc3", "force": true, "format": "json"}]: dispatch
Feb 20 05:00:06 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:25515e00-8c31-4609-a265-84e19f94da1a_bb185c5e-7bce-4b96-b50a-2749adcb4cc3, sub_name:c382485c-3d04-46af-88c4-8eb360a0c45a, vol_name:cephfs) < ""
Feb 20 05:00:06 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c382485c-3d04-46af-88c4-8eb360a0c45a/.meta.tmp'
Feb 20 05:00:06 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c382485c-3d04-46af-88c4-8eb360a0c45a/.meta.tmp' to config b'/volumes/_nogroup/c382485c-3d04-46af-88c4-8eb360a0c45a/.meta'
Feb 20 05:00:06 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:25515e00-8c31-4609-a265-84e19f94da1a_bb185c5e-7bce-4b96-b50a-2749adcb4cc3, sub_name:c382485c-3d04-46af-88c4-8eb360a0c45a, vol_name:cephfs) < ""
Feb 20 05:00:06 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c382485c-3d04-46af-88c4-8eb360a0c45a", "snap_name": "25515e00-8c31-4609-a265-84e19f94da1a", "force": true, "format": "json"}]: dispatch
Feb 20 05:00:06 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:25515e00-8c31-4609-a265-84e19f94da1a, sub_name:c382485c-3d04-46af-88c4-8eb360a0c45a, vol_name:cephfs) < ""
Feb 20 05:00:06 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c382485c-3d04-46af-88c4-8eb360a0c45a/.meta.tmp'
Feb 20 05:00:06 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c382485c-3d04-46af-88c4-8eb360a0c45a/.meta.tmp' to config b'/volumes/_nogroup/c382485c-3d04-46af-88c4-8eb360a0c45a/.meta'
Feb 20 05:00:06 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:25515e00-8c31-4609-a265-84e19f94da1a, sub_name:c382485c-3d04-46af-88c4-8eb360a0c45a, vol_name:cephfs) < ""
Feb 20 05:00:06 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 20 05:00:06 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch
Feb 20 05:00:06 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished
Feb 20 05:00:06 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Feb 20 05:00:06 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.Joe"} : dispatch
Feb 20 05:00:06 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.Joe"}]': finished
Feb 20 05:00:06 localhost nova_compute[280804]: 2026-02-20 10:00:06.508 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 20 05:00:06 localhost nova_compute[280804]: 2026-02-20 10:00:06.509 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 20 05:00:06 localhost nova_compute[280804]: 2026-02-20 10:00:06.556 280808 INFO oslo.privsep.daemon [None req-c4bbe14f-1e8c-462d-a639-f8d54428afbf 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Feb 20 05:00:06 localhost nova_compute[280804]: 2026-02-20 10:00:06.442 325346 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Feb 20 05:00:06 localhost nova_compute[280804]: 2026-02-20 10:00:06.447 325346 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Feb 20 05:00:06 localhost nova_compute[280804]: 2026-02-20 10:00:06.451 325346 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Feb 20 05:00:06 localhost nova_compute[280804]: 2026-02-20 10:00:06.451 325346 INFO oslo.privsep.daemon [-] privsep daemon running as pid 325346#033[00m
Feb 20 05:00:06 localhost nova_compute[280804]: 2026-02-20 10:00:06.560 325346 DEBUG oslo.privsep.daemon [-] privsep: reply[84aae0e0-35d7-46b3-943a-4c5c0646b48e]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 20 05:00:06 localhost nova_compute[280804]: 2026-02-20 10:00:06.655 325346 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 20 05:00:06 localhost nova_compute[280804]: 2026-02-20 10:00:06.669 325346 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 20 05:00:06 localhost nova_compute[280804]: 2026-02-20 10:00:06.670 325346 DEBUG oslo.privsep.daemon [-] privsep: reply[e1914c90-33e9-4976-a7eb-d656047ed0d5]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 20 05:00:06 localhost nova_compute[280804]: 2026-02-20 10:00:06.671 325346 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 20 05:00:06 localhost nova_compute[280804]: 2026-02-20 10:00:06.679 325346 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 20 05:00:06 localhost nova_compute[280804]: 2026-02-20 10:00:06.679 325346 DEBUG oslo.privsep.daemon [-] privsep: reply[d52eaca4-ed7a-4859-a2a5-65453c20224e]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d07221e4f0\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 20 05:00:06 localhost nova_compute[280804]: 2026-02-20 10:00:06.682 325346 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 20 05:00:06 localhost nova_compute[280804]: 2026-02-20 10:00:06.695 325346 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 20 05:00:06 localhost nova_compute[280804]: 2026-02-20 10:00:06.695 325346 DEBUG oslo.privsep.daemon [-] privsep: reply[710db34a-38ae-4239-8561-fc5b80b7f3b1]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 20 05:00:06 localhost nova_compute[280804]: 2026-02-20 10:00:06.697 325346 DEBUG oslo.privsep.daemon [-] privsep: reply[9349a31d-804e-41df-a8dd-99fd4380ad11]: (4, '61530aa3-6295-40fa-9f19-edfd227b2bca') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 20 05:00:06 localhost nova_compute[280804]: 2026-02-20 10:00:06.698 280808 DEBUG oslo_concurrency.processutils [None req-c4bbe14f-1e8c-462d-a639-f8d54428afbf 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 20 05:00:06 localhost nova_compute[280804]: 2026-02-20 10:00:06.724 280808 DEBUG oslo_concurrency.processutils [None req-c4bbe14f-1e8c-462d-a639-f8d54428afbf 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] CMD "nvme version" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 20 05:00:06 localhost nova_compute[280804]: 2026-02-20 10:00:06.728 280808 DEBUG os_brick.initiator.connectors.lightos [None req-c4bbe14f-1e8c-462d-a639-f8d54428afbf 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Feb 20 05:00:06 localhost nova_compute[280804]: 2026-02-20 10:00:06.729 280808 DEBUG os_brick.initiator.connectors.lightos [None req-c4bbe14f-1e8c-462d-a639-f8d54428afbf 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Feb 20 05:00:06 localhost nova_compute[280804]: 2026-02-20 10:00:06.729 280808 DEBUG os_brick.initiator.connectors.lightos [None req-c4bbe14f-1e8c-462d-a639-f8d54428afbf 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:61530aa3-6295-40fa-9f19-edfd227b2bca dsc: get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Feb 20 05:00:06 localhost nova_compute[280804]: 2026-02-20 10:00:06.730 280808 DEBUG os_brick.utils [None req-c4bbe14f-1e8c-462d-a639-f8d54428afbf 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] <== get_connector_properties: return (927ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.106', 'host': 'np0005625202.localdomain', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d07221e4f0', 'do_local_attach': False, 'nvme_hostid': '61530aa3-6295-40fa-9f19-edfd227b2bca', 'system uuid': '61530aa3-6295-40fa-9f19-edfd227b2bca', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:61530aa3-6295-40fa-9f19-edfd227b2bca', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Feb 20 05:00:06 localhost nova_compute[280804]: 2026-02-20 10:00:06.731 280808 DEBUG nova.virt.block_device [None req-c4bbe14f-1e8c-462d-a639-f8d54428afbf 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Updating existing volume attachment record: de3d0ef3-bd59-4a88-ad53-923db98c831b _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Feb 20 05:00:07 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 05:00:07 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1240294107' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 05:00:07 localhost nova_compute[280804]: 2026-02-20 10:00:07.441 280808 DEBUG oslo_concurrency.lockutils [None req-c4bbe14f-1e8c-462d-a639-f8d54428afbf 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Acquiring lock "cache_volume_driver" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.._cache_volume_driver" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 20 05:00:07 localhost nova_compute[280804]: 2026-02-20 10:00:07.442 280808 DEBUG oslo_concurrency.lockutils [None req-c4bbe14f-1e8c-462d-a639-f8d54428afbf 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Lock "cache_volume_driver" acquired by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.._cache_volume_driver" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 20 05:00:07 localhost nova_compute[280804]: 2026-02-20 10:00:07.443 280808 DEBUG oslo_concurrency.lockutils [None req-c4bbe14f-1e8c-462d-a639-f8d54428afbf 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Lock "cache_volume_driver" "released" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.._cache_volume_driver" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 20 05:00:07 localhost nova_compute[280804]: 2026-02-20 10:00:07.452 280808 DEBUG nova.objects.instance [None req-c4bbe14f-1e8c-462d-a639-f8d54428afbf 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Lazy-loading 'flavor' on Instance uuid 25d7d566-3a21-4292-a6ad-96dca2d2ec79 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 20 05:00:07 localhost nova_compute[280804]: 2026-02-20 10:00:07.475 280808 DEBUG nova.virt.libvirt.driver [None req-c4bbe14f-1e8c-462d-a639-f8d54428afbf 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Attempting to attach volume 8eeb869d-f1f3-4733-a610-c567aaf12a0c with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Feb 20 05:00:07 localhost nova_compute[280804]: 2026-02-20 10:00:07.479 280808 DEBUG nova.virt.libvirt.guest [None req-c4bbe14f-1e8c-462d-a639-f8d54428afbf 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] attach device xml:
Feb 20 05:00:07 localhost nova_compute[280804]:
Feb 20 05:00:07 localhost nova_compute[280804]:
Feb 20 05:00:07 localhost nova_compute[280804]:
Feb 20 05:00:07 localhost nova_compute[280804]:
Feb 20 05:00:07 localhost nova_compute[280804]:
Feb 20 05:00:07 localhost nova_compute[280804]:
Feb 20 05:00:07 localhost nova_compute[280804]:
Feb 20 05:00:07 localhost nova_compute[280804]:
Feb 20 05:00:07 localhost nova_compute[280804]:
Feb 20 05:00:07 localhost nova_compute[280804]:
Feb 20 05:00:07 localhost nova_compute[280804]: 8eeb869d-f1f3-4733-a610-c567aaf12a0c
Feb 20 05:00:07 localhost nova_compute[280804]:
Feb 20 05:00:07 localhost nova_compute[280804]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Feb 20 05:00:07 localhost nova_compute[280804]: 2026-02-20 10:00:07.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 20 05:00:07 localhost nova_compute[280804]: 2026-02-20 10:00:07.539 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 20 05:00:07 localhost nova_compute[280804]: 2026-02-20 10:00:07.540 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 20 05:00:07 localhost nova_compute[280804]: 2026-02-20 10:00:07.540 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 20 05:00:07 localhost nova_compute[280804]: 2026-02-20 10:00:07.541 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Auditing locally available compute resources for np0005625202.localdomain (node: np0005625202.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 20 05:00:07 localhost nova_compute[280804]: 2026-02-20 10:00:07.541 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 20 05:00:07 localhost nova_compute[280804]: 2026-02-20 10:00:07.620 280808 DEBUG nova.virt.libvirt.driver [None req-c4bbe14f-1e8c-462d-a639-f8d54428afbf 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 20 05:00:07 localhost nova_compute[280804]: 2026-02-20 10:00:07.622 280808 DEBUG nova.virt.libvirt.driver [None req-c4bbe14f-1e8c-462d-a639-f8d54428afbf 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 20 05:00:07 localhost nova_compute[280804]: 2026-02-20 10:00:07.622 280808 DEBUG nova.virt.libvirt.driver [None req-c4bbe14f-1e8c-462d-a639-f8d54428afbf 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 20 05:00:07 localhost nova_compute[280804]: 2026-02-20 10:00:07.622 280808 DEBUG nova.virt.libvirt.driver [None req-c4bbe14f-1e8c-462d-a639-f8d54428afbf 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] No VIF found with MAC fa:16:3e:b4:f9:fa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 20 05:00:07 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v481: 177 pgs: 177 active+clean; 281 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 70 KiB/s wr, 9 op/s
Feb 20 05:00:07 localhost nova_compute[280804]: 2026-02-20 10:00:07.732 280808 DEBUG oslo_concurrency.lockutils [None req-c4bbe14f-1e8c-462d-a639-f8d54428afbf 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Lock "25d7d566-3a21-4292-a6ad-96dca2d2ec79" "released" by "nova.compute.manager.ComputeManager.attach_volume..do_attach_volume" :: held 2.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 20 05:00:07 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 05:00:07 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/3814731526' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 05:00:07 localhost nova_compute[280804]: 2026-02-20 10:00:07.985 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 20 05:00:08 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e217 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 05:00:08 localhost nova_compute[280804]: 2026-02-20 10:00:08.059 280808 DEBUG nova.virt.libvirt.driver [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 20 05:00:08 localhost nova_compute[280804]: 2026-02-20 10:00:08.059 280808 DEBUG nova.virt.libvirt.driver [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 20 05:00:08 localhost nova_compute[280804]: 2026-02-20 10:00:08.060 280808 DEBUG nova.virt.libvirt.driver [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 20 05:00:08 localhost nova_compute[280804]: 2026-02-20 10:00:08.223 280808 WARNING nova.virt.libvirt.driver [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 20 05:00:08 localhost nova_compute[280804]: 2026-02-20 10:00:08.224 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Hypervisor/Node resource view: name=np0005625202.localdomain free_ram=11173MB free_disk=41.70026779174805GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 20 05:00:08 localhost nova_compute[280804]: 2026-02-20 10:00:08.224 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 20 05:00:08 localhost nova_compute[280804]: 2026-02-20 10:00:08.224 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 20 05:00:08 localhost nova_compute[280804]: 2026-02-20 10:00:08.290 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Instance 25d7d566-3a21-4292-a6ad-96dca2d2ec79 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 20 05:00:08 localhost nova_compute[280804]: 2026-02-20 10:00:08.290 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 20 05:00:08 localhost nova_compute[280804]: 2026-02-20 10:00:08.290 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Final resource view: name=np0005625202.localdomain phys_ram=15738MB used_ram=640MB phys_disk=41GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 20 05:00:08 localhost nova_compute[280804]: 2026-02-20 10:00:08.348 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 20 05:00:08 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 05:00:08 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/373438564' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 05:00:08 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f56cf5ff-1882-43b3-8a1d-c3448d693134", "format": "json"}]: dispatch
Feb 20 05:00:08 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f56cf5ff-1882-43b3-8a1d-c3448d693134, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 20 05:00:08 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f56cf5ff-1882-43b3-8a1d-c3448d693134, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 20 05:00:08 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f56cf5ff-1882-43b3-8a1d-c3448d693134' of type subvolume
Feb 20 05:00:08 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:00:08.586+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f56cf5ff-1882-43b3-8a1d-c3448d693134' of type subvolume
Feb 20 05:00:08 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f56cf5ff-1882-43b3-8a1d-c3448d693134", "force": true, "format": "json"}]: dispatch
Feb 20 05:00:08 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f56cf5ff-1882-43b3-8a1d-c3448d693134, vol_name:cephfs) < ""
Feb 20 05:00:08 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f56cf5ff-1882-43b3-8a1d-c3448d693134'' moved to trashcan
Feb 20 05:00:08 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 20 05:00:08 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f56cf5ff-1882-43b3-8a1d-c3448d693134, vol_name:cephfs) < ""
Feb 20 05:00:08 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e217 do_prune osdmap full prune enabled
Feb 20 05:00:08 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e218 e218: 6 total, 6 up, 6 in
Feb 20 05:00:08 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e218: 6 total, 6 up, 6 in
Feb 20 05:00:08 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 05:00:08 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/3827509993' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 05:00:08 localhost nova_compute[280804]: 2026-02-20 10:00:08.838 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 20 05:00:08 localhost nova_compute[280804]: 2026-02-20 10:00:08.846 280808 DEBUG nova.compute.provider_tree [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 20 05:00:08 localhost nova_compute[280804]: 2026-02-20 10:00:08.862 280808 DEBUG nova.scheduler.client.report [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Inventory has not changed for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 based on inventory data:
{'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 20 05:00:08 localhost nova_compute[280804]: 2026-02-20 10:00:08.880 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Compute_service record updated for np0005625202.localdomain:np0005625202.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 20 05:00:08 localhost nova_compute[280804]: 2026-02-20 10:00:08.881 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 05:00:08 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice_bob", "format": "json"}]: dispatch Feb 20 05:00:08 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 05:00:08 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Feb 20 05:00:08 localhost ceph-mon[292786]: log_channel(audit) log [INF] : 
from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 20 05:00:08 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Feb 20 05:00:08 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 20 05:00:08 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Feb 20 05:00:09 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 05:00:09 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice_bob", "format": "json"}]: dispatch Feb 20 05:00:09 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 05:00:09 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb Feb 20 05:00:09 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 20 05:00:09 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 05:00:09 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "7f87993f-62fd-4706-b657-9586f12f2a62", "auth_id": "admin", "tenant_id": "8fac2513a3ab4162a13f560c6301f671", "access_level": "rw", "format": "json"}]: dispatch Feb 20 05:00:09 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:admin, format:json, prefix:fs subvolume authorize, sub_name:7f87993f-62fd-4706-b657-9586f12f2a62, tenant_id:8fac2513a3ab4162a13f560c6301f671, vol_name:cephfs) < "" Feb 20 05:00:09 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin", "format": "json"} v 0) Feb 20 05:00:09 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin", "format": "json"} : dispatch Feb 20 05:00:09 localhost ceph-mgr[286565]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: admin exists and not created by mgr plugin. Not allowed to modify Feb 20 05:00:09 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:admin, format:json, prefix:fs subvolume authorize, sub_name:7f87993f-62fd-4706-b657-9586f12f2a62, tenant_id:8fac2513a3ab4162a13f560c6301f671, vol_name:cephfs) < "" Feb 20 05:00:09 localhost ceph-mgr[286565]: mgr.server reply reply (1) Operation not permitted auth ID: admin exists and not created by mgr plugin. 
Not allowed to modify Feb 20 05:00:09 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:00:09.127+0000 7f74524d4640 -1 mgr.server reply reply (1) Operation not permitted auth ID: admin exists and not created by mgr plugin. Not allowed to modify Feb 20 05:00:09 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c382485c-3d04-46af-88c4-8eb360a0c45a", "format": "json"}]: dispatch Feb 20 05:00:09 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c382485c-3d04-46af-88c4-8eb360a0c45a, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:00:09 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c382485c-3d04-46af-88c4-8eb360a0c45a, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:00:09 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:00:09.272+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c382485c-3d04-46af-88c4-8eb360a0c45a' of type subvolume Feb 20 05:00:09 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c382485c-3d04-46af-88c4-8eb360a0c45a' of type subvolume Feb 20 05:00:09 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c382485c-3d04-46af-88c4-8eb360a0c45a", "force": true, "format": "json"}]: dispatch Feb 20 05:00:09 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c382485c-3d04-46af-88c4-8eb360a0c45a, vol_name:cephfs) < "" Feb 
20 05:00:09 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c382485c-3d04-46af-88c4-8eb360a0c45a'' moved to trashcan Feb 20 05:00:09 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 05:00:09 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c382485c-3d04-46af-88c4-8eb360a0c45a, vol_name:cephfs) < "" Feb 20 05:00:09 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v483: 177 pgs: 177 active+clean; 281 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 85 KiB/s wr, 12 op/s Feb 20 05:00:09 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e218 do_prune osdmap full prune enabled Feb 20 05:00:09 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e219 e219: 6 total, 6 up, 6 in Feb 20 05:00:09 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e219: 6 total, 6 up, 6 in Feb 20 05:00:09 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 20 05:00:09 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 20 05:00:09 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Feb 20 05:00:09 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin", "format": "json"} : dispatch Feb 20 05:00:10 localhost nova_compute[280804]: 2026-02-20 10:00:10.005 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:00:10 localhost nova_compute[280804]: 2026-02-20 10:00:10.606 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:00:10 localhost nova_compute[280804]: 2026-02-20 10:00:10.882 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 05:00:10 localhost nova_compute[280804]: 2026-02-20 10:00:10.883 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 05:00:10 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Feb 20 05:00:10 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1439580111' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Feb 20 05:00:11 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e219 do_prune osdmap full prune enabled Feb 20 05:00:11 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e220 e220: 6 total, 6 up, 6 in Feb 20 05:00:11 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e220: 6 total, 6 up, 6 in Feb 20 05:00:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c. Feb 20 05:00:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217. Feb 20 05:00:11 localhost systemd[1]: tmp-crun.BDlqH8.mount: Deactivated successfully. 
Feb 20 05:00:11 localhost podman[325421]: 2026-02-20 10:00:11.47366936 +0000 UTC m=+0.108227786 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, vcs-type=git, distribution-scope=public, release=1770267347, org.opencontainers.image.created=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., version=9.7, build-date=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9/ubi-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.33.7, 
io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, maintainer=Red Hat, Inc., config_id=openstack_network_exporter) Feb 20 05:00:11 localhost podman[325421]: 2026-02-20 10:00:11.514818734 +0000 UTC m=+0.149377180 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': 
['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, release=1770267347, io.openshift.tags=minimal rhel9, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, config_id=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=ubi9-minimal-container, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-05T04:57:10Z, version=9.7, vcs-type=git, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9/ubi-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., container_name=openstack_network_exporter) Feb 20 05:00:11 localhost systemd[1]: tmp-crun.Lu7Sxu.mount: Deactivated successfully. Feb 20 05:00:11 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. Feb 20 05:00:11 localhost podman[325422]: 2026-02-20 10:00:11.534736424 +0000 UTC m=+0.165573312 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 
'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team) Feb 20 05:00:11 localhost podman[325422]: 2026-02-20 10:00:11.544649797 +0000 UTC m=+0.175486635 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 
'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute) Feb 20 05:00:11 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully. 
Feb 20 05:00:11 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v486: 177 pgs: 177 active+clean; 282 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 132 KiB/s wr, 51 op/s Feb 20 05:00:11 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "34a9640c-9d47-4ee3-b8f2-22b14cf6ac88", "format": "json"}]: dispatch Feb 20 05:00:11 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:34a9640c-9d47-4ee3-b8f2-22b14cf6ac88, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:00:12 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:34a9640c-9d47-4ee3-b8f2-22b14cf6ac88, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:00:12 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:00:12.017+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '34a9640c-9d47-4ee3-b8f2-22b14cf6ac88' of type subvolume Feb 20 05:00:12 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '34a9640c-9d47-4ee3-b8f2-22b14cf6ac88' of type subvolume Feb 20 05:00:12 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "34a9640c-9d47-4ee3-b8f2-22b14cf6ac88", "force": true, "format": "json"}]: dispatch Feb 20 05:00:12 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:34a9640c-9d47-4ee3-b8f2-22b14cf6ac88, vol_name:cephfs) < "" Feb 20 05:00:12 localhost ceph-mgr[286565]: [volumes INFO 
volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/34a9640c-9d47-4ee3-b8f2-22b14cf6ac88'' moved to trashcan Feb 20 05:00:12 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 05:00:12 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:34a9640c-9d47-4ee3-b8f2-22b14cf6ac88, vol_name:cephfs) < "" Feb 20 05:00:12 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b9bf80c2-3fd5-47b7-9223-4a556ce7ef61", "format": "json"}]: dispatch Feb 20 05:00:12 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b9bf80c2-3fd5-47b7-9223-4a556ce7ef61, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:00:12 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b9bf80c2-3fd5-47b7-9223-4a556ce7ef61, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:00:12 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:00:12.738+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b9bf80c2-3fd5-47b7-9223-4a556ce7ef61' of type subvolume Feb 20 05:00:12 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b9bf80c2-3fd5-47b7-9223-4a556ce7ef61' of type subvolume Feb 20 05:00:12 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b9bf80c2-3fd5-47b7-9223-4a556ce7ef61", "force": true, "format": "json"}]: dispatch Feb 20 05:00:12 localhost ceph-mgr[286565]: 
[volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b9bf80c2-3fd5-47b7-9223-4a556ce7ef61, vol_name:cephfs) < "" Feb 20 05:00:12 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Feb 20 05:00:12 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3101837055' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Feb 20 05:00:12 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/b9bf80c2-3fd5-47b7-9223-4a556ce7ef61'' moved to trashcan Feb 20 05:00:12 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 05:00:12 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b9bf80c2-3fd5-47b7-9223-4a556ce7ef61, vol_name:cephfs) < "" Feb 20 05:00:13 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:00:13 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "7f87993f-62fd-4706-b657-9586f12f2a62", "auth_id": "david", "tenant_id": "8fac2513a3ab4162a13f560c6301f671", "access_level": "rw", "format": "json"}]: dispatch Feb 20 05:00:13 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:7f87993f-62fd-4706-b657-9586f12f2a62, tenant_id:8fac2513a3ab4162a13f560c6301f671, vol_name:cephfs) < "" Feb 20 05:00:13 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command 
mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0) Feb 20 05:00:13 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Feb 20 05:00:13 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: Creating meta for ID david with tenant 8fac2513a3ab4162a13f560c6301f671 Feb 20 05:00:13 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/7f87993f-62fd-4706-b657-9586f12f2a62/5d470986-550e-47dc-98aa-07cb4fcc93dc", "osd", "allow rw pool=manila_data namespace=fsvolumens_7f87993f-62fd-4706-b657-9586f12f2a62", "mon", "allow r"], "format": "json"} v 0) Feb 20 05:00:13 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/7f87993f-62fd-4706-b657-9586f12f2a62/5d470986-550e-47dc-98aa-07cb4fcc93dc", "osd", "allow rw pool=manila_data namespace=fsvolumens_7f87993f-62fd-4706-b657-9586f12f2a62", "mon", "allow r"], "format": "json"} : dispatch Feb 20 05:00:13 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/7f87993f-62fd-4706-b657-9586f12f2a62/5d470986-550e-47dc-98aa-07cb4fcc93dc", "osd", "allow rw pool=manila_data namespace=fsvolumens_7f87993f-62fd-4706-b657-9586f12f2a62", "mon", "allow r"], "format": "json"}]': finished Feb 20 05:00:13 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:7f87993f-62fd-4706-b657-9586f12f2a62, tenant_id:8fac2513a3ab4162a13f560c6301f671, vol_name:cephfs) < "" Feb 20 05:00:13 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice bob", "tenant_id": "8e04bc360fa14db4a793bc5de7a0a299", "access_level": "rw", "format": "json"}]: dispatch Feb 20 05:00:13 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, tenant_id:8e04bc360fa14db4a793bc5de7a0a299, vol_name:cephfs) < "" Feb 20 05:00:13 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Feb 20 05:00:13 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 20 05:00:13 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: Creating meta for ID alice bob with tenant 8e04bc360fa14db4a793bc5de7a0a299 Feb 20 05:00:13 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v487: 177 pgs: 177 active+clean; 282 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 132 KiB/s wr, 51 op/s Feb 20 05:00:13 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw 
pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} v 0) Feb 20 05:00:13 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch Feb 20 05:00:13 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished Feb 20 05:00:14 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, tenant_id:8e04bc360fa14db4a793bc5de7a0a299, vol_name:cephfs) < "" Feb 20 05:00:14 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e220 do_prune osdmap full prune enabled Feb 20 05:00:14 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e221 e221: 6 total, 6 up, 6 in Feb 20 05:00:14 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e221: 6 total, 6 up, 6 in Feb 20 05:00:14 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Feb 20 05:00:14 localhost ceph-mon[292786]: from='mgr.44375 
172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/7f87993f-62fd-4706-b657-9586f12f2a62/5d470986-550e-47dc-98aa-07cb4fcc93dc", "osd", "allow rw pool=manila_data namespace=fsvolumens_7f87993f-62fd-4706-b657-9586f12f2a62", "mon", "allow r"], "format": "json"} : dispatch Feb 20 05:00:14 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/7f87993f-62fd-4706-b657-9586f12f2a62/5d470986-550e-47dc-98aa-07cb4fcc93dc", "osd", "allow rw pool=manila_data namespace=fsvolumens_7f87993f-62fd-4706-b657-9586f12f2a62", "mon", "allow r"], "format": "json"}]': finished Feb 20 05:00:14 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 20 05:00:14 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch Feb 20 05:00:14 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished Feb 20 05:00:15 localhost 
nova_compute[280804]: 2026-02-20 10:00:15.010 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:00:15 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e221 do_prune osdmap full prune enabled Feb 20 05:00:15 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e222 e222: 6 total, 6 up, 6 in Feb 20 05:00:15 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e222: 6 total, 6 up, 6 in Feb 20 05:00:15 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice bob", "format": "json"}]: dispatch Feb 20 05:00:15 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 05:00:15 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Feb 20 05:00:15 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 20 05:00:15 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Feb 20 05:00:15 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 20 05:00:15 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Feb 20 05:00:15 localhost nova_compute[280804]: 2026-02-20 10:00:15.609 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:00:15 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v490: 177 pgs: 177 active+clean; 546 MiB data, 1.9 GiB used, 40 GiB / 42 GiB avail; 152 KiB/s rd, 45 MiB/s wr, 250 op/s Feb 20 05:00:15 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 05:00:15 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice bob", "format": "json"}]: dispatch Feb 20 05:00:15 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 05:00:15 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb Feb 20 05:00:15 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 20 05:00:15 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 05:00:15 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' 
cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7f44da3d-55b0-4b00-94b4-a7254a3d21a8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 05:00:15 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7f44da3d-55b0-4b00-94b4-a7254a3d21a8, vol_name:cephfs) < "" Feb 20 05:00:16 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7f44da3d-55b0-4b00-94b4-a7254a3d21a8/.meta.tmp' Feb 20 05:00:16 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7f44da3d-55b0-4b00-94b4-a7254a3d21a8/.meta.tmp' to config b'/volumes/_nogroup/7f44da3d-55b0-4b00-94b4-a7254a3d21a8/.meta' Feb 20 05:00:16 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7f44da3d-55b0-4b00-94b4-a7254a3d21a8, vol_name:cephfs) < "" Feb 20 05:00:16 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7f44da3d-55b0-4b00-94b4-a7254a3d21a8", "format": "json"}]: dispatch Feb 20 05:00:16 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7f44da3d-55b0-4b00-94b4-a7254a3d21a8, vol_name:cephfs) < "" Feb 20 05:00:16 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7f44da3d-55b0-4b00-94b4-a7254a3d21a8, vol_name:cephfs) < "" Feb 20 05:00:16 localhost podman[241347]: time="2026-02-20T10:00:16Z" level=info 
msg="List containers: received `last` parameter - overwriting `limit`" Feb 20 05:00:16 localhost podman[241347]: @ - - [20/Feb/2026:10:00:16 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 158903 "" "Go-http-client/1.1" Feb 20 05:00:16 localhost podman[241347]: @ - - [20/Feb/2026:10:00:16 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19293 "" "Go-http-client/1.1" Feb 20 05:00:16 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 20 05:00:16 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 20 05:00:16 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Feb 20 05:00:17 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e222 do_prune osdmap full prune enabled Feb 20 05:00:17 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e223 e223: 6 total, 6 up, 6 in Feb 20 05:00:17 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v492: 177 pgs: 177 active+clean; 546 MiB data, 1.9 GiB used, 40 GiB / 42 GiB avail; 125 KiB/s rd, 44 MiB/s wr, 198 op/s Feb 20 05:00:17 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e223: 6 total, 6 up, 6 in Feb 20 05:00:18 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:00:18 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e223 do_prune osdmap full prune enabled Feb 20 05:00:18 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e224 e224: 6 
total, 6 up, 6 in Feb 20 05:00:18 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e224: 6 total, 6 up, 6 in Feb 20 05:00:18 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice bob", "tenant_id": "8e04bc360fa14db4a793bc5de7a0a299", "access_level": "r", "format": "json"}]: dispatch Feb 20 05:00:18 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, tenant_id:8e04bc360fa14db4a793bc5de7a0a299, vol_name:cephfs) < "" Feb 20 05:00:18 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Feb 20 05:00:18 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 20 05:00:18 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: Creating meta for ID alice bob with tenant 8e04bc360fa14db4a793bc5de7a0a299 Feb 20 05:00:18 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 20 05:00:18 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], 
"format": "json"} v 0) Feb 20 05:00:18 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch Feb 20 05:00:18 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished Feb 20 05:00:18 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, tenant_id:8e04bc360fa14db4a793bc5de7a0a299, vol_name:cephfs) < "" Feb 20 05:00:19 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a251c3bc-737c-4438-9523-36041c19a61e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 05:00:19 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a251c3bc-737c-4438-9523-36041c19a61e, vol_name:cephfs) < "" Feb 20 05:00:19 localhost ceph-mgr[286565]: [volumes INFO 
volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a251c3bc-737c-4438-9523-36041c19a61e/.meta.tmp' Feb 20 05:00:19 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a251c3bc-737c-4438-9523-36041c19a61e/.meta.tmp' to config b'/volumes/_nogroup/a251c3bc-737c-4438-9523-36041c19a61e/.meta' Feb 20 05:00:19 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a251c3bc-737c-4438-9523-36041c19a61e, vol_name:cephfs) < "" Feb 20 05:00:19 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a251c3bc-737c-4438-9523-36041c19a61e", "format": "json"}]: dispatch Feb 20 05:00:19 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a251c3bc-737c-4438-9523-36041c19a61e, vol_name:cephfs) < "" Feb 20 05:00:19 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a251c3bc-737c-4438-9523-36041c19a61e, vol_name:cephfs) < "" Feb 20 05:00:19 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "7f44da3d-55b0-4b00-94b4-a7254a3d21a8", "auth_id": "david", "tenant_id": "f656f9df86ae4c53b02f471da5bd5ad7", "access_level": "rw", "format": "json"}]: dispatch Feb 20 05:00:19 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:7f44da3d-55b0-4b00-94b4-a7254a3d21a8, 
tenant_id:f656f9df86ae4c53b02f471da5bd5ad7, vol_name:cephfs) < "" Feb 20 05:00:19 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0) Feb 20 05:00:19 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Feb 20 05:00:19 localhost ceph-mgr[286565]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: david is already in use Feb 20 05:00:19 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:7f44da3d-55b0-4b00-94b4-a7254a3d21a8, tenant_id:f656f9df86ae4c53b02f471da5bd5ad7, vol_name:cephfs) < "" Feb 20 05:00:19 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:00:19.477+0000 7f74524d4640 -1 mgr.server reply reply (1) Operation not permitted auth ID: david is already in use Feb 20 05:00:19 localhost ceph-mgr[286565]: mgr.server reply reply (1) Operation not permitted auth ID: david is already in use Feb 20 05:00:19 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v494: 177 pgs: 177 active+clean; 610 MiB data, 2.1 GiB used, 40 GiB / 42 GiB avail; 135 KiB/s rd, 59 MiB/s wr, 218 op/s Feb 20 05:00:19 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch Feb 20 05:00:19 localhost ceph-mon[292786]: from='mgr.44375 
172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished Feb 20 05:00:19 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Feb 20 05:00:20 localhost nova_compute[280804]: 2026-02-20 10:00:20.014 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:00:20 localhost sshd[325460]: main: sshd: ssh-rsa algorithm is disabled Feb 20 05:00:20 localhost nova_compute[280804]: 2026-02-20 10:00:20.613 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:00:20 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e224 do_prune osdmap full prune enabled Feb 20 05:00:20 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e225 e225: 6 total, 6 up, 6 in Feb 20 05:00:20 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e225: 6 total, 6 up, 6 in Feb 20 05:00:21 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v496: 177 pgs: 177 active+clean; 935 MiB data, 3.1 GiB used, 39 GiB / 42 GiB avail; 110 KiB/s rd, 65 MiB/s wr, 195 op/s Feb 20 05:00:21 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice bob", "format": "json"}]: dispatch Feb 20 05:00:21 localhost ceph-mgr[286565]: [volumes INFO 
volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 05:00:21 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e225 do_prune osdmap full prune enabled Feb 20 05:00:22 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e226 e226: 6 total, 6 up, 6 in Feb 20 05:00:22 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e226: 6 total, 6 up, 6 in Feb 20 05:00:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 05:00:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. Feb 20 05:00:22 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Feb 20 05:00:22 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 20 05:00:22 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Feb 20 05:00:22 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 20 05:00:22 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Feb 20 05:00:22 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, 
sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 05:00:22 localhost podman[325462]: 2026-02-20 10:00:22.480780973 +0000 UTC m=+0.096237289 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, config_id=ovn_controller, io.buildah.version=1.41.3) Feb 20 05:00:22 localhost podman[325463]: 2026-02-20 10:00:22.574459682 +0000 UTC m=+0.191480660 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, 
config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent) Feb 20 05:00:22 localhost podman[325462]: 2026-02-20 10:00:22.588339986 +0000 UTC m=+0.203796342 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS 
Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Feb 20 05:00:22 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. 
Feb 20 05:00:22 localhost podman[325463]: 2026-02-20 10:00:22.609800112 +0000 UTC m=+0.226821080 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 05:00:22 localhost ceph-mgr[286565]: log_channel(audit) 
log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice bob", "format": "json"}]: dispatch Feb 20 05:00:22 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 05:00:22 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. Feb 20 05:00:22 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb Feb 20 05:00:22 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 20 05:00:22 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 05:00:22 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "7f44da3d-55b0-4b00-94b4-a7254a3d21a8", "auth_id": "david", "format": "json"}]: dispatch Feb 20 05:00:22 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:7f44da3d-55b0-4b00-94b4-a7254a3d21a8, vol_name:cephfs) < "" Feb 20 05:00:22 localhost ceph-mgr[286565]: [volumes WARNING volumes.fs.operations.versions.subvolume_v1] deauthorized called for already-removed authID 'david' for subvolume '7f44da3d-55b0-4b00-94b4-a7254a3d21a8' Feb 20 05:00:22 localhost 
ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:7f44da3d-55b0-4b00-94b4-a7254a3d21a8, vol_name:cephfs) < "" Feb 20 05:00:22 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "7f44da3d-55b0-4b00-94b4-a7254a3d21a8", "auth_id": "david", "format": "json"}]: dispatch Feb 20 05:00:22 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:7f44da3d-55b0-4b00-94b4-a7254a3d21a8, vol_name:cephfs) < "" Feb 20 05:00:22 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=david, client_metadata.root=/volumes/_nogroup/7f44da3d-55b0-4b00-94b4-a7254a3d21a8/cd0b0378-3f17-48fa-acd7-6ae5ebc115a5 Feb 20 05:00:22 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 20 05:00:22 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:7f44da3d-55b0-4b00-94b4-a7254a3d21a8, vol_name:cephfs) < "" Feb 20 05:00:22 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a251c3bc-737c-4438-9523-36041c19a61e", "format": "json"}]: dispatch Feb 20 05:00:22 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a251c3bc-737c-4438-9523-36041c19a61e, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:00:22 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a251c3bc-737c-4438-9523-36041c19a61e, format:json, prefix:fs clone 
status, vol_name:cephfs) < "" Feb 20 05:00:22 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a251c3bc-737c-4438-9523-36041c19a61e' of type subvolume Feb 20 05:00:22 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:00:22.979+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a251c3bc-737c-4438-9523-36041c19a61e' of type subvolume Feb 20 05:00:22 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a251c3bc-737c-4438-9523-36041c19a61e", "force": true, "format": "json"}]: dispatch Feb 20 05:00:22 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a251c3bc-737c-4438-9523-36041c19a61e, vol_name:cephfs) < "" Feb 20 05:00:23 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/a251c3bc-737c-4438-9523-36041c19a61e'' moved to trashcan Feb 20 05:00:23 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 05:00:23 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a251c3bc-737c-4438-9523-36041c19a61e, vol_name:cephfs) < "" Feb 20 05:00:23 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 20 05:00:23 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 20 05:00:23 localhost 
ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Feb 20 05:00:23 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:00:23 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e226 do_prune osdmap full prune enabled Feb 20 05:00:23 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e227 e227: 6 total, 6 up, 6 in Feb 20 05:00:23 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e227: 6 total, 6 up, 6 in Feb 20 05:00:23 localhost ceph-mgr[286565]: [balancer INFO root] Optimize plan auto_2026-02-20_10:00:23 Feb 20 05:00:23 localhost ceph-mgr[286565]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Feb 20 05:00:23 localhost ceph-mgr[286565]: [balancer INFO root] do_upmap Feb 20 05:00:23 localhost ceph-mgr[286565]: [balancer INFO root] pools ['manila_metadata', 'backups', 'manila_data', 'volumes', 'vms', '.mgr', 'images'] Feb 20 05:00:23 localhost ceph-mgr[286565]: [balancer INFO root] prepared 0/10 changes Feb 20 05:00:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 05:00:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 05:00:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 05:00:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 05:00:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 20 05:00:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 05:00:23 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v499: 177 pgs: 177 active+clean; 935 MiB data, 3.1 GiB used, 39 GiB / 42 GiB avail; 119 KiB/s rd, 70 MiB/s wr, 210 op/s Feb 20 05:00:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] _maybe_adjust Feb 20 05:00:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 05:00:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Feb 20 05:00:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 05:00:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0065837542748464544 of space, bias 1.0, pg target 1.316750854969291 quantized to 32 (current 32) Feb 20 05:00:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 05:00:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014885626046901173 of space, bias 1.0, pg target 0.2962239583333333 quantized to 32 (current 32) Feb 20 05:00:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 05:00:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8555772569444443 quantized to 32 (current 32) Feb 20 05:00:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 05:00:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.045693328476875114 of space, bias 1.0, pg target 9.062510147913565 quantized to 32 (current 32) Feb 20 05:00:23 localhost ceph-mgr[286565]: 
[pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 05:00:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 3.271566164154104e-06 of space, bias 1.0, pg target 0.0006194165270798437 quantized to 32 (current 32) Feb 20 05:00:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 05:00:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0008168010189838079 of space, bias 4.0, pg target 0.6185906383770705 quantized to 16 (current 16) Feb 20 05:00:23 localhost ceph-mgr[286565]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Feb 20 05:00:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 20 05:00:23 localhost ceph-mgr[286565]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Feb 20 05:00:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 20 05:00:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 20 05:00:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 20 05:00:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: images, start_after= Feb 20 05:00:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 20 05:00:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: images, start_after= Feb 20 05:00:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 20 05:00:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1. 
Feb 20 05:00:24 localhost podman[325507]: 2026-02-20 10:00:24.447759834 +0000 UTC m=+0.082702294 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 20 05:00:24 localhost podman[325507]: 2026-02-20 10:00:24.488797598 +0000 UTC m=+0.123740038 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 20 05:00:24 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully. Feb 20 05:00:25 localhost nova_compute[280804]: 2026-02-20 10:00:25.015 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:00:25 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice", "tenant_id": "8e04bc360fa14db4a793bc5de7a0a299", "access_level": "rw", "format": "json"}]: dispatch Feb 20 05:00:25 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, tenant_id:8e04bc360fa14db4a793bc5de7a0a299, vol_name:cephfs) < "" Feb 20 05:00:25 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Feb 20 05:00:25 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 20 05:00:25 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: Creating meta for ID alice with tenant 8e04bc360fa14db4a793bc5de7a0a299 Feb 20 05:00:25 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e227 do_prune osdmap full prune enabled Feb 20 05:00:25 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' 
entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 20 05:00:25 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e228 e228: 6 total, 6 up, 6 in Feb 20 05:00:25 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e228: 6 total, 6 up, 6 in Feb 20 05:00:25 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} v 0) Feb 20 05:00:25 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch Feb 20 05:00:25 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished Feb 20 05:00:25 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, tenant_id:8e04bc360fa14db4a793bc5de7a0a299, 
vol_name:cephfs) < "" Feb 20 05:00:25 localhost nova_compute[280804]: 2026-02-20 10:00:25.617 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:00:25 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v501: 177 pgs: 177 active+clean; 1.3 GiB data, 4.2 GiB used, 38 GiB / 42 GiB avail; 156 KiB/s rd, 75 MiB/s wr, 254 op/s Feb 20 05:00:25 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "7f87993f-62fd-4706-b657-9586f12f2a62", "auth_id": "david", "format": "json"}]: dispatch Feb 20 05:00:25 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:7f87993f-62fd-4706-b657-9586f12f2a62, vol_name:cephfs) < "" Feb 20 05:00:25 localhost ceph-osd[32921]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1. 
Feb 20 05:00:25 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0) Feb 20 05:00:25 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Feb 20 05:00:25 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.david"} v 0) Feb 20 05:00:25 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.david"} : dispatch Feb 20 05:00:25 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.david"}]': finished Feb 20 05:00:25 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:7f87993f-62fd-4706-b657-9586f12f2a62, vol_name:cephfs) < "" Feb 20 05:00:25 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "7f87993f-62fd-4706-b657-9586f12f2a62", "auth_id": "david", "format": "json"}]: dispatch Feb 20 05:00:25 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:7f87993f-62fd-4706-b657-9586f12f2a62, vol_name:cephfs) < "" Feb 20 05:00:25 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=david, 
client_metadata.root=/volumes/_nogroup/7f87993f-62fd-4706-b657-9586f12f2a62/5d470986-550e-47dc-98aa-07cb4fcc93dc Feb 20 05:00:25 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 20 05:00:25 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:7f87993f-62fd-4706-b657-9586f12f2a62, vol_name:cephfs) < "" Feb 20 05:00:26 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch Feb 20 05:00:26 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished Feb 20 05:00:26 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Feb 20 05:00:26 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.david"} : dispatch Feb 20 05:00:26 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.david"}]': finished Feb 20 05:00:26 localhost sshd[325531]: main: sshd: ssh-rsa 
algorithm is disabled Feb 20 05:00:26 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e228 do_prune osdmap full prune enabled Feb 20 05:00:26 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e229 e229: 6 total, 6 up, 6 in Feb 20 05:00:26 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e229: 6 total, 6 up, 6 in Feb 20 05:00:27 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v503: 177 pgs: 177 active+clean; 1.3 GiB data, 4.2 GiB used, 38 GiB / 42 GiB avail; 137 KiB/s rd, 66 MiB/s wr, 224 op/s Feb 20 05:00:28 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e229 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:00:28 localhost openstack_network_exporter[243776]: ERROR 10:00:28 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 20 05:00:28 localhost openstack_network_exporter[243776]: Feb 20 05:00:28 localhost openstack_network_exporter[243776]: ERROR 10:00:28 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 20 05:00:28 localhost openstack_network_exporter[243776]: Feb 20 05:00:28 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e229 do_prune osdmap full prune enabled Feb 20 05:00:28 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e230 e230: 6 total, 6 up, 6 in Feb 20 05:00:28 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e230: 6 total, 6 up, 6 in Feb 20 05:00:28 localhost nova_compute[280804]: 2026-02-20 10:00:28.505 280808 DEBUG oslo_concurrency.lockutils [None req-d4231a90-2bf4-4f4a-8933-b89f058d4c32 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Acquiring lock "25d7d566-3a21-4292-a6ad-96dca2d2ec79" by "nova.compute.manager.ComputeManager.detach_volume..do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 05:00:28 localhost 
nova_compute[280804]: 2026-02-20 10:00:28.505 280808 DEBUG oslo_concurrency.lockutils [None req-d4231a90-2bf4-4f4a-8933-b89f058d4c32 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Lock "25d7d566-3a21-4292-a6ad-96dca2d2ec79" acquired by "nova.compute.manager.ComputeManager.detach_volume..do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 05:00:28 localhost nova_compute[280804]: 2026-02-20 10:00:28.521 280808 INFO nova.compute.manager [None req-d4231a90-2bf4-4f4a-8933-b89f058d4c32 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Detaching volume 8eeb869d-f1f3-4733-a610-c567aaf12a0c#033[00m Feb 20 05:00:28 localhost nova_compute[280804]: 2026-02-20 10:00:28.568 280808 INFO nova.virt.block_device [None req-d4231a90-2bf4-4f4a-8933-b89f058d4c32 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Attempting to driver detach volume 8eeb869d-f1f3-4733-a610-c567aaf12a0c from mountpoint /dev/vdb#033[00m Feb 20 05:00:28 localhost nova_compute[280804]: 2026-02-20 10:00:28.580 280808 DEBUG nova.virt.libvirt.driver [None req-d4231a90-2bf4-4f4a-8933-b89f058d4c32 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Attempting to detach device vdb from instance 25d7d566-3a21-4292-a6ad-96dca2d2ec79 from the persistent domain config. 
_detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m Feb 20 05:00:28 localhost nova_compute[280804]: 2026-02-20 10:00:28.581 280808 DEBUG nova.virt.libvirt.guest [None req-d4231a90-2bf4-4f4a-8933-b89f058d4c32 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] detach device xml: Feb 20 05:00:28 localhost nova_compute[280804]: Feb 20 05:00:28 localhost nova_compute[280804]: Feb 20 05:00:28 localhost nova_compute[280804]: Feb 20 05:00:28 localhost nova_compute[280804]: Feb 20 05:00:28 localhost nova_compute[280804]: Feb 20 05:00:28 localhost nova_compute[280804]: Feb 20 05:00:28 localhost nova_compute[280804]: Feb 20 05:00:28 localhost nova_compute[280804]: 8eeb869d-f1f3-4733-a610-c567aaf12a0c Feb 20 05:00:28 localhost nova_compute[280804]:
Feb 20 05:00:28 localhost nova_compute[280804]: Feb 20 05:00:28 localhost nova_compute[280804]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m Feb 20 05:00:28 localhost nova_compute[280804]: 2026-02-20 10:00:28.592 280808 INFO nova.virt.libvirt.driver [None req-d4231a90-2bf4-4f4a-8933-b89f058d4c32 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Successfully detached device vdb from instance 25d7d566-3a21-4292-a6ad-96dca2d2ec79 from the persistent domain config.#033[00m Feb 20 05:00:28 localhost nova_compute[280804]: 2026-02-20 10:00:28.592 280808 DEBUG nova.virt.libvirt.driver [None req-d4231a90-2bf4-4f4a-8933-b89f058d4c32 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 25d7d566-3a21-4292-a6ad-96dca2d2ec79 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m Feb 20 05:00:28 localhost nova_compute[280804]: 2026-02-20 10:00:28.593 280808 DEBUG nova.virt.libvirt.guest [None req-d4231a90-2bf4-4f4a-8933-b89f058d4c32 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] detach device xml: Feb 20 05:00:28 localhost nova_compute[280804]: Feb 20 05:00:28 localhost nova_compute[280804]: Feb 20 05:00:28 localhost nova_compute[280804]: Feb 20 05:00:28 localhost nova_compute[280804]: Feb 20 05:00:28 localhost nova_compute[280804]: Feb 20 05:00:28 localhost nova_compute[280804]: Feb 20 05:00:28 localhost nova_compute[280804]: Feb 20 05:00:28 localhost nova_compute[280804]: 8eeb869d-f1f3-4733-a610-c567aaf12a0c Feb 20 05:00:28 localhost nova_compute[280804]:
Feb 20 05:00:28 localhost nova_compute[280804]: Feb 20 05:00:28 localhost nova_compute[280804]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m Feb 20 05:00:28 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice", "format": "json"}]: dispatch Feb 20 05:00:28 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 05:00:28 localhost nova_compute[280804]: 2026-02-20 10:00:28.722 280808 DEBUG nova.virt.libvirt.driver [None req-6b95d693-97b7-49f8-a31f-38de48e34810 - - - - - -] Received event virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m Feb 20 05:00:28 localhost nova_compute[280804]: 2026-02-20 10:00:28.726 280808 DEBUG nova.virt.libvirt.driver [None req-d4231a90-2bf4-4f4a-8933-b89f058d4c32 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 25d7d566-3a21-4292-a6ad-96dca2d2ec79 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m Feb 20 05:00:28 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Feb 20 05:00:28 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 20 05:00:28 localhost 
ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Feb 20 05:00:28 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Feb 20 05:00:28 localhost nova_compute[280804]: 2026-02-20 10:00:28.732 280808 INFO nova.virt.libvirt.driver [None req-d4231a90-2bf4-4f4a-8933-b89f058d4c32 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Successfully detached device vdb from instance 25d7d566-3a21-4292-a6ad-96dca2d2ec79 from the live domain config.#033[00m Feb 20 05:00:28 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Feb 20 05:00:28 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 05:00:28 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice", "format": "json"}]: dispatch Feb 20 05:00:28 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 05:00:28 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb Feb 20 05:00:28 localhost ceph-mgr[286565]: [volumes INFO 
volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 20 05:00:28 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 05:00:28 localhost nova_compute[280804]: 2026-02-20 10:00:28.836 280808 DEBUG nova.objects.instance [None req-d4231a90-2bf4-4f4a-8933-b89f058d4c32 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Lazy-loading 'flavor' on Instance uuid 25d7d566-3a21-4292-a6ad-96dca2d2ec79 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Feb 20 05:00:28 localhost nova_compute[280804]: 2026-02-20 10:00:28.873 280808 DEBUG oslo_concurrency.lockutils [None req-d4231a90-2bf4-4f4a-8933-b89f058d4c32 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Lock "25d7d566-3a21-4292-a6ad-96dca2d2ec79" "released" by "nova.compute.manager.ComputeManager.detach_volume..do_detach_volume" :: held 0.368s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 05:00:29 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7f44da3d-55b0-4b00-94b4-a7254a3d21a8", "format": "json"}]: dispatch Feb 20 05:00:29 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7f44da3d-55b0-4b00-94b4-a7254a3d21a8, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:00:29 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7f44da3d-55b0-4b00-94b4-a7254a3d21a8, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:00:29 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on 
subvolume '7f44da3d-55b0-4b00-94b4-a7254a3d21a8' of type subvolume Feb 20 05:00:29 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:00:29.101+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7f44da3d-55b0-4b00-94b4-a7254a3d21a8' of type subvolume Feb 20 05:00:29 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7f44da3d-55b0-4b00-94b4-a7254a3d21a8", "force": true, "format": "json"}]: dispatch Feb 20 05:00:29 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7f44da3d-55b0-4b00-94b4-a7254a3d21a8, vol_name:cephfs) < "" Feb 20 05:00:29 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7f44da3d-55b0-4b00-94b4-a7254a3d21a8'' moved to trashcan Feb 20 05:00:29 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 05:00:29 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7f44da3d-55b0-4b00-94b4-a7254a3d21a8, vol_name:cephfs) < "" Feb 20 05:00:29 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 20 05:00:29 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Feb 20 05:00:29 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Feb 20 05:00:29 
localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v505: 177 pgs: 177 active+clean; 1.1 GiB data, 3.8 GiB used, 38 GiB / 42 GiB avail; 135 KiB/s rd, 62 MiB/s wr, 234 op/s Feb 20 05:00:29 localhost nova_compute[280804]: 2026-02-20 10:00:29.692 280808 DEBUG oslo_concurrency.lockutils [None req-dff2b32a-81fc-4277-8af6-ada27919a489 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Acquiring lock "25d7d566-3a21-4292-a6ad-96dca2d2ec79" by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 05:00:29 localhost nova_compute[280804]: 2026-02-20 10:00:29.693 280808 DEBUG oslo_concurrency.lockutils [None req-dff2b32a-81fc-4277-8af6-ada27919a489 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Lock "25d7d566-3a21-4292-a6ad-96dca2d2ec79" acquired by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 05:00:29 localhost nova_compute[280804]: 2026-02-20 10:00:29.693 280808 DEBUG oslo_concurrency.lockutils [None req-dff2b32a-81fc-4277-8af6-ada27919a489 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Acquiring lock "25d7d566-3a21-4292-a6ad-96dca2d2ec79-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 05:00:29 localhost nova_compute[280804]: 2026-02-20 10:00:29.694 280808 DEBUG oslo_concurrency.lockutils [None req-dff2b32a-81fc-4277-8af6-ada27919a489 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Lock "25d7d566-3a21-4292-a6ad-96dca2d2ec79-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: waited 
0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 05:00:29 localhost nova_compute[280804]: 2026-02-20 10:00:29.694 280808 DEBUG oslo_concurrency.lockutils [None req-dff2b32a-81fc-4277-8af6-ada27919a489 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Lock "25d7d566-3a21-4292-a6ad-96dca2d2ec79-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 05:00:29 localhost nova_compute[280804]: 2026-02-20 10:00:29.696 280808 INFO nova.compute.manager [None req-dff2b32a-81fc-4277-8af6-ada27919a489 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Terminating instance#033[00m Feb 20 05:00:29 localhost nova_compute[280804]: 2026-02-20 10:00:29.697 280808 DEBUG nova.compute.manager [None req-dff2b32a-81fc-4277-8af6-ada27919a489 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Start destroying the instance on the hypervisor. 
_shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m Feb 20 05:00:29 localhost kernel: device tap3cc99a44-cc left promiscuous mode Feb 20 05:00:29 localhost NetworkManager[5967]: [1771581629.7731] device (tap3cc99a44-cc): state change: disconnected -> unmanaged (reason 'unmanaged', sys-iface-state: 'removed') Feb 20 05:00:29 localhost nova_compute[280804]: 2026-02-20 10:00:29.790 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:00:29 localhost ovn_controller[155916]: 2026-02-20T10:00:29Z|00265|binding|INFO|Releasing lport 3cc99a44-cc7e-4f81-bce6-8e63dc92e267 from this chassis (sb_readonly=0) Feb 20 05:00:29 localhost ovn_controller[155916]: 2026-02-20T10:00:29Z|00266|binding|INFO|Setting lport 3cc99a44-cc7e-4f81-bce6-8e63dc92e267 down in Southbound Feb 20 05:00:29 localhost ovn_controller[155916]: 2026-02-20T10:00:29Z|00267|binding|INFO|Removing iface tap3cc99a44-cc ovn-installed in OVS Feb 20 05:00:29 localhost nova_compute[280804]: 2026-02-20 10:00:29.794 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:00:29 localhost ovn_metadata_agent[161761]: 2026-02-20 10:00:29.799 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:f9:fa 10.100.0.7'], port_security=['fa:16:3e:b4:f9:fa 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '25d7d566-3a21-4292-a6ad-96dca2d2ec79', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 
'neutron-d612a55c-b2aa-4665-bf00-3e649d762c79', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9fdf2c09b98d48c0bc67cc1c7702a8f4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9d889f17-f220-427e-bd61-2fb67b868596', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625202.localdomain', 'neutron:port_fip': '192.168.122.198'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0faef055-9745-4ab8-b295-a6260661d3dc, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=3cc99a44-cc7e-4f81-bce6-8e63dc92e267) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 05:00:29 localhost ovn_metadata_agent[161761]: 2026-02-20 10:00:29.801 161766 INFO neutron.agent.ovn.metadata.agent [-] Port 3cc99a44-cc7e-4f81-bce6-8e63dc92e267 in datapath d612a55c-b2aa-4665-bf00-3e649d762c79 unbound from our chassis#033[00m Feb 20 05:00:29 localhost ovn_metadata_agent[161761]: 2026-02-20 10:00:29.804 161766 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d612a55c-b2aa-4665-bf00-3e649d762c79, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 20 05:00:29 localhost ovn_metadata_agent[161761]: 2026-02-20 10:00:29.805 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[afb7ee66-3fe9-4d37-9b36-30fad4059cd1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 05:00:29 localhost ovn_metadata_agent[161761]: 2026-02-20 10:00:29.806 161766 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d612a55c-b2aa-4665-bf00-3e649d762c79 namespace which is not needed anymore#033[00m Feb 20 05:00:29 localhost nova_compute[280804]: 2026-02-20 10:00:29.817 280808 
DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:00:29 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "908a48ac-0297-49f9-ba67-7193ff5e6e97", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 05:00:29 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:908a48ac-0297-49f9-ba67-7193ff5e6e97, vol_name:cephfs) < "" Feb 20 05:00:29 localhost systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000b.scope: Deactivated successfully. Feb 20 05:00:29 localhost systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000b.scope: Consumed 14.457s CPU time. Feb 20 05:00:29 localhost systemd-machined[205856]: Machine qemu-6-instance-0000000b terminated. 
Feb 20 05:00:29 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/908a48ac-0297-49f9-ba67-7193ff5e6e97/.meta.tmp' Feb 20 05:00:29 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/908a48ac-0297-49f9-ba67-7193ff5e6e97/.meta.tmp' to config b'/volumes/_nogroup/908a48ac-0297-49f9-ba67-7193ff5e6e97/.meta' Feb 20 05:00:29 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:908a48ac-0297-49f9-ba67-7193ff5e6e97, vol_name:cephfs) < "" Feb 20 05:00:29 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "908a48ac-0297-49f9-ba67-7193ff5e6e97", "format": "json"}]: dispatch Feb 20 05:00:29 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:908a48ac-0297-49f9-ba67-7193ff5e6e97, vol_name:cephfs) < "" Feb 20 05:00:29 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:908a48ac-0297-49f9-ba67-7193ff5e6e97, vol_name:cephfs) < "" Feb 20 05:00:29 localhost nova_compute[280804]: 2026-02-20 10:00:29.940 280808 INFO nova.virt.libvirt.driver [-] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Instance destroyed successfully.#033[00m Feb 20 05:00:29 localhost nova_compute[280804]: 2026-02-20 10:00:29.941 280808 DEBUG nova.objects.instance [None req-dff2b32a-81fc-4277-8af6-ada27919a489 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Lazy-loading 'resources' on Instance uuid 25d7d566-3a21-4292-a6ad-96dca2d2ec79 obj_load_attr 
/usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Feb 20 05:00:29 localhost nova_compute[280804]: 2026-02-20 10:00:29.968 280808 DEBUG nova.virt.libvirt.vif [None req-dff2b32a-81fc-4277-8af6-ada27919a489 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-20T09:59:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-1173654775',display_name='tempest-VolumesBackupsTest-instance-1173654775',ec2_ids=,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005625202.localdomain',hostname='tempest-volumesbackupstest-instance-1173654775',id=11,image_ref='06bd71fd-c415-45d9-b669-46209b7ca2f4',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMe72HZuIWc3tJD/X0j6gM/UNjaY+DXAi4jpSGVDBcd7BWjTM/ZKsoIkdVrBAeSKOKKSJillg9arx8p4E5NVUhjj/f9aUDVTht6SVx/DyPFCVBF/6pDNRKFf5AIK9I1lpg==',key_name='tempest-keypair-39980210',keypairs=,launch_index=0,launched_at=2026-02-20T09:59:31Z,launched_on='np0005625202.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=,new_flavor=None,node='np0005625202.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=,power_state=1,progress=0,project_id='9fdf2c09b98d48c0bc67cc1c7702a8f4',ramdisk_id='',reservation_id='r-psszcajx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='06bd71fd-c415-45d9-b669-46209b7ca2f4',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesBackupsTest-768842871',owner_user_name='tempest-VolumesBackupsTest-768842871-project-member'},tags=,task_state='deleting',terminated_at=None,trusted_certs=,updated_at=2026-02-20T09:59:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2ba1a8d771344f0a918e0a8bed2efd06',uuid=25d7d566-3a21-4292-a6ad-96dca2d2ec79,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3cc99a44-cc7e-4f81-bce6-8e63dc92e267", "address": "fa:16:3e:b4:f9:fa", "network": {"id": "d612a55c-b2aa-4665-bf00-3e649d762c79", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-589446165-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", 
"version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "9fdf2c09b98d48c0bc67cc1c7702a8f4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cc99a44-cc", "ovs_interfaceid": "3cc99a44-cc7e-4f81-bce6-8e63dc92e267", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m Feb 20 05:00:29 localhost nova_compute[280804]: 2026-02-20 10:00:29.969 280808 DEBUG nova.network.os_vif_util [None req-dff2b32a-81fc-4277-8af6-ada27919a489 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Converting VIF {"id": "3cc99a44-cc7e-4f81-bce6-8e63dc92e267", "address": "fa:16:3e:b4:f9:fa", "network": {"id": "d612a55c-b2aa-4665-bf00-3e649d762c79", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-589446165-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "9fdf2c09b98d48c0bc67cc1c7702a8f4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": 
"br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cc99a44-cc", "ovs_interfaceid": "3cc99a44-cc7e-4f81-bce6-8e63dc92e267", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Feb 20 05:00:29 localhost nova_compute[280804]: 2026-02-20 10:00:29.970 280808 DEBUG nova.network.os_vif_util [None req-dff2b32a-81fc-4277-8af6-ada27919a489 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b4:f9:fa,bridge_name='br-int',has_traffic_filtering=True,id=3cc99a44-cc7e-4f81-bce6-8e63dc92e267,network=Network(d612a55c-b2aa-4665-bf00-3e649d762c79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cc99a44-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Feb 20 05:00:29 localhost nova_compute[280804]: 2026-02-20 10:00:29.971 280808 DEBUG os_vif [None req-dff2b32a-81fc-4277-8af6-ada27919a489 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b4:f9:fa,bridge_name='br-int',has_traffic_filtering=True,id=3cc99a44-cc7e-4f81-bce6-8e63dc92e267,network=Network(d612a55c-b2aa-4665-bf00-3e649d762c79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cc99a44-cc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m Feb 20 05:00:29 localhost nova_compute[280804]: 2026-02-20 10:00:29.974 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:00:29 localhost nova_compute[280804]: 2026-02-20 10:00:29.974 280808 DEBUG 
ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3cc99a44-cc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 20 05:00:29 localhost nova_compute[280804]: 2026-02-20 10:00:29.976 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:00:29 localhost nova_compute[280804]: 2026-02-20 10:00:29.979 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:00:29 localhost nova_compute[280804]: 2026-02-20 10:00:29.984 280808 INFO os_vif [None req-dff2b32a-81fc-4277-8af6-ada27919a489 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b4:f9:fa,bridge_name='br-int',has_traffic_filtering=True,id=3cc99a44-cc7e-4f81-bce6-8e63dc92e267,network=Network(d612a55c-b2aa-4665-bf00-3e649d762c79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cc99a44-cc')#033[00m Feb 20 05:00:29 localhost neutron-haproxy-ovnmeta-d612a55c-b2aa-4665-bf00-3e649d762c79[324993]: [NOTICE] (325008) : haproxy version is 2.8.14-c23fe91 Feb 20 05:00:29 localhost neutron-haproxy-ovnmeta-d612a55c-b2aa-4665-bf00-3e649d762c79[324993]: [NOTICE] (325008) : path to executable is /usr/sbin/haproxy Feb 20 05:00:29 localhost neutron-haproxy-ovnmeta-d612a55c-b2aa-4665-bf00-3e649d762c79[324993]: [WARNING] (325008) : Exiting Master process... Feb 20 05:00:29 localhost neutron-haproxy-ovnmeta-d612a55c-b2aa-4665-bf00-3e649d762c79[324993]: [ALERT] (325008) : Current worker (325010) exited with code 143 (Terminated) Feb 20 05:00:29 localhost neutron-haproxy-ovnmeta-d612a55c-b2aa-4665-bf00-3e649d762c79[324993]: [WARNING] (325008) : All workers exited. Exiting... 
(0) Feb 20 05:00:29 localhost systemd[1]: libpod-820eef34bee8ae9fa0e11bdf86877cc08ed6da57debb70859cd9f5770801d573.scope: Deactivated successfully. Feb 20 05:00:30 localhost podman[325563]: 2026-02-20 10:00:30.003665912 +0000 UTC m=+0.075622874 container died 820eef34bee8ae9fa0e11bdf86877cc08ed6da57debb70859cd9f5770801d573 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d612a55c-b2aa-4665-bf00-3e649d762c79, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2) Feb 20 05:00:30 localhost nova_compute[280804]: 2026-02-20 10:00:30.017 280808 DEBUG nova.compute.manager [req-e72bc3b1-a725-4d5d-b51f-737af2a7ed43 req-f27459a8-7b0e-4e8e-8f3e-c29d9494e42d d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Received event network-vif-unplugged-3cc99a44-cc7e-4f81-bce6-8e63dc92e267 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Feb 20 05:00:30 localhost nova_compute[280804]: 2026-02-20 10:00:30.018 280808 DEBUG oslo_concurrency.lockutils [req-e72bc3b1-a725-4d5d-b51f-737af2a7ed43 req-f27459a8-7b0e-4e8e-8f3e-c29d9494e42d d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Acquiring lock "25d7d566-3a21-4292-a6ad-96dca2d2ec79-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 05:00:30 localhost nova_compute[280804]: 2026-02-20 10:00:30.018 280808 DEBUG oslo_concurrency.lockutils [req-e72bc3b1-a725-4d5d-b51f-737af2a7ed43 
req-f27459a8-7b0e-4e8e-8f3e-c29d9494e42d d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Lock "25d7d566-3a21-4292-a6ad-96dca2d2ec79-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 05:00:30 localhost nova_compute[280804]: 2026-02-20 10:00:30.019 280808 DEBUG oslo_concurrency.lockutils [req-e72bc3b1-a725-4d5d-b51f-737af2a7ed43 req-f27459a8-7b0e-4e8e-8f3e-c29d9494e42d d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Lock "25d7d566-3a21-4292-a6ad-96dca2d2ec79-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 05:00:30 localhost nova_compute[280804]: 2026-02-20 10:00:30.019 280808 DEBUG nova.compute.manager [req-e72bc3b1-a725-4d5d-b51f-737af2a7ed43 req-f27459a8-7b0e-4e8e-8f3e-c29d9494e42d d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] No waiting events found dispatching network-vif-unplugged-3cc99a44-cc7e-4f81-bce6-8e63dc92e267 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Feb 20 05:00:30 localhost nova_compute[280804]: 2026-02-20 10:00:30.020 280808 DEBUG nova.compute.manager [req-e72bc3b1-a725-4d5d-b51f-737af2a7ed43 req-f27459a8-7b0e-4e8e-8f3e-c29d9494e42d d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Received event network-vif-unplugged-3cc99a44-cc7e-4f81-bce6-8e63dc92e267 for instance with task_state deleting. 
_process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m Feb 20 05:00:30 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-820eef34bee8ae9fa0e11bdf86877cc08ed6da57debb70859cd9f5770801d573-userdata-shm.mount: Deactivated successfully. Feb 20 05:00:30 localhost podman[325563]: 2026-02-20 10:00:30.052757812 +0000 UTC m=+0.124714774 container cleanup 820eef34bee8ae9fa0e11bdf86877cc08ed6da57debb70859cd9f5770801d573 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d612a55c-b2aa-4665-bf00-3e649d762c79, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS) Feb 20 05:00:30 localhost podman[325598]: 2026-02-20 10:00:30.090449176 +0000 UTC m=+0.077538157 container cleanup 820eef34bee8ae9fa0e11bdf86877cc08ed6da57debb70859cd9f5770801d573 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d612a55c-b2aa-4665-bf00-3e649d762c79, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true) Feb 20 05:00:30 localhost systemd[1]: libpod-conmon-820eef34bee8ae9fa0e11bdf86877cc08ed6da57debb70859cd9f5770801d573.scope: Deactivated successfully. 
Feb 20 05:00:30 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e230 do_prune osdmap full prune enabled Feb 20 05:00:30 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e231 e231: 6 total, 6 up, 6 in Feb 20 05:00:30 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e231: 6 total, 6 up, 6 in Feb 20 05:00:30 localhost podman[325616]: 2026-02-20 10:00:30.164372733 +0000 UTC m=+0.090258208 container remove 820eef34bee8ae9fa0e11bdf86877cc08ed6da57debb70859cd9f5770801d573 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d612a55c-b2aa-4665-bf00-3e649d762c79, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0) Feb 20 05:00:30 localhost ovn_metadata_agent[161761]: 2026-02-20 10:00:30.170 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[c6badb46-99f7-4a59-be79-90b6c34b00b2]: (4, ('Fri Feb 20 10:00:29 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-d612a55c-b2aa-4665-bf00-3e649d762c79 (820eef34bee8ae9fa0e11bdf86877cc08ed6da57debb70859cd9f5770801d573)\n820eef34bee8ae9fa0e11bdf86877cc08ed6da57debb70859cd9f5770801d573\nFri Feb 20 10:00:30 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-d612a55c-b2aa-4665-bf00-3e649d762c79 (820eef34bee8ae9fa0e11bdf86877cc08ed6da57debb70859cd9f5770801d573)\n820eef34bee8ae9fa0e11bdf86877cc08ed6da57debb70859cd9f5770801d573\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 05:00:30 localhost ovn_metadata_agent[161761]: 2026-02-20 10:00:30.172 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[8090e39d-3868-4085-99d5-dade8b6baf6e]: (4, None) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 05:00:30 localhost ovn_metadata_agent[161761]: 2026-02-20 10:00:30.174 161766 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd612a55c-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 20 05:00:30 localhost kernel: device tapd612a55c-b0 left promiscuous mode Feb 20 05:00:30 localhost nova_compute[280804]: 2026-02-20 10:00:30.181 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:00:30 localhost ovn_metadata_agent[161761]: 2026-02-20 10:00:30.186 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[c15f20fd-aa0e-4986-bf59-ed435717c86a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 05:00:30 localhost nova_compute[280804]: 2026-02-20 10:00:30.195 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:00:30 localhost ovn_metadata_agent[161761]: 2026-02-20 10:00:30.208 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[801bc211-33aa-4fa5-b56e-5ddf16e25f5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 05:00:30 localhost ovn_metadata_agent[161761]: 2026-02-20 10:00:30.210 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[4a47f387-087b-48d6-aa2c-258228c4008a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 05:00:30 localhost ovn_metadata_agent[161761]: 2026-02-20 10:00:30.226 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[24af0789-8a4e-4e00-9303-6d230a57be65]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], 
['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 
0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1211699, 'reachable_time': 27625, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 
'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325637, 'error': None, 'target': 'ovnmeta-d612a55c-b2aa-4665-bf00-3e649d762c79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 05:00:30 localhost ovn_metadata_agent[161761]: 2026-02-20 10:00:30.230 161893 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d612a55c-b2aa-4665-bf00-3e649d762c79 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m Feb 20 05:00:30 localhost ovn_metadata_agent[161761]: 2026-02-20 10:00:30.231 161893 DEBUG oslo.privsep.daemon [-] privsep: reply[7e669938-8123-4ec7-bf6b-a84be0f6ccbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 05:00:30 localhost nova_compute[280804]: 2026-02-20 10:00:30.621 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:00:30 localhost systemd[1]: var-lib-containers-storage-overlay-52249c3207360cc4eeda2cb4f57894fbd15d6e76153a8562529e7964440a16a9-merged.mount: Deactivated successfully. Feb 20 05:00:30 localhost systemd[1]: run-netns-ovnmeta\x2dd612a55c\x2db2aa\x2d4665\x2dbf00\x2d3e649d762c79.mount: Deactivated successfully. 
Feb 20 05:00:31 localhost nova_compute[280804]: 2026-02-20 10:00:31.231 280808 INFO nova.virt.libvirt.driver [None req-dff2b32a-81fc-4277-8af6-ada27919a489 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Deleting instance files /var/lib/nova/instances/25d7d566-3a21-4292-a6ad-96dca2d2ec79_del#033[00m Feb 20 05:00:31 localhost nova_compute[280804]: 2026-02-20 10:00:31.232 280808 INFO nova.virt.libvirt.driver [None req-dff2b32a-81fc-4277-8af6-ada27919a489 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Deletion of /var/lib/nova/instances/25d7d566-3a21-4292-a6ad-96dca2d2ec79_del complete#033[00m Feb 20 05:00:31 localhost nova_compute[280804]: 2026-02-20 10:00:31.312 280808 INFO nova.compute.manager [None req-dff2b32a-81fc-4277-8af6-ada27919a489 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Took 1.61 seconds to destroy the instance on the hypervisor.#033[00m Feb 20 05:00:31 localhost nova_compute[280804]: 2026-02-20 10:00:31.313 280808 DEBUG oslo.service.loopingcall [None req-dff2b32a-81fc-4277-8af6-ada27919a489 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.._deallocate_network_with_retries to return. 
func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m Feb 20 05:00:31 localhost nova_compute[280804]: 2026-02-20 10:00:31.313 280808 DEBUG nova.compute.manager [-] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m Feb 20 05:00:31 localhost nova_compute[280804]: 2026-02-20 10:00:31.314 280808 DEBUG nova.network.neutron [-] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m Feb 20 05:00:31 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v507: 177 pgs: 177 active+clean; 283 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 94 KiB/s rd, 89 KiB/s wr, 181 op/s Feb 20 05:00:31 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice", "tenant_id": "8e04bc360fa14db4a793bc5de7a0a299", "access_level": "r", "format": "json"}]: dispatch Feb 20 05:00:31 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, tenant_id:8e04bc360fa14db4a793bc5de7a0a299, vol_name:cephfs) < "" Feb 20 05:00:31 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Feb 20 05:00:31 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 20 05:00:31 localhost 
ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: Creating meta for ID alice with tenant 8e04bc360fa14db4a793bc5de7a0a299 Feb 20 05:00:31 localhost neutron_sriov_agent[256551]: 2026-02-20 10:00:31.868 2 INFO neutron.agent.securitygroups_rpc [req-dff2b32a-81fc-4277-8af6-ada27919a489 req-cd96c648-ee5c-4ce2-9c39-be0fe03b42e9 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Security group member updated ['9d889f17-f220-427e-bd61-2fb67b868596']#033[00m Feb 20 05:00:31 localhost nova_compute[280804]: 2026-02-20 10:00:31.970 280808 DEBUG nova.network.neutron [-] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Feb 20 05:00:31 localhost nova_compute[280804]: 2026-02-20 10:00:31.986 280808 INFO nova.compute.manager [-] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Took 0.67 seconds to deallocate network for instance.#033[00m Feb 20 05:00:31 localhost nova_compute[280804]: 2026-02-20 10:00:31.998 280808 DEBUG nova.compute.manager [req-495f1ab2-9d8a-4365-ba9c-a64dffed286e req-73f3dadc-6ef9-42bf-bc9e-320a349efa3c d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Received event network-vif-deleted-3cc99a44-cc7e-4f81-bce6-8e63dc92e267 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Feb 20 05:00:32 localhost nova_compute[280804]: 2026-02-20 10:00:32.037 280808 DEBUG nova.compute.manager [req-71bc4711-3a05-40f9-b4d8-49820405fe6a req-5f69afba-019b-4152-bf5d-32c1f09eb738 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Received event network-vif-plugged-3cc99a44-cc7e-4f81-bce6-8e63dc92e267 external_instance_event 
/usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Feb 20 05:00:32 localhost nova_compute[280804]: 2026-02-20 10:00:32.037 280808 DEBUG oslo_concurrency.lockutils [req-71bc4711-3a05-40f9-b4d8-49820405fe6a req-5f69afba-019b-4152-bf5d-32c1f09eb738 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Acquiring lock "25d7d566-3a21-4292-a6ad-96dca2d2ec79-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 05:00:32 localhost nova_compute[280804]: 2026-02-20 10:00:32.037 280808 DEBUG oslo_concurrency.lockutils [req-71bc4711-3a05-40f9-b4d8-49820405fe6a req-5f69afba-019b-4152-bf5d-32c1f09eb738 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Lock "25d7d566-3a21-4292-a6ad-96dca2d2ec79-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 05:00:32 localhost nova_compute[280804]: 2026-02-20 10:00:32.038 280808 DEBUG oslo_concurrency.lockutils [req-71bc4711-3a05-40f9-b4d8-49820405fe6a req-5f69afba-019b-4152-bf5d-32c1f09eb738 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Lock "25d7d566-3a21-4292-a6ad-96dca2d2ec79-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 05:00:32 localhost nova_compute[280804]: 2026-02-20 10:00:32.038 280808 DEBUG nova.compute.manager [req-71bc4711-3a05-40f9-b4d8-49820405fe6a req-5f69afba-019b-4152-bf5d-32c1f09eb738 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] No waiting events found dispatching 
network-vif-plugged-3cc99a44-cc7e-4f81-bce6-8e63dc92e267 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Feb 20 05:00:32 localhost nova_compute[280804]: 2026-02-20 10:00:32.038 280808 WARNING nova.compute.manager [req-71bc4711-3a05-40f9-b4d8-49820405fe6a req-5f69afba-019b-4152-bf5d-32c1f09eb738 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Received unexpected event network-vif-plugged-3cc99a44-cc7e-4f81-bce6-8e63dc92e267 for instance with vm_state active and task_state deleting.#033[00m Feb 20 05:00:32 localhost nova_compute[280804]: 2026-02-20 10:00:32.053 280808 DEBUG oslo_concurrency.lockutils [None req-dff2b32a-81fc-4277-8af6-ada27919a489 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 05:00:32 localhost nova_compute[280804]: 2026-02-20 10:00:32.053 280808 DEBUG oslo_concurrency.lockutils [None req-dff2b32a-81fc-4277-8af6-ada27919a489 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 05:00:32 localhost nova_compute[280804]: 2026-02-20 10:00:32.103 280808 DEBUG oslo_concurrency.processutils [None req-dff2b32a-81fc-4277-8af6-ada27919a489 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 05:00:32 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 
handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} v 0) Feb 20 05:00:32 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch Feb 20 05:00:32 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished Feb 20 05:00:32 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Feb 20 05:00:32 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch Feb 20 05:00:32 localhost ceph-mgr[286565]: 
[volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, tenant_id:8e04bc360fa14db4a793bc5de7a0a299, vol_name:cephfs) < "" Feb 20 05:00:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e. Feb 20 05:00:32 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2e5c48d9-dcb2-469b-91b0-c7f808d95c49", "format": "json"}]: dispatch Feb 20 05:00:32 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:2e5c48d9-dcb2-469b-91b0-c7f808d95c49, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:00:32 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:2e5c48d9-dcb2-469b-91b0-c7f808d95c49, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:00:32 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2e5c48d9-dcb2-469b-91b0-c7f808d95c49' of type subvolume Feb 20 05:00:32 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:00:32.434+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2e5c48d9-dcb2-469b-91b0-c7f808d95c49' of type subvolume Feb 20 05:00:32 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "2e5c48d9-dcb2-469b-91b0-c7f808d95c49", "force": true, "format": "json"}]: dispatch Feb 20 05:00:32 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, 
format:json, prefix:fs subvolume rm, sub_name:2e5c48d9-dcb2-469b-91b0-c7f808d95c49, vol_name:cephfs) < "" Feb 20 05:00:32 localhost podman[325659]: 2026-02-20 10:00:32.45135117 +0000 UTC m=+0.089323912 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 20 05:00:32 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/2e5c48d9-dcb2-469b-91b0-c7f808d95c49'' moved to trashcan Feb 20 05:00:32 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 
20 05:00:32 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2e5c48d9-dcb2-469b-91b0-c7f808d95c49, vol_name:cephfs) < "" Feb 20 05:00:32 localhost podman[325659]: 2026-02-20 10:00:32.462269453 +0000 UTC m=+0.100242175 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 20 05:00:32 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully. 
Feb 20 05:00:32 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 20 05:00:32 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/4233530869' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 20 05:00:32 localhost nova_compute[280804]: 2026-02-20 10:00:32.579 280808 DEBUG oslo_concurrency.processutils [None req-dff2b32a-81fc-4277-8af6-ada27919a489 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 05:00:32 localhost nova_compute[280804]: 2026-02-20 10:00:32.586 280808 DEBUG nova.compute.provider_tree [None req-dff2b32a-81fc-4277-8af6-ada27919a489 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Inventory has not changed in ProviderTree for provider: 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 20 05:00:32 localhost nova_compute[280804]: 2026-02-20 10:00:32.601 280808 DEBUG nova.scheduler.client.report [None req-dff2b32a-81fc-4277-8af6-ada27919a489 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Inventory has not changed for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 20 
05:00:32 localhost nova_compute[280804]: 2026-02-20 10:00:32.622 280808 DEBUG oslo_concurrency.lockutils [None req-dff2b32a-81fc-4277-8af6-ada27919a489 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.568s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 05:00:32 localhost nova_compute[280804]: 2026-02-20 10:00:32.645 280808 INFO nova.scheduler.client.report [None req-dff2b32a-81fc-4277-8af6-ada27919a489 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Deleted allocations for instance 25d7d566-3a21-4292-a6ad-96dca2d2ec79#033[00m Feb 20 05:00:32 localhost nova_compute[280804]: 2026-02-20 10:00:32.708 280808 DEBUG oslo_concurrency.lockutils [None req-dff2b32a-81fc-4277-8af6-ada27919a489 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Lock "25d7d566-3a21-4292-a6ad-96dca2d2ec79" "released" by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" :: held 3.016s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 05:00:33 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:00:33 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e231 do_prune osdmap full prune enabled Feb 20 05:00:33 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "908a48ac-0297-49f9-ba67-7193ff5e6e97", "format": "json"}]: dispatch Feb 20 05:00:33 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:908a48ac-0297-49f9-ba67-7193ff5e6e97, format:json, prefix:fs clone status, 
vol_name:cephfs) < ""
Feb 20 05:00:33 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:908a48ac-0297-49f9-ba67-7193ff5e6e97, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 20 05:00:33 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:00:33.190+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '908a48ac-0297-49f9-ba67-7193ff5e6e97' of type subvolume
Feb 20 05:00:33 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '908a48ac-0297-49f9-ba67-7193ff5e6e97' of type subvolume
Feb 20 05:00:33 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "908a48ac-0297-49f9-ba67-7193ff5e6e97", "force": true, "format": "json"}]: dispatch
Feb 20 05:00:33 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:908a48ac-0297-49f9-ba67-7193ff5e6e97, vol_name:cephfs) < ""
Feb 20 05:00:33 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v508: 177 pgs: 177 active+clean; 283 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 83 KiB/s rd, 79 KiB/s wr, 160 op/s
Feb 20 05:00:33 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e232 e232: 6 total, 6 up, 6 in
Feb 20 05:00:33 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/908a48ac-0297-49f9-ba67-7193ff5e6e97'' moved to trashcan
Feb 20 05:00:33 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e232: 6 total, 6 up, 6 in
Feb 20 05:00:33 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 20 05:00:33 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished
Feb 20 05:00:33 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:00:33.749+0000 7f7453cd7640 -1 client.0 error registering admin socket command: (17) File exists
Feb 20 05:00:33 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists
Feb 20 05:00:33 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:00:33.749+0000 7f7453cd7640 -1 client.0 error registering admin socket command: (17) File exists
Feb 20 05:00:33 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists
Feb 20 05:00:33 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:00:33.749+0000 7f7453cd7640 -1 client.0 error registering admin socket command: (17) File exists
Feb 20 05:00:33 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists
Feb 20 05:00:33 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:00:33.749+0000 7f7453cd7640 -1 client.0 error registering admin socket command: (17) File exists
Feb 20 05:00:33 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists
Feb 20 05:00:33 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:00:33.749+0000 7f7453cd7640 -1 client.0 error registering admin socket command: (17) File exists
Feb 20 05:00:33 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists
Feb 20 05:00:33 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:908a48ac-0297-49f9-ba67-7193ff5e6e97, vol_name:cephfs) < ""
Feb 20 05:00:34 localhost sshd[325697]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 05:00:34 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e232 do_prune osdmap full prune enabled
Feb 20 05:00:34 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e233 e233: 6 total, 6 up, 6 in
Feb 20 05:00:34 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : mgrmap e51: np0005625202.arwxwo(active, since 12m), standbys: np0005625203.lonygy, np0005625204.exgrzx
Feb 20 05:00:34 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e233: 6 total, 6 up, 6 in
Feb 20 05:00:34 localhost nova_compute[280804]: 2026-02-20 10:00:34.977 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 05:00:35 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice", "format": "json"}]: dispatch
Feb 20 05:00:35 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < ""
Feb 20 05:00:35 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Feb 20 05:00:35 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 20 05:00:35 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Feb 20 05:00:35 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Feb 20 05:00:35 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Feb 20 05:00:35 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < ""
Feb 20 05:00:35 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice", "format": "json"}]: dispatch
Feb 20 05:00:35 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < ""
Feb 20 05:00:35 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb
Feb 20 05:00:35 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb 20 05:00:35 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < ""
Feb 20 05:00:35 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "52918c2e-6ed5-45c2-9872-88b3bd77010f", "format": "json"}]: dispatch
Feb 20 05:00:35 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:52918c2e-6ed5-45c2-9872-88b3bd77010f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 20 05:00:35 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:52918c2e-6ed5-45c2-9872-88b3bd77010f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 20 05:00:35 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '52918c2e-6ed5-45c2-9872-88b3bd77010f' of type subvolume
Feb 20 05:00:35 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:00:35.509+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '52918c2e-6ed5-45c2-9872-88b3bd77010f' of type subvolume
Feb 20 05:00:35 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "52918c2e-6ed5-45c2-9872-88b3bd77010f", "force": true, "format": "json"}]: dispatch
Feb 20 05:00:35 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:52918c2e-6ed5-45c2-9872-88b3bd77010f, vol_name:cephfs) < ""
Feb 20 05:00:35 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/52918c2e-6ed5-45c2-9872-88b3bd77010f'' moved to trashcan
Feb 20 05:00:35 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 20 05:00:35 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:52918c2e-6ed5-45c2-9872-88b3bd77010f, vol_name:cephfs) < ""
Feb 20 05:00:35 localhost nova_compute[280804]: 2026-02-20 10:00:35.623 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 05:00:35 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v511: 177 pgs: 177 active+clean; 204 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 144 KiB/s rd, 96 KiB/s wr, 246 op/s
Feb 20 05:00:35 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e233 do_prune osdmap full prune enabled
Feb 20 05:00:35 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 20 05:00:35 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Feb 20 05:00:35 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Feb 20 05:00:35 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e234 e234: 6 total, 6 up, 6 in
Feb 20 05:00:35 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e234: 6 total, 6 up, 6 in
Feb 20 05:00:36 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e234 do_prune osdmap full prune enabled
Feb 20 05:00:36 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e235 e235: 6 total, 6 up, 6 in
Feb 20 05:00:36 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e235: 6 total, 6 up, 6 in
Feb 20 05:00:37 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v514: 177 pgs: 177 active+clean; 204 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 86 KiB/s rd, 109 KiB/s wr, 136 op/s
Feb 20 05:00:38 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 05:00:38 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e235 do_prune osdmap full prune enabled
Feb 20 05:00:38 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e236 e236: 6 total, 6 up, 6 in
Feb 20 05:00:38 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e236: 6 total, 6 up, 6 in
Feb 20 05:00:38 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice_bob", "tenant_id": "8e04bc360fa14db4a793bc5de7a0a299", "access_level": "rw", "format": "json"}]: dispatch
Feb 20 05:00:38 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, tenant_id:8e04bc360fa14db4a793bc5de7a0a299, vol_name:cephfs) < ""
Feb 20 05:00:38 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Feb 20 05:00:38 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 20 05:00:38 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: Creating meta for ID alice_bob with tenant 8e04bc360fa14db4a793bc5de7a0a299
Feb 20 05:00:38 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} v 0)
Feb 20 05:00:38 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch
Feb 20 05:00:38 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished
Feb 20 05:00:38 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, tenant_id:8e04bc360fa14db4a793bc5de7a0a299, vol_name:cephfs) < ""
Feb 20 05:00:39 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "7f87993f-62fd-4706-b657-9586f12f2a62", "auth_id": "admin", "format": "json"}]: dispatch
Feb 20 05:00:39 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:admin, format:json, prefix:fs subvolume deauthorize, sub_name:7f87993f-62fd-4706-b657-9586f12f2a62, vol_name:cephfs) < ""
Feb 20 05:00:39 localhost ceph-mgr[286565]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: admin doesn't exist
Feb 20 05:00:39 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:admin, format:json, prefix:fs subvolume deauthorize, sub_name:7f87993f-62fd-4706-b657-9586f12f2a62, vol_name:cephfs) < ""
Feb 20 05:00:39 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:00:39.068+0000 7f74524d4640 -1 mgr.server reply reply (2) No such file or directory auth ID: admin doesn't exist
Feb 20 05:00:39 localhost ceph-mgr[286565]: mgr.server reply reply (2) No such file or directory auth ID: admin doesn't exist
Feb 20 05:00:39 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 20 05:00:39 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch
Feb 20 05:00:39 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished
Feb 20 05:00:39 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e236 do_prune osdmap full prune enabled
Feb 20 05:00:39 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e237 e237: 6 total, 6 up, 6 in
Feb 20 05:00:39 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e237: 6 total, 6 up, 6 in
Feb 20 05:00:39 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7f87993f-62fd-4706-b657-9586f12f2a62", "format": "json"}]: dispatch
Feb 20 05:00:39 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7f87993f-62fd-4706-b657-9586f12f2a62, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 20 05:00:39 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7f87993f-62fd-4706-b657-9586f12f2a62, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 20 05:00:39 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:00:39.525+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7f87993f-62fd-4706-b657-9586f12f2a62' of type subvolume
Feb 20 05:00:39 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7f87993f-62fd-4706-b657-9586f12f2a62' of type subvolume
Feb 20 05:00:39 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7f87993f-62fd-4706-b657-9586f12f2a62", "force": true, "format": "json"}]: dispatch
Feb 20 05:00:39 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7f87993f-62fd-4706-b657-9586f12f2a62, vol_name:cephfs) < ""
Feb 20 05:00:39 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7f87993f-62fd-4706-b657-9586f12f2a62'' moved to trashcan
Feb 20 05:00:39 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 20 05:00:39 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7f87993f-62fd-4706-b657-9586f12f2a62, vol_name:cephfs) < ""
Feb 20 05:00:39 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v517: 177 pgs: 177 active+clean; 205 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 3.2 KiB/s rd, 94 KiB/s wr, 15 op/s
Feb 20 05:00:40 localhost nova_compute[280804]: 2026-02-20 10:00:40.024 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 05:00:40 localhost nova_compute[280804]: 2026-02-20 10:00:40.630 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 05:00:41 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 2 addresses
Feb 20 05:00:41 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host
Feb 20 05:00:41 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts
Feb 20 05:00:41 localhost nova_compute[280804]: 2026-02-20 10:00:41.419 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 05:00:41 localhost podman[325717]: 2026-02-20 10:00:41.420278636 +0000 UTC m=+0.064761453 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 20 05:00:41 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v518: 177 pgs: 177 active+clean; 205 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 102 KiB/s rd, 72 KiB/s wr, 144 op/s
Feb 20 05:00:41 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb 20 05:00:41 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < ""
Feb 20 05:00:41 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Feb 20 05:00:41 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 20 05:00:41 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Feb 20 05:00:41 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Feb 20 05:00:41 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Feb 20 05:00:41 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < ""
Feb 20 05:00:41 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb 20 05:00:41 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < ""
Feb 20 05:00:41 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb
Feb 20 05:00:41 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb 20 05:00:41 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < ""
Feb 20 05:00:42 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 20 05:00:42 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Feb 20 05:00:42 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Feb 20 05:00:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.
Feb 20 05:00:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.
Feb 20 05:00:42 localhost podman[325741]: 2026-02-20 10:00:42.454254709 +0000 UTC m=+0.089024665 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0)
Feb 20 05:00:42 localhost podman[325740]: 2026-02-20 10:00:42.50187664 +0000 UTC m=+0.140767846 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2026-02-05T04:57:10Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, config_id=openstack_network_exporter, name=ubi9/ubi-minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.7, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1770267347)
Feb 20 05:00:42 localhost podman[325740]: 2026-02-20 10:00:42.520692736 +0000 UTC m=+0.159583902 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.buildah.version=1.33.7, build-date=2026-02-05T04:57:10Z, maintainer=Red Hat, Inc., io.openshift.expose-services=, release=1770267347, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, config_id=openstack_network_exporter, name=ubi9/ubi-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.7, managed_by=edpm_ansible, org.opencontainers.image.created=2026-02-05T04:57:10Z)
Feb 20 05:00:42 localhost podman[325741]: 2026-02-20 10:00:42.521331873 +0000 UTC m=+0.156101839 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Feb 20 05:00:42 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully.
Feb 20 05:00:42 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully.
Feb 20 05:00:43 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 05:00:43 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e237 do_prune osdmap full prune enabled
Feb 20 05:00:43 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e238 e238: 6 total, 6 up, 6 in
Feb 20 05:00:43 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e238: 6 total, 6 up, 6 in
Feb 20 05:00:43 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v520: 177 pgs: 177 active+clean; 205 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 100 KiB/s rd, 70 KiB/s wr, 141 op/s
Feb 20 05:00:44 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice_bob", "tenant_id": "8e04bc360fa14db4a793bc5de7a0a299", "access_level": "r", "format": "json"}]: dispatch
Feb 20 05:00:44 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, tenant_id:8e04bc360fa14db4a793bc5de7a0a299, vol_name:cephfs) < ""
Feb 20 05:00:44 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Feb 20 05:00:44 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 20 05:00:44 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: Creating meta for ID alice_bob with tenant 8e04bc360fa14db4a793bc5de7a0a299
Feb 20 05:00:44 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} v 0)
Feb 20 05:00:44 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch
Feb 20 05:00:44 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished
Feb 20 05:00:44 localhost nova_compute[280804]: 2026-02-20 10:00:44.940 280808 DEBUG nova.virt.driver [-] Emitting event Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 20 05:00:44 localhost nova_compute[280804]: 2026-02-20 10:00:44.941 280808 INFO nova.compute.manager [-] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] VM Stopped (Lifecycle Event)#033[00m
Feb 20 05:00:44 localhost nova_compute[280804]: 2026-02-20 10:00:44.961 280808 DEBUG nova.compute.manager [None req-259e14df-3c59-41be-b047-2d8a5700a4a1 - - - - - -] [instance: 25d7d566-3a21-4292-a6ad-96dca2d2ec79] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 20 05:00:44 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, tenant_id:8e04bc360fa14db4a793bc5de7a0a299, vol_name:cephfs) < ""
Feb 20 05:00:45 localhost nova_compute[280804]: 2026-02-20 10:00:45.026 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 05:00:45 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 20 05:00:45 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create",
"entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch Feb 20 05:00:45 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished Feb 20 05:00:45 localhost nova_compute[280804]: 2026-02-20 10:00:45.641 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:00:45 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v521: 177 pgs: 177 active+clean; 205 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 119 KiB/s rd, 94 KiB/s wr, 170 op/s Feb 20 05:00:45 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 20 05:00:45 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2664060444' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 20 05:00:45 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 20 05:00:45 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/2664060444' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 20 05:00:46 localhost podman[241347]: time="2026-02-20T10:00:46Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 20 05:00:46 localhost podman[241347]: @ - - [20/Feb/2026:10:00:46 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157716 "" "Go-http-client/1.1" Feb 20 05:00:46 localhost podman[241347]: @ - - [20/Feb/2026:10:00:46 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18823 "" "Go-http-client/1.1" Feb 20 05:00:47 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e238 do_prune osdmap full prune enabled Feb 20 05:00:47 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e239 e239: 6 total, 6 up, 6 in Feb 20 05:00:47 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e239: 6 total, 6 up, 6 in Feb 20 05:00:47 localhost snmpd[69161]: empty variable list in _query Feb 20 05:00:47 localhost snmpd[69161]: empty variable list in _query Feb 20 05:00:47 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v523: 177 pgs: 177 active+clean; 205 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 111 KiB/s rd, 41 KiB/s wr, 153 op/s Feb 20 05:00:48 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:00:48 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice_bob", "format": "json"}]: dispatch Feb 20 05:00:48 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, 
format:json, prefix:fs subvolume deauthorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 05:00:48 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Feb 20 05:00:48 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 20 05:00:48 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Feb 20 05:00:48 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 20 05:00:48 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Feb 20 05:00:48 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Feb 20 05:00:48 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Feb 20 05:00:48 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Feb 20 05:00:48 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 05:00:48 localhost 
ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice_bob", "format": "json"}]: dispatch Feb 20 05:00:48 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 05:00:48 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb Feb 20 05:00:48 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 20 05:00:48 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 05:00:48 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "65f6f73f-d610-45f7-b23e-11db4d12fdda", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 05:00:48 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:65f6f73f-d610-45f7-b23e-11db4d12fdda, vol_name:cephfs) < "" Feb 20 05:00:48 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/65f6f73f-d610-45f7-b23e-11db4d12fdda/.meta.tmp' Feb 20 05:00:48 localhost ceph-mgr[286565]: [volumes INFO 
volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/65f6f73f-d610-45f7-b23e-11db4d12fdda/.meta.tmp' to config b'/volumes/_nogroup/65f6f73f-d610-45f7-b23e-11db4d12fdda/.meta' Feb 20 05:00:48 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:65f6f73f-d610-45f7-b23e-11db4d12fdda, vol_name:cephfs) < "" Feb 20 05:00:48 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "65f6f73f-d610-45f7-b23e-11db4d12fdda", "format": "json"}]: dispatch Feb 20 05:00:48 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:65f6f73f-d610-45f7-b23e-11db4d12fdda, vol_name:cephfs) < "" Feb 20 05:00:48 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:65f6f73f-d610-45f7-b23e-11db4d12fdda, vol_name:cephfs) < "" Feb 20 05:00:48 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 05:00:48 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 05:00:48 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 20 05:00:48 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 05:00:48 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command 
mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 20 05:00:48 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 05:00:48 localhost ceph-mgr[286565]: [progress INFO root] update: starting ev ded25243-6b04-4cab-a40d-c5b49986be90 (Updating node-proxy deployment (+3 -> 3)) Feb 20 05:00:48 localhost ceph-mgr[286565]: [progress INFO root] complete: finished ev ded25243-6b04-4cab-a40d-c5b49986be90 (Updating node-proxy deployment (+3 -> 3)) Feb 20 05:00:48 localhost ceph-mgr[286565]: [progress INFO root] Completed event ded25243-6b04-4cab-a40d-c5b49986be90 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Feb 20 05:00:48 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 20 05:00:48 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 20 05:00:49 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 05:00:49 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 05:00:49 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 20 05:00:49 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/3919460560' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 20 05:00:49 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 20 05:00:49 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3919460560' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 20 05:00:49 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v524: 177 pgs: 177 active+clean; 205 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 39 KiB/s rd, 76 KiB/s wr, 62 op/s Feb 20 05:00:50 localhost nova_compute[280804]: 2026-02-20 10:00:50.029 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:00:50 localhost nova_compute[280804]: 2026-02-20 10:00:50.643 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:00:51 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice bob", "tenant_id": "8e04bc360fa14db4a793bc5de7a0a299", "access_level": "rw", "format": "json"}]: dispatch Feb 20 05:00:51 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, tenant_id:8e04bc360fa14db4a793bc5de7a0a299, vol_name:cephfs) < "" Feb 20 05:00:51 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v525: 177 pgs: 177 active+clean; 205 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 91 KiB/s rd, 74 KiB/s wr, 129 op/s Feb 20 
05:00:51 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Feb 20 05:00:51 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 20 05:00:51 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: Creating meta for ID alice bob with tenant 8e04bc360fa14db4a793bc5de7a0a299 Feb 20 05:00:51 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} v 0) Feb 20 05:00:51 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch Feb 20 05:00:51 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished Feb 20 05:00:51 
localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, tenant_id:8e04bc360fa14db4a793bc5de7a0a299, vol_name:cephfs) < "" Feb 20 05:00:52 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 20 05:00:52 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch Feb 20 05:00:52 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished Feb 20 05:00:52 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 1 addresses Feb 20 05:00:52 localhost podman[325881]: 2026-02-20 10:00:52.375159504 +0000 UTC m=+0.061951117 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Feb 20 05:00:52 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 05:00:52 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 05:00:52 localhost nova_compute[280804]: 2026-02-20 10:00:52.661 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:00:53 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:00:53 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e239 do_prune osdmap full prune enabled Feb 20 05:00:53 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e240 e240: 6 total, 6 up, 6 in Feb 20 05:00:53 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e240: 6 total, 6 up, 6 in Feb 20 05:00:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 05:00:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. Feb 20 05:00:53 localhost sshd[325903]: main: sshd: ssh-rsa algorithm is disabled Feb 20 05:00:53 localhost ceph-mon[292786]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #61. Immutable memtables: 0. 
Feb 20 05:00:53 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:00:53.416419) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Feb 20 05:00:53 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 61 Feb 20 05:00:53 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581653416505, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2662, "num_deletes": 273, "total_data_size": 2725978, "memory_usage": 2814832, "flush_reason": "Manual Compaction"} Feb 20 05:00:53 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #62: started Feb 20 05:00:53 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581653430962, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 62, "file_size": 2672399, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33073, "largest_seqno": 35734, "table_properties": {"data_size": 2660779, "index_size": 7293, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3269, "raw_key_size": 29325, "raw_average_key_size": 22, "raw_value_size": 2635779, "raw_average_value_size": 2044, "num_data_blocks": 311, "num_entries": 1289, "num_filter_entries": 1289, "num_deletions": 273, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771581543, "oldest_key_time": 1771581543, "file_creation_time": 1771581653, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a5b0c71e-1a28-4ac7-8b68-08edb74002f2", "db_session_id": "54EDA52XUT1SDV7DF7Y7", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}} Feb 20 05:00:53 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 14632 microseconds, and 7238 cpu microseconds. Feb 20 05:00:53 localhost ceph-mon[292786]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Feb 20 05:00:53 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:00:53.431047) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #62: 2672399 bytes OK Feb 20 05:00:53 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:00:53.431092) [db/memtable_list.cc:519] [default] Level-0 commit table #62 started Feb 20 05:00:53 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:00:53.433867) [db/memtable_list.cc:722] [default] Level-0 commit table #62: memtable #1 done Feb 20 05:00:53 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:00:53.433899) EVENT_LOG_v1 {"time_micros": 1771581653433889, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Feb 20 05:00:53 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:00:53.433936) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Feb 20 05:00:53 localhost ceph-mon[292786]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 2713719, prev total WAL file 
size 2713719, number of live WAL files 2. Feb 20 05:00:53 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000058.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 20 05:00:53 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:00:53.435204) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132323939' seq:72057594037927935, type:22 .. '7061786F73003132353531' seq:0, type:0; will stop at (end) Feb 20 05:00:53 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00 Feb 20 05:00:53 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [62(2609KB)], [60(18MB)] Feb 20 05:00:53 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581653435279, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [62], "files_L6": [60], "score": -1, "input_data_size": 21603625, "oldest_snapshot_seqno": -1} Feb 20 05:00:53 localhost systemd[1]: tmp-crun.7VIKaw.mount: Deactivated successfully. Feb 20 05:00:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 20 05:00:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 05:00:53 localhost podman[325904]: 2026-02-20 10:00:53.466943753 +0000 UTC m=+0.100861423 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3) Feb 20 05:00:53 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "65f6f73f-d610-45f7-b23e-11db4d12fdda", "format": "json"}]: dispatch Feb 20 05:00:53 localhost ceph-mgr[286565]: [volumes INFO volumes.module] 
Starting _cmd_fs_clone_status(clone_name:65f6f73f-d610-45f7-b23e-11db4d12fdda, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:00:53 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:65f6f73f-d610-45f7-b23e-11db4d12fdda, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:00:53 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '65f6f73f-d610-45f7-b23e-11db4d12fdda' of type subvolume Feb 20 05:00:53 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:00:53.482+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '65f6f73f-d610-45f7-b23e-11db4d12fdda' of type subvolume Feb 20 05:00:53 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "65f6f73f-d610-45f7-b23e-11db4d12fdda", "force": true, "format": "json"}]: dispatch Feb 20 05:00:53 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:65f6f73f-d610-45f7-b23e-11db4d12fdda, vol_name:cephfs) < "" Feb 20 05:00:53 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/65f6f73f-d610-45f7-b23e-11db4d12fdda'' moved to trashcan Feb 20 05:00:53 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 05:00:53 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:65f6f73f-d610-45f7-b23e-11db4d12fdda, vol_name:cephfs) < "" Feb 20 05:00:53 localhost podman[325905]: 2026-02-20 10:00:53.504094452 +0000 UTC m=+0.133412018 
container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127) Feb 20 05:00:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 20 05:00:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 05:00:53 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #63: 13873 keys, 19893889 bytes, temperature: kUnknown Feb 20 05:00:53 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581653532364, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 63, "file_size": 19893889, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 19810209, "index_size": 47931, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 34693, "raw_key_size": 369885, "raw_average_key_size": 26, "raw_value_size": 19570119, "raw_average_value_size": 1410, "num_data_blocks": 1820, "num_entries": 13873, "num_filter_entries": 13873, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580572, "oldest_key_time": 0, "file_creation_time": 1771581653, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a5b0c71e-1a28-4ac7-8b68-08edb74002f2", "db_session_id": "54EDA52XUT1SDV7DF7Y7", "orig_file_number": 63, "seqno_to_time_mapping": "N/A"}} Feb 20 05:00:53 localhost ceph-mon[292786]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Feb 20 05:00:53 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:00:53.532636) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 19893889 bytes Feb 20 05:00:53 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:00:53.535471) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 222.3 rd, 204.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 18.1 +0.0 blob) out(19.0 +0.0 blob), read-write-amplify(15.5) write-amplify(7.4) OK, records in: 14429, records dropped: 556 output_compression: NoCompression Feb 20 05:00:53 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:00:53.535500) EVENT_LOG_v1 {"time_micros": 1771581653535488, "job": 36, "event": "compaction_finished", "compaction_time_micros": 97163, "compaction_time_cpu_micros": 55420, "output_level": 6, "num_output_files": 1, "total_output_size": 19893889, "num_input_records": 14429, "num_output_records": 13873, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Feb 20 05:00:53 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 20 05:00:53 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581653536113, "job": 36, "event": "table_file_deletion", "file_number": 62} Feb 20 05:00:53 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000060.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 20 05:00:53 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581653539259, 
"job": 36, "event": "table_file_deletion", "file_number": 60} Feb 20 05:00:53 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:00:53.435126) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 05:00:53 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:00:53.539302) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 05:00:53 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:00:53.539310) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 05:00:53 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:00:53.539314) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 05:00:53 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:00:53.539319) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 05:00:53 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:00:53.539323) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 05:00:53 localhost podman[325905]: 2026-02-20 10:00:53.55982557 +0000 UTC m=+0.189143156 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 
'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 05:00:53 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. Feb 20 05:00:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 20 05:00:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 05:00:53 localhost podman[325904]: 2026-02-20 10:00:53.603949127 +0000 UTC m=+0.237866777 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 05:00:53 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. 
Feb 20 05:00:53 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v527: 177 pgs: 177 active+clean; 205 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 60 KiB/s rd, 43 KiB/s wr, 83 op/s Feb 20 05:00:53 localhost ceph-mgr[286565]: [progress INFO root] Writing back 50 completed events Feb 20 05:00:53 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 20 05:00:53 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 05:00:54 localhost sshd[325946]: main: sshd: ssh-rsa algorithm is disabled Feb 20 05:00:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1. Feb 20 05:00:54 localhost podman[325948]: 2026-02-20 10:00:54.705407605 +0000 UTC m=+0.077193186 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 20 05:00:54 localhost podman[325948]: 2026-02-20 10:00:54.743769307 +0000 
UTC m=+0.115554898 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 20 05:00:54 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully. 
Feb 20 05:00:54 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 05:00:55 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice bob", "format": "json"}]: dispatch Feb 20 05:00:55 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 05:00:55 localhost nova_compute[280804]: 2026-02-20 10:00:55.032 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:00:55 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Feb 20 05:00:55 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 20 05:00:55 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Feb 20 05:00:55 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 20 05:00:55 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Feb 20 05:00:55 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 05:00:55 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice bob", "format": "json"}]: dispatch Feb 20 05:00:55 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 05:00:55 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb Feb 20 05:00:55 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 20 05:00:55 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 05:00:55 localhost nova_compute[280804]: 2026-02-20 10:00:55.644 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:00:55 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v528: 177 pgs: 177 active+clean; 206 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 57 KiB/s rd, 68 KiB/s wr, 83 op/s Feb 20 05:00:55 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 20 05:00:55 localhost ceph-mon[292786]: from='mgr.44375 
172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 20 05:00:55 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Feb 20 05:00:57 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v529: 177 pgs: 177 active+clean; 206 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 48 KiB/s rd, 57 KiB/s wr, 69 op/s Feb 20 05:00:58 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:00:58 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e240 do_prune osdmap full prune enabled Feb 20 05:00:58 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e241 e241: 6 total, 6 up, 6 in Feb 20 05:00:58 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e241: 6 total, 6 up, 6 in Feb 20 05:00:58 localhost openstack_network_exporter[243776]: ERROR 10:00:58 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 20 05:00:58 localhost openstack_network_exporter[243776]: Feb 20 05:00:58 localhost openstack_network_exporter[243776]: ERROR 10:00:58 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 20 05:00:58 localhost openstack_network_exporter[243776]: Feb 20 05:00:58 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice bob", "tenant_id": "8e04bc360fa14db4a793bc5de7a0a299", "access_level": "r", "format": "json"}]: dispatch Feb 20 05:00:58 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, 
format:json, prefix:fs subvolume authorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, tenant_id:8e04bc360fa14db4a793bc5de7a0a299, vol_name:cephfs) < "" Feb 20 05:00:58 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Feb 20 05:00:58 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 20 05:00:58 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: Creating meta for ID alice bob with tenant 8e04bc360fa14db4a793bc5de7a0a299 Feb 20 05:00:58 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} v 0) Feb 20 05:00:58 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch Feb 20 05:00:58 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", 
"osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished Feb 20 05:00:58 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, tenant_id:8e04bc360fa14db4a793bc5de7a0a299, vol_name:cephfs) < "" Feb 20 05:00:59 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 20 05:00:59 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch Feb 20 05:00:59 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished Feb 20 05:00:59 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e241 do_prune osdmap full prune enabled Feb 20 05:00:59 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e242 e242: 6 total, 6 up, 6 in Feb 20 05:00:59 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e242: 6 total, 6 up, 6 in Feb 20 05:00:59 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v532: 177 pgs: 177 
active+clean; 206 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 156 B/s rd, 89 KiB/s wr, 8 op/s Feb 20 05:01:00 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5df751b6-6f21-4d96-be03-f1dd0a841066", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 05:01:00 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5df751b6-6f21-4d96-be03-f1dd0a841066, vol_name:cephfs) < "" Feb 20 05:01:00 localhost nova_compute[280804]: 2026-02-20 10:01:00.035 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:01:00 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5df751b6-6f21-4d96-be03-f1dd0a841066/.meta.tmp' Feb 20 05:01:00 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5df751b6-6f21-4d96-be03-f1dd0a841066/.meta.tmp' to config b'/volumes/_nogroup/5df751b6-6f21-4d96-be03-f1dd0a841066/.meta' Feb 20 05:01:00 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5df751b6-6f21-4d96-be03-f1dd0a841066, vol_name:cephfs) < "" Feb 20 05:01:00 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5df751b6-6f21-4d96-be03-f1dd0a841066", "format": "json"}]: dispatch Feb 20 05:01:00 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5df751b6-6f21-4d96-be03-f1dd0a841066, vol_name:cephfs) < "" Feb 20 05:01:00 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5df751b6-6f21-4d96-be03-f1dd0a841066, vol_name:cephfs) < "" Feb 20 05:01:00 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e242 do_prune osdmap full prune enabled Feb 20 05:01:00 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e243 e243: 6 total, 6 up, 6 in Feb 20 05:01:00 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e243: 6 total, 6 up, 6 in Feb 20 05:01:00 localhost nova_compute[280804]: 2026-02-20 10:01:00.648 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:01:01 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v534: 177 pgs: 177 active+clean; 206 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 22 KiB/s rd, 63 KiB/s wr, 39 op/s Feb 20 05:01:01 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice bob", "format": "json"}]: dispatch Feb 20 05:01:01 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 05:01:01 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Feb 20 05:01:01 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", 
"entity": "client.alice bob", "format": "json"} : dispatch Feb 20 05:01:01 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Feb 20 05:01:01 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 20 05:01:01 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Feb 20 05:01:01 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 05:01:01 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice bob", "format": "json"}]: dispatch Feb 20 05:01:01 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 05:01:01 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb Feb 20 05:01:01 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 20 05:01:01 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, 
sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 05:01:02 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Feb 20 05:01:02 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Feb 20 05:01:02 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Feb 20 05:01:02 localhost nova_compute[280804]: 2026-02-20 10:01:02.498 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:01:02 localhost ovn_metadata_agent[161761]: 2026-02-20 10:01:02.498 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'c2:c2:14', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '46:d0:69:88:97:47'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 05:01:02 localhost ovn_metadata_agent[161761]: 2026-02-20 10:01:02.500 161766 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Feb 20 05:01:02 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e243 do_prune osdmap full prune enabled Feb 20 05:01:02 localhost ceph-mon[292786]: 
mon.np0005625202@0(leader).osd e244 e244: 6 total, 6 up, 6 in Feb 20 05:01:02 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e244: 6 total, 6 up, 6 in Feb 20 05:01:03 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:01:03 localhost ceph-mon[292786]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #64. Immutable memtables: 0. Feb 20 05:01:03 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:01:03.107501) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Feb 20 05:01:03 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 64 Feb 20 05:01:03 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581663107568, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 445, "num_deletes": 250, "total_data_size": 274557, "memory_usage": 283408, "flush_reason": "Manual Compaction"} Feb 20 05:01:03 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #65: started Feb 20 05:01:03 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581663112214, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 65, "file_size": 270910, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35735, "largest_seqno": 36179, "table_properties": {"data_size": 268258, "index_size": 699, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 7400, "raw_average_key_size": 21, "raw_value_size": 262609, "raw_average_value_size": 
746, "num_data_blocks": 31, "num_entries": 352, "num_filter_entries": 352, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771581653, "oldest_key_time": 1771581653, "file_creation_time": 1771581663, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a5b0c71e-1a28-4ac7-8b68-08edb74002f2", "db_session_id": "54EDA52XUT1SDV7DF7Y7", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}} Feb 20 05:01:03 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 4763 microseconds, and 1953 cpu microseconds. Feb 20 05:01:03 localhost ceph-mon[292786]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Feb 20 05:01:03 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:01:03.112262) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #65: 270910 bytes OK Feb 20 05:01:03 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:01:03.112284) [db/memtable_list.cc:519] [default] Level-0 commit table #65 started Feb 20 05:01:03 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:01:03.114480) [db/memtable_list.cc:722] [default] Level-0 commit table #65: memtable #1 done Feb 20 05:01:03 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:01:03.114508) EVENT_LOG_v1 {"time_micros": 1771581663114499, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Feb 20 05:01:03 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:01:03.114531) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Feb 20 05:01:03 localhost ceph-mon[292786]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 271768, prev total WAL file size 272092, number of live WAL files 2. Feb 20 05:01:03 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000061.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 20 05:01:03 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:01:03.115162) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740034303036' seq:72057594037927935, type:22 .. 
'6D6772737461740034323537' seq:0, type:0; will stop at (end) Feb 20 05:01:03 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00 Feb 20 05:01:03 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [65(264KB)], [63(18MB)] Feb 20 05:01:03 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581663115205, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [65], "files_L6": [63], "score": -1, "input_data_size": 20164799, "oldest_snapshot_seqno": -1} Feb 20 05:01:03 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #66: 13702 keys, 18046459 bytes, temperature: kUnknown Feb 20 05:01:03 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581663197343, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 66, "file_size": 18046459, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17968401, "index_size": 42693, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 34309, "raw_key_size": 366678, "raw_average_key_size": 26, "raw_value_size": 17735775, "raw_average_value_size": 1294, "num_data_blocks": 1601, "num_entries": 13702, "num_filter_entries": 13702, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; 
strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580572, "oldest_key_time": 0, "file_creation_time": 1771581663, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a5b0c71e-1a28-4ac7-8b68-08edb74002f2", "db_session_id": "54EDA52XUT1SDV7DF7Y7", "orig_file_number": 66, "seqno_to_time_mapping": "N/A"}} Feb 20 05:01:03 localhost ceph-mon[292786]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Feb 20 05:01:03 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:01:03.197740) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 18046459 bytes Feb 20 05:01:03 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:01:03.199989) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 245.0 rd, 219.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 19.0 +0.0 blob) out(17.2 +0.0 blob), read-write-amplify(141.0) write-amplify(66.6) OK, records in: 14225, records dropped: 523 output_compression: NoCompression Feb 20 05:01:03 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:01:03.200020) EVENT_LOG_v1 {"time_micros": 1771581663200007, "job": 38, "event": "compaction_finished", "compaction_time_micros": 82318, "compaction_time_cpu_micros": 54121, "output_level": 6, "num_output_files": 1, "total_output_size": 18046459, "num_input_records": 14225, "num_output_records": 13702, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Feb 20 05:01:03 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file 
/var/lib/ceph/mon/ceph-np0005625202/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 20 05:01:03 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581663200219, "job": 38, "event": "table_file_deletion", "file_number": 65} Feb 20 05:01:03 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000063.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 20 05:01:03 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581663203445, "job": 38, "event": "table_file_deletion", "file_number": 63} Feb 20 05:01:03 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:01:03.115118) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 05:01:03 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:01:03.203558) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 05:01:03 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:01:03.203568) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 05:01:03 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:01:03.203572) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 05:01:03 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:01:03.203576) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 05:01:03 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:01:03.203579) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 05:01:03 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": 
"5df751b6-6f21-4d96-be03-f1dd0a841066", "snap_name": "ba024a11-8c4d-4adf-9f1b-141c894e0dc3", "format": "json"}]: dispatch Feb 20 05:01:03 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:ba024a11-8c4d-4adf-9f1b-141c894e0dc3, sub_name:5df751b6-6f21-4d96-be03-f1dd0a841066, vol_name:cephfs) < "" Feb 20 05:01:03 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:ba024a11-8c4d-4adf-9f1b-141c894e0dc3, sub_name:5df751b6-6f21-4d96-be03-f1dd0a841066, vol_name:cephfs) < "" Feb 20 05:01:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e. Feb 20 05:01:03 localhost podman[325985]: 2026-02-20 10:01:03.450063639 +0000 UTC m=+0.086203419 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 20 05:01:03 localhost podman[325985]: 2026-02-20 10:01:03.487879056 +0000 UTC m=+0.124018826 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, 
container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 20 05:01:03 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully. Feb 20 05:01:03 localhost nova_compute[280804]: 2026-02-20 10:01:03.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 05:01:03 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v536: 177 pgs: 177 active+clean; 206 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 24 KiB/s rd, 69 KiB/s wr, 43 op/s Feb 20 05:01:04 localhost ovn_metadata_agent[161761]: 2026-02-20 10:01:04.501 161766 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0a83b6be-9fe2-42ef-8768-88847d97b165, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 20 05:01:04 localhost nova_compute[280804]: 2026-02-20 10:01:04.511 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 05:01:04 localhost nova_compute[280804]: 2026-02-20 10:01:04.511 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 20 05:01:04 localhost nova_compute[280804]: 2026-02-20 10:01:04.512 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Rebuilding the list of instances to 
heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 20 05:01:04 localhost nova_compute[280804]: 2026-02-20 10:01:04.532 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 20 05:01:04 localhost nova_compute[280804]: 2026-02-20 10:01:04.532 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 05:01:04 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "bob", "tenant_id": "8e04bc360fa14db4a793bc5de7a0a299", "access_level": "rw", "format": "json"}]: dispatch Feb 20 05:01:04 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, tenant_id:8e04bc360fa14db4a793bc5de7a0a299, vol_name:cephfs) < "" Feb 20 05:01:04 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) Feb 20 05:01:04 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Feb 20 05:01:04 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: Creating meta for ID bob with tenant 8e04bc360fa14db4a793bc5de7a0a299 Feb 20 05:01:05 localhost 
nova_compute[280804]: 2026-02-20 10:01:05.037 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:01:05 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e244 do_prune osdmap full prune enabled Feb 20 05:01:05 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e245 e245: 6 total, 6 up, 6 in Feb 20 05:01:05 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e245: 6 total, 6 up, 6 in Feb 20 05:01:05 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} v 0) Feb 20 05:01:05 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch Feb 20 05:01:05 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished Feb 20 05:01:05 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, tenant_id:8e04bc360fa14db4a793bc5de7a0a299, vol_name:cephfs) < "" Feb 20 05:01:05 localhost nova_compute[280804]: 2026-02-20 10:01:05.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 05:01:05 localhost nova_compute[280804]: 2026-02-20 10:01:05.511 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 20 05:01:05 localhost nova_compute[280804]: 2026-02-20 10:01:05.511 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 05:01:05 localhost nova_compute[280804]: 2026-02-20 10:01:05.511 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Feb 20 05:01:05 localhost nova_compute[280804]: 2026-02-20 10:01:05.651 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:01:05 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v538: 177 pgs: 177 active+clean; 206 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 67 KiB/s rd, 51 KiB/s wr, 99 op/s Feb 20 05:01:05 localhost ceph-mon[292786]: from='mgr.44375 
172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Feb 20 05:01:05 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch Feb 20 05:01:05 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished Feb 20 05:01:05 localhost ovn_metadata_agent[161761]: 2026-02-20 10:01:05.925 161766 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 05:01:05 localhost ovn_metadata_agent[161761]: 2026-02-20 10:01:05.925 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 05:01:05 localhost ovn_metadata_agent[161761]: 2026-02-20 10:01:05.925 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 05:01:06 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "5df751b6-6f21-4d96-be03-f1dd0a841066", "snap_name": "ba024a11-8c4d-4adf-9f1b-141c894e0dc3", "target_sub_name": "97e63579-f59d-4812-9af1-a8d227932ace", "format": "json"}]: dispatch Feb 20 05:01:06 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:ba024a11-8c4d-4adf-9f1b-141c894e0dc3, sub_name:5df751b6-6f21-4d96-be03-f1dd0a841066, target_sub_name:97e63579-f59d-4812-9af1-a8d227932ace, vol_name:cephfs) < "" Feb 20 05:01:06 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/97e63579-f59d-4812-9af1-a8d227932ace/.meta.tmp' Feb 20 05:01:06 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/97e63579-f59d-4812-9af1-a8d227932ace/.meta.tmp' to config b'/volumes/_nogroup/97e63579-f59d-4812-9af1-a8d227932ace/.meta' Feb 20 05:01:06 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.clone_index] tracking-id c4d4956e-9362-47f8-971c-343e174eefd4 for path b'/volumes/_nogroup/97e63579-f59d-4812-9af1-a8d227932ace' Feb 20 05:01:06 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/5df751b6-6f21-4d96-be03-f1dd0a841066/.meta.tmp' Feb 20 05:01:06 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5df751b6-6f21-4d96-be03-f1dd0a841066/.meta.tmp' to config b'/volumes/_nogroup/5df751b6-6f21-4d96-be03-f1dd0a841066/.meta' Feb 20 05:01:06 localhost ceph-mgr[286565]: [volumes INFO 
volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 05:01:06 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:ba024a11-8c4d-4adf-9f1b-141c894e0dc3, sub_name:5df751b6-6f21-4d96-be03-f1dd0a841066, target_sub_name:97e63579-f59d-4812-9af1-a8d227932ace, vol_name:cephfs) < "" Feb 20 05:01:06 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "97e63579-f59d-4812-9af1-a8d227932ace", "format": "json"}]: dispatch Feb 20 05:01:06 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:97e63579-f59d-4812-9af1-a8d227932ace, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:01:06 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:01:06.919+0000 7f7457cdf640 -1 client.0 error registering admin socket command: (17) File exists Feb 20 05:01:06 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists Feb 20 05:01:06 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:01:06.919+0000 7f7457cdf640 -1 client.0 error registering admin socket command: (17) File exists Feb 20 05:01:06 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists Feb 20 05:01:06 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:01:06.919+0000 7f7457cdf640 -1 client.0 error registering admin socket command: (17) File exists Feb 20 05:01:06 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists Feb 20 05:01:06 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:01:06.919+0000 7f7457cdf640 -1 client.0 error 
registering admin socket command: (17) File exists Feb 20 05:01:06 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists Feb 20 05:01:06 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:01:06.919+0000 7f7457cdf640 -1 client.0 error registering admin socket command: (17) File exists Feb 20 05:01:06 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists Feb 20 05:01:06 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:97e63579-f59d-4812-9af1-a8d227932ace, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:01:06 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/97e63579-f59d-4812-9af1-a8d227932ace Feb 20 05:01:06 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, 97e63579-f59d-4812-9af1-a8d227932ace) Feb 20 05:01:06 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:01:06.949+0000 7f74584e0640 -1 client.0 error registering admin socket command: (17) File exists Feb 20 05:01:06 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists Feb 20 05:01:06 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:01:06.949+0000 7f74584e0640 -1 client.0 error registering admin socket command: (17) File exists Feb 20 05:01:06 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists Feb 20 05:01:06 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:01:06.949+0000 7f74584e0640 -1 client.0 error registering admin socket command: (17) File exists Feb 20 05:01:06 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists Feb 20 05:01:06 
localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:01:06.949+0000 7f74584e0640 -1 client.0 error registering admin socket command: (17) File exists Feb 20 05:01:06 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists Feb 20 05:01:06 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:01:06.949+0000 7f74584e0640 -1 client.0 error registering admin socket command: (17) File exists Feb 20 05:01:06 localhost ceph-mgr[286565]: client.0 error registering admin socket command: (17) File exists Feb 20 05:01:06 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, 97e63579-f59d-4812-9af1-a8d227932ace) -- by 0 seconds Feb 20 05:01:07 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/97e63579-f59d-4812-9af1-a8d227932ace/.meta.tmp' Feb 20 05:01:07 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/97e63579-f59d-4812-9af1-a8d227932ace/.meta.tmp' to config b'/volumes/_nogroup/97e63579-f59d-4812-9af1-a8d227932ace/.meta' Feb 20 05:01:07 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e245 do_prune osdmap full prune enabled Feb 20 05:01:07 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e246 e246: 6 total, 6 up, 6 in Feb 20 05:01:07 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e246: 6 total, 6 up, 6 in Feb 20 05:01:07 localhost nova_compute[280804]: 2026-02-20 10:01:07.520 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 05:01:07 localhost nova_compute[280804]: 2026-02-20 10:01:07.521 280808 DEBUG 
oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 05:01:07 localhost nova_compute[280804]: 2026-02-20 10:01:07.521 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 05:01:07 localhost nova_compute[280804]: 2026-02-20 10:01:07.545 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 05:01:07 localhost nova_compute[280804]: 2026-02-20 10:01:07.545 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 05:01:07 localhost nova_compute[280804]: 2026-02-20 10:01:07.546 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 05:01:07 localhost nova_compute[280804]: 2026-02-20 10:01:07.546 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Auditing locally available compute resources for np0005625202.localdomain (node: np0005625202.localdomain) 
update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 20 05:01:07 localhost nova_compute[280804]: 2026-02-20 10:01:07.546 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 05:01:07 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v540: 177 pgs: 177 active+clean; 206 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 44 KiB/s rd, 47 KiB/s wr, 65 op/s Feb 20 05:01:07 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 20 05:01:07 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/1804254351' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 20 05:01:07 localhost nova_compute[280804]: 2026-02-20 10:01:07.947 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.401s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 05:01:08 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:01:08 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e246 do_prune osdmap full prune enabled Feb 20 05:01:08 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e247 e247: 6 total, 6 up, 6 in Feb 20 05:01:08 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e247: 6 total, 6 up, 6 in Feb 20 05:01:08 localhost nova_compute[280804]: 2026-02-20 10:01:08.151 280808 WARNING nova.virt.libvirt.driver [None 
req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 20 05:01:08 localhost nova_compute[280804]: 2026-02-20 10:01:08.152 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Hypervisor/Node resource view: name=np0005625202.localdomain free_ram=11376MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 20 05:01:08 localhost nova_compute[280804]: 2026-02-20 10:01:08.153 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 05:01:08 localhost nova_compute[280804]: 2026-02-20 10:01:08.153 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 05:01:08 localhost nova_compute[280804]: 2026-02-20 10:01:08.353 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 20 05:01:08 localhost nova_compute[280804]: 2026-02-20 10:01:08.355 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Final resource view: name=np0005625202.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 20 05:01:08 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : mgrmap e52: np0005625202.arwxwo(active, since 12m), standbys: np0005625203.lonygy, np0005625204.exgrzx Feb 20 05:01:08 localhost nova_compute[280804]: 2026-02-20 10:01:08.502 280808 DEBUG nova.scheduler.client.report [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Refreshing inventories for resource provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Feb 20 05:01:08 localhost nova_compute[280804]: 2026-02-20 10:01:08.663 280808 DEBUG nova.scheduler.client.report [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Updating ProviderTree inventory for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Feb 20 05:01:08 localhost nova_compute[280804]: 2026-02-20 10:01:08.664 280808 DEBUG nova.compute.provider_tree [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Updating inventory in ProviderTree for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory 
/usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Feb 20 05:01:08 localhost nova_compute[280804]: 2026-02-20 10:01:08.678 280808 DEBUG nova.scheduler.client.report [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Refreshing aggregate associations for resource provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Feb 20 05:01:08 localhost nova_compute[280804]: 2026-02-20 10:01:08.697 280808 DEBUG nova.scheduler.client.report [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Refreshing trait associations for resource provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6, traits: COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_1_2,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE2,HW_CPU_X86_AVX2,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_CLMUL,HW_CPU_X86_F16C,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_FMA3,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE
4A,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_ABM,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Feb 20 05:01:08 localhost nova_compute[280804]: 2026-02-20 10:01:08.713 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 05:01:08 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 20 05:01:08 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1460565932' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 20 05:01:08 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 20 05:01:08 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1460565932' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 20 05:01:09 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 20 05:01:09 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/3375971377' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 20 05:01:09 localhost nova_compute[280804]: 2026-02-20 10:01:09.171 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 05:01:09 localhost nova_compute[280804]: 2026-02-20 10:01:09.179 280808 DEBUG nova.compute.provider_tree [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 20 05:01:09 localhost nova_compute[280804]: 2026-02-20 10:01:09.196 280808 DEBUG nova.scheduler.client.report [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Inventory has not changed for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 20 05:01:09 localhost nova_compute[280804]: 2026-02-20 10:01:09.222 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Compute_service record updated for np0005625202.localdomain:np0005625202.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 20 05:01:09 localhost nova_compute[280804]: 2026-02-20 10:01:09.223 280808 DEBUG 
oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.070s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 05:01:09 localhost nova_compute[280804]: 2026-02-20 10:01:09.224 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 05:01:09 localhost nova_compute[280804]: 2026-02-20 10:01:09.224 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Feb 20 05:01:09 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8be201ef-8dd5-4872-91e4-0290b94f4b6d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 05:01:09 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8be201ef-8dd5-4872-91e4-0290b94f4b6d, vol_name:cephfs) < "" Feb 20 05:01:09 localhost nova_compute[280804]: 2026-02-20 10:01:09.247 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Feb 20 05:01:09 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_cloner] copying data from 
b'/volumes/_nogroup/5df751b6-6f21-4d96-be03-f1dd0a841066/.snap/ba024a11-8c4d-4adf-9f1b-141c894e0dc3/e8a5d4a4-1a36-42cd-a26f-ca2c48bfdb7b' to b'/volumes/_nogroup/97e63579-f59d-4812-9af1-a8d227932ace/050c8a25-1299-433b-aa67-b8fc39f45604' Feb 20 05:01:09 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8be201ef-8dd5-4872-91e4-0290b94f4b6d/.meta.tmp' Feb 20 05:01:09 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8be201ef-8dd5-4872-91e4-0290b94f4b6d/.meta.tmp' to config b'/volumes/_nogroup/8be201ef-8dd5-4872-91e4-0290b94f4b6d/.meta' Feb 20 05:01:09 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8be201ef-8dd5-4872-91e4-0290b94f4b6d, vol_name:cephfs) < "" Feb 20 05:01:09 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8be201ef-8dd5-4872-91e4-0290b94f4b6d", "format": "json"}]: dispatch Feb 20 05:01:09 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8be201ef-8dd5-4872-91e4-0290b94f4b6d, vol_name:cephfs) < "" Feb 20 05:01:09 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/97e63579-f59d-4812-9af1-a8d227932ace/.meta.tmp' Feb 20 05:01:09 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/97e63579-f59d-4812-9af1-a8d227932ace/.meta.tmp' to config b'/volumes/_nogroup/97e63579-f59d-4812-9af1-a8d227932ace/.meta' Feb 20 05:01:09 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8be201ef-8dd5-4872-91e4-0290b94f4b6d, vol_name:cephfs) < "" Feb 20 05:01:09 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.clone_index] untracking c4d4956e-9362-47f8-971c-343e174eefd4 Feb 20 05:01:09 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5df751b6-6f21-4d96-be03-f1dd0a841066/.meta.tmp' Feb 20 05:01:09 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5df751b6-6f21-4d96-be03-f1dd0a841066/.meta.tmp' to config b'/volumes/_nogroup/5df751b6-6f21-4d96-be03-f1dd0a841066/.meta' Feb 20 05:01:09 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/97e63579-f59d-4812-9af1-a8d227932ace/.meta.tmp' Feb 20 05:01:09 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/97e63579-f59d-4812-9af1-a8d227932ace/.meta.tmp' to config b'/volumes/_nogroup/97e63579-f59d-4812-9af1-a8d227932ace/.meta' Feb 20 05:01:09 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, 97e63579-f59d-4812-9af1-a8d227932ace) Feb 20 05:01:09 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v542: 177 pgs: 177 active+clean; 207 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 44 KiB/s rd, 130 KiB/s wr, 72 op/s Feb 20 05:01:10 localhost nova_compute[280804]: 2026-02-20 10:01:10.067 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:01:10 localhost nova_compute[280804]: 2026-02-20 10:01:10.653 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:01:11 localhost 
nova_compute[280804]: 2026-02-20 10:01:11.238 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 05:01:11 localhost nova_compute[280804]: 2026-02-20 10:01:11.239 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 05:01:11 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v543: 177 pgs: 177 active+clean; 207 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 80 KiB/s wr, 53 op/s Feb 20 05:01:12 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "8be201ef-8dd5-4872-91e4-0290b94f4b6d", "auth_id": "bob", "tenant_id": "8e04bc360fa14db4a793bc5de7a0a299", "access_level": "rw", "format": "json"}]: dispatch Feb 20 05:01:12 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:8be201ef-8dd5-4872-91e4-0290b94f4b6d, tenant_id:8e04bc360fa14db4a793bc5de7a0a299, vol_name:cephfs) < "" Feb 20 05:01:12 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) Feb 20 05:01:12 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Feb 20 05:01:12 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": 
"auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb,allow rw path=/volumes/_nogroup/8be201ef-8dd5-4872-91e4-0290b94f4b6d/d3136894-05be-4407-a87e-f697b6b3efd1", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5,allow rw pool=manila_data namespace=fsvolumens_8be201ef-8dd5-4872-91e4-0290b94f4b6d"]} v 0) Feb 20 05:01:12 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb,allow rw path=/volumes/_nogroup/8be201ef-8dd5-4872-91e4-0290b94f4b6d/d3136894-05be-4407-a87e-f697b6b3efd1", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5,allow rw pool=manila_data namespace=fsvolumens_8be201ef-8dd5-4872-91e4-0290b94f4b6d"]} : dispatch Feb 20 05:01:12 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb,allow rw path=/volumes/_nogroup/8be201ef-8dd5-4872-91e4-0290b94f4b6d/d3136894-05be-4407-a87e-f697b6b3efd1", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5,allow rw pool=manila_data namespace=fsvolumens_8be201ef-8dd5-4872-91e4-0290b94f4b6d"]}]': finished Feb 20 05:01:12 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) Feb 20 05:01:12 localhost ceph-mon[292786]: log_channel(audit) log 
[INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Feb 20 05:01:12 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:8be201ef-8dd5-4872-91e4-0290b94f4b6d, tenant_id:8e04bc360fa14db4a793bc5de7a0a299, vol_name:cephfs) < "" Feb 20 05:01:13 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:01:13 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e247 do_prune osdmap full prune enabled Feb 20 05:01:13 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e248 e248: 6 total, 6 up, 6 in Feb 20 05:01:13 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e248: 6 total, 6 up, 6 in Feb 20 05:01:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c. Feb 20 05:01:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217. 
Feb 20 05:01:13 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Feb 20 05:01:13 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb,allow rw path=/volumes/_nogroup/8be201ef-8dd5-4872-91e4-0290b94f4b6d/d3136894-05be-4407-a87e-f697b6b3efd1", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5,allow rw pool=manila_data namespace=fsvolumens_8be201ef-8dd5-4872-91e4-0290b94f4b6d"]} : dispatch Feb 20 05:01:13 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb,allow rw path=/volumes/_nogroup/8be201ef-8dd5-4872-91e4-0290b94f4b6d/d3136894-05be-4407-a87e-f697b6b3efd1", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5,allow rw pool=manila_data namespace=fsvolumens_8be201ef-8dd5-4872-91e4-0290b94f4b6d"]}]': finished Feb 20 05:01:13 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Feb 20 05:01:13 localhost podman[326078]: 2026-02-20 10:01:13.45749994 +0000 UTC m=+0.094249375 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, 
health_status=healthy, vendor=Red Hat, Inc., distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, container_name=openstack_network_exporter, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.openshift.expose-services=, version=9.7, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-05T04:57:10Z, release=1770267347, vcs-type=git, maintainer=Red Hat, Inc., name=ubi9/ubi-minimal) Feb 20 05:01:13 localhost podman[326078]: 2026-02-20 10:01:13.499882449 +0000 UTC m=+0.136631864 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, org.opencontainers.image.created=2026-02-05T04:57:10Z, architecture=x86_64, vcs-type=git, build-date=2026-02-05T04:57:10Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, managed_by=edpm_ansible, config_id=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, version=9.7, name=ubi9/ubi-minimal, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Feb 20 05:01:13 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. 
Feb 20 05:01:13 localhost podman[326079]: 2026-02-20 10:01:13.514435452 +0000 UTC m=+0.146087241 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, 
io.buildah.version=1.41.3) Feb 20 05:01:13 localhost podman[326079]: 2026-02-20 10:01:13.529783964 +0000 UTC m=+0.161435803 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, 
config_id=ceilometer_agent_compute) Feb 20 05:01:13 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully. Feb 20 05:01:13 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v545: 177 pgs: 177 active+clean; 207 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 34 KiB/s rd, 81 KiB/s wr, 54 op/s Feb 20 05:01:14 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e248 do_prune osdmap full prune enabled Feb 20 05:01:14 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e249 e249: 6 total, 6 up, 6 in Feb 20 05:01:14 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e249: 6 total, 6 up, 6 in Feb 20 05:01:14 localhost sshd[326115]: main: sshd: ssh-rsa algorithm is disabled Feb 20 05:01:15 localhost nova_compute[280804]: 2026-02-20 10:01:15.071 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:01:15 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e249 do_prune osdmap full prune enabled Feb 20 05:01:15 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e250 e250: 6 total, 6 up, 6 in Feb 20 05:01:15 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e250: 6 total, 6 up, 6 in Feb 20 05:01:15 localhost nova_compute[280804]: 2026-02-20 10:01:15.657 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:01:15 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v548: 177 pgs: 177 active+clean; 207 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 47 KiB/s rd, 35 KiB/s wr, 71 op/s Feb 20 05:01:15 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "8be201ef-8dd5-4872-91e4-0290b94f4b6d", "auth_id": 
"bob", "format": "json"}]: dispatch Feb 20 05:01:15 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:8be201ef-8dd5-4872-91e4-0290b94f4b6d, vol_name:cephfs) < "" Feb 20 05:01:15 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) Feb 20 05:01:15 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Feb 20 05:01:15 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5"]} v 0) Feb 20 05:01:15 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5"]} : dispatch Feb 20 05:01:15 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5"]}]': finished Feb 20 05:01:15 localhost ceph-mgr[286565]: [volumes INFO volumes.module] 
Finishing _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:8be201ef-8dd5-4872-91e4-0290b94f4b6d, vol_name:cephfs) < "" Feb 20 05:01:15 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "8be201ef-8dd5-4872-91e4-0290b94f4b6d", "auth_id": "bob", "format": "json"}]: dispatch Feb 20 05:01:15 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:8be201ef-8dd5-4872-91e4-0290b94f4b6d, vol_name:cephfs) < "" Feb 20 05:01:15 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=bob, client_metadata.root=/volumes/_nogroup/8be201ef-8dd5-4872-91e4-0290b94f4b6d/d3136894-05be-4407-a87e-f697b6b3efd1 Feb 20 05:01:15 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 20 05:01:15 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:8be201ef-8dd5-4872-91e4-0290b94f4b6d, vol_name:cephfs) < "" Feb 20 05:01:16 localhost podman[241347]: time="2026-02-20T10:01:16Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 20 05:01:16 localhost podman[241347]: @ - - [20/Feb/2026:10:01:16 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157716 "" "Go-http-client/1.1" Feb 20 05:01:16 localhost podman[241347]: @ - - [20/Feb/2026:10:01:16 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18829 "" "Go-http-client/1.1" Feb 20 05:01:16 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", 
"entity": "client.bob", "format": "json"} : dispatch Feb 20 05:01:16 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5"]} : dispatch Feb 20 05:01:16 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5"]}]': finished Feb 20 05:01:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:01:17.271 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:01:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:01:17.272 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:01:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:01:17.272 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:01:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:01:17.272 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:01:17 
localhost ceilometer_agent_compute[236653]: 2026-02-20 10:01:17.273 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:01:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:01:17.273 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:01:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:01:17.273 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:01:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:01:17.273 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:01:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:01:17.273 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:01:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:01:17.273 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:01:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:01:17.273 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:01:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:01:17.274 
12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:01:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:01:17.274 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:01:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:01:17.274 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:01:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:01:17.274 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:01:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:01:17.274 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:01:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:01:17.274 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:01:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:01:17.274 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:01:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:01:17.275 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no 
resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:01:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:01:17.275 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:01:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:01:17.275 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:01:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:01:17.275 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:01:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:01:17.275 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:01:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:01:17.275 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:01:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:01:17.275 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:01:17 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v549: 177 pgs: 177 active+clean; 207 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 31 KiB/s wr, 21 op/s Feb 20 05:01:18 localhost 
ceph-mon[292786]: mon.np0005625202@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:01:19 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "bob", "format": "json"}]: dispatch Feb 20 05:01:19 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 05:01:19 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) Feb 20 05:01:19 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Feb 20 05:01:19 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.bob"} v 0) Feb 20 05:01:19 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.bob"} : dispatch Feb 20 05:01:19 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.bob"}]': finished Feb 20 05:01:19 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 05:01:19 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : 
from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "bob", "format": "json"}]: dispatch Feb 20 05:01:19 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 05:01:19 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=bob, client_metadata.root=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb Feb 20 05:01:19 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 20 05:01:19 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 05:01:19 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v550: 177 pgs: 177 active+clean; 207 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 84 KiB/s wr, 24 op/s Feb 20 05:01:20 localhost nova_compute[280804]: 2026-02-20 10:01:20.073 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:01:20 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Feb 20 05:01:20 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.bob"} : dispatch Feb 20 05:01:20 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": 
"client.bob"}]': finished Feb 20 05:01:20 localhost nova_compute[280804]: 2026-02-20 10:01:20.660 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:01:21 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "afaf9e02-272e-4ffc-b272-cdee923c5a1d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 05:01:21 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:afaf9e02-272e-4ffc-b272-cdee923c5a1d, vol_name:cephfs) < "" Feb 20 05:01:21 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/afaf9e02-272e-4ffc-b272-cdee923c5a1d/.meta.tmp' Feb 20 05:01:21 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/afaf9e02-272e-4ffc-b272-cdee923c5a1d/.meta.tmp' to config b'/volumes/_nogroup/afaf9e02-272e-4ffc-b272-cdee923c5a1d/.meta' Feb 20 05:01:21 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:afaf9e02-272e-4ffc-b272-cdee923c5a1d, vol_name:cephfs) < "" Feb 20 05:01:21 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "afaf9e02-272e-4ffc-b272-cdee923c5a1d", "format": "json"}]: dispatch Feb 20 05:01:21 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, 
sub_name:afaf9e02-272e-4ffc-b272-cdee923c5a1d, vol_name:cephfs) < "" Feb 20 05:01:21 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:afaf9e02-272e-4ffc-b272-cdee923c5a1d, vol_name:cephfs) < "" Feb 20 05:01:21 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v551: 177 pgs: 177 active+clean; 207 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 34 KiB/s rd, 71 KiB/s wr, 55 op/s Feb 20 05:01:22 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8be201ef-8dd5-4872-91e4-0290b94f4b6d", "format": "json"}]: dispatch Feb 20 05:01:22 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:8be201ef-8dd5-4872-91e4-0290b94f4b6d, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:01:22 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:8be201ef-8dd5-4872-91e4-0290b94f4b6d, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:01:22 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8be201ef-8dd5-4872-91e4-0290b94f4b6d' of type subvolume Feb 20 05:01:22 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:01:22.719+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8be201ef-8dd5-4872-91e4-0290b94f4b6d' of type subvolume Feb 20 05:01:22 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8be201ef-8dd5-4872-91e4-0290b94f4b6d", "force": true, "format": "json"}]: dispatch Feb 20 05:01:22 localhost 
ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8be201ef-8dd5-4872-91e4-0290b94f4b6d, vol_name:cephfs) < "" Feb 20 05:01:22 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/8be201ef-8dd5-4872-91e4-0290b94f4b6d'' moved to trashcan Feb 20 05:01:22 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 05:01:22 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8be201ef-8dd5-4872-91e4-0290b94f4b6d, vol_name:cephfs) < "" Feb 20 05:01:23 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:01:23 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e250 do_prune osdmap full prune enabled Feb 20 05:01:23 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e251 e251: 6 total, 6 up, 6 in Feb 20 05:01:23 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e251: 6 total, 6 up, 6 in Feb 20 05:01:23 localhost ceph-mgr[286565]: [balancer INFO root] Optimize plan auto_2026-02-20_10:01:23 Feb 20 05:01:23 localhost ceph-mgr[286565]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Feb 20 05:01:23 localhost ceph-mgr[286565]: [balancer INFO root] do_upmap Feb 20 05:01:23 localhost ceph-mgr[286565]: [balancer INFO root] pools ['manila_data', 'manila_metadata', 'images', 'vms', '.mgr', 'backups', 'volumes'] Feb 20 05:01:23 localhost ceph-mgr[286565]: [balancer INFO root] prepared 0/10 changes Feb 20 05:01:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 20 05:01:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 05:01:23 localhost nova_compute[280804]: 2026-02-20 10:01:23.511 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 05:01:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 05:01:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 05:01:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 05:01:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 05:01:23 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v553: 177 pgs: 177 active+clean; 207 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 25 KiB/s rd, 46 KiB/s wr, 38 op/s Feb 20 05:01:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] _maybe_adjust Feb 20 05:01:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 05:01:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Feb 20 05:01:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 05:01:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003325274375348967 of space, bias 1.0, pg target 0.6650548750697934 quantized to 32 (current 32) Feb 20 05:01:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 05:01:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, 
pg target 0.29672641637004465 quantized to 32 (current 32) Feb 20 05:01:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 05:01:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Feb 20 05:01:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 05:01:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Feb 20 05:01:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 05:01:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 2.1810441094360693e-06 of space, bias 1.0, pg target 0.00043402777777777775 quantized to 32 (current 32) Feb 20 05:01:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 05:01:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0010774357900614183 of space, bias 4.0, pg target 0.857638888888889 quantized to 16 (current 16) Feb 20 05:01:23 localhost ceph-mgr[286565]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Feb 20 05:01:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 20 05:01:23 localhost ceph-mgr[286565]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Feb 20 05:01:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 20 05:01:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 20 05:01:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 20 
05:01:23 localhost ovn_controller[155916]: 2026-02-20T10:01:23Z|00268|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory Feb 20 05:01:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: images, start_after= Feb 20 05:01:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: images, start_after= Feb 20 05:01:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 20 05:01:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 20 05:01:24 localhost sshd[326119]: main: sshd: ssh-rsa algorithm is disabled Feb 20 05:01:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 05:01:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. Feb 20 05:01:24 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "afaf9e02-272e-4ffc-b272-cdee923c5a1d", "snap_name": "607e1fd9-dd9b-4387-8d26-3ad0a4a7fc6e", "format": "json"}]: dispatch Feb 20 05:01:24 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:607e1fd9-dd9b-4387-8d26-3ad0a4a7fc6e, sub_name:afaf9e02-272e-4ffc-b272-cdee923c5a1d, vol_name:cephfs) < "" Feb 20 05:01:24 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:607e1fd9-dd9b-4387-8d26-3ad0a4a7fc6e, sub_name:afaf9e02-272e-4ffc-b272-cdee923c5a1d, vol_name:cephfs) < "" Feb 20 05:01:24 localhost podman[326122]: 2026-02-20 10:01:24.759612267 +0000 UTC m=+0.352387576 container health_status 
eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Feb 20 05:01:24 localhost podman[326121]: 2026-02-20 10:01:24.727685708 +0000 UTC m=+0.324641510 container health_status 
76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team) Feb 20 05:01:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1. 
Feb 20 05:01:24 localhost podman[326122]: 2026-02-20 10:01:24.792886202 +0000 UTC m=+0.385661551 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible) Feb 20 05:01:24 localhost systemd[1]: 
eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. Feb 20 05:01:24 localhost podman[326121]: 2026-02-20 10:01:24.810934687 +0000 UTC m=+0.407890559 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Feb 20 05:01:24 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. 
Feb 20 05:01:24 localhost podman[326162]: 2026-02-20 10:01:24.866250675 +0000 UTC m=+0.075562463 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 20 05:01:24 localhost podman[326162]: 2026-02-20 10:01:24.883971911 +0000 UTC m=+0.093283679 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 20 05:01:24 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully. Feb 20 05:01:25 localhost nova_compute[280804]: 2026-02-20 10:01:25.075 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:01:25 localhost nova_compute[280804]: 2026-02-20 10:01:25.662 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:01:25 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v554: 177 pgs: 177 active+clean; 207 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 57 KiB/s wr, 33 op/s Feb 20 05:01:25 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "format": "json"}]: dispatch Feb 20 05:01:25 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:500c5eac-e7f6-4365-974d-923d6976cbd5, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:01:25 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:500c5eac-e7f6-4365-974d-923d6976cbd5, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:01:25 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '500c5eac-e7f6-4365-974d-923d6976cbd5' of type subvolume Feb 20 05:01:25 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:01:25.980+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not 
supported operation 'clone-status' is not allowed on subvolume '500c5eac-e7f6-4365-974d-923d6976cbd5' of type subvolume Feb 20 05:01:25 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "force": true, "format": "json"}]: dispatch Feb 20 05:01:25 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 05:01:25 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5'' moved to trashcan Feb 20 05:01:25 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 05:01:25 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:500c5eac-e7f6-4365-974d-923d6976cbd5, vol_name:cephfs) < "" Feb 20 05:01:27 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v555: 177 pgs: 177 active+clean; 207 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 57 KiB/s wr, 33 op/s Feb 20 05:01:27 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "afaf9e02-272e-4ffc-b272-cdee923c5a1d", "snap_name": "607e1fd9-dd9b-4387-8d26-3ad0a4a7fc6e_7d12ad99-df51-4085-a46f-32f38bb8f276", "force": true, "format": "json"}]: dispatch Feb 20 05:01:27 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:607e1fd9-dd9b-4387-8d26-3ad0a4a7fc6e_7d12ad99-df51-4085-a46f-32f38bb8f276, 
sub_name:afaf9e02-272e-4ffc-b272-cdee923c5a1d, vol_name:cephfs) < "" Feb 20 05:01:27 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/afaf9e02-272e-4ffc-b272-cdee923c5a1d/.meta.tmp' Feb 20 05:01:27 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/afaf9e02-272e-4ffc-b272-cdee923c5a1d/.meta.tmp' to config b'/volumes/_nogroup/afaf9e02-272e-4ffc-b272-cdee923c5a1d/.meta' Feb 20 05:01:27 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:607e1fd9-dd9b-4387-8d26-3ad0a4a7fc6e_7d12ad99-df51-4085-a46f-32f38bb8f276, sub_name:afaf9e02-272e-4ffc-b272-cdee923c5a1d, vol_name:cephfs) < "" Feb 20 05:01:27 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "afaf9e02-272e-4ffc-b272-cdee923c5a1d", "snap_name": "607e1fd9-dd9b-4387-8d26-3ad0a4a7fc6e", "force": true, "format": "json"}]: dispatch Feb 20 05:01:27 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:607e1fd9-dd9b-4387-8d26-3ad0a4a7fc6e, sub_name:afaf9e02-272e-4ffc-b272-cdee923c5a1d, vol_name:cephfs) < "" Feb 20 05:01:28 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/afaf9e02-272e-4ffc-b272-cdee923c5a1d/.meta.tmp' Feb 20 05:01:28 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/afaf9e02-272e-4ffc-b272-cdee923c5a1d/.meta.tmp' to config b'/volumes/_nogroup/afaf9e02-272e-4ffc-b272-cdee923c5a1d/.meta' Feb 20 05:01:28 localhost ceph-mgr[286565]: [volumes INFO 
volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:607e1fd9-dd9b-4387-8d26-3ad0a4a7fc6e, sub_name:afaf9e02-272e-4ffc-b272-cdee923c5a1d, vol_name:cephfs) < "" Feb 20 05:01:28 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:01:28 localhost openstack_network_exporter[243776]: ERROR 10:01:28 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 20 05:01:28 localhost openstack_network_exporter[243776]: Feb 20 05:01:28 localhost openstack_network_exporter[243776]: ERROR 10:01:28 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 20 05:01:28 localhost openstack_network_exporter[243776]: Feb 20 05:01:28 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "cd0ea541-924c-41c6-95b4-11e7d85bd173", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 05:01:28 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:cd0ea541-924c-41c6-95b4-11e7d85bd173, vol_name:cephfs) < "" Feb 20 05:01:28 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/cd0ea541-924c-41c6-95b4-11e7d85bd173/.meta.tmp' Feb 20 05:01:28 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cd0ea541-924c-41c6-95b4-11e7d85bd173/.meta.tmp' to config b'/volumes/_nogroup/cd0ea541-924c-41c6-95b4-11e7d85bd173/.meta' Feb 20 05:01:28 localhost ceph-mgr[286565]: [volumes INFO volumes.module] 
Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:cd0ea541-924c-41c6-95b4-11e7d85bd173, vol_name:cephfs) < "" Feb 20 05:01:28 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "cd0ea541-924c-41c6-95b4-11e7d85bd173", "format": "json"}]: dispatch Feb 20 05:01:28 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cd0ea541-924c-41c6-95b4-11e7d85bd173, vol_name:cephfs) < "" Feb 20 05:01:28 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cd0ea541-924c-41c6-95b4-11e7d85bd173, vol_name:cephfs) < "" Feb 20 05:01:29 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v556: 177 pgs: 177 active+clean; 208 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 53 KiB/s wr, 33 op/s Feb 20 05:01:30 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "27c01f2a-f558-4311-a88e-0bf402c95817", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 05:01:30 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:27c01f2a-f558-4311-a88e-0bf402c95817, vol_name:cephfs) < "" Feb 20 05:01:30 localhost nova_compute[280804]: 2026-02-20 10:01:30.077 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:01:30 localhost ceph-mgr[286565]: [volumes INFO 
volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/27c01f2a-f558-4311-a88e-0bf402c95817/.meta.tmp' Feb 20 05:01:30 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/27c01f2a-f558-4311-a88e-0bf402c95817/.meta.tmp' to config b'/volumes/_nogroup/27c01f2a-f558-4311-a88e-0bf402c95817/.meta' Feb 20 05:01:30 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:27c01f2a-f558-4311-a88e-0bf402c95817, vol_name:cephfs) < "" Feb 20 05:01:30 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "27c01f2a-f558-4311-a88e-0bf402c95817", "format": "json"}]: dispatch Feb 20 05:01:30 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:27c01f2a-f558-4311-a88e-0bf402c95817, vol_name:cephfs) < "" Feb 20 05:01:30 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:27c01f2a-f558-4311-a88e-0bf402c95817, vol_name:cephfs) < "" Feb 20 05:01:30 localhost nova_compute[280804]: 2026-02-20 10:01:30.666 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:01:31 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "afaf9e02-272e-4ffc-b272-cdee923c5a1d", "format": "json"}]: dispatch Feb 20 05:01:31 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting 
_cmd_fs_clone_status(clone_name:afaf9e02-272e-4ffc-b272-cdee923c5a1d, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:01:31 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:afaf9e02-272e-4ffc-b272-cdee923c5a1d, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:01:31 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:01:31.288+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'afaf9e02-272e-4ffc-b272-cdee923c5a1d' of type subvolume Feb 20 05:01:31 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'afaf9e02-272e-4ffc-b272-cdee923c5a1d' of type subvolume Feb 20 05:01:31 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "afaf9e02-272e-4ffc-b272-cdee923c5a1d", "force": true, "format": "json"}]: dispatch Feb 20 05:01:31 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:afaf9e02-272e-4ffc-b272-cdee923c5a1d, vol_name:cephfs) < "" Feb 20 05:01:31 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/afaf9e02-272e-4ffc-b272-cdee923c5a1d'' moved to trashcan Feb 20 05:01:31 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 05:01:31 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:afaf9e02-272e-4ffc-b272-cdee923c5a1d, vol_name:cephfs) < "" Feb 20 05:01:31 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v557: 177 pgs: 177 
active+clean; 208 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 52 KiB/s wr, 6 op/s Feb 20 05:01:32 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "3176bf32-687c-42ee-9751-803dd8a1fea3", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 05:01:32 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3176bf32-687c-42ee-9751-803dd8a1fea3, vol_name:cephfs) < "" Feb 20 05:01:32 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3176bf32-687c-42ee-9751-803dd8a1fea3/.meta.tmp' Feb 20 05:01:32 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3176bf32-687c-42ee-9751-803dd8a1fea3/.meta.tmp' to config b'/volumes/_nogroup/3176bf32-687c-42ee-9751-803dd8a1fea3/.meta' Feb 20 05:01:32 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3176bf32-687c-42ee-9751-803dd8a1fea3, vol_name:cephfs) < "" Feb 20 05:01:32 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3176bf32-687c-42ee-9751-803dd8a1fea3", "format": "json"}]: dispatch Feb 20 05:01:32 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3176bf32-687c-42ee-9751-803dd8a1fea3, vol_name:cephfs) < "" Feb 20 05:01:32 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3176bf32-687c-42ee-9751-803dd8a1fea3, vol_name:cephfs) < "" Feb 20 05:01:32 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e251 do_prune osdmap full prune enabled Feb 20 05:01:32 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e252 e252: 6 total, 6 up, 6 in Feb 20 05:01:32 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e252: 6 total, 6 up, 6 in Feb 20 05:01:33 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:01:33 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "27c01f2a-f558-4311-a88e-0bf402c95817", "snap_name": "ddab1265-f000-456b-af11-3f6f3cbbac23", "format": "json"}]: dispatch Feb 20 05:01:33 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:ddab1265-f000-456b-af11-3f6f3cbbac23, sub_name:27c01f2a-f558-4311-a88e-0bf402c95817, vol_name:cephfs) < "" Feb 20 05:01:33 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:ddab1265-f000-456b-af11-3f6f3cbbac23, sub_name:27c01f2a-f558-4311-a88e-0bf402c95817, vol_name:cephfs) < "" Feb 20 05:01:33 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v559: 177 pgs: 177 active+clean; 208 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 52 KiB/s wr, 6 op/s Feb 20 05:01:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e. Feb 20 05:01:34 localhost systemd[1]: tmp-crun.HBCyxl.mount: Deactivated successfully. 
Feb 20 05:01:34 localhost podman[326185]: 2026-02-20 10:01:34.451900915 +0000 UTC m=+0.091359019 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 20 05:01:34 localhost podman[326185]: 2026-02-20 10:01:34.467126014 +0000 UTC m=+0.106584148 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', 
'--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 20 05:01:34 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully. 
Feb 20 05:01:34 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9ee8c4a3-7471-4713-8c2c-cbe769628e3f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 05:01:34 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9ee8c4a3-7471-4713-8c2c-cbe769628e3f, vol_name:cephfs) < "" Feb 20 05:01:34 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9ee8c4a3-7471-4713-8c2c-cbe769628e3f/.meta.tmp' Feb 20 05:01:34 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9ee8c4a3-7471-4713-8c2c-cbe769628e3f/.meta.tmp' to config b'/volumes/_nogroup/9ee8c4a3-7471-4713-8c2c-cbe769628e3f/.meta' Feb 20 05:01:34 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9ee8c4a3-7471-4713-8c2c-cbe769628e3f, vol_name:cephfs) < "" Feb 20 05:01:34 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9ee8c4a3-7471-4713-8c2c-cbe769628e3f", "format": "json"}]: dispatch Feb 20 05:01:34 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9ee8c4a3-7471-4713-8c2c-cbe769628e3f, vol_name:cephfs) < "" Feb 20 05:01:34 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, 
sub_name:9ee8c4a3-7471-4713-8c2c-cbe769628e3f, vol_name:cephfs) < "" Feb 20 05:01:35 localhost nova_compute[280804]: 2026-02-20 10:01:35.080 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:01:35 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "3176bf32-687c-42ee-9751-803dd8a1fea3", "auth_id": "tempest-cephx-id-408485567", "tenant_id": "ad1dd0d43ec7421b9a4a06c39e3657c6", "access_level": "rw", "format": "json"}]: dispatch Feb 20 05:01:35 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-408485567, format:json, prefix:fs subvolume authorize, sub_name:3176bf32-687c-42ee-9751-803dd8a1fea3, tenant_id:ad1dd0d43ec7421b9a4a06c39e3657c6, vol_name:cephfs) < "" Feb 20 05:01:35 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-408485567", "format": "json"} v 0) Feb 20 05:01:35 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-408485567", "format": "json"} : dispatch Feb 20 05:01:35 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: Creating meta for ID tempest-cephx-id-408485567 with tenant ad1dd0d43ec7421b9a4a06c39e3657c6 Feb 20 05:01:35 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-408485567", "caps": ["mds", "allow rw path=/volumes/_nogroup/3176bf32-687c-42ee-9751-803dd8a1fea3/40c90f7c-2187-4f07-808e-7e98f84f2bd9", "osd", "allow rw pool=manila_data 
namespace=fsvolumens_3176bf32-687c-42ee-9751-803dd8a1fea3", "mon", "allow r"], "format": "json"} v 0) Feb 20 05:01:35 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-408485567", "caps": ["mds", "allow rw path=/volumes/_nogroup/3176bf32-687c-42ee-9751-803dd8a1fea3/40c90f7c-2187-4f07-808e-7e98f84f2bd9", "osd", "allow rw pool=manila_data namespace=fsvolumens_3176bf32-687c-42ee-9751-803dd8a1fea3", "mon", "allow r"], "format": "json"} : dispatch Feb 20 05:01:35 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-408485567", "caps": ["mds", "allow rw path=/volumes/_nogroup/3176bf32-687c-42ee-9751-803dd8a1fea3/40c90f7c-2187-4f07-808e-7e98f84f2bd9", "osd", "allow rw pool=manila_data namespace=fsvolumens_3176bf32-687c-42ee-9751-803dd8a1fea3", "mon", "allow r"], "format": "json"}]': finished Feb 20 05:01:35 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-408485567", "format": "json"} : dispatch Feb 20 05:01:35 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-408485567", "caps": ["mds", "allow rw path=/volumes/_nogroup/3176bf32-687c-42ee-9751-803dd8a1fea3/40c90f7c-2187-4f07-808e-7e98f84f2bd9", "osd", "allow rw pool=manila_data namespace=fsvolumens_3176bf32-687c-42ee-9751-803dd8a1fea3", "mon", "allow r"], "format": "json"} : dispatch Feb 20 05:01:35 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": 
"client.tempest-cephx-id-408485567", "caps": ["mds", "allow rw path=/volumes/_nogroup/3176bf32-687c-42ee-9751-803dd8a1fea3/40c90f7c-2187-4f07-808e-7e98f84f2bd9", "osd", "allow rw pool=manila_data namespace=fsvolumens_3176bf32-687c-42ee-9751-803dd8a1fea3", "mon", "allow r"], "format": "json"}]': finished Feb 20 05:01:35 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-408485567, format:json, prefix:fs subvolume authorize, sub_name:3176bf32-687c-42ee-9751-803dd8a1fea3, tenant_id:ad1dd0d43ec7421b9a4a06c39e3657c6, vol_name:cephfs) < "" Feb 20 05:01:35 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "04824cf7-cd0e-41ba-8d21-04797c7215a6", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 05:01:35 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:04824cf7-cd0e-41ba-8d21-04797c7215a6, vol_name:cephfs) < "" Feb 20 05:01:35 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/04824cf7-cd0e-41ba-8d21-04797c7215a6/.meta.tmp' Feb 20 05:01:35 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/04824cf7-cd0e-41ba-8d21-04797c7215a6/.meta.tmp' to config b'/volumes/_nogroup/04824cf7-cd0e-41ba-8d21-04797c7215a6/.meta' Feb 20 05:01:35 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:04824cf7-cd0e-41ba-8d21-04797c7215a6, vol_name:cephfs) < "" Feb 20 05:01:35 localhost 
nova_compute[280804]: 2026-02-20 10:01:35.669 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:01:35 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "04824cf7-cd0e-41ba-8d21-04797c7215a6", "format": "json"}]: dispatch Feb 20 05:01:35 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:04824cf7-cd0e-41ba-8d21-04797c7215a6, vol_name:cephfs) < "" Feb 20 05:01:35 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v560: 177 pgs: 177 active+clean; 208 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 75 KiB/s wr, 7 op/s Feb 20 05:01:35 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:04824cf7-cd0e-41ba-8d21-04797c7215a6, vol_name:cephfs) < "" Feb 20 05:01:37 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "16497a5d-c9a5-4ff1-bddc-189b04ad6dac", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 05:01:37 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:16497a5d-c9a5-4ff1-bddc-189b04ad6dac, vol_name:cephfs) < "" Feb 20 05:01:37 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/16497a5d-c9a5-4ff1-bddc-189b04ad6dac/.meta.tmp' Feb 20 05:01:37 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] 
Renamed b'/volumes/_nogroup/16497a5d-c9a5-4ff1-bddc-189b04ad6dac/.meta.tmp' to config b'/volumes/_nogroup/16497a5d-c9a5-4ff1-bddc-189b04ad6dac/.meta' Feb 20 05:01:37 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:16497a5d-c9a5-4ff1-bddc-189b04ad6dac, vol_name:cephfs) < "" Feb 20 05:01:37 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "16497a5d-c9a5-4ff1-bddc-189b04ad6dac", "format": "json"}]: dispatch Feb 20 05:01:37 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:16497a5d-c9a5-4ff1-bddc-189b04ad6dac, vol_name:cephfs) < "" Feb 20 05:01:37 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:16497a5d-c9a5-4ff1-bddc-189b04ad6dac, vol_name:cephfs) < "" Feb 20 05:01:37 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "27c01f2a-f558-4311-a88e-0bf402c95817", "snap_name": "ddab1265-f000-456b-af11-3f6f3cbbac23_b97127c3-1060-4d01-b54f-99018ca9ac64", "force": true, "format": "json"}]: dispatch Feb 20 05:01:37 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ddab1265-f000-456b-af11-3f6f3cbbac23_b97127c3-1060-4d01-b54f-99018ca9ac64, sub_name:27c01f2a-f558-4311-a88e-0bf402c95817, vol_name:cephfs) < "" Feb 20 05:01:37 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config 
b'/volumes/_nogroup/27c01f2a-f558-4311-a88e-0bf402c95817/.meta.tmp' Feb 20 05:01:37 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/27c01f2a-f558-4311-a88e-0bf402c95817/.meta.tmp' to config b'/volumes/_nogroup/27c01f2a-f558-4311-a88e-0bf402c95817/.meta' Feb 20 05:01:37 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ddab1265-f000-456b-af11-3f6f3cbbac23_b97127c3-1060-4d01-b54f-99018ca9ac64, sub_name:27c01f2a-f558-4311-a88e-0bf402c95817, vol_name:cephfs) < "" Feb 20 05:01:37 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "27c01f2a-f558-4311-a88e-0bf402c95817", "snap_name": "ddab1265-f000-456b-af11-3f6f3cbbac23", "force": true, "format": "json"}]: dispatch Feb 20 05:01:37 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ddab1265-f000-456b-af11-3f6f3cbbac23, sub_name:27c01f2a-f558-4311-a88e-0bf402c95817, vol_name:cephfs) < "" Feb 20 05:01:37 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/27c01f2a-f558-4311-a88e-0bf402c95817/.meta.tmp' Feb 20 05:01:37 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/27c01f2a-f558-4311-a88e-0bf402c95817/.meta.tmp' to config b'/volumes/_nogroup/27c01f2a-f558-4311-a88e-0bf402c95817/.meta' Feb 20 05:01:37 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ddab1265-f000-456b-af11-3f6f3cbbac23, 
sub_name:27c01f2a-f558-4311-a88e-0bf402c95817, vol_name:cephfs) < "" Feb 20 05:01:37 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v561: 177 pgs: 177 active+clean; 208 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 75 KiB/s wr, 7 op/s Feb 20 05:01:37 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9ee8c4a3-7471-4713-8c2c-cbe769628e3f", "format": "json"}]: dispatch Feb 20 05:01:37 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:9ee8c4a3-7471-4713-8c2c-cbe769628e3f, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:01:37 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:9ee8c4a3-7471-4713-8c2c-cbe769628e3f, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:01:37 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:01:37.888+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9ee8c4a3-7471-4713-8c2c-cbe769628e3f' of type subvolume Feb 20 05:01:37 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9ee8c4a3-7471-4713-8c2c-cbe769628e3f' of type subvolume Feb 20 05:01:37 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9ee8c4a3-7471-4713-8c2c-cbe769628e3f", "force": true, "format": "json"}]: dispatch Feb 20 05:01:37 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9ee8c4a3-7471-4713-8c2c-cbe769628e3f, vol_name:cephfs) < "" Feb 20 05:01:37 localhost 
ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/9ee8c4a3-7471-4713-8c2c-cbe769628e3f'' moved to trashcan Feb 20 05:01:37 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 05:01:37 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9ee8c4a3-7471-4713-8c2c-cbe769628e3f, vol_name:cephfs) < "" Feb 20 05:01:38 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:01:38 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e252 do_prune osdmap full prune enabled Feb 20 05:01:38 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e253 e253: 6 total, 6 up, 6 in Feb 20 05:01:38 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e253: 6 total, 6 up, 6 in Feb 20 05:01:38 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "97e63579-f59d-4812-9af1-a8d227932ace", "format": "json"}]: dispatch Feb 20 05:01:38 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:97e63579-f59d-4812-9af1-a8d227932ace, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:01:38 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:97e63579-f59d-4812-9af1-a8d227932ace, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:01:38 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "97e63579-f59d-4812-9af1-a8d227932ace", "format": "json"}]: dispatch Feb 20 05:01:38 
localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:97e63579-f59d-4812-9af1-a8d227932ace, vol_name:cephfs) < "" Feb 20 05:01:38 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:97e63579-f59d-4812-9af1-a8d227932ace, vol_name:cephfs) < "" Feb 20 05:01:39 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e253 do_prune osdmap full prune enabled Feb 20 05:01:39 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e254 e254: 6 total, 6 up, 6 in Feb 20 05:01:39 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e254: 6 total, 6 up, 6 in Feb 20 05:01:39 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "3176bf32-687c-42ee-9751-803dd8a1fea3", "auth_id": "tempest-cephx-id-408485567", "format": "json"}]: dispatch Feb 20 05:01:39 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-408485567, format:json, prefix:fs subvolume deauthorize, sub_name:3176bf32-687c-42ee-9751-803dd8a1fea3, vol_name:cephfs) < "" Feb 20 05:01:39 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-408485567", "format": "json"} v 0) Feb 20 05:01:39 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-408485567", "format": "json"} : dispatch Feb 20 05:01:39 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-408485567"} v 0) Feb 20 05:01:39 localhost ceph-mon[292786]: 
log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-408485567"} : dispatch Feb 20 05:01:39 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-408485567"}]': finished Feb 20 05:01:39 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-408485567", "format": "json"} : dispatch Feb 20 05:01:39 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-408485567"} : dispatch Feb 20 05:01:39 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-408485567"}]': finished Feb 20 05:01:39 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-408485567, format:json, prefix:fs subvolume deauthorize, sub_name:3176bf32-687c-42ee-9751-803dd8a1fea3, vol_name:cephfs) < "" Feb 20 05:01:39 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "3176bf32-687c-42ee-9751-803dd8a1fea3", "auth_id": "tempest-cephx-id-408485567", "format": "json"}]: dispatch Feb 20 05:01:39 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-408485567, format:json, prefix:fs subvolume evict, sub_name:3176bf32-687c-42ee-9751-803dd8a1fea3, vol_name:cephfs) < "" Feb 20 05:01:39 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict 
clients with auth_name=tempest-cephx-id-408485567, client_metadata.root=/volumes/_nogroup/3176bf32-687c-42ee-9751-803dd8a1fea3/40c90f7c-2187-4f07-808e-7e98f84f2bd9 Feb 20 05:01:39 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 20 05:01:39 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-408485567, format:json, prefix:fs subvolume evict, sub_name:3176bf32-687c-42ee-9751-803dd8a1fea3, vol_name:cephfs) < "" Feb 20 05:01:39 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "04824cf7-cd0e-41ba-8d21-04797c7215a6", "format": "json"}]: dispatch Feb 20 05:01:39 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:04824cf7-cd0e-41ba-8d21-04797c7215a6, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:01:39 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:04824cf7-cd0e-41ba-8d21-04797c7215a6, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:01:39 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:01:39.594+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '04824cf7-cd0e-41ba-8d21-04797c7215a6' of type subvolume Feb 20 05:01:39 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '04824cf7-cd0e-41ba-8d21-04797c7215a6' of type subvolume Feb 20 05:01:39 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "04824cf7-cd0e-41ba-8d21-04797c7215a6", "force": true, "format": 
"json"}]: dispatch Feb 20 05:01:39 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:04824cf7-cd0e-41ba-8d21-04797c7215a6, vol_name:cephfs) < "" Feb 20 05:01:39 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/04824cf7-cd0e-41ba-8d21-04797c7215a6'' moved to trashcan Feb 20 05:01:39 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 05:01:39 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:04824cf7-cd0e-41ba-8d21-04797c7215a6, vol_name:cephfs) < "" Feb 20 05:01:39 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3176bf32-687c-42ee-9751-803dd8a1fea3", "format": "json"}]: dispatch Feb 20 05:01:39 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:3176bf32-687c-42ee-9751-803dd8a1fea3, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:01:39 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:3176bf32-687c-42ee-9751-803dd8a1fea3, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:01:39 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:01:39.677+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3176bf32-687c-42ee-9751-803dd8a1fea3' of type subvolume Feb 20 05:01:39 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3176bf32-687c-42ee-9751-803dd8a1fea3' of type subvolume Feb 20 05:01:39 
localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v564: 177 pgs: 177 active+clean; 208 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 280 B/s rd, 57 KiB/s wr, 5 op/s Feb 20 05:01:39 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3176bf32-687c-42ee-9751-803dd8a1fea3", "force": true, "format": "json"}]: dispatch Feb 20 05:01:39 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3176bf32-687c-42ee-9751-803dd8a1fea3, vol_name:cephfs) < "" Feb 20 05:01:39 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/3176bf32-687c-42ee-9751-803dd8a1fea3'' moved to trashcan Feb 20 05:01:39 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 05:01:39 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3176bf32-687c-42ee-9751-803dd8a1fea3, vol_name:cephfs) < "" Feb 20 05:01:40 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "27c01f2a-f558-4311-a88e-0bf402c95817", "format": "json"}]: dispatch Feb 20 05:01:40 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:27c01f2a-f558-4311-a88e-0bf402c95817, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:01:40 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:27c01f2a-f558-4311-a88e-0bf402c95817, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:01:40 localhost 
ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:01:40.120+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '27c01f2a-f558-4311-a88e-0bf402c95817' of type subvolume Feb 20 05:01:40 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '27c01f2a-f558-4311-a88e-0bf402c95817' of type subvolume Feb 20 05:01:40 localhost nova_compute[280804]: 2026-02-20 10:01:40.126 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:01:40 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "27c01f2a-f558-4311-a88e-0bf402c95817", "force": true, "format": "json"}]: dispatch Feb 20 05:01:40 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:27c01f2a-f558-4311-a88e-0bf402c95817, vol_name:cephfs) < "" Feb 20 05:01:40 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/27c01f2a-f558-4311-a88e-0bf402c95817'' moved to trashcan Feb 20 05:01:40 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 05:01:40 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:27c01f2a-f558-4311-a88e-0bf402c95817, vol_name:cephfs) < "" Feb 20 05:01:40 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8782d0aa-7804-4d19-9c02-5e9c58554cbc", "size": 1073741824, 
"namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 05:01:40 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8782d0aa-7804-4d19-9c02-5e9c58554cbc, vol_name:cephfs) < "" Feb 20 05:01:40 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8782d0aa-7804-4d19-9c02-5e9c58554cbc/.meta.tmp' Feb 20 05:01:40 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8782d0aa-7804-4d19-9c02-5e9c58554cbc/.meta.tmp' to config b'/volumes/_nogroup/8782d0aa-7804-4d19-9c02-5e9c58554cbc/.meta' Feb 20 05:01:40 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8782d0aa-7804-4d19-9c02-5e9c58554cbc, vol_name:cephfs) < "" Feb 20 05:01:40 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8782d0aa-7804-4d19-9c02-5e9c58554cbc", "format": "json"}]: dispatch Feb 20 05:01:40 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8782d0aa-7804-4d19-9c02-5e9c58554cbc, vol_name:cephfs) < "" Feb 20 05:01:40 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8782d0aa-7804-4d19-9c02-5e9c58554cbc, vol_name:cephfs) < "" Feb 20 05:01:40 localhost nova_compute[280804]: 2026-02-20 10:01:40.675 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 
05:01:40 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "16497a5d-c9a5-4ff1-bddc-189b04ad6dac", "format": "json"}]: dispatch Feb 20 05:01:40 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:16497a5d-c9a5-4ff1-bddc-189b04ad6dac, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:01:40 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:16497a5d-c9a5-4ff1-bddc-189b04ad6dac, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:01:40 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:01:40.753+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '16497a5d-c9a5-4ff1-bddc-189b04ad6dac' of type subvolume Feb 20 05:01:40 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '16497a5d-c9a5-4ff1-bddc-189b04ad6dac' of type subvolume Feb 20 05:01:40 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "16497a5d-c9a5-4ff1-bddc-189b04ad6dac", "force": true, "format": "json"}]: dispatch Feb 20 05:01:40 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:16497a5d-c9a5-4ff1-bddc-189b04ad6dac, vol_name:cephfs) < "" Feb 20 05:01:40 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/16497a5d-c9a5-4ff1-bddc-189b04ad6dac'' moved to trashcan Feb 20 05:01:40 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 
'cephfs' Feb 20 05:01:40 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:16497a5d-c9a5-4ff1-bddc-189b04ad6dac, vol_name:cephfs) < "" Feb 20 05:01:41 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5f3b4204-930b-4bc2-9cff-7e982286eac6", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 05:01:41 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5f3b4204-930b-4bc2-9cff-7e982286eac6, vol_name:cephfs) < "" Feb 20 05:01:41 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5f3b4204-930b-4bc2-9cff-7e982286eac6/.meta.tmp' Feb 20 05:01:41 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5f3b4204-930b-4bc2-9cff-7e982286eac6/.meta.tmp' to config b'/volumes/_nogroup/5f3b4204-930b-4bc2-9cff-7e982286eac6/.meta' Feb 20 05:01:41 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5f3b4204-930b-4bc2-9cff-7e982286eac6, vol_name:cephfs) < "" Feb 20 05:01:41 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5f3b4204-930b-4bc2-9cff-7e982286eac6", "format": "json"}]: dispatch Feb 20 05:01:41 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, 
sub_name:5f3b4204-930b-4bc2-9cff-7e982286eac6, vol_name:cephfs) < "" Feb 20 05:01:41 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5f3b4204-930b-4bc2-9cff-7e982286eac6, vol_name:cephfs) < "" Feb 20 05:01:41 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v565: 177 pgs: 177 active+clean; 209 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 639 B/s rd, 151 KiB/s wr, 13 op/s Feb 20 05:01:41 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "2e325a34-23a3-4cc0-9c2e-4df32d08d36f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 05:01:41 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:2e325a34-23a3-4cc0-9c2e-4df32d08d36f, vol_name:cephfs) < "" Feb 20 05:01:41 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/2e325a34-23a3-4cc0-9c2e-4df32d08d36f/.meta.tmp' Feb 20 05:01:41 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2e325a34-23a3-4cc0-9c2e-4df32d08d36f/.meta.tmp' to config b'/volumes/_nogroup/2e325a34-23a3-4cc0-9c2e-4df32d08d36f/.meta' Feb 20 05:01:41 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:2e325a34-23a3-4cc0-9c2e-4df32d08d36f, vol_name:cephfs) < "" Feb 20 05:01:41 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": 
"cephfs", "sub_name": "2e325a34-23a3-4cc0-9c2e-4df32d08d36f", "format": "json"}]: dispatch Feb 20 05:01:41 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2e325a34-23a3-4cc0-9c2e-4df32d08d36f, vol_name:cephfs) < "" Feb 20 05:01:41 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2e325a34-23a3-4cc0-9c2e-4df32d08d36f, vol_name:cephfs) < "" Feb 20 05:01:42 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "58028d35-31fb-4b5c-a80b-11eaf91192f8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 05:01:42 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:58028d35-31fb-4b5c-a80b-11eaf91192f8, vol_name:cephfs) < "" Feb 20 05:01:42 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/58028d35-31fb-4b5c-a80b-11eaf91192f8/.meta.tmp' Feb 20 05:01:42 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/58028d35-31fb-4b5c-a80b-11eaf91192f8/.meta.tmp' to config b'/volumes/_nogroup/58028d35-31fb-4b5c-a80b-11eaf91192f8/.meta' Feb 20 05:01:42 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:58028d35-31fb-4b5c-a80b-11eaf91192f8, vol_name:cephfs) < "" Feb 20 05:01:42 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' 
cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "58028d35-31fb-4b5c-a80b-11eaf91192f8", "format": "json"}]: dispatch Feb 20 05:01:42 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:58028d35-31fb-4b5c-a80b-11eaf91192f8, vol_name:cephfs) < "" Feb 20 05:01:42 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:58028d35-31fb-4b5c-a80b-11eaf91192f8, vol_name:cephfs) < "" Feb 20 05:01:43 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:01:43 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e254 do_prune osdmap full prune enabled Feb 20 05:01:43 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e255 e255: 6 total, 6 up, 6 in Feb 20 05:01:43 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e255: 6 total, 6 up, 6 in Feb 20 05:01:43 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "8782d0aa-7804-4d19-9c02-5e9c58554cbc", "new_size": 2147483648, "format": "json"}]: dispatch Feb 20 05:01:43 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:8782d0aa-7804-4d19-9c02-5e9c58554cbc, vol_name:cephfs) < "" Feb 20 05:01:43 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:8782d0aa-7804-4d19-9c02-5e9c58554cbc, vol_name:cephfs) < "" Feb 20 05:01:43 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v567: 177 pgs: 177 active+clean; 209 MiB data, 
1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 132 KiB/s wr, 12 op/s Feb 20 05:01:43 localhost sshd[326209]: main: sshd: ssh-rsa algorithm is disabled Feb 20 05:01:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c. Feb 20 05:01:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217. Feb 20 05:01:44 localhost podman[326211]: 2026-02-20 10:01:44.456273715 +0000 UTC m=+0.094451351 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, vcs-type=git, release=1770267347, config_id=openstack_network_exporter, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, 
vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.33.7, managed_by=edpm_ansible, io.openshift.expose-services=, container_name=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, version=9.7, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-05T04:57:10Z, name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers) Feb 20 05:01:44 localhost podman[326211]: 2026-02-20 10:01:44.471827622 +0000 UTC m=+0.110005288 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, io.buildah.version=1.33.7, build-date=2026-02-05T04:57:10Z, org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-type=git, managed_by=edpm_ansible, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, vendor=Red Hat, Inc., version=9.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, 
url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Feb 20 05:01:44 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. Feb 20 05:01:44 localhost podman[326212]: 2026-02-20 10:01:44.558592315 +0000 UTC m=+0.192778704 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3) Feb 20 05:01:44 localhost podman[326212]: 2026-02-20 10:01:44.596791013 +0000 UTC m=+0.230977362 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, managed_by=edpm_ansible) Feb 20 05:01:44 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully. Feb 20 05:01:44 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "5f3b4204-930b-4bc2-9cff-7e982286eac6", "snap_name": "62a01af2-9c69-4d90-af43-120db3783b58", "format": "json"}]: dispatch Feb 20 05:01:44 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:62a01af2-9c69-4d90-af43-120db3783b58, sub_name:5f3b4204-930b-4bc2-9cff-7e982286eac6, vol_name:cephfs) < "" Feb 20 05:01:44 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:62a01af2-9c69-4d90-af43-120db3783b58, sub_name:5f3b4204-930b-4bc2-9cff-7e982286eac6, vol_name:cephfs) < "" Feb 20 05:01:45 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "2e325a34-23a3-4cc0-9c2e-4df32d08d36f", "auth_id": "tempest-cephx-id-408485567", "tenant_id": "ad1dd0d43ec7421b9a4a06c39e3657c6", "access_level": "rw", "format": 
"json"}]: dispatch Feb 20 05:01:45 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-408485567, format:json, prefix:fs subvolume authorize, sub_name:2e325a34-23a3-4cc0-9c2e-4df32d08d36f, tenant_id:ad1dd0d43ec7421b9a4a06c39e3657c6, vol_name:cephfs) < "" Feb 20 05:01:45 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-408485567", "format": "json"} v 0) Feb 20 05:01:45 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-408485567", "format": "json"} : dispatch Feb 20 05:01:45 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: Creating meta for ID tempest-cephx-id-408485567 with tenant ad1dd0d43ec7421b9a4a06c39e3657c6 Feb 20 05:01:45 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-408485567", "caps": ["mds", "allow rw path=/volumes/_nogroup/2e325a34-23a3-4cc0-9c2e-4df32d08d36f/82f72010-739f-40e2-832b-6d6c7666a8fb", "osd", "allow rw pool=manila_data namespace=fsvolumens_2e325a34-23a3-4cc0-9c2e-4df32d08d36f", "mon", "allow r"], "format": "json"} v 0) Feb 20 05:01:45 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-408485567", "caps": ["mds", "allow rw path=/volumes/_nogroup/2e325a34-23a3-4cc0-9c2e-4df32d08d36f/82f72010-739f-40e2-832b-6d6c7666a8fb", "osd", "allow rw pool=manila_data namespace=fsvolumens_2e325a34-23a3-4cc0-9c2e-4df32d08d36f", "mon", "allow r"], "format": "json"} : dispatch Feb 20 05:01:45 localhost ceph-mon[292786]: log_channel(audit) log 
[INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-408485567", "caps": ["mds", "allow rw path=/volumes/_nogroup/2e325a34-23a3-4cc0-9c2e-4df32d08d36f/82f72010-739f-40e2-832b-6d6c7666a8fb", "osd", "allow rw pool=manila_data namespace=fsvolumens_2e325a34-23a3-4cc0-9c2e-4df32d08d36f", "mon", "allow r"], "format": "json"}]': finished Feb 20 05:01:45 localhost nova_compute[280804]: 2026-02-20 10:01:45.128 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:01:45 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-408485567, format:json, prefix:fs subvolume authorize, sub_name:2e325a34-23a3-4cc0-9c2e-4df32d08d36f, tenant_id:ad1dd0d43ec7421b9a4a06c39e3657c6, vol_name:cephfs) < "" Feb 20 05:01:45 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-408485567", "format": "json"} : dispatch Feb 20 05:01:45 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-408485567", "caps": ["mds", "allow rw path=/volumes/_nogroup/2e325a34-23a3-4cc0-9c2e-4df32d08d36f/82f72010-739f-40e2-832b-6d6c7666a8fb", "osd", "allow rw pool=manila_data namespace=fsvolumens_2e325a34-23a3-4cc0-9c2e-4df32d08d36f", "mon", "allow r"], "format": "json"} : dispatch Feb 20 05:01:45 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-408485567", "caps": ["mds", "allow rw path=/volumes/_nogroup/2e325a34-23a3-4cc0-9c2e-4df32d08d36f/82f72010-739f-40e2-832b-6d6c7666a8fb", 
"osd", "allow rw pool=manila_data namespace=fsvolumens_2e325a34-23a3-4cc0-9c2e-4df32d08d36f", "mon", "allow r"], "format": "json"}]': finished Feb 20 05:01:45 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v568: 177 pgs: 177 active+clean; 209 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1.2 KiB/s rd, 167 KiB/s wr, 17 op/s Feb 20 05:01:45 localhost nova_compute[280804]: 2026-02-20 10:01:45.695 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:01:46 localhost podman[241347]: time="2026-02-20T10:01:46Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 20 05:01:46 localhost podman[241347]: @ - - [20/Feb/2026:10:01:46 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157716 "" "Go-http-client/1.1" Feb 20 05:01:46 localhost podman[241347]: @ - - [20/Feb/2026:10:01:46 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18828 "" "Go-http-client/1.1" Feb 20 05:01:46 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "58028d35-31fb-4b5c-a80b-11eaf91192f8", "format": "json"}]: dispatch Feb 20 05:01:46 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:58028d35-31fb-4b5c-a80b-11eaf91192f8, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:01:46 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:58028d35-31fb-4b5c-a80b-11eaf91192f8, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:01:46 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 
'58028d35-31fb-4b5c-a80b-11eaf91192f8' of type subvolume Feb 20 05:01:46 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:01:46.181+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '58028d35-31fb-4b5c-a80b-11eaf91192f8' of type subvolume Feb 20 05:01:46 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "58028d35-31fb-4b5c-a80b-11eaf91192f8", "force": true, "format": "json"}]: dispatch Feb 20 05:01:46 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:58028d35-31fb-4b5c-a80b-11eaf91192f8, vol_name:cephfs) < "" Feb 20 05:01:46 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/58028d35-31fb-4b5c-a80b-11eaf91192f8'' moved to trashcan Feb 20 05:01:46 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 05:01:46 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:58028d35-31fb-4b5c-a80b-11eaf91192f8, vol_name:cephfs) < "" Feb 20 05:01:47 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8782d0aa-7804-4d19-9c02-5e9c58554cbc", "format": "json"}]: dispatch Feb 20 05:01:47 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:8782d0aa-7804-4d19-9c02-5e9c58554cbc, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:01:47 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing 
_cmd_fs_clone_status(clone_name:8782d0aa-7804-4d19-9c02-5e9c58554cbc, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:01:47 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:01:47.006+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8782d0aa-7804-4d19-9c02-5e9c58554cbc' of type subvolume Feb 20 05:01:47 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8782d0aa-7804-4d19-9c02-5e9c58554cbc' of type subvolume Feb 20 05:01:47 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8782d0aa-7804-4d19-9c02-5e9c58554cbc", "force": true, "format": "json"}]: dispatch Feb 20 05:01:47 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8782d0aa-7804-4d19-9c02-5e9c58554cbc, vol_name:cephfs) < "" Feb 20 05:01:47 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/8782d0aa-7804-4d19-9c02-5e9c58554cbc'' moved to trashcan Feb 20 05:01:47 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 05:01:47 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8782d0aa-7804-4d19-9c02-5e9c58554cbc, vol_name:cephfs) < "" Feb 20 05:01:47 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v569: 177 pgs: 177 active+clean; 209 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1.1 KiB/s rd, 148 KiB/s wr, 15 op/s Feb 20 05:01:48 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' 
cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "5f3b4204-930b-4bc2-9cff-7e982286eac6", "snap_name": "62a01af2-9c69-4d90-af43-120db3783b58", "target_sub_name": "e59c51a9-964d-4ac9-9a67-d46c3cec7b52", "format": "json"}]: dispatch
Feb 20 05:01:48 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:62a01af2-9c69-4d90-af43-120db3783b58, sub_name:5f3b4204-930b-4bc2-9cff-7e982286eac6, target_sub_name:e59c51a9-964d-4ac9-9a67-d46c3cec7b52, vol_name:cephfs) < ""
Feb 20 05:01:48 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/e59c51a9-964d-4ac9-9a67-d46c3cec7b52/.meta.tmp'
Feb 20 05:01:48 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e59c51a9-964d-4ac9-9a67-d46c3cec7b52/.meta.tmp' to config b'/volumes/_nogroup/e59c51a9-964d-4ac9-9a67-d46c3cec7b52/.meta'
Feb 20 05:01:48 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 05:01:48 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.clone_index] tracking-id fc345f95-1f59-44c2-a8bb-7a1b1c6d5fef for path b'/volumes/_nogroup/e59c51a9-964d-4ac9-9a67-d46c3cec7b52'
Feb 20 05:01:48 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/5f3b4204-930b-4bc2-9cff-7e982286eac6/.meta.tmp'
Feb 20 05:01:48 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5f3b4204-930b-4bc2-9cff-7e982286eac6/.meta.tmp' to config b'/volumes/_nogroup/5f3b4204-930b-4bc2-9cff-7e982286eac6/.meta'
Feb 20 05:01:48 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 20 05:01:48 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:62a01af2-9c69-4d90-af43-120db3783b58, sub_name:5f3b4204-930b-4bc2-9cff-7e982286eac6, target_sub_name:e59c51a9-964d-4ac9-9a67-d46c3cec7b52, vol_name:cephfs) < ""
Feb 20 05:01:48 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/e59c51a9-964d-4ac9-9a67-d46c3cec7b52
Feb 20 05:01:48 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, e59c51a9-964d-4ac9-9a67-d46c3cec7b52)
Feb 20 05:01:48 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e59c51a9-964d-4ac9-9a67-d46c3cec7b52", "format": "json"}]: dispatch
Feb 20 05:01:48 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:e59c51a9-964d-4ac9-9a67-d46c3cec7b52, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 20 05:01:49 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v570: 177 pgs: 177 active+clean; 209 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 921 B/s rd, 126 KiB/s wr, 13 op/s
Feb 20 05:01:50 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 05:01:50 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 05:01:50 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 20 05:01:50 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 05:01:50 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 20 05:01:50 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo'
Feb 20 05:01:50 localhost ceph-mgr[286565]: [progress INFO root] update: starting ev 93bf42b8-bdc0-497e-8dd2-f88b3c40509e (Updating node-proxy deployment (+3 -> 3))
Feb 20 05:01:50 localhost ceph-mgr[286565]: [progress INFO root] complete: finished ev 93bf42b8-bdc0-497e-8dd2-f88b3c40509e (Updating node-proxy deployment (+3 -> 3))
Feb 20 05:01:50 localhost ceph-mgr[286565]: [progress INFO root] Completed event 93bf42b8-bdc0-497e-8dd2-f88b3c40509e (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Feb 20 05:01:50 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 20 05:01:50 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 05:01:50 localhost nova_compute[280804]: 2026-02-20 10:01:50.129 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 05:01:50 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 05:01:50 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo'
Feb 20 05:01:50 localhost nova_compute[280804]: 2026-02-20 10:01:50.722 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 05:01:51 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v571: 177 pgs: 177 active+clean; 210 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1023 B/s rd, 99 KiB/s wr, 10 op/s
Feb 20 05:01:52 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, e59c51a9-964d-4ac9-9a67-d46c3cec7b52) -- by 0 seconds
Feb 20 05:01:52 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/e59c51a9-964d-4ac9-9a67-d46c3cec7b52/.meta.tmp'
Feb 20 05:01:52 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e59c51a9-964d-4ac9-9a67-d46c3cec7b52/.meta.tmp' to config b'/volumes/_nogroup/e59c51a9-964d-4ac9-9a67-d46c3cec7b52/.meta'
Feb 20 05:01:52 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:e59c51a9-964d-4ac9-9a67-d46c3cec7b52, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 20 05:01:52 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ca81ba90-4b7a-404f-aec6-b73bd51b8d77", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 05:01:52 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:ca81ba90-4b7a-404f-aec6-b73bd51b8d77, vol_name:cephfs) < ""
Feb 20 05:01:53 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 05:01:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections..
Feb 20 05:01:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: []
Feb 20 05:01:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections..
Feb 20 05:01:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', )]
Feb 20 05:01:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Feb 20 05:01:53 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/5f3b4204-930b-4bc2-9cff-7e982286eac6/.snap/62a01af2-9c69-4d90-af43-120db3783b58/640a0067-d78e-4fe5-be4b-01c52437e87e' to b'/volumes/_nogroup/e59c51a9-964d-4ac9-9a67-d46c3cec7b52/830eafdd-3297-4c73-ac03-4eb6aed8ccad'
Feb 20 05:01:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections..
Feb 20 05:01:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: []
Feb 20 05:01:53 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ca81ba90-4b7a-404f-aec6-b73bd51b8d77/.meta.tmp'
Feb 20 05:01:53 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ca81ba90-4b7a-404f-aec6-b73bd51b8d77/.meta.tmp' to config b'/volumes/_nogroup/ca81ba90-4b7a-404f-aec6-b73bd51b8d77/.meta'
Feb 20 05:01:53 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:ca81ba90-4b7a-404f-aec6-b73bd51b8d77, vol_name:cephfs) < ""
Feb 20 05:01:53 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ca81ba90-4b7a-404f-aec6-b73bd51b8d77", "format": "json"}]: dispatch
Feb 20 05:01:53 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ca81ba90-4b7a-404f-aec6-b73bd51b8d77, vol_name:cephfs) < ""
Feb 20 05:01:53 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/e59c51a9-964d-4ac9-9a67-d46c3cec7b52/.meta.tmp'
Feb 20 05:01:53 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e59c51a9-964d-4ac9-9a67-d46c3cec7b52/.meta.tmp' to config b'/volumes/_nogroup/e59c51a9-964d-4ac9-9a67-d46c3cec7b52/.meta'
Feb 20 05:01:53 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ca81ba90-4b7a-404f-aec6-b73bd51b8d77, vol_name:cephfs) < ""
Feb 20 05:01:53 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.clone_index] untracking fc345f95-1f59-44c2-a8bb-7a1b1c6d5fef
Feb 20 05:01:53 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5f3b4204-930b-4bc2-9cff-7e982286eac6/.meta.tmp'
Feb 20 05:01:53 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5f3b4204-930b-4bc2-9cff-7e982286eac6/.meta.tmp' to config b'/volumes/_nogroup/5f3b4204-930b-4bc2-9cff-7e982286eac6/.meta'
Feb 20 05:01:53 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v572: 177 pgs: 177 active+clean; 210 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 970 B/s rd, 94 KiB/s wr, 10 op/s
Feb 20 05:01:53 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/e59c51a9-964d-4ac9-9a67-d46c3cec7b52/.meta.tmp'
Feb 20 05:01:53 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e59c51a9-964d-4ac9-9a67-d46c3cec7b52/.meta.tmp' to config b'/volumes/_nogroup/e59c51a9-964d-4ac9-9a67-d46c3cec7b52/.meta'
Feb 20 05:01:53 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, e59c51a9-964d-4ac9-9a67-d46c3cec7b52)
Feb 20 05:01:53 localhost ceph-mgr[286565]: [progress INFO root] Writing back 50 completed events
Feb 20 05:01:53 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb 20 05:01:53 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo'
Feb 20 05:01:53 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "2e325a34-23a3-4cc0-9c2e-4df32d08d36f", "auth_id": "tempest-cephx-id-408485567", "format": "json"}]: dispatch
Feb 20 05:01:53 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-408485567, format:json, prefix:fs subvolume deauthorize, sub_name:2e325a34-23a3-4cc0-9c2e-4df32d08d36f, vol_name:cephfs) < ""
Feb 20 05:01:53 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-408485567", "format": "json"} v 0)
Feb 20 05:01:53 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-408485567", "format": "json"} : dispatch
Feb 20 05:01:53 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-408485567"} v 0)
Feb 20 05:01:53 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-408485567"} : dispatch
Feb 20 05:01:53 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-408485567"}]': finished
Feb 20 05:01:54 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-408485567, format:json, prefix:fs subvolume deauthorize, sub_name:2e325a34-23a3-4cc0-9c2e-4df32d08d36f, vol_name:cephfs) < ""
Feb 20 05:01:54 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "2e325a34-23a3-4cc0-9c2e-4df32d08d36f", "auth_id": "tempest-cephx-id-408485567", "format": "json"}]: dispatch
Feb 20 05:01:54 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-408485567, format:json, prefix:fs subvolume evict, sub_name:2e325a34-23a3-4cc0-9c2e-4df32d08d36f, vol_name:cephfs) < ""
Feb 20 05:01:54 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-408485567, client_metadata.root=/volumes/_nogroup/2e325a34-23a3-4cc0-9c2e-4df32d08d36f/82f72010-739f-40e2-832b-6d6c7666a8fb
Feb 20 05:01:54 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb 20 05:01:54 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-408485567, format:json, prefix:fs subvolume evict, sub_name:2e325a34-23a3-4cc0-9c2e-4df32d08d36f, vol_name:cephfs) < ""
Feb 20 05:01:54 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2e325a34-23a3-4cc0-9c2e-4df32d08d36f", "format": "json"}]: dispatch
Feb 20 05:01:54 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:2e325a34-23a3-4cc0-9c2e-4df32d08d36f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 20 05:01:54 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo'
Feb 20 05:01:54 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-408485567", "format": "json"} : dispatch
Feb 20 05:01:54 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-408485567"} : dispatch
Feb 20 05:01:54 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-408485567"}]': finished
Feb 20 05:01:54 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:2e325a34-23a3-4cc0-9c2e-4df32d08d36f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 20 05:01:54 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:01:54.229+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2e325a34-23a3-4cc0-9c2e-4df32d08d36f' of type subvolume
Feb 20 05:01:54 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2e325a34-23a3-4cc0-9c2e-4df32d08d36f' of type subvolume
Feb 20 05:01:54 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "2e325a34-23a3-4cc0-9c2e-4df32d08d36f", "force": true, "format": "json"}]: dispatch
Feb 20 05:01:54 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2e325a34-23a3-4cc0-9c2e-4df32d08d36f, vol_name:cephfs) < ""
Feb 20 05:01:54 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/2e325a34-23a3-4cc0-9c2e-4df32d08d36f'' moved to trashcan
Feb 20 05:01:54 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 20 05:01:54 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2e325a34-23a3-4cc0-9c2e-4df32d08d36f, vol_name:cephfs) < ""
Feb 20 05:01:54 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "50918518-d5fc-4598-a9e4-c7aeadda4e5c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 05:01:54 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:50918518-d5fc-4598-a9e4-c7aeadda4e5c, vol_name:cephfs) < ""
Feb 20 05:01:54 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/50918518-d5fc-4598-a9e4-c7aeadda4e5c/.meta.tmp'
Feb 20 05:01:54 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/50918518-d5fc-4598-a9e4-c7aeadda4e5c/.meta.tmp' to config b'/volumes/_nogroup/50918518-d5fc-4598-a9e4-c7aeadda4e5c/.meta'
Feb 20 05:01:54 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:50918518-d5fc-4598-a9e4-c7aeadda4e5c, vol_name:cephfs) < ""
Feb 20 05:01:54 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "50918518-d5fc-4598-a9e4-c7aeadda4e5c", "format": "json"}]: dispatch
Feb 20 05:01:54 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:50918518-d5fc-4598-a9e4-c7aeadda4e5c, vol_name:cephfs) < ""
Feb 20 05:01:54 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:50918518-d5fc-4598-a9e4-c7aeadda4e5c, vol_name:cephfs) < ""
Feb 20 05:01:55 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : mgrmap e53: np0005625202.arwxwo(active, since 13m), standbys: np0005625203.lonygy, np0005625204.exgrzx
Feb 20 05:01:55 localhost nova_compute[280804]: 2026-02-20 10:01:55.133 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 05:01:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.
Feb 20 05:01:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.
Feb 20 05:01:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.
Feb 20 05:01:55 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "ca81ba90-4b7a-404f-aec6-b73bd51b8d77", "new_size": 1073741824, "no_shrink": true, "format": "json"}]: dispatch
Feb 20 05:01:55 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:1073741824, no_shrink:True, prefix:fs subvolume resize, sub_name:ca81ba90-4b7a-404f-aec6-b73bd51b8d77, vol_name:cephfs) < ""
Feb 20 05:01:55 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:1073741824, no_shrink:True, prefix:fs subvolume resize, sub_name:ca81ba90-4b7a-404f-aec6-b73bd51b8d77, vol_name:cephfs) < ""
Feb 20 05:01:55 localhost systemd[1]: tmp-crun.vS95C5.mount: Deactivated successfully.
Feb 20 05:01:55 localhost podman[326336]: 2026-02-20 10:01:55.466696248 +0000 UTC m=+0.102563988 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 20 05:01:55 localhost podman[326338]: 2026-02-20 10:01:55.503128408 +0000 UTC m=+0.135188046 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Feb 20 05:01:55 localhost podman[326338]: 2026-02-20 10:01:55.510787854 +0000 UTC m=+0.142847492 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 20 05:01:55 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully.
Feb 20 05:01:55 localhost podman[326336]: 2026-02-20 10:01:55.553827171 +0000 UTC m=+0.189694951 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, managed_by=edpm_ansible)
Feb 20 05:01:55 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully.
Feb 20 05:01:55 localhost podman[326337]: 2026-02-20 10:01:55.557374557 +0000 UTC m=+0.193311239 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 20 05:01:55 localhost podman[326337]: 2026-02-20 10:01:55.643834671 +0000 UTC m=+0.279771303 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Feb 20 05:01:55 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully.
Feb 20 05:01:55 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v573: 177 pgs: 177 active+clean; 210 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1.2 KiB/s rd, 134 KiB/s wr, 13 op/s
Feb 20 05:01:55 localhost nova_compute[280804]: 2026-02-20 10:01:55.742 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 20 05:01:55 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "817a644a-a040-452f-9ef0-baf961087441", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 05:01:55 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:817a644a-a040-452f-9ef0-baf961087441, vol_name:cephfs) < ""
Feb 20 05:01:56 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/817a644a-a040-452f-9ef0-baf961087441/.meta.tmp'
Feb 20 05:01:56 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/817a644a-a040-452f-9ef0-baf961087441/.meta.tmp' to config b'/volumes/_nogroup/817a644a-a040-452f-9ef0-baf961087441/.meta'
Feb 20 05:01:56 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:817a644a-a040-452f-9ef0-baf961087441, vol_name:cephfs) < ""
Feb 20 05:01:56 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "817a644a-a040-452f-9ef0-baf961087441", "format": "json"}]: dispatch
Feb 20 05:01:56 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:817a644a-a040-452f-9ef0-baf961087441, vol_name:cephfs) < ""
Feb 20 05:01:56 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:817a644a-a040-452f-9ef0-baf961087441, vol_name:cephfs) < ""
Feb 20 05:01:57 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b164674c-a82b-4878-a588-09120b66d1e5", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 05:01:57 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b164674c-a82b-4878-a588-09120b66d1e5, vol_name:cephfs) < ""
Feb 20 05:01:57 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b164674c-a82b-4878-a588-09120b66d1e5/.meta.tmp'
Feb 20 05:01:57 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b164674c-a82b-4878-a588-09120b66d1e5/.meta.tmp' to config b'/volumes/_nogroup/b164674c-a82b-4878-a588-09120b66d1e5/.meta'
Feb 20 05:01:57 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b164674c-a82b-4878-a588-09120b66d1e5, vol_name:cephfs) < ""
Feb 20 05:01:57 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b164674c-a82b-4878-a588-09120b66d1e5", "format": "json"}]: dispatch
Feb 20 05:01:57 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b164674c-a82b-4878-a588-09120b66d1e5, vol_name:cephfs) < ""
Feb 20 05:01:57 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b164674c-a82b-4878-a588-09120b66d1e5, vol_name:cephfs) < ""
Feb 20 05:01:57 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v574: 177 pgs: 177 active+clean; 210 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 767 B/s rd, 95 KiB/s wr, 9 op/s
Feb 20 05:01:57 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "50918518-d5fc-4598-a9e4-c7aeadda4e5c", "auth_id": "tempest-cephx-id-1559371662", "tenant_id": "e2c7618200d34da3a2f64f252dae7492", "access_level": "rw", "format": "json"}]: dispatch
Feb 20 05:01:57 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1559371662, format:json, prefix:fs subvolume authorize, sub_name:50918518-d5fc-4598-a9e4-c7aeadda4e5c, tenant_id:e2c7618200d34da3a2f64f252dae7492, vol_name:cephfs) < ""
Feb 20 05:01:57 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1559371662", "format": "json"} v 0)
Feb 20 05:01:57 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1559371662", "format": "json"} : dispatch
Feb 20 05:01:57 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: Creating meta for ID tempest-cephx-id-1559371662 with tenant e2c7618200d34da3a2f64f252dae7492
Feb 20 05:01:57 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1559371662", "caps": ["mds", "allow rw path=/volumes/_nogroup/50918518-d5fc-4598-a9e4-c7aeadda4e5c/790d1933-deae-4553-8b28-0d1b6f0fc428", "osd", "allow rw pool=manila_data namespace=fsvolumens_50918518-d5fc-4598-a9e4-c7aeadda4e5c", "mon", "allow r"], "format": "json"} v 0)
Feb 20 05:01:57 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1559371662", "caps": ["mds", "allow rw path=/volumes/_nogroup/50918518-d5fc-4598-a9e4-c7aeadda4e5c/790d1933-deae-4553-8b28-0d1b6f0fc428", "osd", "allow rw pool=manila_data namespace=fsvolumens_50918518-d5fc-4598-a9e4-c7aeadda4e5c", "mon", "allow r"], "format": "json"} : dispatch
Feb 20 05:01:57 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1559371662", "caps": ["mds", "allow rw path=/volumes/_nogroup/50918518-d5fc-4598-a9e4-c7aeadda4e5c/790d1933-deae-4553-8b28-0d1b6f0fc428", "osd", "allow rw pool=manila_data namespace=fsvolumens_50918518-d5fc-4598-a9e4-c7aeadda4e5c", "mon", "allow r"], "format": "json"}]': finished
Feb 20 05:01:57 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1559371662, format:json, prefix:fs subvolume authorize, sub_name:50918518-d5fc-4598-a9e4-c7aeadda4e5c, tenant_id:e2c7618200d34da3a2f64f252dae7492, vol_name:cephfs) < ""
Feb 20 05:01:58 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 05:01:58 localhost openstack_network_exporter[243776]: ERROR 10:01:58 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 05:01:58 localhost openstack_network_exporter[243776]:
Feb 20 05:01:58 localhost openstack_network_exporter[243776]: ERROR 10:01:58 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 05:01:58 localhost openstack_network_exporter[243776]:
Feb 20 05:01:58 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "50918518-d5fc-4598-a9e4-c7aeadda4e5c", "auth_id": "tempest-cephx-id-1559371662", "format": "json"}]: dispatch
Feb 20 05:01:58 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1559371662, format:json, prefix:fs subvolume deauthorize, sub_name:50918518-d5fc-4598-a9e4-c7aeadda4e5c, vol_name:cephfs) < ""
Feb 20 05:01:58 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1559371662", "format": "json"} : dispatch
Feb 20 05:01:58 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1559371662", "caps": ["mds", "allow rw path=/volumes/_nogroup/50918518-d5fc-4598-a9e4-c7aeadda4e5c/790d1933-deae-4553-8b28-0d1b6f0fc428", "osd", "allow rw pool=manila_data namespace=fsvolumens_50918518-d5fc-4598-a9e4-c7aeadda4e5c", "mon", "allow r"], "format": "json"} : dispatch Feb 20 05:01:58 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1559371662", "caps": ["mds", "allow rw path=/volumes/_nogroup/50918518-d5fc-4598-a9e4-c7aeadda4e5c/790d1933-deae-4553-8b28-0d1b6f0fc428", "osd", "allow rw pool=manila_data namespace=fsvolumens_50918518-d5fc-4598-a9e4-c7aeadda4e5c", "mon", "allow r"], "format": "json"}]': finished Feb 20 05:01:58 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1559371662", "format": "json"} v 0) Feb 20 05:01:58 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1559371662", "format": "json"} : dispatch Feb 20 05:01:58 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1559371662"} v 0) Feb 20 05:01:58 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1559371662"} : dispatch Feb 20 05:01:58 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": 
"client.tempest-cephx-id-1559371662"}]': finished Feb 20 05:01:58 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1559371662, format:json, prefix:fs subvolume deauthorize, sub_name:50918518-d5fc-4598-a9e4-c7aeadda4e5c, vol_name:cephfs) < "" Feb 20 05:01:58 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "50918518-d5fc-4598-a9e4-c7aeadda4e5c", "auth_id": "tempest-cephx-id-1559371662", "format": "json"}]: dispatch Feb 20 05:01:58 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1559371662, format:json, prefix:fs subvolume evict, sub_name:50918518-d5fc-4598-a9e4-c7aeadda4e5c, vol_name:cephfs) < "" Feb 20 05:01:58 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1559371662, client_metadata.root=/volumes/_nogroup/50918518-d5fc-4598-a9e4-c7aeadda4e5c/790d1933-deae-4553-8b28-0d1b6f0fc428 Feb 20 05:01:58 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 20 05:01:58 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1559371662, format:json, prefix:fs subvolume evict, sub_name:50918518-d5fc-4598-a9e4-c7aeadda4e5c, vol_name:cephfs) < "" Feb 20 05:01:58 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "50918518-d5fc-4598-a9e4-c7aeadda4e5c", "format": "json"}]: dispatch Feb 20 05:01:58 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:50918518-d5fc-4598-a9e4-c7aeadda4e5c, format:json, prefix:fs clone status, vol_name:cephfs) < "" 
Feb 20 05:01:58 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:50918518-d5fc-4598-a9e4-c7aeadda4e5c, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:01:58 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:01:58.517+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '50918518-d5fc-4598-a9e4-c7aeadda4e5c' of type subvolume Feb 20 05:01:58 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '50918518-d5fc-4598-a9e4-c7aeadda4e5c' of type subvolume Feb 20 05:01:58 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "50918518-d5fc-4598-a9e4-c7aeadda4e5c", "force": true, "format": "json"}]: dispatch Feb 20 05:01:58 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:50918518-d5fc-4598-a9e4-c7aeadda4e5c, vol_name:cephfs) < "" Feb 20 05:01:58 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/50918518-d5fc-4598-a9e4-c7aeadda4e5c'' moved to trashcan Feb 20 05:01:58 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 05:01:58 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:50918518-d5fc-4598-a9e4-c7aeadda4e5c, vol_name:cephfs) < "" Feb 20 05:01:58 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ca81ba90-4b7a-404f-aec6-b73bd51b8d77", 
"format": "json"}]: dispatch Feb 20 05:01:58 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ca81ba90-4b7a-404f-aec6-b73bd51b8d77, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:01:58 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ca81ba90-4b7a-404f-aec6-b73bd51b8d77, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:01:58 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:01:58.599+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ca81ba90-4b7a-404f-aec6-b73bd51b8d77' of type subvolume Feb 20 05:01:58 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ca81ba90-4b7a-404f-aec6-b73bd51b8d77' of type subvolume Feb 20 05:01:58 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ca81ba90-4b7a-404f-aec6-b73bd51b8d77", "force": true, "format": "json"}]: dispatch Feb 20 05:01:58 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ca81ba90-4b7a-404f-aec6-b73bd51b8d77, vol_name:cephfs) < "" Feb 20 05:01:58 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ca81ba90-4b7a-404f-aec6-b73bd51b8d77'' moved to trashcan Feb 20 05:01:58 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 05:01:58 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ca81ba90-4b7a-404f-aec6-b73bd51b8d77, vol_name:cephfs) < "" 
Feb 20 05:01:59 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ac8c96bc-6c82-492e-a472-5bf05b6575ac", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 05:01:59 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ac8c96bc-6c82-492e-a472-5bf05b6575ac, vol_name:cephfs) < "" Feb 20 05:01:59 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ac8c96bc-6c82-492e-a472-5bf05b6575ac/.meta.tmp' Feb 20 05:01:59 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ac8c96bc-6c82-492e-a472-5bf05b6575ac/.meta.tmp' to config b'/volumes/_nogroup/ac8c96bc-6c82-492e-a472-5bf05b6575ac/.meta' Feb 20 05:01:59 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ac8c96bc-6c82-492e-a472-5bf05b6575ac, vol_name:cephfs) < "" Feb 20 05:01:59 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ac8c96bc-6c82-492e-a472-5bf05b6575ac", "format": "json"}]: dispatch Feb 20 05:01:59 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ac8c96bc-6c82-492e-a472-5bf05b6575ac, vol_name:cephfs) < "" Feb 20 05:01:59 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, 
sub_name:ac8c96bc-6c82-492e-a472-5bf05b6575ac, vol_name:cephfs) < "" Feb 20 05:01:59 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1559371662", "format": "json"} : dispatch Feb 20 05:01:59 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1559371662"} : dispatch Feb 20 05:01:59 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1559371662"}]': finished Feb 20 05:01:59 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v575: 177 pgs: 177 active+clean; 210 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 767 B/s rd, 95 KiB/s wr, 9 op/s Feb 20 05:02:00 localhost nova_compute[280804]: 2026-02-20 10:02:00.135 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:02:00 localhost nova_compute[280804]: 2026-02-20 10:02:00.745 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:02:00 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "b164674c-a82b-4878-a588-09120b66d1e5", "auth_id": "tempest-cephx-id-408485567", "tenant_id": "ad1dd0d43ec7421b9a4a06c39e3657c6", "access_level": "rw", "format": "json"}]: dispatch Feb 20 05:02:00 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-408485567, format:json, prefix:fs subvolume authorize, sub_name:b164674c-a82b-4878-a588-09120b66d1e5, 
tenant_id:ad1dd0d43ec7421b9a4a06c39e3657c6, vol_name:cephfs) < "" Feb 20 05:02:00 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-408485567", "format": "json"} v 0) Feb 20 05:02:00 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-408485567", "format": "json"} : dispatch Feb 20 05:02:00 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: Creating meta for ID tempest-cephx-id-408485567 with tenant ad1dd0d43ec7421b9a4a06c39e3657c6 Feb 20 05:02:00 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-408485567", "caps": ["mds", "allow rw path=/volumes/_nogroup/b164674c-a82b-4878-a588-09120b66d1e5/72089254-fcf7-474f-aaf6-ba53e49ce9b2", "osd", "allow rw pool=manila_data namespace=fsvolumens_b164674c-a82b-4878-a588-09120b66d1e5", "mon", "allow r"], "format": "json"} v 0) Feb 20 05:02:00 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-408485567", "caps": ["mds", "allow rw path=/volumes/_nogroup/b164674c-a82b-4878-a588-09120b66d1e5/72089254-fcf7-474f-aaf6-ba53e49ce9b2", "osd", "allow rw pool=manila_data namespace=fsvolumens_b164674c-a82b-4878-a588-09120b66d1e5", "mon", "allow r"], "format": "json"} : dispatch Feb 20 05:02:00 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-408485567", "caps": ["mds", "allow rw 
path=/volumes/_nogroup/b164674c-a82b-4878-a588-09120b66d1e5/72089254-fcf7-474f-aaf6-ba53e49ce9b2", "osd", "allow rw pool=manila_data namespace=fsvolumens_b164674c-a82b-4878-a588-09120b66d1e5", "mon", "allow r"], "format": "json"}]': finished Feb 20 05:02:01 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-408485567, format:json, prefix:fs subvolume authorize, sub_name:b164674c-a82b-4878-a588-09120b66d1e5, tenant_id:ad1dd0d43ec7421b9a4a06c39e3657c6, vol_name:cephfs) < "" Feb 20 05:02:01 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-408485567", "format": "json"} : dispatch Feb 20 05:02:01 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-408485567", "caps": ["mds", "allow rw path=/volumes/_nogroup/b164674c-a82b-4878-a588-09120b66d1e5/72089254-fcf7-474f-aaf6-ba53e49ce9b2", "osd", "allow rw pool=manila_data namespace=fsvolumens_b164674c-a82b-4878-a588-09120b66d1e5", "mon", "allow r"], "format": "json"} : dispatch Feb 20 05:02:01 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-408485567", "caps": ["mds", "allow rw path=/volumes/_nogroup/b164674c-a82b-4878-a588-09120b66d1e5/72089254-fcf7-474f-aaf6-ba53e49ce9b2", "osd", "allow rw pool=manila_data namespace=fsvolumens_b164674c-a82b-4878-a588-09120b66d1e5", "mon", "allow r"], "format": "json"}]': finished Feb 20 05:02:01 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v576: 177 pgs: 177 active+clean; 211 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1023 B/s rd, 162 KiB/s wr, 15 op/s Feb 20 05:02:01 localhost ceph-mgr[286565]: log_channel(audit) 
log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8e58eb7a-11d1-42b2-ad27-1874f562bebd", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 05:02:01 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8e58eb7a-11d1-42b2-ad27-1874f562bebd, vol_name:cephfs) < "" Feb 20 05:02:01 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8e58eb7a-11d1-42b2-ad27-1874f562bebd/.meta.tmp' Feb 20 05:02:01 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8e58eb7a-11d1-42b2-ad27-1874f562bebd/.meta.tmp' to config b'/volumes/_nogroup/8e58eb7a-11d1-42b2-ad27-1874f562bebd/.meta' Feb 20 05:02:01 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8e58eb7a-11d1-42b2-ad27-1874f562bebd, vol_name:cephfs) < "" Feb 20 05:02:01 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8e58eb7a-11d1-42b2-ad27-1874f562bebd", "format": "json"}]: dispatch Feb 20 05:02:01 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8e58eb7a-11d1-42b2-ad27-1874f562bebd, vol_name:cephfs) < "" Feb 20 05:02:01 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8e58eb7a-11d1-42b2-ad27-1874f562bebd, vol_name:cephfs) < "" Feb 20 05:02:02 localhost 
ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ac8c96bc-6c82-492e-a472-5bf05b6575ac", "format": "json"}]: dispatch Feb 20 05:02:02 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ac8c96bc-6c82-492e-a472-5bf05b6575ac, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:02:02 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ac8c96bc-6c82-492e-a472-5bf05b6575ac, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:02:02 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:02:02.466+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ac8c96bc-6c82-492e-a472-5bf05b6575ac' of type subvolume Feb 20 05:02:02 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ac8c96bc-6c82-492e-a472-5bf05b6575ac' of type subvolume Feb 20 05:02:02 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ac8c96bc-6c82-492e-a472-5bf05b6575ac", "force": true, "format": "json"}]: dispatch Feb 20 05:02:02 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ac8c96bc-6c82-492e-a472-5bf05b6575ac, vol_name:cephfs) < "" Feb 20 05:02:02 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ac8c96bc-6c82-492e-a472-5bf05b6575ac'' moved to trashcan Feb 20 05:02:02 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 05:02:02 
localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ac8c96bc-6c82-492e-a472-5bf05b6575ac, vol_name:cephfs) < "" Feb 20 05:02:02 localhost ovn_metadata_agent[161761]: 2026-02-20 10:02:02.579 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'c2:c2:14', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '46:d0:69:88:97:47'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 05:02:02 localhost nova_compute[280804]: 2026-02-20 10:02:02.579 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:02:02 localhost ovn_metadata_agent[161761]: 2026-02-20 10:02:02.580 161766 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Feb 20 05:02:03 localhost sshd[326405]: main: sshd: ssh-rsa algorithm is disabled Feb 20 05:02:03 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:02:03 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v577: 177 pgs: 177 active+clean; 211 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 682 B/s rd, 118 KiB/s wr, 11 op/s Feb 20 05:02:04 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs 
subvolume deauthorize", "vol_name": "cephfs", "sub_name": "b164674c-a82b-4878-a588-09120b66d1e5", "auth_id": "tempest-cephx-id-408485567", "format": "json"}]: dispatch Feb 20 05:02:04 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-408485567, format:json, prefix:fs subvolume deauthorize, sub_name:b164674c-a82b-4878-a588-09120b66d1e5, vol_name:cephfs) < "" Feb 20 05:02:04 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-408485567", "format": "json"} v 0) Feb 20 05:02:04 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-408485567", "format": "json"} : dispatch Feb 20 05:02:04 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-408485567"} v 0) Feb 20 05:02:04 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-408485567"} : dispatch Feb 20 05:02:04 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-408485567"}]': finished Feb 20 05:02:04 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-408485567, format:json, prefix:fs subvolume deauthorize, sub_name:b164674c-a82b-4878-a588-09120b66d1e5, vol_name:cephfs) < "" Feb 20 05:02:04 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": 
"b164674c-a82b-4878-a588-09120b66d1e5", "auth_id": "tempest-cephx-id-408485567", "format": "json"}]: dispatch Feb 20 05:02:04 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-408485567, format:json, prefix:fs subvolume evict, sub_name:b164674c-a82b-4878-a588-09120b66d1e5, vol_name:cephfs) < "" Feb 20 05:02:04 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-408485567, client_metadata.root=/volumes/_nogroup/b164674c-a82b-4878-a588-09120b66d1e5/72089254-fcf7-474f-aaf6-ba53e49ce9b2 Feb 20 05:02:04 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 20 05:02:04 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-408485567, format:json, prefix:fs subvolume evict, sub_name:b164674c-a82b-4878-a588-09120b66d1e5, vol_name:cephfs) < "" Feb 20 05:02:04 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-408485567", "format": "json"} : dispatch Feb 20 05:02:04 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-408485567"} : dispatch Feb 20 05:02:04 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-408485567"}]': finished Feb 20 05:02:04 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b164674c-a82b-4878-a588-09120b66d1e5", "format": "json"}]: dispatch Feb 20 05:02:04 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting 
_cmd_fs_clone_status(clone_name:b164674c-a82b-4878-a588-09120b66d1e5, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:02:04 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b164674c-a82b-4878-a588-09120b66d1e5, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:02:04 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b164674c-a82b-4878-a588-09120b66d1e5' of type subvolume Feb 20 05:02:04 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:02:04.393+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b164674c-a82b-4878-a588-09120b66d1e5' of type subvolume Feb 20 05:02:04 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b164674c-a82b-4878-a588-09120b66d1e5", "force": true, "format": "json"}]: dispatch Feb 20 05:02:04 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b164674c-a82b-4878-a588-09120b66d1e5, vol_name:cephfs) < "" Feb 20 05:02:04 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/b164674c-a82b-4878-a588-09120b66d1e5'' moved to trashcan Feb 20 05:02:04 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 05:02:04 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b164674c-a82b-4878-a588-09120b66d1e5, vol_name:cephfs) < "" Feb 20 05:02:04 localhost nova_compute[280804]: 2026-02-20 10:02:04.524 280808 DEBUG oslo_service.periodic_task 
[None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 05:02:04 localhost nova_compute[280804]: 2026-02-20 10:02:04.524 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 20 05:02:04 localhost nova_compute[280804]: 2026-02-20 10:02:04.525 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 20 05:02:04 localhost nova_compute[280804]: 2026-02-20 10:02:04.550 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 20 05:02:04 localhost nova_compute[280804]: 2026-02-20 10:02:04.551 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 05:02:04 localhost nova_compute[280804]: 2026-02-20 10:02:04.551 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 05:02:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e. 
Feb 20 05:02:04 localhost podman[326408]: 2026-02-20 10:02:04.801129902 +0000 UTC m=+0.085558532 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 20 05:02:04 localhost podman[326408]: 2026-02-20 10:02:04.814779809 +0000 UTC m=+0.099208459 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, 
config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter) Feb 20 05:02:04 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully. 
Feb 20 05:02:05 localhost nova_compute[280804]: 2026-02-20 10:02:05.178 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:02:05 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e2d3b5dd-fcdf-415f-b600-a30d6ac0ed75", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 05:02:05 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e2d3b5dd-fcdf-415f-b600-a30d6ac0ed75, vol_name:cephfs) < "" Feb 20 05:02:05 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e2d3b5dd-fcdf-415f-b600-a30d6ac0ed75/.meta.tmp' Feb 20 05:02:05 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e2d3b5dd-fcdf-415f-b600-a30d6ac0ed75/.meta.tmp' to config b'/volumes/_nogroup/e2d3b5dd-fcdf-415f-b600-a30d6ac0ed75/.meta' Feb 20 05:02:05 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e2d3b5dd-fcdf-415f-b600-a30d6ac0ed75, vol_name:cephfs) < "" Feb 20 05:02:05 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e2d3b5dd-fcdf-415f-b600-a30d6ac0ed75", "format": "json"}]: dispatch Feb 20 05:02:05 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, 
sub_name:e2d3b5dd-fcdf-415f-b600-a30d6ac0ed75, vol_name:cephfs) < "" Feb 20 05:02:05 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e2d3b5dd-fcdf-415f-b600-a30d6ac0ed75, vol_name:cephfs) < "" Feb 20 05:02:05 localhost nova_compute[280804]: 2026-02-20 10:02:05.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 05:02:05 localhost nova_compute[280804]: 2026-02-20 10:02:05.511 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 20 05:02:05 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8e58eb7a-11d1-42b2-ad27-1874f562bebd", "format": "json"}]: dispatch Feb 20 05:02:05 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:8e58eb7a-11d1-42b2-ad27-1874f562bebd, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:02:05 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:8e58eb7a-11d1-42b2-ad27-1874f562bebd, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:02:05 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:02:05.667+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8e58eb7a-11d1-42b2-ad27-1874f562bebd' of type subvolume Feb 20 05:02:05 localhost ceph-mgr[286565]: mgr.server reply reply (95) 
Operation not supported operation 'clone-status' is not allowed on subvolume '8e58eb7a-11d1-42b2-ad27-1874f562bebd' of type subvolume Feb 20 05:02:05 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8e58eb7a-11d1-42b2-ad27-1874f562bebd", "force": true, "format": "json"}]: dispatch Feb 20 05:02:05 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8e58eb7a-11d1-42b2-ad27-1874f562bebd, vol_name:cephfs) < "" Feb 20 05:02:05 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/8e58eb7a-11d1-42b2-ad27-1874f562bebd'' moved to trashcan Feb 20 05:02:05 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 05:02:05 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8e58eb7a-11d1-42b2-ad27-1874f562bebd, vol_name:cephfs) < "" Feb 20 05:02:05 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v578: 177 pgs: 177 active+clean; 211 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1.1 KiB/s rd, 164 KiB/s wr, 16 op/s Feb 20 05:02:05 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "817a644a-a040-452f-9ef0-baf961087441", "format": "json"}]: dispatch Feb 20 05:02:05 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:817a644a-a040-452f-9ef0-baf961087441, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:02:05 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:817a644a-a040-452f-9ef0-baf961087441, 
format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:02:05 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:02:05.716+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '817a644a-a040-452f-9ef0-baf961087441' of type subvolume Feb 20 05:02:05 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '817a644a-a040-452f-9ef0-baf961087441' of type subvolume Feb 20 05:02:05 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "817a644a-a040-452f-9ef0-baf961087441", "force": true, "format": "json"}]: dispatch Feb 20 05:02:05 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:817a644a-a040-452f-9ef0-baf961087441, vol_name:cephfs) < "" Feb 20 05:02:05 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/817a644a-a040-452f-9ef0-baf961087441'' moved to trashcan Feb 20 05:02:05 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 05:02:05 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:817a644a-a040-452f-9ef0-baf961087441, vol_name:cephfs) < "" Feb 20 05:02:05 localhost nova_compute[280804]: 2026-02-20 10:02:05.747 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:02:05 localhost ovn_metadata_agent[161761]: 2026-02-20 10:02:05.926 161766 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by 
"neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 05:02:05 localhost ovn_metadata_agent[161761]: 2026-02-20 10:02:05.927 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 05:02:05 localhost ovn_metadata_agent[161761]: 2026-02-20 10:02:05.927 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 05:02:07 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "cd0ea541-924c-41c6-95b4-11e7d85bd173", "auth_id": "tempest-cephx-id-408485567", "tenant_id": "ad1dd0d43ec7421b9a4a06c39e3657c6", "access_level": "rw", "format": "json"}]: dispatch Feb 20 05:02:07 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-408485567, format:json, prefix:fs subvolume authorize, sub_name:cd0ea541-924c-41c6-95b4-11e7d85bd173, tenant_id:ad1dd0d43ec7421b9a4a06c39e3657c6, vol_name:cephfs) < "" Feb 20 05:02:07 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-408485567", "format": "json"} v 0) Feb 20 05:02:07 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-408485567", "format": "json"} : dispatch Feb 20 
05:02:07 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: Creating meta for ID tempest-cephx-id-408485567 with tenant ad1dd0d43ec7421b9a4a06c39e3657c6 Feb 20 05:02:07 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-408485567", "caps": ["mds", "allow rw path=/volumes/_nogroup/cd0ea541-924c-41c6-95b4-11e7d85bd173/2edc2d31-21fe-4d10-b523-1775f0f273db", "osd", "allow rw pool=manila_data namespace=fsvolumens_cd0ea541-924c-41c6-95b4-11e7d85bd173", "mon", "allow r"], "format": "json"} v 0) Feb 20 05:02:07 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-408485567", "caps": ["mds", "allow rw path=/volumes/_nogroup/cd0ea541-924c-41c6-95b4-11e7d85bd173/2edc2d31-21fe-4d10-b523-1775f0f273db", "osd", "allow rw pool=manila_data namespace=fsvolumens_cd0ea541-924c-41c6-95b4-11e7d85bd173", "mon", "allow r"], "format": "json"} : dispatch Feb 20 05:02:07 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-408485567", "caps": ["mds", "allow rw path=/volumes/_nogroup/cd0ea541-924c-41c6-95b4-11e7d85bd173/2edc2d31-21fe-4d10-b523-1775f0f273db", "osd", "allow rw pool=manila_data namespace=fsvolumens_cd0ea541-924c-41c6-95b4-11e7d85bd173", "mon", "allow r"], "format": "json"}]': finished Feb 20 05:02:07 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-408485567, format:json, prefix:fs subvolume authorize, sub_name:cd0ea541-924c-41c6-95b4-11e7d85bd173, tenant_id:ad1dd0d43ec7421b9a4a06c39e3657c6, vol_name:cephfs) < "" Feb 20 05:02:07 localhost 
ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-408485567", "format": "json"} : dispatch Feb 20 05:02:07 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-408485567", "caps": ["mds", "allow rw path=/volumes/_nogroup/cd0ea541-924c-41c6-95b4-11e7d85bd173/2edc2d31-21fe-4d10-b523-1775f0f273db", "osd", "allow rw pool=manila_data namespace=fsvolumens_cd0ea541-924c-41c6-95b4-11e7d85bd173", "mon", "allow r"], "format": "json"} : dispatch Feb 20 05:02:07 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-408485567", "caps": ["mds", "allow rw path=/volumes/_nogroup/cd0ea541-924c-41c6-95b4-11e7d85bd173/2edc2d31-21fe-4d10-b523-1775f0f273db", "osd", "allow rw pool=manila_data namespace=fsvolumens_cd0ea541-924c-41c6-95b4-11e7d85bd173", "mon", "allow r"], "format": "json"}]': finished Feb 20 05:02:07 localhost nova_compute[280804]: 2026-02-20 10:02:07.506 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 05:02:07 localhost nova_compute[280804]: 2026-02-20 10:02:07.520 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 05:02:07 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v579: 177 pgs: 177 active+clean; 211 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 682 B/s rd, 112 KiB/s wr, 11 
op/s Feb 20 05:02:08 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:02:08 localhost nova_compute[280804]: 2026-02-20 10:02:08.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 05:02:08 localhost nova_compute[280804]: 2026-02-20 10:02:08.511 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 05:02:08 localhost nova_compute[280804]: 2026-02-20 10:02:08.531 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 05:02:08 localhost nova_compute[280804]: 2026-02-20 10:02:08.531 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 05:02:08 localhost nova_compute[280804]: 2026-02-20 10:02:08.532 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 05:02:08 localhost 
nova_compute[280804]: 2026-02-20 10:02:08.532 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Auditing locally available compute resources for np0005625202.localdomain (node: np0005625202.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 20 05:02:08 localhost nova_compute[280804]: 2026-02-20 10:02:08.532 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 05:02:08 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "e2d3b5dd-fcdf-415f-b600-a30d6ac0ed75", "snap_name": "acd1b19e-a73f-46df-b23c-a5b5d955cb9c", "format": "json"}]: dispatch Feb 20 05:02:08 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:acd1b19e-a73f-46df-b23c-a5b5d955cb9c, sub_name:e2d3b5dd-fcdf-415f-b600-a30d6ac0ed75, vol_name:cephfs) < "" Feb 20 05:02:08 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:acd1b19e-a73f-46df-b23c-a5b5d955cb9c, sub_name:e2d3b5dd-fcdf-415f-b600-a30d6ac0ed75, vol_name:cephfs) < "" Feb 20 05:02:08 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "d1f2e6a9-4e51-4824-ae46-7f0ab0897ac3", "mode": "0755", "format": "json"}]: dispatch Feb 20 05:02:08 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolumegroup_create(format:json, group_name:d1f2e6a9-4e51-4824-ae46-7f0ab0897ac3, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Feb 20 05:02:08 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:d1f2e6a9-4e51-4824-ae46-7f0ab0897ac3, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Feb 20 05:02:08 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 20 05:02:08 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/2191019720' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 20 05:02:08 localhost nova_compute[280804]: 2026-02-20 10:02:08.926 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.393s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 05:02:09 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "97e63579-f59d-4812-9af1-a8d227932ace", "format": "json"}]: dispatch Feb 20 05:02:09 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:97e63579-f59d-4812-9af1-a8d227932ace, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:02:09 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:97e63579-f59d-4812-9af1-a8d227932ace, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:02:09 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": 
"97e63579-f59d-4812-9af1-a8d227932ace", "force": true, "format": "json"}]: dispatch Feb 20 05:02:09 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:97e63579-f59d-4812-9af1-a8d227932ace, vol_name:cephfs) < "" Feb 20 05:02:09 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/97e63579-f59d-4812-9af1-a8d227932ace'' moved to trashcan Feb 20 05:02:09 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 05:02:09 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:97e63579-f59d-4812-9af1-a8d227932ace, vol_name:cephfs) < "" Feb 20 05:02:09 localhost nova_compute[280804]: 2026-02-20 10:02:09.138 280808 WARNING nova.virt.libvirt.driver [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 20 05:02:09 localhost nova_compute[280804]: 2026-02-20 10:02:09.139 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Hypervisor/Node resource view: name=np0005625202.localdomain free_ram=11363MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", 
"product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 20 05:02:09 localhost nova_compute[280804]: 2026-02-20 10:02:09.140 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 05:02:09 localhost nova_compute[280804]: 2026-02-20 10:02:09.140 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 05:02:09 localhost nova_compute[280804]: 2026-02-20 10:02:09.195 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 20 05:02:09 localhost nova_compute[280804]: 2026-02-20 10:02:09.195 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Final resource view: name=np0005625202.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 20 05:02:09 localhost nova_compute[280804]: 2026-02-20 10:02:09.213 280808 DEBUG 
oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 05:02:09 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 20 05:02:09 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/1875780044' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 20 05:02:09 localhost nova_compute[280804]: 2026-02-20 10:02:09.666 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 05:02:09 localhost nova_compute[280804]: 2026-02-20 10:02:09.673 280808 DEBUG nova.compute.provider_tree [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 20 05:02:09 localhost nova_compute[280804]: 2026-02-20 10:02:09.687 280808 DEBUG nova.scheduler.client.report [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Inventory has not changed for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider 
/usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 20 05:02:09 localhost nova_compute[280804]: 2026-02-20 10:02:09.690 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Compute_service record updated for np0005625202.localdomain:np0005625202.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 20 05:02:09 localhost nova_compute[280804]: 2026-02-20 10:02:09.690 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.550s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 05:02:09 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v580: 177 pgs: 177 active+clean; 211 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 682 B/s rd, 112 KiB/s wr, 11 op/s Feb 20 05:02:10 localhost nova_compute[280804]: 2026-02-20 10:02:10.179 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:02:10 localhost nova_compute[280804]: 2026-02-20 10:02:10.749 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:02:11 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "cd0ea541-924c-41c6-95b4-11e7d85bd173", "auth_id": "tempest-cephx-id-408485567", "format": "json"}]: dispatch Feb 20 05:02:11 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-408485567, format:json, prefix:fs subvolume deauthorize, sub_name:cd0ea541-924c-41c6-95b4-11e7d85bd173, 
vol_name:cephfs) < "" Feb 20 05:02:11 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-408485567", "format": "json"} v 0) Feb 20 05:02:11 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-408485567", "format": "json"} : dispatch Feb 20 05:02:11 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-408485567"} v 0) Feb 20 05:02:11 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-408485567"} : dispatch Feb 20 05:02:11 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-408485567"}]': finished Feb 20 05:02:11 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-408485567, format:json, prefix:fs subvolume deauthorize, sub_name:cd0ea541-924c-41c6-95b4-11e7d85bd173, vol_name:cephfs) < "" Feb 20 05:02:11 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "cd0ea541-924c-41c6-95b4-11e7d85bd173", "auth_id": "tempest-cephx-id-408485567", "format": "json"}]: dispatch Feb 20 05:02:11 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-408485567, format:json, prefix:fs subvolume evict, sub_name:cd0ea541-924c-41c6-95b4-11e7d85bd173, vol_name:cephfs) < "" Feb 20 05:02:11 localhost ceph-mgr[286565]: [volumes INFO 
volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-408485567, client_metadata.root=/volumes/_nogroup/cd0ea541-924c-41c6-95b4-11e7d85bd173/2edc2d31-21fe-4d10-b523-1775f0f273db Feb 20 05:02:11 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 20 05:02:11 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-408485567, format:json, prefix:fs subvolume evict, sub_name:cd0ea541-924c-41c6-95b4-11e7d85bd173, vol_name:cephfs) < "" Feb 20 05:02:11 localhost nova_compute[280804]: 2026-02-20 10:02:11.690 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 05:02:11 localhost nova_compute[280804]: 2026-02-20 10:02:11.691 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 05:02:11 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v581: 177 pgs: 177 active+clean; 212 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1.2 KiB/s rd, 164 KiB/s wr, 16 op/s Feb 20 05:02:11 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "d1f2e6a9-4e51-4824-ae46-7f0ab0897ac3", "force": true, "format": "json"}]: dispatch Feb 20 05:02:11 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:d1f2e6a9-4e51-4824-ae46-7f0ab0897ac3, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Feb 20 05:02:11 localhost 
ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:d1f2e6a9-4e51-4824-ae46-7f0ab0897ac3, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Feb 20 05:02:12 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "5df751b6-6f21-4d96-be03-f1dd0a841066", "snap_name": "ba024a11-8c4d-4adf-9f1b-141c894e0dc3_0b43dd8b-5f2b-4576-9ac7-b968c8ab5c96", "force": true, "format": "json"}]: dispatch Feb 20 05:02:12 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ba024a11-8c4d-4adf-9f1b-141c894e0dc3_0b43dd8b-5f2b-4576-9ac7-b968c8ab5c96, sub_name:5df751b6-6f21-4d96-be03-f1dd0a841066, vol_name:cephfs) < "" Feb 20 05:02:12 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5df751b6-6f21-4d96-be03-f1dd0a841066/.meta.tmp' Feb 20 05:02:12 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5df751b6-6f21-4d96-be03-f1dd0a841066/.meta.tmp' to config b'/volumes/_nogroup/5df751b6-6f21-4d96-be03-f1dd0a841066/.meta' Feb 20 05:02:12 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ba024a11-8c4d-4adf-9f1b-141c894e0dc3_0b43dd8b-5f2b-4576-9ac7-b968c8ab5c96, sub_name:5df751b6-6f21-4d96-be03-f1dd0a841066, vol_name:cephfs) < "" Feb 20 05:02:12 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "5df751b6-6f21-4d96-be03-f1dd0a841066", "snap_name": "ba024a11-8c4d-4adf-9f1b-141c894e0dc3", "force": 
true, "format": "json"}]: dispatch Feb 20 05:02:12 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ba024a11-8c4d-4adf-9f1b-141c894e0dc3, sub_name:5df751b6-6f21-4d96-be03-f1dd0a841066, vol_name:cephfs) < "" Feb 20 05:02:12 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5df751b6-6f21-4d96-be03-f1dd0a841066/.meta.tmp' Feb 20 05:02:12 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5df751b6-6f21-4d96-be03-f1dd0a841066/.meta.tmp' to config b'/volumes/_nogroup/5df751b6-6f21-4d96-be03-f1dd0a841066/.meta' Feb 20 05:02:12 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ba024a11-8c4d-4adf-9f1b-141c894e0dc3, sub_name:5df751b6-6f21-4d96-be03-f1dd0a841066, vol_name:cephfs) < "" Feb 20 05:02:12 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-408485567", "format": "json"} : dispatch Feb 20 05:02:12 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-408485567"} : dispatch Feb 20 05:02:12 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-408485567"}]': finished Feb 20 05:02:12 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e2d3b5dd-fcdf-415f-b600-a30d6ac0ed75", "snap_name": 
"acd1b19e-a73f-46df-b23c-a5b5d955cb9c_200d92f6-c0eb-4bdf-bc03-a206af0c82a7", "force": true, "format": "json"}]: dispatch Feb 20 05:02:12 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:acd1b19e-a73f-46df-b23c-a5b5d955cb9c_200d92f6-c0eb-4bdf-bc03-a206af0c82a7, sub_name:e2d3b5dd-fcdf-415f-b600-a30d6ac0ed75, vol_name:cephfs) < "" Feb 20 05:02:12 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e2d3b5dd-fcdf-415f-b600-a30d6ac0ed75/.meta.tmp' Feb 20 05:02:12 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e2d3b5dd-fcdf-415f-b600-a30d6ac0ed75/.meta.tmp' to config b'/volumes/_nogroup/e2d3b5dd-fcdf-415f-b600-a30d6ac0ed75/.meta' Feb 20 05:02:12 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:acd1b19e-a73f-46df-b23c-a5b5d955cb9c_200d92f6-c0eb-4bdf-bc03-a206af0c82a7, sub_name:e2d3b5dd-fcdf-415f-b600-a30d6ac0ed75, vol_name:cephfs) < "" Feb 20 05:02:12 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e2d3b5dd-fcdf-415f-b600-a30d6ac0ed75", "snap_name": "acd1b19e-a73f-46df-b23c-a5b5d955cb9c", "force": true, "format": "json"}]: dispatch Feb 20 05:02:12 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:acd1b19e-a73f-46df-b23c-a5b5d955cb9c, sub_name:e2d3b5dd-fcdf-415f-b600-a30d6ac0ed75, vol_name:cephfs) < "" Feb 20 05:02:12 localhost ovn_metadata_agent[161761]: 2026-02-20 10:02:12.582 161766 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn 
n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0a83b6be-9fe2-42ef-8768-88847d97b165, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 20 05:02:12 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e2d3b5dd-fcdf-415f-b600-a30d6ac0ed75/.meta.tmp' Feb 20 05:02:12 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e2d3b5dd-fcdf-415f-b600-a30d6ac0ed75/.meta.tmp' to config b'/volumes/_nogroup/e2d3b5dd-fcdf-415f-b600-a30d6ac0ed75/.meta' Feb 20 05:02:12 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:acd1b19e-a73f-46df-b23c-a5b5d955cb9c, sub_name:e2d3b5dd-fcdf-415f-b600-a30d6ac0ed75, vol_name:cephfs) < "" Feb 20 05:02:13 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:02:13 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v582: 177 pgs: 177 active+clean; 212 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 938 B/s rd, 97 KiB/s wr, 10 op/s Feb 20 05:02:14 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "cd0ea541-924c-41c6-95b4-11e7d85bd173", "auth_id": "tempest-cephx-id-408485567", "tenant_id": "ad1dd0d43ec7421b9a4a06c39e3657c6", "access_level": "rw", "format": "json"}]: dispatch Feb 20 05:02:14 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-408485567, format:json, prefix:fs 
subvolume authorize, sub_name:cd0ea541-924c-41c6-95b4-11e7d85bd173, tenant_id:ad1dd0d43ec7421b9a4a06c39e3657c6, vol_name:cephfs) < "" Feb 20 05:02:14 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-408485567", "format": "json"} v 0) Feb 20 05:02:14 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-408485567", "format": "json"} : dispatch Feb 20 05:02:14 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: Creating meta for ID tempest-cephx-id-408485567 with tenant ad1dd0d43ec7421b9a4a06c39e3657c6 Feb 20 05:02:14 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-408485567", "caps": ["mds", "allow rw path=/volumes/_nogroup/cd0ea541-924c-41c6-95b4-11e7d85bd173/2edc2d31-21fe-4d10-b523-1775f0f273db", "osd", "allow rw pool=manila_data namespace=fsvolumens_cd0ea541-924c-41c6-95b4-11e7d85bd173", "mon", "allow r"], "format": "json"} v 0) Feb 20 05:02:14 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-408485567", "caps": ["mds", "allow rw path=/volumes/_nogroup/cd0ea541-924c-41c6-95b4-11e7d85bd173/2edc2d31-21fe-4d10-b523-1775f0f273db", "osd", "allow rw pool=manila_data namespace=fsvolumens_cd0ea541-924c-41c6-95b4-11e7d85bd173", "mon", "allow r"], "format": "json"} : dispatch Feb 20 05:02:14 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-408485567", "caps": ["mds", "allow rw 
path=/volumes/_nogroup/cd0ea541-924c-41c6-95b4-11e7d85bd173/2edc2d31-21fe-4d10-b523-1775f0f273db", "osd", "allow rw pool=manila_data namespace=fsvolumens_cd0ea541-924c-41c6-95b4-11e7d85bd173", "mon", "allow r"], "format": "json"}]': finished Feb 20 05:02:14 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-408485567, format:json, prefix:fs subvolume authorize, sub_name:cd0ea541-924c-41c6-95b4-11e7d85bd173, tenant_id:ad1dd0d43ec7421b9a4a06c39e3657c6, vol_name:cephfs) < "" Feb 20 05:02:15 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "f80fe8d3-ac9e-4618-9a8e-1111bb6ccdac", "mode": "0755", "format": "json"}]: dispatch Feb 20 05:02:15 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:f80fe8d3-ac9e-4618-9a8e-1111bb6ccdac, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Feb 20 05:02:15 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:f80fe8d3-ac9e-4618-9a8e-1111bb6ccdac, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Feb 20 05:02:15 localhost nova_compute[280804]: 2026-02-20 10:02:15.181 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:02:15 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-408485567", "format": "json"} : dispatch Feb 20 05:02:15 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-408485567", "caps": ["mds", "allow rw 
path=/volumes/_nogroup/cd0ea541-924c-41c6-95b4-11e7d85bd173/2edc2d31-21fe-4d10-b523-1775f0f273db", "osd", "allow rw pool=manila_data namespace=fsvolumens_cd0ea541-924c-41c6-95b4-11e7d85bd173", "mon", "allow r"], "format": "json"} : dispatch Feb 20 05:02:15 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-408485567", "caps": ["mds", "allow rw path=/volumes/_nogroup/cd0ea541-924c-41c6-95b4-11e7d85bd173/2edc2d31-21fe-4d10-b523-1775f0f273db", "osd", "allow rw pool=manila_data namespace=fsvolumens_cd0ea541-924c-41c6-95b4-11e7d85bd173", "mon", "allow r"], "format": "json"}]': finished Feb 20 05:02:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c. Feb 20 05:02:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217. Feb 20 05:02:15 localhost podman[326478]: 2026-02-20 10:02:15.450576086 +0000 UTC m=+0.082650284 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1770267347, config_id=openstack_network_exporter, io.openshift.expose-services=) Feb 20 05:02:15 localhost 
ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5df751b6-6f21-4d96-be03-f1dd0a841066", "format": "json"}]: dispatch Feb 20 05:02:15 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:5df751b6-6f21-4d96-be03-f1dd0a841066, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:02:15 localhost podman[326478]: 2026-02-20 10:02:15.463687058 +0000 UTC m=+0.095761246 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-05T04:57:10Z, name=ubi9/ubi-minimal, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=Red Hat, Inc., vcs-type=git, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, container_name=openstack_network_exporter, release=1770267347, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.created=2026-02-05T04:57:10Z, version=9.7, distribution-scope=public) Feb 20 05:02:15 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:5df751b6-6f21-4d96-be03-f1dd0a841066, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:02:15 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:02:15.466+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 
'clone-status' is not allowed on subvolume '5df751b6-6f21-4d96-be03-f1dd0a841066' of type subvolume Feb 20 05:02:15 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5df751b6-6f21-4d96-be03-f1dd0a841066' of type subvolume Feb 20 05:02:15 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5df751b6-6f21-4d96-be03-f1dd0a841066", "force": true, "format": "json"}]: dispatch Feb 20 05:02:15 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5df751b6-6f21-4d96-be03-f1dd0a841066, vol_name:cephfs) < "" Feb 20 05:02:15 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. Feb 20 05:02:15 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/5df751b6-6f21-4d96-be03-f1dd0a841066'' moved to trashcan Feb 20 05:02:15 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 05:02:15 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5df751b6-6f21-4d96-be03-f1dd0a841066, vol_name:cephfs) < "" Feb 20 05:02:15 localhost podman[326479]: 2026-02-20 10:02:15.556109253 +0000 UTC m=+0.184746349 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 
'6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true) Feb 20 05:02:15 localhost podman[326479]: 2026-02-20 10:02:15.567104279 +0000 UTC m=+0.195741395 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ceilometer_agent_compute, 
container_name=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 05:02:15 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully. 
Feb 20 05:02:15 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e2d3b5dd-fcdf-415f-b600-a30d6ac0ed75", "format": "json"}]: dispatch Feb 20 05:02:15 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:e2d3b5dd-fcdf-415f-b600-a30d6ac0ed75, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:02:15 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:e2d3b5dd-fcdf-415f-b600-a30d6ac0ed75, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:02:15 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e2d3b5dd-fcdf-415f-b600-a30d6ac0ed75' of type subvolume Feb 20 05:02:15 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:02:15.626+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e2d3b5dd-fcdf-415f-b600-a30d6ac0ed75' of type subvolume Feb 20 05:02:15 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e2d3b5dd-fcdf-415f-b600-a30d6ac0ed75", "force": true, "format": "json"}]: dispatch Feb 20 05:02:15 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e2d3b5dd-fcdf-415f-b600-a30d6ac0ed75, vol_name:cephfs) < "" Feb 20 05:02:15 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/e2d3b5dd-fcdf-415f-b600-a30d6ac0ed75'' moved to trashcan Feb 20 05:02:15 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for 
volume 'cephfs' Feb 20 05:02:15 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e2d3b5dd-fcdf-415f-b600-a30d6ac0ed75, vol_name:cephfs) < "" Feb 20 05:02:15 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v583: 177 pgs: 177 active+clean; 212 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1.2 KiB/s rd, 143 KiB/s wr, 16 op/s Feb 20 05:02:15 localhost nova_compute[280804]: 2026-02-20 10:02:15.752 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:02:16 localhost podman[241347]: time="2026-02-20T10:02:16Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 20 05:02:16 localhost podman[241347]: @ - - [20/Feb/2026:10:02:16 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157716 "" "Go-http-client/1.1" Feb 20 05:02:16 localhost podman[241347]: @ - - [20/Feb/2026:10:02:16 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18828 "" "Go-http-client/1.1" Feb 20 05:02:17 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e255 do_prune osdmap full prune enabled Feb 20 05:02:17 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e256 e256: 6 total, 6 up, 6 in Feb 20 05:02:17 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e256: 6 total, 6 up, 6 in Feb 20 05:02:17 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v585: 177 pgs: 177 active+clean; 212 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1023 B/s rd, 117 KiB/s wr, 13 op/s Feb 20 05:02:18 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "cd0ea541-924c-41c6-95b4-11e7d85bd173", 
"auth_id": "tempest-cephx-id-408485567", "format": "json"}]: dispatch Feb 20 05:02:18 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-408485567, format:json, prefix:fs subvolume deauthorize, sub_name:cd0ea541-924c-41c6-95b4-11e7d85bd173, vol_name:cephfs) < "" Feb 20 05:02:18 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-408485567", "format": "json"} v 0) Feb 20 05:02:18 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-408485567", "format": "json"} : dispatch Feb 20 05:02:18 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-408485567"} v 0) Feb 20 05:02:18 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-408485567"} : dispatch Feb 20 05:02:18 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-408485567"}]': finished Feb 20 05:02:18 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:02:18 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-408485567, format:json, prefix:fs subvolume deauthorize, sub_name:cd0ea541-924c-41c6-95b4-11e7d85bd173, vol_name:cephfs) < "" Feb 20 05:02:18 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' 
entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "cd0ea541-924c-41c6-95b4-11e7d85bd173", "auth_id": "tempest-cephx-id-408485567", "format": "json"}]: dispatch Feb 20 05:02:18 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-408485567, format:json, prefix:fs subvolume evict, sub_name:cd0ea541-924c-41c6-95b4-11e7d85bd173, vol_name:cephfs) < "" Feb 20 05:02:18 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-408485567, client_metadata.root=/volumes/_nogroup/cd0ea541-924c-41c6-95b4-11e7d85bd173/2edc2d31-21fe-4d10-b523-1775f0f273db Feb 20 05:02:18 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 20 05:02:18 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-408485567, format:json, prefix:fs subvolume evict, sub_name:cd0ea541-924c-41c6-95b4-11e7d85bd173, vol_name:cephfs) < "" Feb 20 05:02:18 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "f80fe8d3-ac9e-4618-9a8e-1111bb6ccdac", "force": true, "format": "json"}]: dispatch Feb 20 05:02:18 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:f80fe8d3-ac9e-4618-9a8e-1111bb6ccdac, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Feb 20 05:02:18 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:f80fe8d3-ac9e-4618-9a8e-1111bb6ccdac, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Feb 20 05:02:18 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth 
get", "entity": "client.tempest-cephx-id-408485567", "format": "json"} : dispatch Feb 20 05:02:18 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-408485567"} : dispatch Feb 20 05:02:18 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-408485567"}]': finished Feb 20 05:02:19 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v586: 177 pgs: 177 active+clean; 212 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1023 B/s rd, 117 KiB/s wr, 13 op/s Feb 20 05:02:20 localhost nova_compute[280804]: 2026-02-20 10:02:20.224 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:02:20 localhost nova_compute[280804]: 2026-02-20 10:02:20.756 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:02:21 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "cd0ea541-924c-41c6-95b4-11e7d85bd173", "auth_id": "tempest-cephx-id-408485567", "tenant_id": "ad1dd0d43ec7421b9a4a06c39e3657c6", "access_level": "rw", "format": "json"}]: dispatch Feb 20 05:02:21 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-408485567, format:json, prefix:fs subvolume authorize, sub_name:cd0ea541-924c-41c6-95b4-11e7d85bd173, tenant_id:ad1dd0d43ec7421b9a4a06c39e3657c6, vol_name:cephfs) < "" Feb 20 05:02:21 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-408485567", "format": 
"json"} v 0) Feb 20 05:02:21 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-408485567", "format": "json"} : dispatch Feb 20 05:02:21 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: Creating meta for ID tempest-cephx-id-408485567 with tenant ad1dd0d43ec7421b9a4a06c39e3657c6 Feb 20 05:02:21 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-408485567", "caps": ["mds", "allow rw path=/volumes/_nogroup/cd0ea541-924c-41c6-95b4-11e7d85bd173/2edc2d31-21fe-4d10-b523-1775f0f273db", "osd", "allow rw pool=manila_data namespace=fsvolumens_cd0ea541-924c-41c6-95b4-11e7d85bd173", "mon", "allow r"], "format": "json"} v 0) Feb 20 05:02:21 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-408485567", "caps": ["mds", "allow rw path=/volumes/_nogroup/cd0ea541-924c-41c6-95b4-11e7d85bd173/2edc2d31-21fe-4d10-b523-1775f0f273db", "osd", "allow rw pool=manila_data namespace=fsvolumens_cd0ea541-924c-41c6-95b4-11e7d85bd173", "mon", "allow r"], "format": "json"} : dispatch Feb 20 05:02:21 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-408485567", "caps": ["mds", "allow rw path=/volumes/_nogroup/cd0ea541-924c-41c6-95b4-11e7d85bd173/2edc2d31-21fe-4d10-b523-1775f0f273db", "osd", "allow rw pool=manila_data namespace=fsvolumens_cd0ea541-924c-41c6-95b4-11e7d85bd173", "mon", "allow r"], "format": "json"}]': finished Feb 20 05:02:21 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-408485567, format:json, prefix:fs subvolume authorize, sub_name:cd0ea541-924c-41c6-95b4-11e7d85bd173, tenant_id:ad1dd0d43ec7421b9a4a06c39e3657c6, vol_name:cephfs) < "" Feb 20 05:02:21 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "af681d3a-0a9b-41df-acfc-a86a81e0ae12", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 05:02:21 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:af681d3a-0a9b-41df-acfc-a86a81e0ae12, vol_name:cephfs) < "" Feb 20 05:02:21 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/af681d3a-0a9b-41df-acfc-a86a81e0ae12/.meta.tmp' Feb 20 05:02:21 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/af681d3a-0a9b-41df-acfc-a86a81e0ae12/.meta.tmp' to config b'/volumes/_nogroup/af681d3a-0a9b-41df-acfc-a86a81e0ae12/.meta' Feb 20 05:02:21 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:af681d3a-0a9b-41df-acfc-a86a81e0ae12, vol_name:cephfs) < "" Feb 20 05:02:21 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "af681d3a-0a9b-41df-acfc-a86a81e0ae12", "format": "json"}]: dispatch Feb 20 05:02:21 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, 
sub_name:af681d3a-0a9b-41df-acfc-a86a81e0ae12, vol_name:cephfs) < "" Feb 20 05:02:21 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:af681d3a-0a9b-41df-acfc-a86a81e0ae12, vol_name:cephfs) < "" Feb 20 05:02:21 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v587: 177 pgs: 177 active+clean; 213 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 818 B/s rd, 99 KiB/s wr, 11 op/s Feb 20 05:02:22 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-408485567", "format": "json"} : dispatch Feb 20 05:02:22 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-408485567", "caps": ["mds", "allow rw path=/volumes/_nogroup/cd0ea541-924c-41c6-95b4-11e7d85bd173/2edc2d31-21fe-4d10-b523-1775f0f273db", "osd", "allow rw pool=manila_data namespace=fsvolumens_cd0ea541-924c-41c6-95b4-11e7d85bd173", "mon", "allow r"], "format": "json"} : dispatch Feb 20 05:02:22 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-408485567", "caps": ["mds", "allow rw path=/volumes/_nogroup/cd0ea541-924c-41c6-95b4-11e7d85bd173/2edc2d31-21fe-4d10-b523-1775f0f273db", "osd", "allow rw pool=manila_data namespace=fsvolumens_cd0ea541-924c-41c6-95b4-11e7d85bd173", "mon", "allow r"], "format": "json"}]': finished Feb 20 05:02:23 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:02:23 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e256 do_prune osdmap full prune enabled Feb 20 05:02:23 localhost 
ceph-mon[292786]: mon.np0005625202@0(leader).osd e257 e257: 6 total, 6 up, 6 in Feb 20 05:02:23 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e257: 6 total, 6 up, 6 in Feb 20 05:02:23 localhost ceph-mgr[286565]: [balancer INFO root] Optimize plan auto_2026-02-20_10:02:23 Feb 20 05:02:23 localhost ceph-mgr[286565]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Feb 20 05:02:23 localhost ceph-mgr[286565]: [balancer INFO root] do_upmap Feb 20 05:02:23 localhost ceph-mgr[286565]: [balancer INFO root] pools ['vms', 'backups', 'images', 'volumes', 'manila_data', '.mgr', 'manila_metadata'] Feb 20 05:02:23 localhost ceph-mgr[286565]: [balancer INFO root] prepared 0/10 changes Feb 20 05:02:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 05:02:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 05:02:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 05:02:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 05:02:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 20 05:02:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 05:02:23 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v589: 177 pgs: 177 active+clean; 213 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 55 KiB/s wr, 5 op/s Feb 20 05:02:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] _maybe_adjust Feb 20 05:02:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 05:02:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Feb 20 05:02:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 05:02:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003325274375348967 of space, bias 1.0, pg target 0.6650548750697934 quantized to 32 (current 32) Feb 20 05:02:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 05:02:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32) Feb 20 05:02:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 05:02:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Feb 20 05:02:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 05:02:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Feb 20 05:02:23 localhost 
ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 05:02:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 1.635783082077052e-06 of space, bias 1.0, pg target 0.0003255208333333333 quantized to 32 (current 32) Feb 20 05:02:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 05:02:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.001482019472361809 of space, bias 4.0, pg target 1.1796875 quantized to 16 (current 16) Feb 20 05:02:23 localhost ceph-mgr[286565]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Feb 20 05:02:23 localhost ceph-mgr[286565]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Feb 20 05:02:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 20 05:02:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 20 05:02:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 20 05:02:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 20 05:02:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: images, start_after= Feb 20 05:02:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: images, start_after= Feb 20 05:02:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 20 05:02:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 20 05:02:24 localhost neutron_dhcp_agent[263741]: 2026-02-20 10:02:24.086 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, 
created_at=2026-02-20T10:02:23Z, description=, device_id=33ec31b2-fecf-477f-8148-61437b8399e4, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=8f5912ac-3826-4c97-b827-f971f7075f50, ip_allocation=immediate, mac_address=fa:16:3e:c7:54:a1, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T08:22:50Z, description=, dns_domain=, id=84efa4de-646c-469c-b16c-6ab7c3e948cf, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=91bce661d685472eb3e7cacab17bf52a, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['38eee90c-2cdd-4118-ba55-5401f7e61e1e'], tags=[], tenant_id=91bce661d685472eb3e7cacab17bf52a, updated_at=2026-02-20T08:22:56Z, vlan_transparent=None, network_id=84efa4de-646c-469c-b16c-6ab7c3e948cf, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3722, status=DOWN, tags=[], tenant_id=, updated_at=2026-02-20T10:02:23Z on network 84efa4de-646c-469c-b16c-6ab7c3e948cf#033[00m Feb 20 05:02:24 localhost podman[326534]: 2026-02-20 10:02:24.32954491 +0000 UTC m=+0.057801194 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.build-date=20260127) Feb 20 05:02:24 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 2 addresses Feb 20 05:02:24 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 05:02:24 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 05:02:24 localhost neutron_dhcp_agent[263741]: 2026-02-20 10:02:24.514 263745 INFO neutron.agent.dhcp.agent [None req-16f36697-2086-4a5b-b5b2-fde8afd17ef7 - - - - - -] DHCP configuration for ports {'8f5912ac-3826-4c97-b827-f971f7075f50'} is completed#033[00m Feb 20 05:02:24 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "cd0ea541-924c-41c6-95b4-11e7d85bd173", "auth_id": "tempest-cephx-id-408485567", "format": "json"}]: dispatch Feb 20 05:02:24 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-408485567, format:json, prefix:fs subvolume deauthorize, sub_name:cd0ea541-924c-41c6-95b4-11e7d85bd173, vol_name:cephfs) < "" Feb 20 05:02:24 localhost nova_compute[280804]: 2026-02-20 10:02:24.770 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:02:24 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-408485567", "format": "json"} v 0) Feb 20 05:02:24 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-408485567", "format": "json"} : dispatch Feb 20 05:02:24 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 
handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-408485567"} v 0) Feb 20 05:02:24 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-408485567"} : dispatch Feb 20 05:02:24 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-408485567"}]': finished Feb 20 05:02:24 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-408485567, format:json, prefix:fs subvolume deauthorize, sub_name:cd0ea541-924c-41c6-95b4-11e7d85bd173, vol_name:cephfs) < "" Feb 20 05:02:24 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "cd0ea541-924c-41c6-95b4-11e7d85bd173", "auth_id": "tempest-cephx-id-408485567", "format": "json"}]: dispatch Feb 20 05:02:24 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-408485567, format:json, prefix:fs subvolume evict, sub_name:cd0ea541-924c-41c6-95b4-11e7d85bd173, vol_name:cephfs) < "" Feb 20 05:02:24 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-408485567, client_metadata.root=/volumes/_nogroup/cd0ea541-924c-41c6-95b4-11e7d85bd173/2edc2d31-21fe-4d10-b523-1775f0f273db Feb 20 05:02:24 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 20 05:02:24 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-408485567, format:json, prefix:fs subvolume evict, 
sub_name:cd0ea541-924c-41c6-95b4-11e7d85bd173, vol_name:cephfs) < "" Feb 20 05:02:24 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "af681d3a-0a9b-41df-acfc-a86a81e0ae12", "format": "json"}]: dispatch Feb 20 05:02:24 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:af681d3a-0a9b-41df-acfc-a86a81e0ae12, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:02:24 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:af681d3a-0a9b-41df-acfc-a86a81e0ae12, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:02:24 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:02:24.996+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'af681d3a-0a9b-41df-acfc-a86a81e0ae12' of type subvolume Feb 20 05:02:24 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'af681d3a-0a9b-41df-acfc-a86a81e0ae12' of type subvolume Feb 20 05:02:25 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "af681d3a-0a9b-41df-acfc-a86a81e0ae12", "force": true, "format": "json"}]: dispatch Feb 20 05:02:25 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:af681d3a-0a9b-41df-acfc-a86a81e0ae12, vol_name:cephfs) < "" Feb 20 05:02:25 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/af681d3a-0a9b-41df-acfc-a86a81e0ae12'' moved to trashcan Feb 20 05:02:25 localhost 
ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 05:02:25 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:af681d3a-0a9b-41df-acfc-a86a81e0ae12, vol_name:cephfs) < "" Feb 20 05:02:25 localhost nova_compute[280804]: 2026-02-20 10:02:25.264 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:02:25 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-408485567", "format": "json"} : dispatch Feb 20 05:02:25 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-408485567"} : dispatch Feb 20 05:02:25 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-408485567"}]': finished Feb 20 05:02:25 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v590: 177 pgs: 177 active+clean; 213 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 726 B/s rd, 111 KiB/s wr, 11 op/s Feb 20 05:02:25 localhost nova_compute[280804]: 2026-02-20 10:02:25.759 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:02:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 05:02:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1. Feb 20 05:02:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. 
Feb 20 05:02:26 localhost systemd[1]: tmp-crun.VCvQkE.mount: Deactivated successfully. Feb 20 05:02:26 localhost podman[326557]: 2026-02-20 10:02:26.454183802 +0000 UTC m=+0.094212514 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true) Feb 20 05:02:26 localhost podman[326557]: 2026-02-20 10:02:26.493999133 +0000 UTC m=+0.134027895 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, managed_by=edpm_ansible) Feb 20 05:02:26 localhost podman[326558]: 2026-02-20 10:02:26.500898328 +0000 UTC m=+0.137819497 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': 
True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 20 05:02:26 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. Feb 20 05:02:26 localhost podman[326558]: 2026-02-20 10:02:26.536731322 +0000 UTC m=+0.173652491 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 20 05:02:26 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully. 
Feb 20 05:02:26 localhost podman[326559]: 2026-02-20 10:02:26.553864882 +0000 UTC m=+0.184538993 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 05:02:26 localhost 
podman[326559]: 2026-02-20 10:02:26.56082082 +0000 UTC m=+0.191494961 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Feb 20 05:02:26 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : 
from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 05:02:26 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d, vol_name:cephfs) < "" Feb 20 05:02:26 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. Feb 20 05:02:26 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d/.meta.tmp' Feb 20 05:02:26 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d/.meta.tmp' to config b'/volumes/_nogroup/59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d/.meta' Feb 20 05:02:26 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d, vol_name:cephfs) < "" Feb 20 05:02:26 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d", "format": "json"}]: dispatch Feb 20 05:02:26 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d, vol_name:cephfs) < "" Feb 20 05:02:26 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d, vol_name:cephfs) < "" Feb 20 05:02:26 localhost nova_compute[280804]: 2026-02-20 10:02:26.905 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:02:27 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v591: 177 pgs: 177 active+clean; 213 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 93 KiB/s wr, 9 op/s Feb 20 05:02:28 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:02:28 localhost openstack_network_exporter[243776]: ERROR 10:02:28 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 20 05:02:28 localhost openstack_network_exporter[243776]: Feb 20 05:02:28 localhost openstack_network_exporter[243776]: ERROR 10:02:28 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 20 05:02:28 localhost openstack_network_exporter[243776]: Feb 20 05:02:28 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "99dd1946-0a71-4d2e-9886-17bc70e2e74a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 05:02:28 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:99dd1946-0a71-4d2e-9886-17bc70e2e74a, vol_name:cephfs) < "" Feb 20 05:02:28 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/99dd1946-0a71-4d2e-9886-17bc70e2e74a/.meta.tmp' Feb 20 05:02:28 
localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/99dd1946-0a71-4d2e-9886-17bc70e2e74a/.meta.tmp' to config b'/volumes/_nogroup/99dd1946-0a71-4d2e-9886-17bc70e2e74a/.meta' Feb 20 05:02:28 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:99dd1946-0a71-4d2e-9886-17bc70e2e74a, vol_name:cephfs) < "" Feb 20 05:02:28 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "99dd1946-0a71-4d2e-9886-17bc70e2e74a", "format": "json"}]: dispatch Feb 20 05:02:28 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:99dd1946-0a71-4d2e-9886-17bc70e2e74a, vol_name:cephfs) < "" Feb 20 05:02:28 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:99dd1946-0a71-4d2e-9886-17bc70e2e74a, vol_name:cephfs) < "" Feb 20 05:02:28 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "cd0ea541-924c-41c6-95b4-11e7d85bd173", "auth_id": "tempest-cephx-id-408485567", "tenant_id": "ad1dd0d43ec7421b9a4a06c39e3657c6", "access_level": "rw", "format": "json"}]: dispatch Feb 20 05:02:28 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-408485567, format:json, prefix:fs subvolume authorize, sub_name:cd0ea541-924c-41c6-95b4-11e7d85bd173, tenant_id:ad1dd0d43ec7421b9a4a06c39e3657c6, vol_name:cephfs) < "" Feb 20 05:02:28 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 
handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-408485567", "format": "json"} v 0) Feb 20 05:02:28 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-408485567", "format": "json"} : dispatch Feb 20 05:02:28 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: Creating meta for ID tempest-cephx-id-408485567 with tenant ad1dd0d43ec7421b9a4a06c39e3657c6 Feb 20 05:02:28 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-408485567", "caps": ["mds", "allow rw path=/volumes/_nogroup/cd0ea541-924c-41c6-95b4-11e7d85bd173/2edc2d31-21fe-4d10-b523-1775f0f273db", "osd", "allow rw pool=manila_data namespace=fsvolumens_cd0ea541-924c-41c6-95b4-11e7d85bd173", "mon", "allow r"], "format": "json"} v 0) Feb 20 05:02:28 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-408485567", "caps": ["mds", "allow rw path=/volumes/_nogroup/cd0ea541-924c-41c6-95b4-11e7d85bd173/2edc2d31-21fe-4d10-b523-1775f0f273db", "osd", "allow rw pool=manila_data namespace=fsvolumens_cd0ea541-924c-41c6-95b4-11e7d85bd173", "mon", "allow r"], "format": "json"} : dispatch Feb 20 05:02:28 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-408485567", "caps": ["mds", "allow rw path=/volumes/_nogroup/cd0ea541-924c-41c6-95b4-11e7d85bd173/2edc2d31-21fe-4d10-b523-1775f0f273db", "osd", "allow rw pool=manila_data namespace=fsvolumens_cd0ea541-924c-41c6-95b4-11e7d85bd173", "mon", "allow r"], "format": 
"json"}]': finished Feb 20 05:02:28 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-408485567, format:json, prefix:fs subvolume authorize, sub_name:cd0ea541-924c-41c6-95b4-11e7d85bd173, tenant_id:ad1dd0d43ec7421b9a4a06c39e3657c6, vol_name:cephfs) < "" Feb 20 05:02:29 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-408485567", "format": "json"} : dispatch Feb 20 05:02:29 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-408485567", "caps": ["mds", "allow rw path=/volumes/_nogroup/cd0ea541-924c-41c6-95b4-11e7d85bd173/2edc2d31-21fe-4d10-b523-1775f0f273db", "osd", "allow rw pool=manila_data namespace=fsvolumens_cd0ea541-924c-41c6-95b4-11e7d85bd173", "mon", "allow r"], "format": "json"} : dispatch Feb 20 05:02:29 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-408485567", "caps": ["mds", "allow rw path=/volumes/_nogroup/cd0ea541-924c-41c6-95b4-11e7d85bd173/2edc2d31-21fe-4d10-b523-1775f0f273db", "osd", "allow rw pool=manila_data namespace=fsvolumens_cd0ea541-924c-41c6-95b4-11e7d85bd173", "mon", "allow r"], "format": "json"}]': finished Feb 20 05:02:29 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v592: 177 pgs: 177 active+clean; 213 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 94 KiB/s wr, 9 op/s Feb 20 05:02:29 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d", "snap_name": 
"73ef4b22-cb69-44e4-9b94-352c732420be", "format": "json"}]: dispatch Feb 20 05:02:29 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:73ef4b22-cb69-44e4-9b94-352c732420be, sub_name:59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d, vol_name:cephfs) < "" Feb 20 05:02:29 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:73ef4b22-cb69-44e4-9b94-352c732420be, sub_name:59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d, vol_name:cephfs) < "" Feb 20 05:02:30 localhost nova_compute[280804]: 2026-02-20 10:02:30.266 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:02:30 localhost nova_compute[280804]: 2026-02-20 10:02:30.762 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:02:31 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "99dd1946-0a71-4d2e-9886-17bc70e2e74a", "format": "json"}]: dispatch Feb 20 05:02:31 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:99dd1946-0a71-4d2e-9886-17bc70e2e74a, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:02:31 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:99dd1946-0a71-4d2e-9886-17bc70e2e74a, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:02:31 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:02:31.480+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 
'99dd1946-0a71-4d2e-9886-17bc70e2e74a' of type subvolume Feb 20 05:02:31 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '99dd1946-0a71-4d2e-9886-17bc70e2e74a' of type subvolume Feb 20 05:02:31 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "99dd1946-0a71-4d2e-9886-17bc70e2e74a", "force": true, "format": "json"}]: dispatch Feb 20 05:02:31 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:99dd1946-0a71-4d2e-9886-17bc70e2e74a, vol_name:cephfs) < "" Feb 20 05:02:31 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/99dd1946-0a71-4d2e-9886-17bc70e2e74a'' moved to trashcan Feb 20 05:02:31 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 05:02:31 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:99dd1946-0a71-4d2e-9886-17bc70e2e74a, vol_name:cephfs) < "" Feb 20 05:02:31 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v593: 177 pgs: 177 active+clean; 214 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 108 KiB/s wr, 10 op/s Feb 20 05:02:31 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "cd0ea541-924c-41c6-95b4-11e7d85bd173", "auth_id": "tempest-cephx-id-408485567", "format": "json"}]: dispatch Feb 20 05:02:31 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-408485567, format:json, prefix:fs subvolume 
deauthorize, sub_name:cd0ea541-924c-41c6-95b4-11e7d85bd173, vol_name:cephfs) < "" Feb 20 05:02:31 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-408485567", "format": "json"} v 0) Feb 20 05:02:31 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-408485567", "format": "json"} : dispatch Feb 20 05:02:31 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-408485567"} v 0) Feb 20 05:02:31 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-408485567"} : dispatch Feb 20 05:02:31 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-408485567"}]': finished Feb 20 05:02:31 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-408485567, format:json, prefix:fs subvolume deauthorize, sub_name:cd0ea541-924c-41c6-95b4-11e7d85bd173, vol_name:cephfs) < "" Feb 20 05:02:31 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "cd0ea541-924c-41c6-95b4-11e7d85bd173", "auth_id": "tempest-cephx-id-408485567", "format": "json"}]: dispatch Feb 20 05:02:31 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-408485567, format:json, prefix:fs subvolume evict, sub_name:cd0ea541-924c-41c6-95b4-11e7d85bd173, vol_name:cephfs) < 
"" Feb 20 05:02:31 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-408485567, client_metadata.root=/volumes/_nogroup/cd0ea541-924c-41c6-95b4-11e7d85bd173/2edc2d31-21fe-4d10-b523-1775f0f273db Feb 20 05:02:31 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Feb 20 05:02:31 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-408485567, format:json, prefix:fs subvolume evict, sub_name:cd0ea541-924c-41c6-95b4-11e7d85bd173, vol_name:cephfs) < "" Feb 20 05:02:32 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-408485567", "format": "json"} : dispatch Feb 20 05:02:32 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-408485567"} : dispatch Feb 20 05:02:32 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-408485567"}]': finished Feb 20 05:02:32 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d", "snap_name": "1e66955e-3f1a-40d4-80db-e506894b4fe7", "format": "json"}]: dispatch Feb 20 05:02:32 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:1e66955e-3f1a-40d4-80db-e506894b4fe7, sub_name:59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d, vol_name:cephfs) < "" Feb 20 05:02:33 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:1e66955e-3f1a-40d4-80db-e506894b4fe7, sub_name:59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d, vol_name:cephfs) < "" Feb 20 05:02:33 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:02:33 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v594: 177 pgs: 177 active+clean; 214 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 388 B/s rd, 103 KiB/s wr, 9 op/s Feb 20 05:02:33 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "61e1c482-9c73-4bad-8fd8-0dc280a18a86", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 05:02:33 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:61e1c482-9c73-4bad-8fd8-0dc280a18a86, vol_name:cephfs) < "" Feb 20 05:02:34 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/61e1c482-9c73-4bad-8fd8-0dc280a18a86/.meta.tmp' Feb 20 05:02:34 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/61e1c482-9c73-4bad-8fd8-0dc280a18a86/.meta.tmp' to config b'/volumes/_nogroup/61e1c482-9c73-4bad-8fd8-0dc280a18a86/.meta' Feb 20 05:02:34 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:61e1c482-9c73-4bad-8fd8-0dc280a18a86, vol_name:cephfs) < "" Feb 20 05:02:34 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : 
from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "61e1c482-9c73-4bad-8fd8-0dc280a18a86", "format": "json"}]: dispatch Feb 20 05:02:34 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:61e1c482-9c73-4bad-8fd8-0dc280a18a86, vol_name:cephfs) < "" Feb 20 05:02:34 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:61e1c482-9c73-4bad-8fd8-0dc280a18a86, vol_name:cephfs) < "" Feb 20 05:02:35 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "cd0ea541-924c-41c6-95b4-11e7d85bd173", "format": "json"}]: dispatch Feb 20 05:02:35 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:cd0ea541-924c-41c6-95b4-11e7d85bd173, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:02:35 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:cd0ea541-924c-41c6-95b4-11e7d85bd173, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:02:35 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:02:35.303+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cd0ea541-924c-41c6-95b4-11e7d85bd173' of type subvolume Feb 20 05:02:35 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cd0ea541-924c-41c6-95b4-11e7d85bd173' of type subvolume Feb 20 05:02:35 localhost nova_compute[280804]: 2026-02-20 10:02:35.305 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:02:35 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "cd0ea541-924c-41c6-95b4-11e7d85bd173", "force": true, "format": "json"}]: dispatch Feb 20 05:02:35 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cd0ea541-924c-41c6-95b4-11e7d85bd173, vol_name:cephfs) < "" Feb 20 05:02:35 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/cd0ea541-924c-41c6-95b4-11e7d85bd173'' moved to trashcan Feb 20 05:02:35 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 05:02:35 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cd0ea541-924c-41c6-95b4-11e7d85bd173, vol_name:cephfs) < "" Feb 20 05:02:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e. 
Feb 20 05:02:35 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 1 addresses Feb 20 05:02:35 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 05:02:35 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 05:02:35 localhost podman[326638]: 2026-02-20 10:02:35.395951088 +0000 UTC m=+0.072039418 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Feb 20 05:02:35 localhost nova_compute[280804]: 2026-02-20 10:02:35.414 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:02:35 localhost systemd[1]: tmp-crun.FObLEG.mount: Deactivated successfully. 
Feb 20 05:02:35 localhost podman[326649]: 2026-02-20 10:02:35.44290486 +0000 UTC m=+0.080026603 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 20 05:02:35 localhost podman[326649]: 2026-02-20 10:02:35.477287235 +0000 UTC m=+0.114408968 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', 
'--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 20 05:02:35 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully. 
Feb 20 05:02:35 localhost sshd[326683]: main: sshd: ssh-rsa algorithm is disabled Feb 20 05:02:35 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v595: 177 pgs: 177 active+clean; 214 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 597 B/s rd, 125 KiB/s wr, 12 op/s Feb 20 05:02:35 localhost nova_compute[280804]: 2026-02-20 10:02:35.763 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:02:37 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "61e1c482-9c73-4bad-8fd8-0dc280a18a86", "snap_name": "8b9be1a2-e688-4f67-bcb5-d692465b7436", "format": "json"}]: dispatch Feb 20 05:02:37 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:8b9be1a2-e688-4f67-bcb5-d692465b7436, sub_name:61e1c482-9c73-4bad-8fd8-0dc280a18a86, vol_name:cephfs) < "" Feb 20 05:02:37 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:8b9be1a2-e688-4f67-bcb5-d692465b7436, sub_name:61e1c482-9c73-4bad-8fd8-0dc280a18a86, vol_name:cephfs) < "" Feb 20 05:02:37 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d", "snap_name": "1e66955e-3f1a-40d4-80db-e506894b4fe7_5d64b720-4a43-4716-92a1-dde09c619e84", "force": true, "format": "json"}]: dispatch Feb 20 05:02:37 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, 
snap_name:1e66955e-3f1a-40d4-80db-e506894b4fe7_5d64b720-4a43-4716-92a1-dde09c619e84, sub_name:59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d, vol_name:cephfs) < "" Feb 20 05:02:37 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d/.meta.tmp' Feb 20 05:02:37 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d/.meta.tmp' to config b'/volumes/_nogroup/59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d/.meta' Feb 20 05:02:37 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1e66955e-3f1a-40d4-80db-e506894b4fe7_5d64b720-4a43-4716-92a1-dde09c619e84, sub_name:59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d, vol_name:cephfs) < "" Feb 20 05:02:37 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d", "snap_name": "1e66955e-3f1a-40d4-80db-e506894b4fe7", "force": true, "format": "json"}]: dispatch Feb 20 05:02:37 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1e66955e-3f1a-40d4-80db-e506894b4fe7, sub_name:59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d, vol_name:cephfs) < "" Feb 20 05:02:37 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d/.meta.tmp' Feb 20 05:02:37 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d/.meta.tmp' to config 
b'/volumes/_nogroup/59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d/.meta' Feb 20 05:02:37 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1e66955e-3f1a-40d4-80db-e506894b4fe7, sub_name:59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d, vol_name:cephfs) < "" Feb 20 05:02:37 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v596: 177 pgs: 177 active+clean; 214 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 84 KiB/s wr, 7 op/s Feb 20 05:02:38 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:02:38 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e59c51a9-964d-4ac9-9a67-d46c3cec7b52", "format": "json"}]: dispatch Feb 20 05:02:38 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:e59c51a9-964d-4ac9-9a67-d46c3cec7b52, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:02:39 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v597: 177 pgs: 177 active+clean; 214 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 84 KiB/s wr, 8 op/s Feb 20 05:02:40 localhost nova_compute[280804]: 2026-02-20 10:02:40.306 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:02:40 localhost nova_compute[280804]: 2026-02-20 10:02:40.765 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:02:41 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v598: 177 pgs: 177 active+clean; 215 MiB data, 1.2 GiB used, 41 GiB / 42 GiB 
avail; 767 B/s rd, 110 KiB/s wr, 10 op/s Feb 20 05:02:42 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:e59c51a9-964d-4ac9-9a67-d46c3cec7b52, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:02:42 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e59c51a9-964d-4ac9-9a67-d46c3cec7b52", "format": "json"}]: dispatch Feb 20 05:02:42 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e59c51a9-964d-4ac9-9a67-d46c3cec7b52, vol_name:cephfs) < "" Feb 20 05:02:42 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e59c51a9-964d-4ac9-9a67-d46c3cec7b52, vol_name:cephfs) < "" Feb 20 05:02:42 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d", "snap_name": "55bcf7f1-01f4-42f0-8f90-e7ef14438392", "format": "json"}]: dispatch Feb 20 05:02:42 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:55bcf7f1-01f4-42f0-8f90-e7ef14438392, sub_name:59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d, vol_name:cephfs) < "" Feb 20 05:02:42 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:55bcf7f1-01f4-42f0-8f90-e7ef14438392, sub_name:59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d, vol_name:cephfs) < "" Feb 20 05:02:42 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs 
subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "61e1c482-9c73-4bad-8fd8-0dc280a18a86", "snap_name": "8b9be1a2-e688-4f67-bcb5-d692465b7436_5acfc6c8-db4e-4899-8b99-eb2dcdeecf08", "force": true, "format": "json"}]: dispatch Feb 20 05:02:42 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:8b9be1a2-e688-4f67-bcb5-d692465b7436_5acfc6c8-db4e-4899-8b99-eb2dcdeecf08, sub_name:61e1c482-9c73-4bad-8fd8-0dc280a18a86, vol_name:cephfs) < "" Feb 20 05:02:42 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/61e1c482-9c73-4bad-8fd8-0dc280a18a86/.meta.tmp' Feb 20 05:02:42 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/61e1c482-9c73-4bad-8fd8-0dc280a18a86/.meta.tmp' to config b'/volumes/_nogroup/61e1c482-9c73-4bad-8fd8-0dc280a18a86/.meta' Feb 20 05:02:42 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:8b9be1a2-e688-4f67-bcb5-d692465b7436_5acfc6c8-db4e-4899-8b99-eb2dcdeecf08, sub_name:61e1c482-9c73-4bad-8fd8-0dc280a18a86, vol_name:cephfs) < "" Feb 20 05:02:42 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "61e1c482-9c73-4bad-8fd8-0dc280a18a86", "snap_name": "8b9be1a2-e688-4f67-bcb5-d692465b7436", "force": true, "format": "json"}]: dispatch Feb 20 05:02:42 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:8b9be1a2-e688-4f67-bcb5-d692465b7436, sub_name:61e1c482-9c73-4bad-8fd8-0dc280a18a86, vol_name:cephfs) < "" Feb 20 05:02:42 localhost 
ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/61e1c482-9c73-4bad-8fd8-0dc280a18a86/.meta.tmp' Feb 20 05:02:42 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/61e1c482-9c73-4bad-8fd8-0dc280a18a86/.meta.tmp' to config b'/volumes/_nogroup/61e1c482-9c73-4bad-8fd8-0dc280a18a86/.meta' Feb 20 05:02:42 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:8b9be1a2-e688-4f67-bcb5-d692465b7436, sub_name:61e1c482-9c73-4bad-8fd8-0dc280a18a86, vol_name:cephfs) < "" Feb 20 05:02:43 localhost neutron_dhcp_agent[263741]: 2026-02-20 10:02:43.030 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T10:02:42Z, description=, device_id=e3518fc5-5c1f-48e2-8acd-06c1ed21b401, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=fcb6ace2-6354-4820-a655-222060844b8b, ip_allocation=immediate, mac_address=fa:16:3e:9a:ed:4a, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T08:22:50Z, description=, dns_domain=, id=84efa4de-646c-469c-b16c-6ab7c3e948cf, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=91bce661d685472eb3e7cacab17bf52a, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['38eee90c-2cdd-4118-ba55-5401f7e61e1e'], tags=[], 
tenant_id=91bce661d685472eb3e7cacab17bf52a, updated_at=2026-02-20T08:22:56Z, vlan_transparent=None, network_id=84efa4de-646c-469c-b16c-6ab7c3e948cf, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3781, status=DOWN, tags=[], tenant_id=, updated_at=2026-02-20T10:02:42Z on network 84efa4de-646c-469c-b16c-6ab7c3e948cf#033[00m Feb 20 05:02:43 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:02:43 localhost podman[326701]: 2026-02-20 10:02:43.260501197 +0000 UTC m=+0.057227789 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 05:02:43 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 2 addresses Feb 20 05:02:43 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 05:02:43 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 05:02:43 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e59c51a9-964d-4ac9-9a67-d46c3cec7b52", "format": "json"}]: dispatch Feb 20 05:02:43 localhost ceph-mgr[286565]: [volumes INFO volumes.module] 
Starting _cmd_fs_clone_status(clone_name:e59c51a9-964d-4ac9-9a67-d46c3cec7b52, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:02:43 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:e59c51a9-964d-4ac9-9a67-d46c3cec7b52, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:02:43 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e59c51a9-964d-4ac9-9a67-d46c3cec7b52", "force": true, "format": "json"}]: dispatch Feb 20 05:02:43 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e59c51a9-964d-4ac9-9a67-d46c3cec7b52, vol_name:cephfs) < "" Feb 20 05:02:43 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/e59c51a9-964d-4ac9-9a67-d46c3cec7b52'' moved to trashcan Feb 20 05:02:43 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 05:02:43 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e59c51a9-964d-4ac9-9a67-d46c3cec7b52, vol_name:cephfs) < "" Feb 20 05:02:43 localhost neutron_dhcp_agent[263741]: 2026-02-20 10:02:43.534 263745 INFO neutron.agent.dhcp.agent [None req-61028d55-bc3e-4649-af78-f648390e84a1 - - - - - -] DHCP configuration for ports {'fcb6ace2-6354-4820-a655-222060844b8b'} is completed#033[00m Feb 20 05:02:43 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v599: 177 pgs: 177 active+clean; 215 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 597 B/s rd, 61 KiB/s wr, 6 op/s Feb 20 05:02:43 localhost nova_compute[280804]: 2026-02-20 10:02:43.855 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:02:43 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "61e1c482-9c73-4bad-8fd8-0dc280a18a86", "format": "json"}]: dispatch Feb 20 05:02:43 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:61e1c482-9c73-4bad-8fd8-0dc280a18a86, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:02:43 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:61e1c482-9c73-4bad-8fd8-0dc280a18a86, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:02:43 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:02:43.931+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '61e1c482-9c73-4bad-8fd8-0dc280a18a86' of type subvolume Feb 20 05:02:43 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '61e1c482-9c73-4bad-8fd8-0dc280a18a86' of type subvolume Feb 20 05:02:43 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "61e1c482-9c73-4bad-8fd8-0dc280a18a86", "force": true, "format": "json"}]: dispatch Feb 20 05:02:43 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:61e1c482-9c73-4bad-8fd8-0dc280a18a86, vol_name:cephfs) < "" Feb 20 05:02:43 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/61e1c482-9c73-4bad-8fd8-0dc280a18a86'' moved to trashcan Feb 20 05:02:43 localhost 
ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 05:02:43 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:61e1c482-9c73-4bad-8fd8-0dc280a18a86, vol_name:cephfs) < "" Feb 20 05:02:45 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d", "snap_name": "55bcf7f1-01f4-42f0-8f90-e7ef14438392_264ca94b-ea48-4597-a2c2-a0904eedca44", "force": true, "format": "json"}]: dispatch Feb 20 05:02:45 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:55bcf7f1-01f4-42f0-8f90-e7ef14438392_264ca94b-ea48-4597-a2c2-a0904eedca44, sub_name:59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d, vol_name:cephfs) < "" Feb 20 05:02:45 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d/.meta.tmp' Feb 20 05:02:45 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d/.meta.tmp' to config b'/volumes/_nogroup/59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d/.meta' Feb 20 05:02:45 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:55bcf7f1-01f4-42f0-8f90-e7ef14438392_264ca94b-ea48-4597-a2c2-a0904eedca44, sub_name:59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d, vol_name:cephfs) < "" Feb 20 05:02:45 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": 
"59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d", "snap_name": "55bcf7f1-01f4-42f0-8f90-e7ef14438392", "force": true, "format": "json"}]: dispatch Feb 20 05:02:45 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:55bcf7f1-01f4-42f0-8f90-e7ef14438392, sub_name:59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d, vol_name:cephfs) < "" Feb 20 05:02:45 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d/.meta.tmp' Feb 20 05:02:45 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d/.meta.tmp' to config b'/volumes/_nogroup/59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d/.meta' Feb 20 05:02:45 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:55bcf7f1-01f4-42f0-8f90-e7ef14438392, sub_name:59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d, vol_name:cephfs) < "" Feb 20 05:02:45 localhost nova_compute[280804]: 2026-02-20 10:02:45.307 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:02:45 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v600: 177 pgs: 177 active+clean; 215 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 852 B/s rd, 97 KiB/s wr, 8 op/s Feb 20 05:02:45 localhost nova_compute[280804]: 2026-02-20 10:02:45.768 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:02:46 localhost podman[241347]: time="2026-02-20T10:02:46Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 20 05:02:46 localhost podman[241347]: @ - - 
[20/Feb/2026:10:02:46 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157716 "" "Go-http-client/1.1" Feb 20 05:02:46 localhost podman[241347]: @ - - [20/Feb/2026:10:02:46 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18829 "" "Go-http-client/1.1" Feb 20 05:02:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c. Feb 20 05:02:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217. Feb 20 05:02:46 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4d276c40-22b5-4463-ba95-b271179ed697", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 05:02:46 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4d276c40-22b5-4463-ba95-b271179ed697, vol_name:cephfs) < "" Feb 20 05:02:46 localhost podman[326722]: 2026-02-20 10:02:46.463799534 +0000 UTC m=+0.092225031 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Feb 20 05:02:46 localhost podman[326722]: 2026-02-20 10:02:46.473980178 +0000 UTC m=+0.102405675 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Feb 20 05:02:46 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully. 
Feb 20 05:02:46 localhost nova_compute[280804]: 2026-02-20 10:02:46.563 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:02:46 localhost podman[326721]: 2026-02-20 10:02:46.564999686 +0000 UTC m=+0.197623945 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9/ubi-minimal, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, release=1770267347, org.opencontainers.image.created=2026-02-05T04:57:10Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, container_name=openstack_network_exporter, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, build-date=2026-02-05T04:57:10Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, maintainer=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git) Feb 20 05:02:46 localhost podman[326721]: 2026-02-20 10:02:46.580991756 +0000 UTC m=+0.213616035 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, vcs-type=git, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.7, build-date=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, 
com.redhat.component=ubi9-minimal-container, name=ubi9/ubi-minimal, managed_by=edpm_ansible, distribution-scope=public, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., release=1770267347, org.opencontainers.image.created=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c) Feb 20 05:02:46 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. Feb 20 05:02:46 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4d276c40-22b5-4463-ba95-b271179ed697/.meta.tmp' Feb 20 05:02:46 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4d276c40-22b5-4463-ba95-b271179ed697/.meta.tmp' to config b'/volumes/_nogroup/4d276c40-22b5-4463-ba95-b271179ed697/.meta' Feb 20 05:02:46 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4d276c40-22b5-4463-ba95-b271179ed697, vol_name:cephfs) < "" Feb 20 05:02:46 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4d276c40-22b5-4463-ba95-b271179ed697", "format": "json"}]: dispatch Feb 20 05:02:46 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4d276c40-22b5-4463-ba95-b271179ed697, vol_name:cephfs) < "" Feb 20 05:02:46 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4d276c40-22b5-4463-ba95-b271179ed697, vol_name:cephfs) < "" Feb 20 05:02:46 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", 
"vol_name": "cephfs", "sub_name": "5f3b4204-930b-4bc2-9cff-7e982286eac6", "snap_name": "62a01af2-9c69-4d90-af43-120db3783b58_ffe8c3f4-7f8d-4e50-9ffc-1b3c43840e07", "force": true, "format": "json"}]: dispatch Feb 20 05:02:46 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:62a01af2-9c69-4d90-af43-120db3783b58_ffe8c3f4-7f8d-4e50-9ffc-1b3c43840e07, sub_name:5f3b4204-930b-4bc2-9cff-7e982286eac6, vol_name:cephfs) < "" Feb 20 05:02:46 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5f3b4204-930b-4bc2-9cff-7e982286eac6/.meta.tmp' Feb 20 05:02:46 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5f3b4204-930b-4bc2-9cff-7e982286eac6/.meta.tmp' to config b'/volumes/_nogroup/5f3b4204-930b-4bc2-9cff-7e982286eac6/.meta' Feb 20 05:02:46 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:62a01af2-9c69-4d90-af43-120db3783b58_ffe8c3f4-7f8d-4e50-9ffc-1b3c43840e07, sub_name:5f3b4204-930b-4bc2-9cff-7e982286eac6, vol_name:cephfs) < "" Feb 20 05:02:46 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "5f3b4204-930b-4bc2-9cff-7e982286eac6", "snap_name": "62a01af2-9c69-4d90-af43-120db3783b58", "force": true, "format": "json"}]: dispatch Feb 20 05:02:46 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:62a01af2-9c69-4d90-af43-120db3783b58, sub_name:5f3b4204-930b-4bc2-9cff-7e982286eac6, vol_name:cephfs) < "" Feb 20 05:02:46 localhost ceph-mgr[286565]: [volumes 
INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5f3b4204-930b-4bc2-9cff-7e982286eac6/.meta.tmp' Feb 20 05:02:46 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5f3b4204-930b-4bc2-9cff-7e982286eac6/.meta.tmp' to config b'/volumes/_nogroup/5f3b4204-930b-4bc2-9cff-7e982286eac6/.meta' Feb 20 05:02:46 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:62a01af2-9c69-4d90-af43-120db3783b58, sub_name:5f3b4204-930b-4bc2-9cff-7e982286eac6, vol_name:cephfs) < "" Feb 20 05:02:47 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e03d5299-b683-42d5-861f-4ae99be1be37", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 05:02:47 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e03d5299-b683-42d5-861f-4ae99be1be37, vol_name:cephfs) < "" Feb 20 05:02:47 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e03d5299-b683-42d5-861f-4ae99be1be37/.meta.tmp' Feb 20 05:02:47 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e03d5299-b683-42d5-861f-4ae99be1be37/.meta.tmp' to config b'/volumes/_nogroup/e03d5299-b683-42d5-861f-4ae99be1be37/.meta' Feb 20 05:02:47 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, 
sub_name:e03d5299-b683-42d5-861f-4ae99be1be37, vol_name:cephfs) < "" Feb 20 05:02:47 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e03d5299-b683-42d5-861f-4ae99be1be37", "format": "json"}]: dispatch Feb 20 05:02:47 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e03d5299-b683-42d5-861f-4ae99be1be37, vol_name:cephfs) < "" Feb 20 05:02:47 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e03d5299-b683-42d5-861f-4ae99be1be37, vol_name:cephfs) < "" Feb 20 05:02:47 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e257 do_prune osdmap full prune enabled Feb 20 05:02:47 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e258 e258: 6 total, 6 up, 6 in Feb 20 05:02:47 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e258: 6 total, 6 up, 6 in Feb 20 05:02:47 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v602: 177 pgs: 177 active+clean; 215 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 74 KiB/s wr, 6 op/s Feb 20 05:02:48 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:02:48 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d", "snap_name": "88e0b4cc-0c15-4363-8336-4c11c356e179", "format": "json"}]: dispatch Feb 20 05:02:48 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, 
snap_name:88e0b4cc-0c15-4363-8336-4c11c356e179, sub_name:59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d, vol_name:cephfs) < "" Feb 20 05:02:48 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:88e0b4cc-0c15-4363-8336-4c11c356e179, sub_name:59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d, vol_name:cephfs) < "" Feb 20 05:02:48 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e258 do_prune osdmap full prune enabled Feb 20 05:02:48 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e259 e259: 6 total, 6 up, 6 in Feb 20 05:02:48 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e259: 6 total, 6 up, 6 in Feb 20 05:02:49 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "4d276c40-22b5-4463-ba95-b271179ed697", "snap_name": "22d02fe9-b677-4d4d-ab93-02062be24947", "format": "json"}]: dispatch Feb 20 05:02:49 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:22d02fe9-b677-4d4d-ab93-02062be24947, sub_name:4d276c40-22b5-4463-ba95-b271179ed697, vol_name:cephfs) < "" Feb 20 05:02:49 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:22d02fe9-b677-4d4d-ab93-02062be24947, sub_name:4d276c40-22b5-4463-ba95-b271179ed697, vol_name:cephfs) < "" Feb 20 05:02:49 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5f3b4204-930b-4bc2-9cff-7e982286eac6", "format": "json"}]: dispatch Feb 20 05:02:49 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting 
_cmd_fs_clone_status(clone_name:5f3b4204-930b-4bc2-9cff-7e982286eac6, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:02:49 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:5f3b4204-930b-4bc2-9cff-7e982286eac6, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:02:49 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:02:49.697+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5f3b4204-930b-4bc2-9cff-7e982286eac6' of type subvolume Feb 20 05:02:49 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5f3b4204-930b-4bc2-9cff-7e982286eac6' of type subvolume Feb 20 05:02:49 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5f3b4204-930b-4bc2-9cff-7e982286eac6", "force": true, "format": "json"}]: dispatch Feb 20 05:02:49 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5f3b4204-930b-4bc2-9cff-7e982286eac6, vol_name:cephfs) < "" Feb 20 05:02:49 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v604: 177 pgs: 177 active+clean; 215 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 383 B/s rd, 53 KiB/s wr, 4 op/s Feb 20 05:02:49 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/5f3b4204-930b-4bc2-9cff-7e982286eac6'' moved to trashcan Feb 20 05:02:49 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 05:02:49 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, 
prefix:fs subvolume rm, sub_name:5f3b4204-930b-4bc2-9cff-7e982286eac6, vol_name:cephfs) < "" Feb 20 05:02:50 localhost nova_compute[280804]: 2026-02-20 10:02:50.339 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:02:50 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "e03d5299-b683-42d5-861f-4ae99be1be37", "snap_name": "f9be8701-f7e0-4195-b63f-b542fb246c4d", "format": "json"}]: dispatch Feb 20 05:02:50 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f9be8701-f7e0-4195-b63f-b542fb246c4d, sub_name:e03d5299-b683-42d5-861f-4ae99be1be37, vol_name:cephfs) < "" Feb 20 05:02:50 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f9be8701-f7e0-4195-b63f-b542fb246c4d, sub_name:e03d5299-b683-42d5-861f-4ae99be1be37, vol_name:cephfs) < "" Feb 20 05:02:50 localhost nova_compute[280804]: 2026-02-20 10:02:50.771 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:02:51 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 05:02:51 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 05:02:51 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 20 05:02:51 localhost ceph-mon[292786]: log_channel(audit) log [INF] 
: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 05:02:51 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 20 05:02:51 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 05:02:51 localhost ceph-mgr[286565]: [progress INFO root] update: starting ev 86984e8d-a53a-4dc6-8375-d9dfe2a9b4b2 (Updating node-proxy deployment (+3 -> 3)) Feb 20 05:02:51 localhost ceph-mgr[286565]: [progress INFO root] complete: finished ev 86984e8d-a53a-4dc6-8375-d9dfe2a9b4b2 (Updating node-proxy deployment (+3 -> 3)) Feb 20 05:02:51 localhost ceph-mgr[286565]: [progress INFO root] Completed event 86984e8d-a53a-4dc6-8375-d9dfe2a9b4b2 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Feb 20 05:02:51 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 20 05:02:51 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 20 05:02:51 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 05:02:51 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 05:02:51 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v605: 177 pgs: 177 active+clean; 216 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1.4 KiB/s rd, 143 KiB/s wr, 12 op/s Feb 20 05:02:51 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' 
entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d", "snap_name": "88e0b4cc-0c15-4363-8336-4c11c356e179_b7815798-7284-4b52-8029-4e2d096c3965", "force": true, "format": "json"}]: dispatch Feb 20 05:02:51 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:88e0b4cc-0c15-4363-8336-4c11c356e179_b7815798-7284-4b52-8029-4e2d096c3965, sub_name:59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d, vol_name:cephfs) < "" Feb 20 05:02:51 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d/.meta.tmp' Feb 20 05:02:51 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d/.meta.tmp' to config b'/volumes/_nogroup/59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d/.meta' Feb 20 05:02:51 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:88e0b4cc-0c15-4363-8336-4c11c356e179_b7815798-7284-4b52-8029-4e2d096c3965, sub_name:59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d, vol_name:cephfs) < "" Feb 20 05:02:51 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d", "snap_name": "88e0b4cc-0c15-4363-8336-4c11c356e179", "force": true, "format": "json"}]: dispatch Feb 20 05:02:51 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:88e0b4cc-0c15-4363-8336-4c11c356e179, sub_name:59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d, 
vol_name:cephfs) < "" Feb 20 05:02:51 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d/.meta.tmp' Feb 20 05:02:51 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d/.meta.tmp' to config b'/volumes/_nogroup/59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d/.meta' Feb 20 05:02:51 localhost sshd[326845]: main: sshd: ssh-rsa algorithm is disabled Feb 20 05:02:51 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:88e0b4cc-0c15-4363-8336-4c11c356e179, sub_name:59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d, vol_name:cephfs) < "" Feb 20 05:02:52 localhost ceph-mon[292786]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Feb 20 05:02:52 localhost ceph-mon[292786]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 5584 writes, 38K keys, 5584 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.05 MB/s#012Cumulative WAL: 5584 writes, 5584 syncs, 1.00 writes per sync, written: 0.06 GB, 0.05 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2514 writes, 11K keys, 2514 commit groups, 1.0 writes per commit group, ingest: 12.25 MB, 0.02 MB/s#012Interval WAL: 2514 writes, 2514 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 150.8 0.31 0.12 19 0.016 0 0 0.0 0.0#012 L6 1/0 17.21 MB 0.0 0.3 0.0 0.3 0.3 0.0 0.0 6.5 200.4 182.6 1.68 0.82 18 0.093 221K 9365 0.0 0.0#012 Sum 1/0 17.21 MB 0.0 0.3 0.0 0.3 0.3 0.1 0.0 7.5 168.9 177.6 1.99 0.94 37 0.054 221K 9365 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.1 0.0 0.1 0.1 0.0 0.0 13.3 186.6 187.2 0.71 0.37 14 0.050 94K 3755 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low 0/0 0.00 KB 0.0 0.3 0.0 0.3 0.3 0.0 0.0 0.0 200.4 182.6 1.68 0.82 18 0.093 221K 9365 0.0 0.0#012High 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 152.1 0.31 0.12 18 0.017 0 0 0.0 0.0#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.7 0.00 0.00 1 0.003 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.046, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.35 GB write, 0.29 MB/s write, 0.33 GB read, 0.28 MB/s read, 2.0 seconds#012Interval compaction: 0.13 GB write, 0.22 MB/s write, 0.13 GB read, 0.22 MB/s read, 0.7 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 
level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a9b723b350#2 capacity: 304.00 MB usage: 58.53 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 0.000361 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3772,57.08 MB,18.7751%) FilterBlock(37,646.61 KB,0.207715%) IndexBlock(37,846.58 KB,0.271953%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] ** Feb 20 05:02:52 localhost nova_compute[280804]: 2026-02-20 10:02:52.257 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:02:52 localhost podman[326863]: 2026-02-20 10:02:52.27360913 +0000 UTC m=+0.053445458 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Feb 20 05:02:52 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 1 addresses Feb 20 05:02:52 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 05:02:52 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 05:02:53 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 
348127232 kv_alloc: 318767104 Feb 20 05:02:53 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e259 do_prune osdmap full prune enabled Feb 20 05:02:53 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e260 e260: 6 total, 6 up, 6 in Feb 20 05:02:53 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4d276c40-22b5-4463-ba95-b271179ed697", "snap_name": "22d02fe9-b677-4d4d-ab93-02062be24947_54d3ff10-d9fa-4f52-83b0-fa0cfd9ed293", "force": true, "format": "json"}]: dispatch Feb 20 05:02:53 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:22d02fe9-b677-4d4d-ab93-02062be24947_54d3ff10-d9fa-4f52-83b0-fa0cfd9ed293, sub_name:4d276c40-22b5-4463-ba95-b271179ed697, vol_name:cephfs) < "" Feb 20 05:02:53 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e260: 6 total, 6 up, 6 in Feb 20 05:02:53 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4d276c40-22b5-4463-ba95-b271179ed697/.meta.tmp' Feb 20 05:02:53 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4d276c40-22b5-4463-ba95-b271179ed697/.meta.tmp' to config b'/volumes/_nogroup/4d276c40-22b5-4463-ba95-b271179ed697/.meta' Feb 20 05:02:53 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:22d02fe9-b677-4d4d-ab93-02062be24947_54d3ff10-d9fa-4f52-83b0-fa0cfd9ed293, sub_name:4d276c40-22b5-4463-ba95-b271179ed697, vol_name:cephfs) < "" Feb 20 05:02:53 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume 
snapshot rm", "vol_name": "cephfs", "sub_name": "4d276c40-22b5-4463-ba95-b271179ed697", "snap_name": "22d02fe9-b677-4d4d-ab93-02062be24947", "force": true, "format": "json"}]: dispatch Feb 20 05:02:53 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:22d02fe9-b677-4d4d-ab93-02062be24947, sub_name:4d276c40-22b5-4463-ba95-b271179ed697, vol_name:cephfs) < "" Feb 20 05:02:53 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4d276c40-22b5-4463-ba95-b271179ed697/.meta.tmp' Feb 20 05:02:53 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4d276c40-22b5-4463-ba95-b271179ed697/.meta.tmp' to config b'/volumes/_nogroup/4d276c40-22b5-4463-ba95-b271179ed697/.meta' Feb 20 05:02:53 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:22d02fe9-b677-4d4d-ab93-02062be24947, sub_name:4d276c40-22b5-4463-ba95-b271179ed697, vol_name:cephfs) < "" Feb 20 05:02:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 05:02:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 05:02:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 05:02:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 05:02:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 20 05:02:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', )] Feb 20 05:02:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Feb 20 05:02:53 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v607: 177 pgs: 177 active+clean; 216 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1.3 KiB/s rd, 119 KiB/s wr, 11 op/s Feb 20 05:02:53 localhost ceph-mgr[286565]: [progress INFO root] Writing back 50 completed events Feb 20 05:02:53 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 20 05:02:53 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 05:02:54 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e260 do_prune osdmap full prune enabled Feb 20 05:02:54 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e261 e261: 6 total, 6 up, 6 in Feb 20 05:02:54 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e261: 6 total, 6 up, 6 in Feb 20 05:02:54 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e03d5299-b683-42d5-861f-4ae99be1be37", "snap_name": "f9be8701-f7e0-4195-b63f-b542fb246c4d_982c66f3-6550-45d5-a41b-e698844f2247", "force": true, "format": "json"}]: dispatch Feb 20 05:02:54 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f9be8701-f7e0-4195-b63f-b542fb246c4d_982c66f3-6550-45d5-a41b-e698844f2247, sub_name:e03d5299-b683-42d5-861f-4ae99be1be37, vol_name:cephfs) < "" Feb 20 05:02:54 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 05:02:54 
localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e03d5299-b683-42d5-861f-4ae99be1be37/.meta.tmp' Feb 20 05:02:54 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e03d5299-b683-42d5-861f-4ae99be1be37/.meta.tmp' to config b'/volumes/_nogroup/e03d5299-b683-42d5-861f-4ae99be1be37/.meta' Feb 20 05:02:54 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f9be8701-f7e0-4195-b63f-b542fb246c4d_982c66f3-6550-45d5-a41b-e698844f2247, sub_name:e03d5299-b683-42d5-861f-4ae99be1be37, vol_name:cephfs) < "" Feb 20 05:02:54 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e03d5299-b683-42d5-861f-4ae99be1be37", "snap_name": "f9be8701-f7e0-4195-b63f-b542fb246c4d", "force": true, "format": "json"}]: dispatch Feb 20 05:02:54 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f9be8701-f7e0-4195-b63f-b542fb246c4d, sub_name:e03d5299-b683-42d5-861f-4ae99be1be37, vol_name:cephfs) < "" Feb 20 05:02:54 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e03d5299-b683-42d5-861f-4ae99be1be37/.meta.tmp' Feb 20 05:02:54 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e03d5299-b683-42d5-861f-4ae99be1be37/.meta.tmp' to config b'/volumes/_nogroup/e03d5299-b683-42d5-861f-4ae99be1be37/.meta' Feb 20 05:02:55 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, 
prefix:fs subvolume snapshot rm, snap_name:f9be8701-f7e0-4195-b63f-b542fb246c4d, sub_name:e03d5299-b683-42d5-861f-4ae99be1be37, vol_name:cephfs) < "" Feb 20 05:02:55 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : mgrmap e54: np0005625202.arwxwo(active, since 14m), standbys: np0005625203.lonygy, np0005625204.exgrzx Feb 20 05:02:55 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d", "snap_name": "faea6402-68f1-475f-9cea-5f24a4d2c2b9", "format": "json"}]: dispatch Feb 20 05:02:55 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:faea6402-68f1-475f-9cea-5f24a4d2c2b9, sub_name:59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d, vol_name:cephfs) < "" Feb 20 05:02:55 localhost nova_compute[280804]: 2026-02-20 10:02:55.383 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:02:55 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e261 do_prune osdmap full prune enabled Feb 20 05:02:55 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v609: 177 pgs: 177 active+clean; 216 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1.7 KiB/s rd, 177 KiB/s wr, 16 op/s Feb 20 05:02:55 localhost nova_compute[280804]: 2026-02-20 10:02:55.773 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:02:55 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e262 e262: 6 total, 6 up, 6 in Feb 20 05:02:55 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e262: 6 total, 6 up, 6 in Feb 20 05:02:56 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:faea6402-68f1-475f-9cea-5f24a4d2c2b9, sub_name:59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d, vol_name:cephfs) < "" Feb 20 05:02:56 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4d276c40-22b5-4463-ba95-b271179ed697", "format": "json"}]: dispatch Feb 20 05:02:56 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:4d276c40-22b5-4463-ba95-b271179ed697, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:02:56 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:4d276c40-22b5-4463-ba95-b271179ed697, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:02:56 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4d276c40-22b5-4463-ba95-b271179ed697' of type subvolume Feb 20 05:02:56 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:02:56.484+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4d276c40-22b5-4463-ba95-b271179ed697' of type subvolume Feb 20 05:02:56 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4d276c40-22b5-4463-ba95-b271179ed697", "force": true, "format": "json"}]: dispatch Feb 20 05:02:56 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4d276c40-22b5-4463-ba95-b271179ed697, vol_name:cephfs) < "" Feb 20 05:02:56 localhost ceph-mgr[286565]: [volumes INFO 
volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/4d276c40-22b5-4463-ba95-b271179ed697'' moved to trashcan Feb 20 05:02:56 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 05:02:56 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4d276c40-22b5-4463-ba95-b271179ed697, vol_name:cephfs) < "" Feb 20 05:02:56 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e262 do_prune osdmap full prune enabled Feb 20 05:02:56 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e263 e263: 6 total, 6 up, 6 in Feb 20 05:02:56 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e263: 6 total, 6 up, 6 in Feb 20 05:02:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 05:02:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1. Feb 20 05:02:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. Feb 20 05:02:57 localhost systemd[1]: tmp-crun.vfcin1.mount: Deactivated successfully. 
Feb 20 05:02:57 localhost podman[326887]: 2026-02-20 10:02:57.476982139 +0000 UTC m=+0.103852224 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 05:02:57 localhost 
podman[326885]: 2026-02-20 10:02:57.516318087 +0000 UTC m=+0.148701570 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller) Feb 20 05:02:57 localhost podman[326887]: 2026-02-20 10:02:57.562257752 +0000 UTC m=+0.189127877 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 
'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Feb 20 05:02:57 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e03d5299-b683-42d5-861f-4ae99be1be37", "format": "json"}]: dispatch Feb 20 05:02:57 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:e03d5299-b683-42d5-861f-4ae99be1be37, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:02:57 localhost systemd[1]: 
eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. Feb 20 05:02:57 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:e03d5299-b683-42d5-861f-4ae99be1be37, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:02:57 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:02:57.585+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e03d5299-b683-42d5-861f-4ae99be1be37' of type subvolume Feb 20 05:02:57 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e03d5299-b683-42d5-861f-4ae99be1be37' of type subvolume Feb 20 05:02:57 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e03d5299-b683-42d5-861f-4ae99be1be37", "force": true, "format": "json"}]: dispatch Feb 20 05:02:57 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e03d5299-b683-42d5-861f-4ae99be1be37, vol_name:cephfs) < "" Feb 20 05:02:57 localhost podman[326885]: 2026-02-20 10:02:57.59529127 +0000 UTC m=+0.227674773 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 
'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team) Feb 20 05:02:57 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/e03d5299-b683-42d5-861f-4ae99be1be37'' moved to trashcan Feb 20 05:02:57 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. 
Feb 20 05:02:57 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 05:02:57 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e03d5299-b683-42d5-861f-4ae99be1be37, vol_name:cephfs) < "" Feb 20 05:02:57 localhost podman[326886]: 2026-02-20 10:02:57.620217191 +0000 UTC m=+0.250379215 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 20 05:02:57 localhost podman[326886]: 2026-02-20 10:02:57.664569163 +0000 UTC m=+0.294731167 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': 
'/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 20 05:02:57 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully. Feb 20 05:02:57 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v612: 177 pgs: 177 active+clean; 216 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 905 B/s rd, 115 KiB/s wr, 9 op/s Feb 20 05:02:58 localhost openstack_network_exporter[243776]: ERROR 10:02:58 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 20 05:02:58 localhost openstack_network_exporter[243776]: Feb 20 05:02:58 localhost openstack_network_exporter[243776]: ERROR 10:02:58 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 20 05:02:58 localhost openstack_network_exporter[243776]: Feb 20 05:02:58 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:02:58 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e263 do_prune osdmap full prune enabled Feb 20 05:02:58 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e264 e264: 6 total, 6 up, 6 in Feb 20 05:02:58 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e264: 6 total, 6 up, 6 in Feb 20 05:02:59 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": 
"59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d", "snap_name": "faea6402-68f1-475f-9cea-5f24a4d2c2b9_25b67a37-0482-4bde-b478-a143700074f7", "force": true, "format": "json"}]: dispatch Feb 20 05:02:59 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:faea6402-68f1-475f-9cea-5f24a4d2c2b9_25b67a37-0482-4bde-b478-a143700074f7, sub_name:59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d, vol_name:cephfs) < "" Feb 20 05:02:59 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d/.meta.tmp' Feb 20 05:02:59 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d/.meta.tmp' to config b'/volumes/_nogroup/59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d/.meta' Feb 20 05:02:59 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:faea6402-68f1-475f-9cea-5f24a4d2c2b9_25b67a37-0482-4bde-b478-a143700074f7, sub_name:59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d, vol_name:cephfs) < "" Feb 20 05:02:59 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d", "snap_name": "faea6402-68f1-475f-9cea-5f24a4d2c2b9", "force": true, "format": "json"}]: dispatch Feb 20 05:02:59 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:faea6402-68f1-475f-9cea-5f24a4d2c2b9, sub_name:59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d, vol_name:cephfs) < "" Feb 20 05:02:59 localhost ceph-mgr[286565]: [volumes INFO 
volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d/.meta.tmp' Feb 20 05:02:59 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d/.meta.tmp' to config b'/volumes/_nogroup/59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d/.meta' Feb 20 05:02:59 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:faea6402-68f1-475f-9cea-5f24a4d2c2b9, sub_name:59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d, vol_name:cephfs) < "" Feb 20 05:02:59 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v614: 177 pgs: 177 active+clean; 216 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 777 B/s rd, 100 KiB/s wr, 9 op/s Feb 20 05:03:00 localhost sshd[326950]: main: sshd: ssh-rsa algorithm is disabled Feb 20 05:03:00 localhost ceph-mon[292786]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #67. Immutable memtables: 0. 
Feb 20 05:03:00 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:03:00.225515) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Feb 20 05:03:00 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 67 Feb 20 05:03:00 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581780225562, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 2345, "num_deletes": 259, "total_data_size": 2928668, "memory_usage": 2977920, "flush_reason": "Manual Compaction"} Feb 20 05:03:00 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #68: started Feb 20 05:03:00 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581780240802, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 68, "file_size": 2872263, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36180, "largest_seqno": 38524, "table_properties": {"data_size": 2861951, "index_size": 6369, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2885, "raw_key_size": 25539, "raw_average_key_size": 22, "raw_value_size": 2839917, "raw_average_value_size": 2478, "num_data_blocks": 273, "num_entries": 1146, "num_filter_entries": 1146, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771581663, "oldest_key_time": 1771581663, "file_creation_time": 1771581780, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a5b0c71e-1a28-4ac7-8b68-08edb74002f2", "db_session_id": "54EDA52XUT1SDV7DF7Y7", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}} Feb 20 05:03:00 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 15334 microseconds, and 6886 cpu microseconds. Feb 20 05:03:00 localhost ceph-mon[292786]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Feb 20 05:03:00 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:03:00.240848) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #68: 2872263 bytes OK Feb 20 05:03:00 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:03:00.240872) [db/memtable_list.cc:519] [default] Level-0 commit table #68 started Feb 20 05:03:00 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:03:00.242751) [db/memtable_list.cc:722] [default] Level-0 commit table #68: memtable #1 done Feb 20 05:03:00 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:03:00.242770) EVENT_LOG_v1 {"time_micros": 1771581780242764, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Feb 20 05:03:00 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:03:00.242792) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Feb 20 05:03:00 localhost ceph-mon[292786]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 2918030, prev total WAL file 
size 2918030, number of live WAL files 2. Feb 20 05:03:00 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000064.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 20 05:03:00 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:03:00.243577) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132353530' seq:72057594037927935, type:22 .. '7061786F73003132383032' seq:0, type:0; will stop at (end) Feb 20 05:03:00 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00 Feb 20 05:03:00 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [68(2804KB)], [66(17MB)] Feb 20 05:03:00 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581780243630, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [68], "files_L6": [66], "score": -1, "input_data_size": 20918722, "oldest_snapshot_seqno": -1} Feb 20 05:03:00 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #69: 14307 keys, 19489730 bytes, temperature: kUnknown Feb 20 05:03:00 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581780328568, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 69, "file_size": 19489730, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 19406802, "index_size": 46091, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35781, "raw_key_size": 381407, "raw_average_key_size": 26, "raw_value_size": 
19162766, "raw_average_value_size": 1339, "num_data_blocks": 1735, "num_entries": 14307, "num_filter_entries": 14307, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580572, "oldest_key_time": 0, "file_creation_time": 1771581780, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a5b0c71e-1a28-4ac7-8b68-08edb74002f2", "db_session_id": "54EDA52XUT1SDV7DF7Y7", "orig_file_number": 69, "seqno_to_time_mapping": "N/A"}} Feb 20 05:03:00 localhost ceph-mon[292786]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Feb 20 05:03:00 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:03:00.328787) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 19489730 bytes Feb 20 05:03:00 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:03:00.330173) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 246.1 rd, 229.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 17.2 +0.0 blob) out(18.6 +0.0 blob), read-write-amplify(14.1) write-amplify(6.8) OK, records in: 14848, records dropped: 541 output_compression: NoCompression Feb 20 05:03:00 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:03:00.330191) EVENT_LOG_v1 {"time_micros": 1771581780330182, "job": 40, "event": "compaction_finished", "compaction_time_micros": 85015, "compaction_time_cpu_micros": 53963, "output_level": 6, "num_output_files": 1, "total_output_size": 19489730, "num_input_records": 14848, "num_output_records": 14307, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Feb 20 05:03:00 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 20 05:03:00 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581780330642, "job": 40, "event": "table_file_deletion", "file_number": 68} Feb 20 05:03:00 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000066.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 20 05:03:00 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581780332565, 
"job": 40, "event": "table_file_deletion", "file_number": 66} Feb 20 05:03:00 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:03:00.243482) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 05:03:00 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:03:00.332651) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 05:03:00 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:03:00.332657) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 05:03:00 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:03:00.332661) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 05:03:00 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:03:00.332664) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 05:03:00 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:03:00.332667) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 05:03:00 localhost nova_compute[280804]: 2026-02-20 10:03:00.438 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:03:00 localhost nova_compute[280804]: 2026-02-20 10:03:00.776 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:03:01 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ccc69125-8271-465f-a7cf-99b18598188c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 05:03:01 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ccc69125-8271-465f-a7cf-99b18598188c, vol_name:cephfs) < "" Feb 20 05:03:01 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ccc69125-8271-465f-a7cf-99b18598188c/.meta.tmp' Feb 20 05:03:01 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ccc69125-8271-465f-a7cf-99b18598188c/.meta.tmp' to config b'/volumes/_nogroup/ccc69125-8271-465f-a7cf-99b18598188c/.meta' Feb 20 05:03:01 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ccc69125-8271-465f-a7cf-99b18598188c, vol_name:cephfs) < "" Feb 20 05:03:01 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ccc69125-8271-465f-a7cf-99b18598188c", "format": "json"}]: dispatch Feb 20 05:03:01 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ccc69125-8271-465f-a7cf-99b18598188c, vol_name:cephfs) < "" Feb 20 05:03:01 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v615: 177 pgs: 177 active+clean; 217 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1023 B/s rd, 100 KiB/s wr, 7 op/s Feb 20 05:03:01 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ccc69125-8271-465f-a7cf-99b18598188c, vol_name:cephfs) < "" Feb 20 05:03:02 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 20 05:03:02 localhost 
ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/536207374' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 20 05:03:02 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 20 05:03:02 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/536207374' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 20 05:03:02 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e264 do_prune osdmap full prune enabled Feb 20 05:03:02 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e265 e265: 6 total, 6 up, 6 in Feb 20 05:03:02 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e265: 6 total, 6 up, 6 in Feb 20 05:03:03 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:03:03 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e265 do_prune osdmap full prune enabled Feb 20 05:03:03 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e266 e266: 6 total, 6 up, 6 in Feb 20 05:03:03 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e266: 6 total, 6 up, 6 in Feb 20 05:03:03 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d", "snap_name": "7729a724-86fe-4d43-a1c1-675788136bbb", "format": "json"}]: dispatch Feb 20 05:03:03 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7729a724-86fe-4d43-a1c1-675788136bbb, sub_name:59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d, 
vol_name:cephfs) < "" Feb 20 05:03:03 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7729a724-86fe-4d43-a1c1-675788136bbb, sub_name:59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d, vol_name:cephfs) < "" Feb 20 05:03:03 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v618: 177 pgs: 177 active+clean; 217 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1023 B/s rd, 100 KiB/s wr, 7 op/s Feb 20 05:03:04 localhost nova_compute[280804]: 2026-02-20 10:03:04.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 05:03:04 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "ccc69125-8271-465f-a7cf-99b18598188c", "snap_name": "7873ec63-b44a-47a7-8bbe-8f944c5b9a9d", "format": "json"}]: dispatch Feb 20 05:03:04 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7873ec63-b44a-47a7-8bbe-8f944c5b9a9d, sub_name:ccc69125-8271-465f-a7cf-99b18598188c, vol_name:cephfs) < "" Feb 20 05:03:04 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7873ec63-b44a-47a7-8bbe-8f944c5b9a9d, sub_name:ccc69125-8271-465f-a7cf-99b18598188c, vol_name:cephfs) < "" Feb 20 05:03:05 localhost nova_compute[280804]: 2026-02-20 10:03:05.469 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:03:05 localhost nova_compute[280804]: 
2026-02-20 10:03:05.511 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 05:03:05 localhost nova_compute[280804]: 2026-02-20 10:03:05.511 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 20 05:03:05 localhost nova_compute[280804]: 2026-02-20 10:03:05.511 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 20 05:03:05 localhost nova_compute[280804]: 2026-02-20 10:03:05.523 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 20 05:03:05 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v619: 177 pgs: 177 active+clean; 217 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1.3 KiB/s rd, 124 KiB/s wr, 9 op/s Feb 20 05:03:05 localhost nova_compute[280804]: 2026-02-20 10:03:05.779 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:03:05 localhost ovn_metadata_agent[161761]: 2026-02-20 10:03:05.927 161766 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 05:03:05 localhost ovn_metadata_agent[161761]: 2026-02-20 10:03:05.928 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 05:03:05 localhost ovn_metadata_agent[161761]: 2026-02-20 10:03:05.928 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 05:03:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e. 
Feb 20 05:03:06 localhost podman[326952]: 2026-02-20 10:03:06.440862299 +0000 UTC m=+0.083608620 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 20 05:03:06 localhost podman[326952]: 2026-02-20 10:03:06.452979664 +0000 UTC m=+0.095725985 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , 
managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 20 05:03:06 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully. 
Feb 20 05:03:06 localhost nova_compute[280804]: 2026-02-20 10:03:06.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 05:03:06 localhost nova_compute[280804]: 2026-02-20 10:03:06.511 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 05:03:06 localhost nova_compute[280804]: 2026-02-20 10:03:06.512 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 20 05:03:06 localhost neutron_dhcp_agent[263741]: 2026-02-20 10:03:06.794 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T10:03:06Z, description=, device_id=91bb17b9-dbc2-4da3-ba37-b6215b1cc229, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=4b7c451b-e5ef-4ef3-938c-b71cc8d8f990, ip_allocation=immediate, mac_address=fa:16:3e:20:40:8f, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T08:22:50Z, description=, dns_domain=, id=84efa4de-646c-469c-b16c-6ab7c3e948cf, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=91bce661d685472eb3e7cacab17bf52a, provider:network_type=flat, 
provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['38eee90c-2cdd-4118-ba55-5401f7e61e1e'], tags=[], tenant_id=91bce661d685472eb3e7cacab17bf52a, updated_at=2026-02-20T08:22:56Z, vlan_transparent=None, network_id=84efa4de-646c-469c-b16c-6ab7c3e948cf, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3840, status=DOWN, tags=[], tenant_id=, updated_at=2026-02-20T10:03:06Z on network 84efa4de-646c-469c-b16c-6ab7c3e948cf#033[00m Feb 20 05:03:07 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 2 addresses Feb 20 05:03:07 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 05:03:07 localhost podman[326992]: 2026-02-20 10:03:07.009976342 +0000 UTC m=+0.059447940 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3) Feb 20 05:03:07 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 05:03:07 localhost neutron_dhcp_agent[263741]: 2026-02-20 10:03:07.454 263745 INFO neutron.agent.dhcp.agent [None req-9a35e567-0149-469b-9bf8-034300f7706a - - - - - -] DHCP configuration for ports {'4b7c451b-e5ef-4ef3-938c-b71cc8d8f990'} is completed#033[00m Feb 20 05:03:07 
localhost nova_compute[280804]: 2026-02-20 10:03:07.512 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 05:03:07 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v620: 177 pgs: 177 active+clean; 217 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1.2 KiB/s rd, 117 KiB/s wr, 8 op/s Feb 20 05:03:07 localhost ovn_metadata_agent[161761]: 2026-02-20 10:03:07.744 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'c2:c2:14', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '46:d0:69:88:97:47'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 05:03:07 localhost ovn_metadata_agent[161761]: 2026-02-20 10:03:07.744 161766 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Feb 20 05:03:07 localhost nova_compute[280804]: 2026-02-20 10:03:07.745 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:03:07 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d", "snap_name": 
"7729a724-86fe-4d43-a1c1-675788136bbb_64b5f976-f108-4782-8a7f-3fdbde23df86", "force": true, "format": "json"}]: dispatch Feb 20 05:03:07 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7729a724-86fe-4d43-a1c1-675788136bbb_64b5f976-f108-4782-8a7f-3fdbde23df86, sub_name:59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d, vol_name:cephfs) < "" Feb 20 05:03:07 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d/.meta.tmp' Feb 20 05:03:07 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d/.meta.tmp' to config b'/volumes/_nogroup/59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d/.meta' Feb 20 05:03:07 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7729a724-86fe-4d43-a1c1-675788136bbb_64b5f976-f108-4782-8a7f-3fdbde23df86, sub_name:59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d, vol_name:cephfs) < "" Feb 20 05:03:07 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d", "snap_name": "7729a724-86fe-4d43-a1c1-675788136bbb", "force": true, "format": "json"}]: dispatch Feb 20 05:03:07 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7729a724-86fe-4d43-a1c1-675788136bbb, sub_name:59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d, vol_name:cephfs) < "" Feb 20 05:03:07 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config 
b'/volumes/_nogroup/59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d/.meta.tmp' Feb 20 05:03:07 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d/.meta.tmp' to config b'/volumes/_nogroup/59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d/.meta' Feb 20 05:03:07 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7729a724-86fe-4d43-a1c1-675788136bbb, sub_name:59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d, vol_name:cephfs) < "" Feb 20 05:03:08 localhost nova_compute[280804]: 2026-02-20 10:03:08.012 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:03:08 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:03:08 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e266 do_prune osdmap full prune enabled Feb 20 05:03:08 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e267 e267: 6 total, 6 up, 6 in Feb 20 05:03:08 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e267: 6 total, 6 up, 6 in Feb 20 05:03:08 localhost nova_compute[280804]: 2026-02-20 10:03:08.507 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 05:03:08 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c93903cf-3015-40b5-a970-9c042e7db919", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: 
dispatch Feb 20 05:03:08 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c93903cf-3015-40b5-a970-9c042e7db919, vol_name:cephfs) < "" Feb 20 05:03:08 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c93903cf-3015-40b5-a970-9c042e7db919/.meta.tmp' Feb 20 05:03:08 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c93903cf-3015-40b5-a970-9c042e7db919/.meta.tmp' to config b'/volumes/_nogroup/c93903cf-3015-40b5-a970-9c042e7db919/.meta' Feb 20 05:03:08 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c93903cf-3015-40b5-a970-9c042e7db919, vol_name:cephfs) < "" Feb 20 05:03:08 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c93903cf-3015-40b5-a970-9c042e7db919", "format": "json"}]: dispatch Feb 20 05:03:08 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c93903cf-3015-40b5-a970-9c042e7db919, vol_name:cephfs) < "" Feb 20 05:03:08 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c93903cf-3015-40b5-a970-9c042e7db919, vol_name:cephfs) < "" Feb 20 05:03:09 localhost nova_compute[280804]: 2026-02-20 10:03:09.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 05:03:09 localhost nova_compute[280804]: 2026-02-20 10:03:09.529 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 05:03:09 localhost nova_compute[280804]: 2026-02-20 10:03:09.530 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 05:03:09 localhost nova_compute[280804]: 2026-02-20 10:03:09.530 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 05:03:09 localhost nova_compute[280804]: 2026-02-20 10:03:09.530 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Auditing locally available compute resources for np0005625202.localdomain (node: np0005625202.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 20 05:03:09 localhost nova_compute[280804]: 2026-02-20 10:03:09.531 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 05:03:09 localhost ceph-mgr[286565]: 
log_channel(cluster) log [DBG] : pgmap v622: 177 pgs: 177 active+clean; 217 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 549 B/s rd, 45 KiB/s wr, 3 op/s Feb 20 05:03:09 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 20 05:03:09 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/2808071945' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 20 05:03:09 localhost nova_compute[280804]: 2026-02-20 10:03:09.978 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 05:03:10 localhost nova_compute[280804]: 2026-02-20 10:03:10.110 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:03:10 localhost nova_compute[280804]: 2026-02-20 10:03:10.182 280808 WARNING nova.virt.libvirt.driver [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 20 05:03:10 localhost nova_compute[280804]: 2026-02-20 10:03:10.184 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Hypervisor/Node resource view: name=np0005625202.localdomain free_ram=11348MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", 
"product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 20 05:03:10 localhost nova_compute[280804]: 2026-02-20 10:03:10.184 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 05:03:10 localhost nova_compute[280804]: 2026-02-20 10:03:10.185 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 05:03:10 localhost nova_compute[280804]: 2026-02-20 10:03:10.232 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 20 05:03:10 localhost nova_compute[280804]: 2026-02-20 10:03:10.233 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Final resource view: name=np0005625202.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 20 05:03:10 localhost nova_compute[280804]: 2026-02-20 10:03:10.254 280808 DEBUG 
oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 05:03:10 localhost nova_compute[280804]: 2026-02-20 10:03:10.515 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:03:10 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 20 05:03:10 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/3652150787' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 20 05:03:10 localhost nova_compute[280804]: 2026-02-20 10:03:10.678 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 05:03:10 localhost nova_compute[280804]: 2026-02-20 10:03:10.684 280808 DEBUG nova.compute.provider_tree [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 20 05:03:10 localhost nova_compute[280804]: 2026-02-20 10:03:10.783 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:03:11 localhost nova_compute[280804]: 2026-02-20 10:03:11.557 280808 DEBUG nova.scheduler.client.report [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Inventory has not changed for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 based on 
inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 20 05:03:11 localhost nova_compute[280804]: 2026-02-20 10:03:11.559 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Compute_service record updated for np0005625202.localdomain:np0005625202.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 20 05:03:11 localhost nova_compute[280804]: 2026-02-20 10:03:11.559 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.374s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 05:03:11 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v623: 177 pgs: 177 active+clean; 217 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 480 B/s rd, 80 KiB/s wr, 5 op/s Feb 20 05:03:11 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c93903cf-3015-40b5-a970-9c042e7db919", "format": "json"}]: dispatch Feb 20 05:03:11 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c93903cf-3015-40b5-a970-9c042e7db919, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:03:11 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing 
_cmd_fs_clone_status(clone_name:c93903cf-3015-40b5-a970-9c042e7db919, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:03:11 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:03:11.980+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c93903cf-3015-40b5-a970-9c042e7db919' of type subvolume Feb 20 05:03:11 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c93903cf-3015-40b5-a970-9c042e7db919' of type subvolume Feb 20 05:03:11 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c93903cf-3015-40b5-a970-9c042e7db919", "force": true, "format": "json"}]: dispatch Feb 20 05:03:11 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c93903cf-3015-40b5-a970-9c042e7db919, vol_name:cephfs) < "" Feb 20 05:03:12 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c93903cf-3015-40b5-a970-9c042e7db919'' moved to trashcan Feb 20 05:03:12 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 05:03:12 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c93903cf-3015-40b5-a970-9c042e7db919, vol_name:cephfs) < "" Feb 20 05:03:12 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e267 do_prune osdmap full prune enabled Feb 20 05:03:12 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e268 e268: 6 total, 6 up, 6 in Feb 20 05:03:12 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e268: 6 total, 6 up, 
6 in Feb 20 05:03:12 localhost nova_compute[280804]: 2026-02-20 10:03:12.560 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 05:03:12 localhost nova_compute[280804]: 2026-02-20 10:03:12.561 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 05:03:13 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:03:13 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d", "snap_name": "73ef4b22-cb69-44e4-9b94-352c732420be_2d5a5261-1879-492b-a5b5-9562795ecfa9", "force": true, "format": "json"}]: dispatch Feb 20 05:03:13 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:73ef4b22-cb69-44e4-9b94-352c732420be_2d5a5261-1879-492b-a5b5-9562795ecfa9, sub_name:59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d, vol_name:cephfs) < "" Feb 20 05:03:13 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d/.meta.tmp' Feb 20 05:03:13 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d/.meta.tmp' to config b'/volumes/_nogroup/59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d/.meta' 
Feb 20 05:03:13 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:73ef4b22-cb69-44e4-9b94-352c732420be_2d5a5261-1879-492b-a5b5-9562795ecfa9, sub_name:59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d, vol_name:cephfs) < "" Feb 20 05:03:13 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d", "snap_name": "73ef4b22-cb69-44e4-9b94-352c732420be", "force": true, "format": "json"}]: dispatch Feb 20 05:03:13 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:73ef4b22-cb69-44e4-9b94-352c732420be, sub_name:59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d, vol_name:cephfs) < "" Feb 20 05:03:13 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d/.meta.tmp' Feb 20 05:03:13 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d/.meta.tmp' to config b'/volumes/_nogroup/59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d/.meta' Feb 20 05:03:13 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:73ef4b22-cb69-44e4-9b94-352c732420be, sub_name:59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d, vol_name:cephfs) < "" Feb 20 05:03:13 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v625: 177 pgs: 177 active+clean; 217 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 43 KiB/s wr, 2 op/s Feb 20 05:03:13 localhost ovn_metadata_agent[161761]: 2026-02-20 10:03:13.747 161766 DEBUG 
ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0a83b6be-9fe2-42ef-8768-88847d97b165, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 20 05:03:15 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "949fccd6-b67a-49fe-87fe-ecca6c52f167", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 05:03:15 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:949fccd6-b67a-49fe-87fe-ecca6c52f167, vol_name:cephfs) < "" Feb 20 05:03:15 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/949fccd6-b67a-49fe-87fe-ecca6c52f167/.meta.tmp' Feb 20 05:03:15 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/949fccd6-b67a-49fe-87fe-ecca6c52f167/.meta.tmp' to config b'/volumes/_nogroup/949fccd6-b67a-49fe-87fe-ecca6c52f167/.meta' Feb 20 05:03:15 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:949fccd6-b67a-49fe-87fe-ecca6c52f167, vol_name:cephfs) < "" Feb 20 05:03:15 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "949fccd6-b67a-49fe-87fe-ecca6c52f167", "format": "json"}]: dispatch Feb 20 05:03:15 localhost ceph-mgr[286565]: [volumes 
INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:949fccd6-b67a-49fe-87fe-ecca6c52f167, vol_name:cephfs) < "" Feb 20 05:03:15 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:949fccd6-b67a-49fe-87fe-ecca6c52f167, vol_name:cephfs) < "" Feb 20 05:03:15 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 1 addresses Feb 20 05:03:15 localhost podman[327075]: 2026-02-20 10:03:15.475495732 +0000 UTC m=+0.061956268 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Feb 20 05:03:15 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 05:03:15 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 05:03:15 localhost nova_compute[280804]: 2026-02-20 10:03:15.555 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:03:15 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v626: 177 pgs: 177 active+clean; 218 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 97 KiB/s wr, 6 op/s Feb 20 05:03:15 localhost nova_compute[280804]: 2026-02-20 10:03:15.742 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:03:15 localhost nova_compute[280804]: 2026-02-20 10:03:15.786 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:03:16 localhost podman[241347]: time="2026-02-20T10:03:16Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 20 05:03:16 localhost podman[241347]: @ - - [20/Feb/2026:10:03:16 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157716 "" "Go-http-client/1.1" Feb 20 05:03:16 localhost podman[241347]: @ - - [20/Feb/2026:10:03:16 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18826 "" "Go-http-client/1.1" Feb 20 05:03:16 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d", "format": "json"}]: dispatch Feb 20 05:03:16 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:03:16 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:03:16 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:03:16.675+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d' of type subvolume Feb 20 05:03:16 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 
'59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d' of type subvolume Feb 20 05:03:16 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d", "force": true, "format": "json"}]: dispatch Feb 20 05:03:16 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d, vol_name:cephfs) < "" Feb 20 05:03:16 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d'' moved to trashcan Feb 20 05:03:16 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 05:03:16 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d, vol_name:cephfs) < "" Feb 20 05:03:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:03:17.267 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:03:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:03:17.267 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:03:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:03:17.268 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:03:17 localhost ceilometer_agent_compute[236653]: 
2026-02-20 10:03:17.268 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:03:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:03:17.268 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:03:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:03:17.268 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:03:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:03:17.268 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:03:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:03:17.268 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:03:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:03:17.269 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:03:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:03:17.269 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:03:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:03:17.269 12 DEBUG ceilometer.polling.manager 
[-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:03:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:03:17.269 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:03:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:03:17.269 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:03:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:03:17.269 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:03:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:03:17.269 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:03:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:03:17.269 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:03:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:03:17.270 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:03:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:03:17.270 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle 
poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:03:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:03:17.270 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:03:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:03:17.270 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:03:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:03:17.270 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:03:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:03:17.270 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:03:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:03:17.270 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:03:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:03:17.271 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:03:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:03:17.271 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:03:17 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e268 do_prune osdmap full prune enabled Feb 20 05:03:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c. Feb 20 05:03:17 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e269 e269: 6 total, 6 up, 6 in Feb 20 05:03:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217. Feb 20 05:03:17 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e269: 6 total, 6 up, 6 in Feb 20 05:03:17 localhost podman[327096]: 2026-02-20 10:03:17.463460558 +0000 UTC m=+0.095986552 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 05:03:17 localhost podman[327095]: 2026-02-20 10:03:17.502564449 +0000 UTC m=+0.138086194 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, release=1770267347, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, org.opencontainers.image.created=2026-02-05T04:57:10Z, name=ubi9/ubi-minimal, version=9.7, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-05T04:57:10Z, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 
'11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, distribution-scope=public) Feb 20 05:03:17 localhost podman[327095]: 2026-02-20 10:03:17.516617627 +0000 UTC m=+0.152139382 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, version=9.7, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., release=1770267347, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=minimal rhel9, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9/ubi-minimal, io.openshift.expose-services=, container_name=openstack_network_exporter, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Feb 20 05:03:17 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. 
Feb 20 05:03:17 localhost podman[327096]: 2026-02-20 10:03:17.603097483 +0000 UTC m=+0.235623457 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true) Feb 
20 05:03:17 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully. Feb 20 05:03:17 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v628: 177 pgs: 177 active+clean; 218 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 97 KiB/s wr, 6 op/s Feb 20 05:03:18 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:03:18 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e269 do_prune osdmap full prune enabled Feb 20 05:03:18 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e270 e270: 6 total, 6 up, 6 in Feb 20 05:03:18 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e270: 6 total, 6 up, 6 in Feb 20 05:03:18 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "949fccd6-b67a-49fe-87fe-ecca6c52f167", "format": "json"}]: dispatch Feb 20 05:03:18 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:949fccd6-b67a-49fe-87fe-ecca6c52f167, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:03:18 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:949fccd6-b67a-49fe-87fe-ecca6c52f167, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:03:18 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:03:18.673+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '949fccd6-b67a-49fe-87fe-ecca6c52f167' of type subvolume Feb 20 05:03:18 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 
'949fccd6-b67a-49fe-87fe-ecca6c52f167' of type subvolume Feb 20 05:03:18 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "949fccd6-b67a-49fe-87fe-ecca6c52f167", "force": true, "format": "json"}]: dispatch Feb 20 05:03:18 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:949fccd6-b67a-49fe-87fe-ecca6c52f167, vol_name:cephfs) < "" Feb 20 05:03:18 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/949fccd6-b67a-49fe-87fe-ecca6c52f167'' moved to trashcan Feb 20 05:03:18 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 05:03:18 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:949fccd6-b67a-49fe-87fe-ecca6c52f167, vol_name:cephfs) < "" Feb 20 05:03:19 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v630: 177 pgs: 177 active+clean; 218 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 549 B/s rd, 58 KiB/s wr, 4 op/s Feb 20 05:03:20 localhost nova_compute[280804]: 2026-02-20 10:03:20.557 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:03:20 localhost nova_compute[280804]: 2026-02-20 10:03:20.813 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:03:21 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v631: 177 pgs: 177 active+clean; 218 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1023 B/s rd, 131 KiB/s wr, 7 op/s Feb 20 05:03:22 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : 
from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d3ef7044-f1cf-4b2a-bbcd-13fc0eea0006", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 05:03:22 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d3ef7044-f1cf-4b2a-bbcd-13fc0eea0006, vol_name:cephfs) < "" Feb 20 05:03:22 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d3ef7044-f1cf-4b2a-bbcd-13fc0eea0006/.meta.tmp' Feb 20 05:03:22 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d3ef7044-f1cf-4b2a-bbcd-13fc0eea0006/.meta.tmp' to config b'/volumes/_nogroup/d3ef7044-f1cf-4b2a-bbcd-13fc0eea0006/.meta' Feb 20 05:03:22 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d3ef7044-f1cf-4b2a-bbcd-13fc0eea0006, vol_name:cephfs) < "" Feb 20 05:03:22 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d3ef7044-f1cf-4b2a-bbcd-13fc0eea0006", "format": "json"}]: dispatch Feb 20 05:03:22 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d3ef7044-f1cf-4b2a-bbcd-13fc0eea0006, vol_name:cephfs) < "" Feb 20 05:03:22 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d3ef7044-f1cf-4b2a-bbcd-13fc0eea0006, vol_name:cephfs) < "" Feb 20 05:03:23 localhost 
ceph-mon[292786]: mon.np0005625202@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:03:23 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e270 do_prune osdmap full prune enabled Feb 20 05:03:23 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e271 e271: 6 total, 6 up, 6 in Feb 20 05:03:23 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e271: 6 total, 6 up, 6 in Feb 20 05:03:23 localhost ceph-mgr[286565]: [balancer INFO root] Optimize plan auto_2026-02-20_10:03:23 Feb 20 05:03:23 localhost ceph-mgr[286565]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Feb 20 05:03:23 localhost ceph-mgr[286565]: [balancer INFO root] do_upmap Feb 20 05:03:23 localhost ceph-mgr[286565]: [balancer INFO root] pools ['images', 'vms', '.mgr', 'manila_metadata', 'backups', 'manila_data', 'volumes'] Feb 20 05:03:23 localhost ceph-mgr[286565]: [balancer INFO root] prepared 0/10 changes Feb 20 05:03:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 05:03:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 05:03:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 05:03:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 05:03:23 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v633: 177 pgs: 177 active+clean; 218 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 642 B/s rd, 97 KiB/s wr, 5 op/s Feb 20 05:03:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 20 05:03:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', )] Feb 20 05:03:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Feb 20 05:03:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] _maybe_adjust Feb 20 05:03:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 05:03:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Feb 20 05:03:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 05:03:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003325274375348967 of space, bias 1.0, pg target 0.6650548750697934 quantized to 32 (current 32) Feb 20 05:03:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 05:03:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32) Feb 20 05:03:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 05:03:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Feb 20 05:03:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 05:03:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Feb 20 05:03:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 
05:03:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00010850694444444444 quantized to 32 (current 32) Feb 20 05:03:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 05:03:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0018607032558626466 of space, bias 4.0, pg target 1.4811197916666667 quantized to 16 (current 16) Feb 20 05:03:23 localhost ceph-mgr[286565]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Feb 20 05:03:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 20 05:03:23 localhost ceph-mgr[286565]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Feb 20 05:03:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 20 05:03:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 20 05:03:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 20 05:03:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: images, start_after= Feb 20 05:03:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: images, start_after= Feb 20 05:03:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 20 05:03:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 20 05:03:24 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : mgrmap e55: np0005625202.arwxwo(active, since 15m), standbys: np0005625203.lonygy, np0005625204.exgrzx Feb 20 05:03:25 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": 
"d3ef7044-f1cf-4b2a-bbcd-13fc0eea0006", "format": "json"}]: dispatch Feb 20 05:03:25 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d3ef7044-f1cf-4b2a-bbcd-13fc0eea0006, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:03:25 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d3ef7044-f1cf-4b2a-bbcd-13fc0eea0006, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:03:25 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:03:25.347+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd3ef7044-f1cf-4b2a-bbcd-13fc0eea0006' of type subvolume Feb 20 05:03:25 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd3ef7044-f1cf-4b2a-bbcd-13fc0eea0006' of type subvolume Feb 20 05:03:25 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d3ef7044-f1cf-4b2a-bbcd-13fc0eea0006", "force": true, "format": "json"}]: dispatch Feb 20 05:03:25 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d3ef7044-f1cf-4b2a-bbcd-13fc0eea0006, vol_name:cephfs) < "" Feb 20 05:03:25 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d3ef7044-f1cf-4b2a-bbcd-13fc0eea0006'' moved to trashcan Feb 20 05:03:25 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 05:03:25 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, 
sub_name:d3ef7044-f1cf-4b2a-bbcd-13fc0eea0006, vol_name:cephfs) < "" Feb 20 05:03:25 localhost nova_compute[280804]: 2026-02-20 10:03:25.560 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:03:25 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v634: 177 pgs: 177 active+clean; 218 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 767 B/s rd, 97 KiB/s wr, 6 op/s Feb 20 05:03:25 localhost nova_compute[280804]: 2026-02-20 10:03:25.818 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:03:25 localhost sshd[327134]: main: sshd: ssh-rsa algorithm is disabled Feb 20 05:03:26 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7e87e16b-7877-4c1f-b090-d18ae7e41b0b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 05:03:26 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7e87e16b-7877-4c1f-b090-d18ae7e41b0b, vol_name:cephfs) < "" Feb 20 05:03:27 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7e87e16b-7877-4c1f-b090-d18ae7e41b0b/.meta.tmp' Feb 20 05:03:27 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7e87e16b-7877-4c1f-b090-d18ae7e41b0b/.meta.tmp' to config b'/volumes/_nogroup/7e87e16b-7877-4c1f-b090-d18ae7e41b0b/.meta' Feb 20 05:03:27 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs 
subvolume create, size:1073741824, sub_name:7e87e16b-7877-4c1f-b090-d18ae7e41b0b, vol_name:cephfs) < "" Feb 20 05:03:27 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7e87e16b-7877-4c1f-b090-d18ae7e41b0b", "format": "json"}]: dispatch Feb 20 05:03:27 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7e87e16b-7877-4c1f-b090-d18ae7e41b0b, vol_name:cephfs) < "" Feb 20 05:03:27 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7e87e16b-7877-4c1f-b090-d18ae7e41b0b, vol_name:cephfs) < "" Feb 20 05:03:27 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v635: 177 pgs: 177 active+clean; 218 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 645 B/s rd, 81 KiB/s wr, 5 op/s Feb 20 05:03:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 05:03:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1. Feb 20 05:03:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. 
Feb 20 05:03:27 localhost podman[327138]: 2026-02-20 10:03:27.985725824 +0000 UTC m=+0.082285323 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 05:03:28 localhost 
podman[327136]: 2026-02-20 10:03:28.041180056 +0000 UTC m=+0.139909184 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Feb 20 05:03:28 localhost podman[327137]: 2026-02-20 10:03:28.097555051 +0000 UTC m=+0.193232587 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , 
managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 20 05:03:28 localhost podman[327138]: 2026-02-20 10:03:28.126099839 +0000 UTC m=+0.222659398 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Feb 20 05:03:28 localhost systemd[1]: eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. Feb 20 05:03:28 localhost openstack_network_exporter[243776]: ERROR 10:03:28 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 20 05:03:28 localhost openstack_network_exporter[243776]: Feb 20 05:03:28 localhost openstack_network_exporter[243776]: ERROR 10:03:28 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 20 05:03:28 localhost openstack_network_exporter[243776]: Feb 20 05:03:28 localhost podman[327136]: 2026-02-20 10:03:28.180989985 +0000 UTC m=+0.279719093 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Feb 20 05:03:28 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. Feb 20 05:03:28 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:03:28 localhost podman[327137]: 2026-02-20 10:03:28.232869221 +0000 UTC m=+0.328546727 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 20 05:03:28 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully. Feb 20 05:03:28 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "29715001-dc5b-4019-b356-72b85ec77e38", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 05:03:28 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:29715001-dc5b-4019-b356-72b85ec77e38, vol_name:cephfs) < "" Feb 20 05:03:28 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/29715001-dc5b-4019-b356-72b85ec77e38/.meta.tmp' Feb 20 05:03:28 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/29715001-dc5b-4019-b356-72b85ec77e38/.meta.tmp' to config b'/volumes/_nogroup/29715001-dc5b-4019-b356-72b85ec77e38/.meta' Feb 20 05:03:28 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:29715001-dc5b-4019-b356-72b85ec77e38, vol_name:cephfs) < "" Feb 20 05:03:28 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "29715001-dc5b-4019-b356-72b85ec77e38", "format": "json"}]: dispatch Feb 20 05:03:28 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:29715001-dc5b-4019-b356-72b85ec77e38, vol_name:cephfs) < "" Feb 20 05:03:28 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:29715001-dc5b-4019-b356-72b85ec77e38, vol_name:cephfs) < "" Feb 20 05:03:29 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v636: 177 pgs: 177 active+clean; 218 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 78 KiB/s wr, 5 op/s Feb 20 05:03:30 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "7e87e16b-7877-4c1f-b090-d18ae7e41b0b", "snap_name": "16d51dc9-4db6-4a28-af31-2025dc25f7ce", "format": "json"}]: dispatch Feb 20 05:03:30 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:16d51dc9-4db6-4a28-af31-2025dc25f7ce, sub_name:7e87e16b-7877-4c1f-b090-d18ae7e41b0b, vol_name:cephfs) < "" Feb 20 05:03:30 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:16d51dc9-4db6-4a28-af31-2025dc25f7ce, sub_name:7e87e16b-7877-4c1f-b090-d18ae7e41b0b, vol_name:cephfs) < "" Feb 20 05:03:30 localhost nova_compute[280804]: 2026-02-20 10:03:30.562 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:03:30 localhost nova_compute[280804]: 2026-02-20 10:03:30.819 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:03:31 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v637: 177 pgs: 177 active+clean; 219 MiB 
data, 1.2 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 67 KiB/s wr, 4 op/s Feb 20 05:03:32 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "29715001-dc5b-4019-b356-72b85ec77e38", "format": "json"}]: dispatch Feb 20 05:03:32 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:29715001-dc5b-4019-b356-72b85ec77e38, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:03:32 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:29715001-dc5b-4019-b356-72b85ec77e38, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:03:32 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:03:32.034+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '29715001-dc5b-4019-b356-72b85ec77e38' of type subvolume Feb 20 05:03:32 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '29715001-dc5b-4019-b356-72b85ec77e38' of type subvolume Feb 20 05:03:32 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "29715001-dc5b-4019-b356-72b85ec77e38", "force": true, "format": "json"}]: dispatch Feb 20 05:03:32 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:29715001-dc5b-4019-b356-72b85ec77e38, vol_name:cephfs) < "" Feb 20 05:03:32 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/29715001-dc5b-4019-b356-72b85ec77e38'' moved to trashcan Feb 20 05:03:32 localhost 
ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 05:03:32 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:29715001-dc5b-4019-b356-72b85ec77e38, vol_name:cephfs) < "" Feb 20 05:03:32 localhost sshd[327198]: main: sshd: ssh-rsa algorithm is disabled Feb 20 05:03:33 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:03:33 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v638: 177 pgs: 177 active+clean; 219 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 389 B/s rd, 64 KiB/s wr, 4 op/s Feb 20 05:03:33 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7e87e16b-7877-4c1f-b090-d18ae7e41b0b", "snap_name": "16d51dc9-4db6-4a28-af31-2025dc25f7ce_1e999bb9-0d1d-43f6-a0e7-cfabc45a8c84", "force": true, "format": "json"}]: dispatch Feb 20 05:03:33 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:16d51dc9-4db6-4a28-af31-2025dc25f7ce_1e999bb9-0d1d-43f6-a0e7-cfabc45a8c84, sub_name:7e87e16b-7877-4c1f-b090-d18ae7e41b0b, vol_name:cephfs) < "" Feb 20 05:03:33 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7e87e16b-7877-4c1f-b090-d18ae7e41b0b/.meta.tmp' Feb 20 05:03:33 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7e87e16b-7877-4c1f-b090-d18ae7e41b0b/.meta.tmp' to config b'/volumes/_nogroup/7e87e16b-7877-4c1f-b090-d18ae7e41b0b/.meta' Feb 20 05:03:33 localhost ceph-mgr[286565]: [volumes INFO 
volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:16d51dc9-4db6-4a28-af31-2025dc25f7ce_1e999bb9-0d1d-43f6-a0e7-cfabc45a8c84, sub_name:7e87e16b-7877-4c1f-b090-d18ae7e41b0b, vol_name:cephfs) < "" Feb 20 05:03:33 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7e87e16b-7877-4c1f-b090-d18ae7e41b0b", "snap_name": "16d51dc9-4db6-4a28-af31-2025dc25f7ce", "force": true, "format": "json"}]: dispatch Feb 20 05:03:33 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:16d51dc9-4db6-4a28-af31-2025dc25f7ce, sub_name:7e87e16b-7877-4c1f-b090-d18ae7e41b0b, vol_name:cephfs) < "" Feb 20 05:03:33 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7e87e16b-7877-4c1f-b090-d18ae7e41b0b/.meta.tmp' Feb 20 05:03:33 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7e87e16b-7877-4c1f-b090-d18ae7e41b0b/.meta.tmp' to config b'/volumes/_nogroup/7e87e16b-7877-4c1f-b090-d18ae7e41b0b/.meta' Feb 20 05:03:33 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:16d51dc9-4db6-4a28-af31-2025dc25f7ce, sub_name:7e87e16b-7877-4c1f-b090-d18ae7e41b0b, vol_name:cephfs) < "" Feb 20 05:03:35 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4cb8de8b-c2e5-47d8-bfa6-1319d78b7e6c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Feb 20 05:03:35 localhost 
ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4cb8de8b-c2e5-47d8-bfa6-1319d78b7e6c, vol_name:cephfs) < "" Feb 20 05:03:35 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4cb8de8b-c2e5-47d8-bfa6-1319d78b7e6c/.meta.tmp' Feb 20 05:03:35 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4cb8de8b-c2e5-47d8-bfa6-1319d78b7e6c/.meta.tmp' to config b'/volumes/_nogroup/4cb8de8b-c2e5-47d8-bfa6-1319d78b7e6c/.meta' Feb 20 05:03:35 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4cb8de8b-c2e5-47d8-bfa6-1319d78b7e6c, vol_name:cephfs) < "" Feb 20 05:03:35 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4cb8de8b-c2e5-47d8-bfa6-1319d78b7e6c", "format": "json"}]: dispatch Feb 20 05:03:35 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4cb8de8b-c2e5-47d8-bfa6-1319d78b7e6c, vol_name:cephfs) < "" Feb 20 05:03:35 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4cb8de8b-c2e5-47d8-bfa6-1319d78b7e6c, vol_name:cephfs) < "" Feb 20 05:03:35 localhost nova_compute[280804]: 2026-02-20 10:03:35.579 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:03:35 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v639: 177 pgs: 177 
active+clean; 219 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 96 KiB/s wr, 5 op/s Feb 20 05:03:35 localhost nova_compute[280804]: 2026-02-20 10:03:35.820 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:03:37 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7e87e16b-7877-4c1f-b090-d18ae7e41b0b", "format": "json"}]: dispatch Feb 20 05:03:37 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7e87e16b-7877-4c1f-b090-d18ae7e41b0b, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:03:37 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7e87e16b-7877-4c1f-b090-d18ae7e41b0b, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:03:37 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7e87e16b-7877-4c1f-b090-d18ae7e41b0b' of type subvolume Feb 20 05:03:37 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:03:37.135+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7e87e16b-7877-4c1f-b090-d18ae7e41b0b' of type subvolume Feb 20 05:03:37 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7e87e16b-7877-4c1f-b090-d18ae7e41b0b", "force": true, "format": "json"}]: dispatch Feb 20 05:03:37 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7e87e16b-7877-4c1f-b090-d18ae7e41b0b, 
vol_name:cephfs) < "" Feb 20 05:03:37 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7e87e16b-7877-4c1f-b090-d18ae7e41b0b'' moved to trashcan Feb 20 05:03:37 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 05:03:37 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7e87e16b-7877-4c1f-b090-d18ae7e41b0b, vol_name:cephfs) < "" Feb 20 05:03:37 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e271 do_prune osdmap full prune enabled Feb 20 05:03:37 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e272 e272: 6 total, 6 up, 6 in Feb 20 05:03:37 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e272: 6 total, 6 up, 6 in Feb 20 05:03:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e. 
Feb 20 05:03:37 localhost podman[327201]: 2026-02-20 10:03:37.41954118 +0000 UTC m=+0.067525787 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter) Feb 20 05:03:37 localhost podman[327201]: 2026-02-20 10:03:37.457882781 +0000 UTC m=+0.105867418 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', 
'--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors ) Feb 20 05:03:37 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully. 
Feb 20 05:03:37 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v641: 177 pgs: 177 active+clean; 219 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 99 KiB/s wr, 5 op/s Feb 20 05:03:38 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:03:38 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4cb8de8b-c2e5-47d8-bfa6-1319d78b7e6c", "format": "json"}]: dispatch Feb 20 05:03:38 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:4cb8de8b-c2e5-47d8-bfa6-1319d78b7e6c, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:03:38 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:4cb8de8b-c2e5-47d8-bfa6-1319d78b7e6c, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:03:38 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:03:38.620+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4cb8de8b-c2e5-47d8-bfa6-1319d78b7e6c' of type subvolume Feb 20 05:03:38 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4cb8de8b-c2e5-47d8-bfa6-1319d78b7e6c' of type subvolume Feb 20 05:03:38 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4cb8de8b-c2e5-47d8-bfa6-1319d78b7e6c", "force": true, "format": "json"}]: dispatch Feb 20 05:03:38 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, 
prefix:fs subvolume rm, sub_name:4cb8de8b-c2e5-47d8-bfa6-1319d78b7e6c, vol_name:cephfs) < "" Feb 20 05:03:38 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/4cb8de8b-c2e5-47d8-bfa6-1319d78b7e6c'' moved to trashcan Feb 20 05:03:38 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 05:03:38 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4cb8de8b-c2e5-47d8-bfa6-1319d78b7e6c, vol_name:cephfs) < "" Feb 20 05:03:39 localhost sshd[327224]: main: sshd: ssh-rsa algorithm is disabled Feb 20 05:03:39 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v642: 177 pgs: 177 active+clean; 219 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 99 KiB/s wr, 5 op/s Feb 20 05:03:40 localhost nova_compute[280804]: 2026-02-20 10:03:40.609 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:03:40 localhost nova_compute[280804]: 2026-02-20 10:03:40.822 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:03:41 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v643: 177 pgs: 177 active+clean; 220 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 108 KiB/s wr, 5 op/s Feb 20 05:03:42 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ccc69125-8271-465f-a7cf-99b18598188c", "snap_name": "7873ec63-b44a-47a7-8bbe-8f944c5b9a9d_f5381e37-8883-4e7e-9251-e9f21c220e6c", "force": true, "format": "json"}]: dispatch Feb 20 05:03:42 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7873ec63-b44a-47a7-8bbe-8f944c5b9a9d_f5381e37-8883-4e7e-9251-e9f21c220e6c, sub_name:ccc69125-8271-465f-a7cf-99b18598188c, vol_name:cephfs) < "" Feb 20 05:03:42 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ccc69125-8271-465f-a7cf-99b18598188c/.meta.tmp' Feb 20 05:03:42 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ccc69125-8271-465f-a7cf-99b18598188c/.meta.tmp' to config b'/volumes/_nogroup/ccc69125-8271-465f-a7cf-99b18598188c/.meta' Feb 20 05:03:42 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7873ec63-b44a-47a7-8bbe-8f944c5b9a9d_f5381e37-8883-4e7e-9251-e9f21c220e6c, sub_name:ccc69125-8271-465f-a7cf-99b18598188c, vol_name:cephfs) < "" Feb 20 05:03:42 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ccc69125-8271-465f-a7cf-99b18598188c", "snap_name": "7873ec63-b44a-47a7-8bbe-8f944c5b9a9d", "force": true, "format": "json"}]: dispatch Feb 20 05:03:42 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7873ec63-b44a-47a7-8bbe-8f944c5b9a9d, sub_name:ccc69125-8271-465f-a7cf-99b18598188c, vol_name:cephfs) < "" Feb 20 05:03:42 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ccc69125-8271-465f-a7cf-99b18598188c/.meta.tmp' Feb 20 05:03:42 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed 
b'/volumes/_nogroup/ccc69125-8271-465f-a7cf-99b18598188c/.meta.tmp' to config b'/volumes/_nogroup/ccc69125-8271-465f-a7cf-99b18598188c/.meta' Feb 20 05:03:42 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7873ec63-b44a-47a7-8bbe-8f944c5b9a9d, sub_name:ccc69125-8271-465f-a7cf-99b18598188c, vol_name:cephfs) < "" Feb 20 05:03:43 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:03:43 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e272 do_prune osdmap full prune enabled Feb 20 05:03:43 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e273 e273: 6 total, 6 up, 6 in Feb 20 05:03:43 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e273: 6 total, 6 up, 6 in Feb 20 05:03:43 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v645: 177 pgs: 177 active+clean; 220 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 75 KiB/s wr, 4 op/s Feb 20 05:03:45 localhost nova_compute[280804]: 2026-02-20 10:03:45.639 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:03:45 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ccc69125-8271-465f-a7cf-99b18598188c", "format": "json"}]: dispatch Feb 20 05:03:45 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ccc69125-8271-465f-a7cf-99b18598188c, format:json, prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:03:45 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ccc69125-8271-465f-a7cf-99b18598188c, format:json, 
prefix:fs clone status, vol_name:cephfs) < "" Feb 20 05:03:45 localhost ceph-mgr[286565]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ccc69125-8271-465f-a7cf-99b18598188c' of type subvolume Feb 20 05:03:45 localhost ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625202-arwxwo[286561]: 2026-02-20T10:03:45.690+0000 7f74524d4640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ccc69125-8271-465f-a7cf-99b18598188c' of type subvolume Feb 20 05:03:45 localhost ceph-mgr[286565]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ccc69125-8271-465f-a7cf-99b18598188c", "force": true, "format": "json"}]: dispatch Feb 20 05:03:45 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ccc69125-8271-465f-a7cf-99b18598188c, vol_name:cephfs) < "" Feb 20 05:03:45 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ccc69125-8271-465f-a7cf-99b18598188c'' moved to trashcan Feb 20 05:03:45 localhost ceph-mgr[286565]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Feb 20 05:03:45 localhost ceph-mgr[286565]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ccc69125-8271-465f-a7cf-99b18598188c, vol_name:cephfs) < "" Feb 20 05:03:45 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v646: 177 pgs: 177 active+clean; 220 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 969 B/s rd, 100 KiB/s wr, 7 op/s Feb 20 05:03:45 localhost nova_compute[280804]: 2026-02-20 10:03:45.825 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:03:46 localhost podman[241347]: time="2026-02-20T10:03:46Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 20 05:03:46 localhost podman[241347]: @ - - [20/Feb/2026:10:03:46 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157716 "" "Go-http-client/1.1" Feb 20 05:03:46 localhost podman[241347]: @ - - [20/Feb/2026:10:03:46 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18825 "" "Go-http-client/1.1" Feb 20 05:03:47 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e273 do_prune osdmap full prune enabled Feb 20 05:03:47 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e274 e274: 6 total, 6 up, 6 in Feb 20 05:03:47 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e274: 6 total, 6 up, 6 in Feb 20 05:03:47 localhost neutron_dhcp_agent[263741]: 2026-02-20 10:03:47.527 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T10:03:47Z, description=, device_id=589aa915-f12b-4442-ae0f-97795f57950f, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=0eb8069c-ed31-4344-ae3a-9c9aab2b401e, ip_allocation=immediate, mac_address=fa:16:3e:fe:de:8b, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T08:22:50Z, description=, dns_domain=, id=84efa4de-646c-469c-b16c-6ab7c3e948cf, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=91bce661d685472eb3e7cacab17bf52a, provider:network_type=flat, 
provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['38eee90c-2cdd-4118-ba55-5401f7e61e1e'], tags=[], tenant_id=91bce661d685472eb3e7cacab17bf52a, updated_at=2026-02-20T08:22:56Z, vlan_transparent=None, network_id=84efa4de-646c-469c-b16c-6ab7c3e948cf, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3899, status=DOWN, tags=[], tenant_id=, updated_at=2026-02-20T10:03:47Z on network 84efa4de-646c-469c-b16c-6ab7c3e948cf#033[00m Feb 20 05:03:47 localhost nova_compute[280804]: 2026-02-20 10:03:47.641 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:03:47 localhost ovn_metadata_agent[161761]: 2026-02-20 10:03:47.638 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'c2:c2:14', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '46:d0:69:88:97:47'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 05:03:47 localhost ovn_metadata_agent[161761]: 2026-02-20 10:03:47.640 161766 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Feb 20 05:03:47 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v648: 177 pgs: 177 active+clean; 220 MiB data, 1.2 GiB 
used, 41 GiB / 42 GiB avail; 1023 B/s rd, 105 KiB/s wr, 6 op/s Feb 20 05:03:47 localhost systemd[1]: tmp-crun.rcM7tV.mount: Deactivated successfully. Feb 20 05:03:47 localhost podman[327243]: 2026-02-20 10:03:47.800649647 +0000 UTC m=+0.067951759 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2) Feb 20 05:03:47 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 2 addresses Feb 20 05:03:47 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 05:03:47 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 05:03:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c. Feb 20 05:03:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217. 
Feb 20 05:03:47 localhost podman[327255]: 2026-02-20 10:03:47.904065578 +0000 UTC m=+0.084004400 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, release=1770267347, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, org.opencontainers.image.created=2026-02-05T04:57:10Z, managed_by=edpm_ansible, name=ubi9/ubi-minimal, config_id=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, 
build-date=2026-02-05T04:57:10Z, io.openshift.expose-services=, container_name=openstack_network_exporter, version=9.7, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Feb 20 05:03:47 localhost podman[327255]: 2026-02-20 10:03:47.922775721 +0000 UTC m=+0.102714593 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, architecture=x86_64, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': 
['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., org.opencontainers.image.created=2026-02-05T04:57:10Z, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1770267347, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, name=ubi9/ubi-minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z, version=9.7, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Feb 20 05:03:47 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. Feb 20 05:03:47 localhost neutron_dhcp_agent[263741]: 2026-02-20 10:03:47.984 263745 INFO neutron.agent.dhcp.agent [None req-69c4c236-0bfe-4537-abca-d87ce7484530 - - - - - -] DHCP configuration for ports {'0eb8069c-ed31-4344-ae3a-9c9aab2b401e'} is completed#033[00m Feb 20 05:03:48 localhost podman[327257]: 2026-02-20 10:03:48.009664947 +0000 UTC m=+0.185813237 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 
'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Feb 20 05:03:48 localhost podman[327257]: 2026-02-20 10:03:48.046159459 +0000 UTC m=+0.222307799 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260127) Feb 20 05:03:48 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully. 
Feb 20 05:03:48 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:03:48 localhost nova_compute[280804]: 2026-02-20 10:03:48.294 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:03:49 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v649: 177 pgs: 177 active+clean; 220 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 31 KiB/s wr, 4 op/s Feb 20 05:03:50 localhost ovn_metadata_agent[161761]: 2026-02-20 10:03:50.643 161766 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0a83b6be-9fe2-42ef-8768-88847d97b165, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Feb 20 05:03:50 localhost nova_compute[280804]: 2026-02-20 10:03:50.682 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:03:50 localhost nova_compute[280804]: 2026-02-20 10:03:50.822 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:03:50 localhost nova_compute[280804]: 2026-02-20 10:03:50.828 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:03:51 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v650: 177 pgs: 177 active+clean; 220 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 722 B/s rd, 60 KiB/s wr, 5 op/s Feb 20 05:03:52 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "config 
generate-minimal-conf"} v 0) Feb 20 05:03:52 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 05:03:52 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 20 05:03:52 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 05:03:52 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 20 05:03:52 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 05:03:52 localhost ceph-mgr[286565]: [progress INFO root] update: starting ev 5d297f84-33e0-4e4d-957c-af7703fba9ea (Updating node-proxy deployment (+3 -> 3)) Feb 20 05:03:52 localhost ceph-mgr[286565]: [progress INFO root] complete: finished ev 5d297f84-33e0-4e4d-957c-af7703fba9ea (Updating node-proxy deployment (+3 -> 3)) Feb 20 05:03:52 localhost ceph-mgr[286565]: [progress INFO root] Completed event 5d297f84-33e0-4e4d-957c-af7703fba9ea (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Feb 20 05:03:52 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 20 05:03:52 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 20 05:03:52 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": 
"client.admin"} : dispatch Feb 20 05:03:52 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 05:03:53 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:03:53 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e274 do_prune osdmap full prune enabled Feb 20 05:03:53 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e275 e275: 6 total, 6 up, 6 in Feb 20 05:03:53 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e275: 6 total, 6 up, 6 in Feb 20 05:03:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 05:03:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 05:03:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 05:03:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 05:03:53 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v652: 177 pgs: 177 active+clean; 220 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 32 KiB/s wr, 1 op/s Feb 20 05:03:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 20 05:03:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 05:03:53 localhost ceph-mgr[286565]: [progress INFO root] Writing back 50 completed events Feb 20 05:03:53 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 20 05:03:53 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 05:03:54 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 05:03:55 localhost nova_compute[280804]: 2026-02-20 10:03:55.726 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:03:55 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v653: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 246 B/s rd, 47 KiB/s wr, 2 op/s Feb 20 05:03:55 localhost nova_compute[280804]: 2026-02-20 10:03:55.832 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:03:57 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v654: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 39 KiB/s wr, 2 op/s Feb 20 05:03:58 localhost openstack_network_exporter[243776]: ERROR 10:03:58 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 20 05:03:58 localhost openstack_network_exporter[243776]: Feb 20 05:03:58 localhost openstack_network_exporter[243776]: ERROR 10:03:58 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 20 05:03:58 localhost openstack_network_exporter[243776]: Feb 20 05:03:58 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e275 _set_new_cache_sizes 
cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:03:58 localhost ceph-mon[292786]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #70. Immutable memtables: 0. Feb 20 05:03:58 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:03:58.252545) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Feb 20 05:03:58 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 70 Feb 20 05:03:58 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581838252623, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 1096, "num_deletes": 265, "total_data_size": 1188643, "memory_usage": 1212840, "flush_reason": "Manual Compaction"} Feb 20 05:03:58 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #71: started Feb 20 05:03:58 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581838261212, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 71, "file_size": 1169299, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38526, "largest_seqno": 39620, "table_properties": {"data_size": 1164271, "index_size": 2435, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12322, "raw_average_key_size": 20, "raw_value_size": 1153574, "raw_average_value_size": 1935, "num_data_blocks": 106, "num_entries": 596, "num_filter_entries": 596, "num_deletions": 265, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", 
"column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771581780, "oldest_key_time": 1771581780, "file_creation_time": 1771581838, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a5b0c71e-1a28-4ac7-8b68-08edb74002f2", "db_session_id": "54EDA52XUT1SDV7DF7Y7", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}} Feb 20 05:03:58 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 8712 microseconds, and 3953 cpu microseconds. Feb 20 05:03:58 localhost ceph-mon[292786]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Feb 20 05:03:58 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:03:58.261265) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #71: 1169299 bytes OK Feb 20 05:03:58 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:03:58.261286) [db/memtable_list.cc:519] [default] Level-0 commit table #71 started Feb 20 05:03:58 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:03:58.263252) [db/memtable_list.cc:722] [default] Level-0 commit table #71: memtable #1 done Feb 20 05:03:58 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:03:58.263273) EVENT_LOG_v1 {"time_micros": 1771581838263267, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Feb 20 05:03:58 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:03:58.263295) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Feb 20 05:03:58 localhost ceph-mon[292786]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 1183334, prev total WAL file size 1183658, number of live WAL files 2. Feb 20 05:03:58 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000067.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 20 05:03:58 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:03:58.263929) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034323732' seq:72057594037927935, type:22 .. 
'6C6F676D0034353234' seq:0, type:0; will stop at (end) Feb 20 05:03:58 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00 Feb 20 05:03:58 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [71(1141KB)], [69(18MB)] Feb 20 05:03:58 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581838264004, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [71], "files_L6": [69], "score": -1, "input_data_size": 20659029, "oldest_snapshot_seqno": -1} Feb 20 05:03:58 localhost neutron_dhcp_agent[263741]: 2026-02-20 10:03:58.317 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T10:03:58Z, description=, device_id=0fbc926f-1dd2-40aa-9227-c636fc57c1ff, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=80abfae2-1f3a-449c-ae5e-7e4e5fc7ddb5, ip_allocation=immediate, mac_address=fa:16:3e:67:81:6d, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T08:22:50Z, description=, dns_domain=, id=84efa4de-646c-469c-b16c-6ab7c3e948cf, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=91bce661d685472eb3e7cacab17bf52a, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['38eee90c-2cdd-4118-ba55-5401f7e61e1e'], tags=[], tenant_id=91bce661d685472eb3e7cacab17bf52a, 
updated_at=2026-02-20T08:22:56Z, vlan_transparent=None, network_id=84efa4de-646c-469c-b16c-6ab7c3e948cf, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3927, status=DOWN, tags=[], tenant_id=, updated_at=2026-02-20T10:03:58Z on network 84efa4de-646c-469c-b16c-6ab7c3e948cf#033[00m Feb 20 05:03:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 05:03:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1. Feb 20 05:03:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. Feb 20 05:03:58 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #72: 14355 keys, 20442755 bytes, temperature: kUnknown Feb 20 05:03:58 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581838363764, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 72, "file_size": 20442755, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 20358067, "index_size": 47723, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35909, "raw_key_size": 383754, "raw_average_key_size": 26, "raw_value_size": 20111757, "raw_average_value_size": 1401, "num_data_blocks": 1798, "num_entries": 14355, "num_filter_entries": 14355, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", 
"prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580572, "oldest_key_time": 0, "file_creation_time": 1771581838, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a5b0c71e-1a28-4ac7-8b68-08edb74002f2", "db_session_id": "54EDA52XUT1SDV7DF7Y7", "orig_file_number": 72, "seqno_to_time_mapping": "N/A"}} Feb 20 05:03:58 localhost ceph-mon[292786]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Feb 20 05:03:58 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:03:58.364144) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 20442755 bytes Feb 20 05:03:58 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:03:58.365724) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 206.9 rd, 204.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 18.6 +0.0 blob) out(19.5 +0.0 blob), read-write-amplify(35.2) write-amplify(17.5) OK, records in: 14903, records dropped: 548 output_compression: NoCompression Feb 20 05:03:58 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:03:58.365752) EVENT_LOG_v1 {"time_micros": 1771581838365740, "job": 42, "event": "compaction_finished", "compaction_time_micros": 99869, "compaction_time_cpu_micros": 58897, "output_level": 6, "num_output_files": 1, "total_output_size": 20442755, "num_input_records": 14903, "num_output_records": 14355, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, 
"lsm_state": [0, 0, 0, 0, 0, 0, 1]} Feb 20 05:03:58 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 20 05:03:58 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581838366059, "job": 42, "event": "table_file_deletion", "file_number": 71} Feb 20 05:03:58 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000069.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 20 05:03:58 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581838369036, "job": 42, "event": "table_file_deletion", "file_number": 69} Feb 20 05:03:58 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:03:58.263811) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 05:03:58 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:03:58.369116) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 05:03:58 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:03:58.369124) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 05:03:58 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:03:58.369127) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 05:03:58 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:03:58.369130) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 05:03:58 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:03:58.369134) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 05:03:58 localhost systemd[1]: tmp-crun.wXDjaB.mount: Deactivated 
successfully. Feb 20 05:03:58 localhost podman[327384]: 2026-02-20 10:03:58.456747582 +0000 UTC m=+0.094627875 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3) Feb 20 05:03:58 localhost podman[327384]: 2026-02-20 10:03:58.494342773 +0000 UTC m=+0.132223136 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, 
org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Feb 20 05:03:58 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. 
Feb 20 05:03:58 localhost podman[327386]: 2026-02-20 10:03:58.542730904 +0000 UTC m=+0.175769768 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0) Feb 20 05:03:58 localhost 
podman[327386]: 2026-02-20 10:03:58.572781072 +0000 UTC m=+0.205819936 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 05:03:58 localhost systemd[1]: 
eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. Feb 20 05:03:58 localhost podman[327385]: 2026-02-20 10:03:58.494907518 +0000 UTC m=+0.129597335 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 20 05:03:58 localhost podman[327385]: 2026-02-20 10:03:58.623894997 +0000 UTC m=+0.258584804 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': 
['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter) Feb 20 05:03:58 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully. Feb 20 05:03:59 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 3 addresses Feb 20 05:03:59 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 05:03:59 localhost podman[327467]: 2026-02-20 10:03:59.098014785 +0000 UTC m=+0.056099659 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true) Feb 20 05:03:59 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 05:03:59 localhost neutron_dhcp_agent[263741]: 2026-02-20 10:03:59.262 263745 INFO neutron.agent.dhcp.agent [None req-39c9f249-17da-4fc6-9d99-20efbcdc3ba9 - - - - - -] DHCP configuration for ports {'80abfae2-1f3a-449c-ae5e-7e4e5fc7ddb5'} is completed#033[00m Feb 20 05:03:59 localhost nova_compute[280804]: 2026-02-20 10:03:59.578 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:03:59 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v655: 177 pgs: 177 
active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 39 KiB/s wr, 1 op/s Feb 20 05:04:00 localhost nova_compute[280804]: 2026-02-20 10:04:00.773 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:04:00 localhost nova_compute[280804]: 2026-02-20 10:04:00.835 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:04:01 localhost sshd[327487]: main: sshd: ssh-rsa algorithm is disabled Feb 20 05:04:01 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v656: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 13 KiB/s wr, 0 op/s Feb 20 05:04:02 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Feb 20 05:04:02 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1304243398' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Feb 20 05:04:02 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Feb 20 05:04:02 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1304243398' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Feb 20 05:04:02 localhost nova_compute[280804]: 2026-02-20 10:04:02.226 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:04:03 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:04:03 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v657: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 13 KiB/s wr, 0 op/s Feb 20 05:04:05 localhost nova_compute[280804]: 2026-02-20 10:04:05.512 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 05:04:05 localhost nova_compute[280804]: 2026-02-20 10:04:05.512 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 20 05:04:05 localhost nova_compute[280804]: 2026-02-20 10:04:05.513 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 20 05:04:05 localhost nova_compute[280804]: 2026-02-20 10:04:05.534 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 20 05:04:05 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v658: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 13 KiB/s wr, 0 op/s Feb 20 05:04:05 localhost nova_compute[280804]: 2026-02-20 10:04:05.796 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:04:05 localhost nova_compute[280804]: 2026-02-20 10:04:05.837 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:04:05 localhost ovn_metadata_agent[161761]: 2026-02-20 10:04:05.928 161766 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 05:04:05 localhost ovn_metadata_agent[161761]: 2026-02-20 10:04:05.928 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 05:04:05 localhost ovn_metadata_agent[161761]: 2026-02-20 10:04:05.928 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 05:04:06 localhost nova_compute[280804]: 2026-02-20 10:04:06.260 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:04:06 localhost podman[327506]: 2026-02-20 10:04:06.29337159 +0000 UTC 
m=+0.048789553 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Feb 20 05:04:06 localhost systemd[1]: tmp-crun.o2gVu5.mount: Deactivated successfully. Feb 20 05:04:06 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 2 addresses Feb 20 05:04:06 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 05:04:06 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 05:04:06 localhost nova_compute[280804]: 2026-02-20 10:04:06.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 05:04:07 localhost nova_compute[280804]: 2026-02-20 10:04:07.507 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 05:04:07 localhost nova_compute[280804]: 2026-02-20 10:04:07.529 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 05:04:07 localhost nova_compute[280804]: 2026-02-20 10:04:07.529 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 20 05:04:07 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v659: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 2.0 KiB/s wr, 0 op/s Feb 20 05:04:08 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:04:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e. Feb 20 05:04:08 localhost podman[327528]: 2026-02-20 10:04:08.449001964 +0000 UTC m=+0.086897208 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 20 05:04:08 localhost podman[327528]: 2026-02-20 10:04:08.463319839 +0000 UTC m=+0.101215083 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The 
Prometheus Authors , managed_by=edpm_ansible) Feb 20 05:04:08 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully. Feb 20 05:04:08 localhost nova_compute[280804]: 2026-02-20 10:04:08.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 05:04:08 localhost nova_compute[280804]: 2026-02-20 10:04:08.513 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 05:04:09 localhost nova_compute[280804]: 2026-02-20 10:04:09.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 05:04:09 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v660: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 2.0 KiB/s wr, 0 op/s Feb 20 05:04:10 localhost nova_compute[280804]: 2026-02-20 10:04:10.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 05:04:10 localhost nova_compute[280804]: 2026-02-20 10:04:10.511 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 05:04:10 localhost nova_compute[280804]: 2026-02-20 10:04:10.531 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 05:04:10 localhost nova_compute[280804]: 2026-02-20 10:04:10.531 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 05:04:10 localhost nova_compute[280804]: 2026-02-20 10:04:10.531 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 05:04:10 localhost nova_compute[280804]: 2026-02-20 10:04:10.532 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Auditing locally available compute resources for np0005625202.localdomain (node: np0005625202.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 20 05:04:10 localhost nova_compute[280804]: 2026-02-20 10:04:10.532 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 05:04:10 localhost nova_compute[280804]: 2026-02-20 
10:04:10.845 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:04:10 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 20 05:04:10 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/974015829' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 20 05:04:10 localhost nova_compute[280804]: 2026-02-20 10:04:10.985 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 05:04:11 localhost nova_compute[280804]: 2026-02-20 10:04:11.170 280808 WARNING nova.virt.libvirt.driver [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 20 05:04:11 localhost nova_compute[280804]: 2026-02-20 10:04:11.171 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Hypervisor/Node resource view: name=np0005625202.localdomain free_ram=11339MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", 
"product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 20 05:04:11 localhost nova_compute[280804]: 2026-02-20 10:04:11.171 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 05:04:11 localhost nova_compute[280804]: 2026-02-20 10:04:11.171 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 05:04:11 localhost nova_compute[280804]: 2026-02-20 10:04:11.228 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 20 05:04:11 localhost nova_compute[280804]: 2026-02-20 10:04:11.228 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Final resource view: name=np0005625202.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 20 05:04:11 localhost nova_compute[280804]: 2026-02-20 10:04:11.232 280808 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:04:11 localhost nova_compute[280804]: 2026-02-20 10:04:11.246 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 05:04:11 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 1 addresses Feb 20 05:04:11 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 05:04:11 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 05:04:11 localhost podman[327590]: 2026-02-20 10:04:11.271705026 +0000 UTC m=+0.053712895 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Feb 20 05:04:11 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 20 05:04:11 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/3485150094' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 20 05:04:11 localhost nova_compute[280804]: 2026-02-20 10:04:11.691 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 05:04:11 localhost nova_compute[280804]: 2026-02-20 10:04:11.698 280808 DEBUG nova.compute.provider_tree [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 20 05:04:11 localhost nova_compute[280804]: 2026-02-20 10:04:11.717 280808 DEBUG nova.scheduler.client.report [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Inventory has not changed for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 20 05:04:11 localhost nova_compute[280804]: 2026-02-20 10:04:11.720 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Compute_service record updated for np0005625202.localdomain:np0005625202.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 20 05:04:11 localhost nova_compute[280804]: 2026-02-20 10:04:11.720 280808 DEBUG 
oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.549s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 05:04:11 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v661: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 2.0 KiB/s wr, 0 op/s Feb 20 05:04:13 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:04:13 localhost nova_compute[280804]: 2026-02-20 10:04:13.721 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 05:04:13 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v662: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 2.0 KiB/s wr, 0 op/s Feb 20 05:04:15 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v663: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 2.0 KiB/s wr, 0 op/s Feb 20 05:04:15 localhost nova_compute[280804]: 2026-02-20 10:04:15.847 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:04:15 localhost nova_compute[280804]: 2026-02-20 10:04:15.851 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:04:15 localhost sshd[327632]: main: sshd: ssh-rsa algorithm is disabled Feb 20 05:04:16 localhost podman[241347]: time="2026-02-20T10:04:16Z" level=info msg="List containers: received `last` 
parameter - overwriting `limit`" Feb 20 05:04:16 localhost podman[241347]: @ - - [20/Feb/2026:10:04:16 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157716 "" "Go-http-client/1.1" Feb 20 05:04:16 localhost podman[241347]: @ - - [20/Feb/2026:10:04:16 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18827 "" "Go-http-client/1.1" Feb 20 05:04:17 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v664: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Feb 20 05:04:18 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:04:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c. Feb 20 05:04:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217. Feb 20 05:04:18 localhost systemd[1]: tmp-crun.QuGpe3.mount: Deactivated successfully. Feb 20 05:04:18 localhost podman[327634]: 2026-02-20 10:04:18.459702303 +0000 UTC m=+0.098386116 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, architecture=x86_64, org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-type=git, io.openshift.tags=minimal rhel9, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, release=1770267347, version=9.7, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, name=ubi9/ubi-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': 
['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible) Feb 20 05:04:18 localhost podman[327635]: 2026-02-20 10:04:18.512232446 +0000 UTC m=+0.146867991 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible) Feb 20 05:04:18 localhost podman[327635]: 2026-02-20 10:04:18.525120652 +0000 UTC m=+0.159756247 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, config_id=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0) Feb 20 05:04:18 localhost podman[327634]: 2026-02-20 10:04:18.525461151 +0000 UTC m=+0.164145004 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, distribution-scope=public, name=ubi9/ubi-minimal, config_id=openstack_network_exporter, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, architecture=x86_64, build-date=2026-02-05T04:57:10Z, vcs-type=git, io.openshift.tags=minimal rhel9, version=9.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., org.opencontainers.image.created=2026-02-05T04:57:10Z) Feb 20 05:04:18 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully. Feb 20 05:04:18 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. Feb 20 05:04:18 localhost neutron_dhcp_agent[263741]: 2026-02-20 10:04:18.941 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T10:04:18Z, description=, device_id=e57870c6-94a1-474d-9937-954c4e871cf2, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=57ab6c4d-cf32-4d47-b34d-df636a6c3f2a, ip_allocation=immediate, mac_address=fa:16:3e:e8:84:4b, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T08:22:50Z, description=, dns_domain=, id=84efa4de-646c-469c-b16c-6ab7c3e948cf, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=91bce661d685472eb3e7cacab17bf52a, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['38eee90c-2cdd-4118-ba55-5401f7e61e1e'], tags=[], tenant_id=91bce661d685472eb3e7cacab17bf52a, updated_at=2026-02-20T08:22:56Z, vlan_transparent=None, 
network_id=84efa4de-646c-469c-b16c-6ab7c3e948cf, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3941, status=DOWN, tags=[], tenant_id=, updated_at=2026-02-20T10:04:18Z on network 84efa4de-646c-469c-b16c-6ab7c3e948cf#033[00m Feb 20 05:04:19 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 2 addresses Feb 20 05:04:19 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 05:04:19 localhost podman[327688]: 2026-02-20 10:04:19.166520169 +0000 UTC m=+0.070315021 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 05:04:19 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 05:04:19 localhost neutron_dhcp_agent[263741]: 2026-02-20 10:04:19.387 263745 INFO neutron.agent.dhcp.agent [None req-eaf0adb2-2aae-47c3-b518-ef060aa50957 - - - - - -] DHCP configuration for ports {'57ab6c4d-cf32-4d47-b34d-df636a6c3f2a'} is completed#033[00m Feb 20 05:04:19 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v665: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Feb 20 05:04:20 localhost nova_compute[280804]: 2026-02-20 10:04:20.145 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:04:20 localhost nova_compute[280804]: 2026-02-20 10:04:20.849 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:04:20 localhost nova_compute[280804]: 2026-02-20 10:04:20.852 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:04:21 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v666: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Feb 20 05:04:22 localhost nova_compute[280804]: 2026-02-20 10:04:22.263 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:04:23 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:04:23 localhost ceph-mgr[286565]: [balancer INFO root] Optimize plan auto_2026-02-20_10:04:23 Feb 20 05:04:23 localhost ceph-mgr[286565]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Feb 20 05:04:23 localhost ceph-mgr[286565]: [balancer INFO root] do_upmap Feb 20 05:04:23 localhost ceph-mgr[286565]: [balancer INFO root] pools ['.mgr', 'volumes', 'images', 'vms', 'manila_data', 'manila_metadata', 'backups'] Feb 20 05:04:23 localhost ceph-mgr[286565]: [balancer INFO root] prepared 0/10 changes Feb 20 05:04:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 05:04:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 05:04:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. 
Feb 20 05:04:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 05:04:23 localhost sshd[327709]: main: sshd: ssh-rsa algorithm is disabled Feb 20 05:04:23 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v667: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Feb 20 05:04:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 05:04:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 05:04:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] _maybe_adjust Feb 20 05:04:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 05:04:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Feb 20 05:04:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 05:04:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003325274375348967 of space, bias 1.0, pg target 0.6650548750697934 quantized to 32 (current 32) Feb 20 05:04:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 05:04:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32) Feb 20 05:04:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 05:04:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Feb 20 05:04:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 
Feb 20 05:04:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Feb 20 05:04:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 05:04:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 20 05:04:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 05:04:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0020193742148241207 of space, bias 4.0, pg target 1.607421875 quantized to 16 (current 16) Feb 20 05:04:23 localhost ceph-mgr[286565]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Feb 20 05:04:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 20 05:04:23 localhost ceph-mgr[286565]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Feb 20 05:04:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 20 05:04:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 20 05:04:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 20 05:04:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: images, start_after= Feb 20 05:04:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: images, start_after= Feb 20 05:04:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 20 05:04:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 20 05:04:25 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v668: 177 pgs: 177 active+clean; 221 MiB 
data, 1.3 GiB used, 41 GiB / 42 GiB avail Feb 20 05:04:25 localhost nova_compute[280804]: 2026-02-20 10:04:25.857 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:04:26 localhost neutron_dhcp_agent[263741]: 2026-02-20 10:04:26.483 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T10:04:26Z, description=, device_id=99766422-78a0-447f-ada4-35349b2290d9, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=bd530e79-981b-47c6-9ae3-baba68cf5bee, ip_allocation=immediate, mac_address=fa:16:3e:d9:4c:e8, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T08:22:50Z, description=, dns_domain=, id=84efa4de-646c-469c-b16c-6ab7c3e948cf, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=91bce661d685472eb3e7cacab17bf52a, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['38eee90c-2cdd-4118-ba55-5401f7e61e1e'], tags=[], tenant_id=91bce661d685472eb3e7cacab17bf52a, updated_at=2026-02-20T08:22:56Z, vlan_transparent=None, network_id=84efa4de-646c-469c-b16c-6ab7c3e948cf, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3955, status=DOWN, tags=[], tenant_id=, updated_at=2026-02-20T10:04:26Z on network 84efa4de-646c-469c-b16c-6ab7c3e948cf#033[00m Feb 20 05:04:26 localhost dnsmasq[264017]: 
read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 3 addresses Feb 20 05:04:26 localhost podman[327729]: 2026-02-20 10:04:26.704717081 +0000 UTC m=+0.058210166 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Feb 20 05:04:26 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 05:04:26 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 05:04:26 localhost neutron_dhcp_agent[263741]: 2026-02-20 10:04:26.869 263745 INFO neutron.agent.linux.ip_lib [None req-fdc95564-925a-42f6-beae-5926087847c5 - - - - - -] Device tap05a7975f-05 cannot be used as it has no MAC address#033[00m Feb 20 05:04:26 localhost nova_compute[280804]: 2026-02-20 10:04:26.904 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:04:26 localhost kernel: device tap05a7975f-05 entered promiscuous mode Feb 20 05:04:26 localhost NetworkManager[5967]: [1771581866.9119] manager: (tap05a7975f-05): new Generic device (/org/freedesktop/NetworkManager/Devices/53) Feb 20 05:04:26 localhost systemd-udevd[327761]: Network interface NamePolicy= disabled on kernel command line. Feb 20 05:04:26 localhost ovn_controller[155916]: 2026-02-20T10:04:26Z|00269|binding|INFO|Claiming lport 05a7975f-0576-4897-837f-5926db2618f7 for this chassis. 
Feb 20 05:04:26 localhost ovn_controller[155916]: 2026-02-20T10:04:26Z|00270|binding|INFO|05a7975f-0576-4897-837f-5926db2618f7: Claiming unknown Feb 20 05:04:26 localhost nova_compute[280804]: 2026-02-20 10:04:26.916 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:04:26 localhost ovn_metadata_agent[161761]: 2026-02-20 10:04:26.922 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp77c820b4-9646-5dae-aabe-b13a62b01d06-76662f9f-597a-4762-b229-2301d6ea7b01', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-76662f9f-597a-4762-b229-2301d6ea7b01', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '50ba22298b9844c2b9853d9ca1060aa4', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3c303fcc-c245-4a2b-b2e3-07d4493d9110, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=05a7975f-0576-4897-837f-5926db2618f7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 05:04:26 localhost ovn_metadata_agent[161761]: 2026-02-20 10:04:26.923 161766 INFO neutron.agent.ovn.metadata.agent [-] Port 05a7975f-0576-4897-837f-5926db2618f7 in datapath 
76662f9f-597a-4762-b229-2301d6ea7b01 bound to our chassis#033[00m Feb 20 05:04:26 localhost ovn_metadata_agent[161761]: 2026-02-20 10:04:26.924 161766 DEBUG neutron.agent.ovn.metadata.agent [-] Port 156e05d1-031d-40c4-9731-a36f3655337f IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Feb 20 05:04:26 localhost ovn_metadata_agent[161761]: 2026-02-20 10:04:26.925 161766 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 76662f9f-597a-4762-b229-2301d6ea7b01, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 20 05:04:26 localhost ovn_metadata_agent[161761]: 2026-02-20 10:04:26.927 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[92defa5c-87b8-4807-bd1e-647c1668484b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 05:04:26 localhost ovn_controller[155916]: 2026-02-20T10:04:26Z|00271|binding|INFO|Setting lport 05a7975f-0576-4897-837f-5926db2618f7 ovn-installed in OVS Feb 20 05:04:26 localhost ovn_controller[155916]: 2026-02-20T10:04:26Z|00272|binding|INFO|Setting lport 05a7975f-0576-4897-837f-5926db2618f7 up in Southbound Feb 20 05:04:26 localhost nova_compute[280804]: 2026-02-20 10:04:26.953 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:04:26 localhost neutron_dhcp_agent[263741]: 2026-02-20 10:04:26.956 263745 INFO neutron.agent.dhcp.agent [None req-18efdfe7-4ec4-4aeb-a27e-3b00c150650e - - - - - -] DHCP configuration for ports {'bd530e79-981b-47c6-9ae3-baba68cf5bee'} is completed#033[00m Feb 20 05:04:26 localhost nova_compute[280804]: 2026-02-20 10:04:26.981 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:04:27 localhost nova_compute[280804]: 2026-02-20 10:04:27.005 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:04:27 localhost nova_compute[280804]: 2026-02-20 10:04:27.147 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:04:27 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v669: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Feb 20 05:04:27 localhost podman[327816]: Feb 20 05:04:27 localhost podman[327816]: 2026-02-20 10:04:27.789490491 +0000 UTC m=+0.081182294 container create 06c6779bf496fb9817da4603acc49618e44356474cb6717383b2efc2a747a15f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76662f9f-597a-4762-b229-2301d6ea7b01, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Feb 20 05:04:27 localhost systemd[1]: Started libpod-conmon-06c6779bf496fb9817da4603acc49618e44356474cb6717383b2efc2a747a15f.scope. Feb 20 05:04:27 localhost podman[327816]: 2026-02-20 10:04:27.751913291 +0000 UTC m=+0.043605104 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Feb 20 05:04:27 localhost systemd[1]: Started libcrun container. 
Feb 20 05:04:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14abd8e73fecb0f1afba817753badd584adaba9aff3002d10e748ff34845c2b7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Feb 20 05:04:27 localhost podman[327816]: 2026-02-20 10:04:27.868092994 +0000 UTC m=+0.159784787 container init 06c6779bf496fb9817da4603acc49618e44356474cb6717383b2efc2a747a15f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76662f9f-597a-4762-b229-2301d6ea7b01, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 05:04:27 localhost podman[327816]: 2026-02-20 10:04:27.880295953 +0000 UTC m=+0.171987796 container start 06c6779bf496fb9817da4603acc49618e44356474cb6717383b2efc2a747a15f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76662f9f-597a-4762-b229-2301d6ea7b01, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Feb 20 05:04:27 localhost dnsmasq[327835]: started, version 2.85 cachesize 150 Feb 20 05:04:27 localhost dnsmasq[327835]: DNS service limited to local subnets Feb 20 05:04:27 localhost dnsmasq[327835]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Feb 20 05:04:27 localhost dnsmasq[327835]: warning: no upstream servers 
configured Feb 20 05:04:27 localhost dnsmasq-dhcp[327835]: DHCP, static leases only on 10.100.0.0, lease time 1d Feb 20 05:04:27 localhost dnsmasq[327835]: read /var/lib/neutron/dhcp/76662f9f-597a-4762-b229-2301d6ea7b01/addn_hosts - 0 addresses Feb 20 05:04:27 localhost dnsmasq-dhcp[327835]: read /var/lib/neutron/dhcp/76662f9f-597a-4762-b229-2301d6ea7b01/host Feb 20 05:04:27 localhost dnsmasq-dhcp[327835]: read /var/lib/neutron/dhcp/76662f9f-597a-4762-b229-2301d6ea7b01/opts Feb 20 05:04:27 localhost neutron_dhcp_agent[263741]: 2026-02-20 10:04:27.932 263745 INFO neutron.agent.dhcp.agent [None req-5da3fc4c-65c8-4ba3-9f5b-f143ca3421e4 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T10:04:27Z, description=, device_id=99766422-78a0-447f-ada4-35349b2290d9, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=811b34d7-27bf-4747-bf76-3e9b66fc1bac, ip_allocation=immediate, mac_address=fa:16:3e:b8:96:77, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T10:04:23Z, description=, dns_domain=, id=76662f9f-597a-4762-b229-2301d6ea7b01, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PrometheusGabbiTest-141845655-network, port_security_enabled=True, project_id=50ba22298b9844c2b9853d9ca1060aa4, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=16183, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3948, status=ACTIVE, subnets=['e4539382-b8a6-44c0-bfcc-eeade7a99ebd'], tags=[], tenant_id=50ba22298b9844c2b9853d9ca1060aa4, updated_at=2026-02-20T10:04:24Z, vlan_transparent=None, network_id=76662f9f-597a-4762-b229-2301d6ea7b01, port_security_enabled=False, 
project_id=50ba22298b9844c2b9853d9ca1060aa4, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3956, status=DOWN, tags=[], tenant_id=50ba22298b9844c2b9853d9ca1060aa4, updated_at=2026-02-20T10:04:27Z on network 76662f9f-597a-4762-b229-2301d6ea7b01#033[00m Feb 20 05:04:28 localhost neutron_dhcp_agent[263741]: 2026-02-20 10:04:28.085 263745 INFO neutron.agent.dhcp.agent [None req-6b21391f-db98-4752-b255-d2f29d404d50 - - - - - -] DHCP configuration for ports {'ef0576e7-fa56-450a-9e87-fb385087b933'} is completed#033[00m Feb 20 05:04:28 localhost openstack_network_exporter[243776]: ERROR 10:04:28 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 20 05:04:28 localhost openstack_network_exporter[243776]: Feb 20 05:04:28 localhost openstack_network_exporter[243776]: ERROR 10:04:28 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 20 05:04:28 localhost openstack_network_exporter[243776]: Feb 20 05:04:28 localhost dnsmasq[327835]: read /var/lib/neutron/dhcp/76662f9f-597a-4762-b229-2301d6ea7b01/addn_hosts - 1 addresses Feb 20 05:04:28 localhost dnsmasq-dhcp[327835]: read /var/lib/neutron/dhcp/76662f9f-597a-4762-b229-2301d6ea7b01/host Feb 20 05:04:28 localhost dnsmasq-dhcp[327835]: read /var/lib/neutron/dhcp/76662f9f-597a-4762-b229-2301d6ea7b01/opts Feb 20 05:04:28 localhost podman[327853]: 2026-02-20 10:04:28.249443659 +0000 UTC m=+0.049264525 container kill 06c6779bf496fb9817da4603acc49618e44356474cb6717383b2efc2a747a15f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76662f9f-597a-4762-b229-2301d6ea7b01, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Feb 20 05:04:28 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:04:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 05:04:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. Feb 20 05:04:28 localhost podman[327874]: 2026-02-20 10:04:28.701812374 +0000 UTC m=+0.084161215 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_managed=true, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Feb 20 05:04:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1. Feb 20 05:04:28 localhost podman[327875]: 2026-02-20 10:04:28.755909058 +0000 UTC m=+0.134901108 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 05:04:28 localhost podman[327874]: 2026-02-20 10:04:28.769931545 +0000 UTC m=+0.152280386 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 05:04:28 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. 
Feb 20 05:04:28 localhost podman[327875]: 2026-02-20 10:04:28.790798597 +0000 UTC m=+0.169790597 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true) Feb 20 05:04:28 localhost systemd[1]: 
eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. Feb 20 05:04:28 localhost podman[327916]: 2026-02-20 10:04:28.849509075 +0000 UTC m=+0.082595592 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Feb 20 05:04:28 localhost podman[327916]: 2026-02-20 10:04:28.861715313 +0000 UTC m=+0.094801860 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 
'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 20 05:04:28 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully. Feb 20 05:04:28 localhost neutron_dhcp_agent[263741]: 2026-02-20 10:04:28.944 263745 INFO neutron.agent.dhcp.agent [None req-aa75c070-25ee-41c4-8204-11283ffd85d1 - - - - - -] DHCP configuration for ports {'811b34d7-27bf-4747-bf76-3e9b66fc1bac'} is completed#033[00m Feb 20 05:04:29 localhost neutron_dhcp_agent[263741]: 2026-02-20 10:04:29.655 263745 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T10:04:27Z, description=, device_id=99766422-78a0-447f-ada4-35349b2290d9, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=811b34d7-27bf-4747-bf76-3e9b66fc1bac, ip_allocation=immediate, mac_address=fa:16:3e:b8:96:77, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T10:04:23Z, description=, dns_domain=, id=76662f9f-597a-4762-b229-2301d6ea7b01, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PrometheusGabbiTest-141845655-network, port_security_enabled=True, project_id=50ba22298b9844c2b9853d9ca1060aa4, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=16183, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3948, status=ACTIVE, subnets=['e4539382-b8a6-44c0-bfcc-eeade7a99ebd'], tags=[], tenant_id=50ba22298b9844c2b9853d9ca1060aa4, 
updated_at=2026-02-20T10:04:24Z, vlan_transparent=None, network_id=76662f9f-597a-4762-b229-2301d6ea7b01, port_security_enabled=False, project_id=50ba22298b9844c2b9853d9ca1060aa4, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3956, status=DOWN, tags=[], tenant_id=50ba22298b9844c2b9853d9ca1060aa4, updated_at=2026-02-20T10:04:27Z on network 76662f9f-597a-4762-b229-2301d6ea7b01#033[00m Feb 20 05:04:29 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v670: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Feb 20 05:04:29 localhost dnsmasq[327835]: read /var/lib/neutron/dhcp/76662f9f-597a-4762-b229-2301d6ea7b01/addn_hosts - 1 addresses Feb 20 05:04:29 localhost dnsmasq-dhcp[327835]: read /var/lib/neutron/dhcp/76662f9f-597a-4762-b229-2301d6ea7b01/host Feb 20 05:04:29 localhost podman[327955]: 2026-02-20 10:04:29.87921367 +0000 UTC m=+0.062913448 container kill 06c6779bf496fb9817da4603acc49618e44356474cb6717383b2efc2a747a15f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76662f9f-597a-4762-b229-2301d6ea7b01, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 05:04:29 localhost dnsmasq-dhcp[327835]: read /var/lib/neutron/dhcp/76662f9f-597a-4762-b229-2301d6ea7b01/opts Feb 20 05:04:30 localhost neutron_dhcp_agent[263741]: 2026-02-20 10:04:30.097 263745 INFO neutron.agent.dhcp.agent [None req-ba1b15b6-5126-4805-a1dc-d499312e2f57 - - - - - -] DHCP configuration for ports {'811b34d7-27bf-4747-bf76-3e9b66fc1bac'} is completed#033[00m Feb 20 05:04:30 localhost nova_compute[280804]: 2026-02-20 10:04:30.905 
280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:04:31 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v671: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Feb 20 05:04:33 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:04:33 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v672: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Feb 20 05:04:35 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v673: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 4.2 KiB/s rd, 682 B/s wr, 6 op/s Feb 20 05:04:35 localhost nova_compute[280804]: 2026-02-20 10:04:35.910 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 20 05:04:36 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e275 do_prune osdmap full prune enabled Feb 20 05:04:36 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e276 e276: 6 total, 6 up, 6 in Feb 20 05:04:36 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e276: 6 total, 6 up, 6 in Feb 20 05:04:37 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e276 do_prune osdmap full prune enabled Feb 20 05:04:37 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e277 e277: 6 total, 6 up, 6 in Feb 20 05:04:37 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e277: 6 total, 6 up, 6 in Feb 20 05:04:37 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v676: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 6.4 KiB/s rd, 1023 B/s wr, 9 op/s Feb 20 05:04:38 localhost ceph-mon[292786]: 
mon.np0005625202@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:04:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e. Feb 20 05:04:39 localhost systemd[1]: tmp-crun.y9m33t.mount: Deactivated successfully. Feb 20 05:04:39 localhost podman[327977]: 2026-02-20 10:04:39.453874813 +0000 UTC m=+0.091202928 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 20 05:04:39 localhost podman[327977]: 
2026-02-20 10:04:39.463782398 +0000 UTC m=+0.101110503 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 20 05:04:39 localhost systemd[1]: 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully. 
Feb 20 05:04:39 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v677: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 8.0 KiB/s rd, 1.6 MiB/s wr, 12 op/s Feb 20 05:04:40 localhost nova_compute[280804]: 2026-02-20 10:04:40.914 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 20 05:04:40 localhost nova_compute[280804]: 2026-02-20 10:04:40.916 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 20 05:04:40 localhost nova_compute[280804]: 2026-02-20 10:04:40.916 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Feb 20 05:04:40 localhost nova_compute[280804]: 2026-02-20 10:04:40.917 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 20 05:04:40 localhost nova_compute[280804]: 2026-02-20 10:04:40.934 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:04:40 localhost nova_compute[280804]: 2026-02-20 10:04:40.936 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 20 05:04:41 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v678: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 39 KiB/s rd, 2.6 MiB/s wr, 55 op/s Feb 20 05:04:42 localhost nova_compute[280804]: 2026-02-20 10:04:42.828 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:04:42 localhost dnsmasq[264017]: read 
/var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 2 addresses Feb 20 05:04:42 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 05:04:42 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 05:04:42 localhost podman[328018]: 2026-02-20 10:04:42.880586446 +0000 UTC m=+0.084855028 container kill d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 05:04:43 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:04:43 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e277 do_prune osdmap full prune enabled Feb 20 05:04:43 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e278 e278: 6 total, 6 up, 6 in Feb 20 05:04:43 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : osdmap e278: 6 total, 6 up, 6 in Feb 20 05:04:43 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v680: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 35 KiB/s rd, 2.7 MiB/s wr, 48 op/s Feb 20 05:04:44 localhost dnsmasq[327835]: read /var/lib/neutron/dhcp/76662f9f-597a-4762-b229-2301d6ea7b01/addn_hosts - 0 addresses Feb 20 05:04:44 localhost dnsmasq-dhcp[327835]: read /var/lib/neutron/dhcp/76662f9f-597a-4762-b229-2301d6ea7b01/host Feb 20 05:04:44 localhost dnsmasq-dhcp[327835]: read 
/var/lib/neutron/dhcp/76662f9f-597a-4762-b229-2301d6ea7b01/opts Feb 20 05:04:44 localhost podman[328056]: 2026-02-20 10:04:44.576530018 +0000 UTC m=+0.070490245 container kill 06c6779bf496fb9817da4603acc49618e44356474cb6717383b2efc2a747a15f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76662f9f-597a-4762-b229-2301d6ea7b01, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image) Feb 20 05:04:44 localhost systemd[1]: tmp-crun.2jCfId.mount: Deactivated successfully. Feb 20 05:04:44 localhost nova_compute[280804]: 2026-02-20 10:04:44.766 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:04:44 localhost ovn_controller[155916]: 2026-02-20T10:04:44Z|00273|binding|INFO|Releasing lport 05a7975f-0576-4897-837f-5926db2618f7 from this chassis (sb_readonly=0) Feb 20 05:04:44 localhost kernel: device tap05a7975f-05 left promiscuous mode Feb 20 05:04:44 localhost ovn_controller[155916]: 2026-02-20T10:04:44Z|00274|binding|INFO|Setting lport 05a7975f-0576-4897-837f-5926db2618f7 down in Southbound Feb 20 05:04:44 localhost ovn_metadata_agent[161761]: 2026-02-20 10:04:44.776 161766 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625202.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 
'neutron:device_id': 'dhcp77c820b4-9646-5dae-aabe-b13a62b01d06-76662f9f-597a-4762-b229-2301d6ea7b01', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-76662f9f-597a-4762-b229-2301d6ea7b01', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '50ba22298b9844c2b9853d9ca1060aa4', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625202.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3c303fcc-c245-4a2b-b2e3-07d4493d9110, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=05a7975f-0576-4897-837f-5926db2618f7) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Feb 20 05:04:44 localhost ovn_metadata_agent[161761]: 2026-02-20 10:04:44.778 161766 INFO neutron.agent.ovn.metadata.agent [-] Port 05a7975f-0576-4897-837f-5926db2618f7 in datapath 76662f9f-597a-4762-b229-2301d6ea7b01 unbound from our chassis#033[00m Feb 20 05:04:44 localhost ovn_metadata_agent[161761]: 2026-02-20 10:04:44.779 161766 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 76662f9f-597a-4762-b229-2301d6ea7b01, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Feb 20 05:04:44 localhost ovn_metadata_agent[161761]: 2026-02-20 10:04:44.781 263903 DEBUG oslo.privsep.daemon [-] privsep: reply[44f35367-a8ce-444f-bc44-03b8c9b6a984]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Feb 20 05:04:44 localhost nova_compute[280804]: 2026-02-20 10:04:44.787 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:04:44 localhost nova_compute[280804]: 2026-02-20 10:04:44.788 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:04:45 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v681: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 31 KiB/s rd, 2.4 MiB/s wr, 43 op/s Feb 20 05:04:45 localhost nova_compute[280804]: 2026-02-20 10:04:45.935 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:04:45 localhost nova_compute[280804]: 2026-02-20 10:04:45.938 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:04:46 localhost podman[241347]: time="2026-02-20T10:04:46Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 20 05:04:46 localhost podman[241347]: @ - - [20/Feb/2026:10:04:46 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 159539 "" "Go-http-client/1.1" Feb 20 05:04:46 localhost podman[241347]: @ - - [20/Feb/2026:10:04:46 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19307 "" "Go-http-client/1.1" Feb 20 05:04:46 localhost dnsmasq[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/addn_hosts - 1 addresses Feb 20 05:04:46 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/host Feb 20 05:04:46 localhost dnsmasq-dhcp[264017]: read /var/lib/neutron/dhcp/84efa4de-646c-469c-b16c-6ab7c3e948cf/opts Feb 20 05:04:46 localhost podman[328095]: 2026-02-20 10:04:46.357650275 +0000 UTC m=+0.047076588 container kill 
d82938d26e43a39b66f7030f5aa8af272ca046a3c389efdd80984fdd34d01ec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-84efa4de-646c-469c-b16c-6ab7c3e948cf, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true) Feb 20 05:04:46 localhost nova_compute[280804]: 2026-02-20 10:04:46.396 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:04:46 localhost dnsmasq[327835]: exiting on receipt of SIGTERM Feb 20 05:04:46 localhost podman[328131]: 2026-02-20 10:04:46.778265509 +0000 UTC m=+0.048433196 container kill 06c6779bf496fb9817da4603acc49618e44356474cb6717383b2efc2a747a15f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76662f9f-597a-4762-b229-2301d6ea7b01, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Feb 20 05:04:46 localhost systemd[1]: libpod-06c6779bf496fb9817da4603acc49618e44356474cb6717383b2efc2a747a15f.scope: Deactivated successfully. 
Feb 20 05:04:46 localhost podman[328146]: 2026-02-20 10:04:46.837302517 +0000 UTC m=+0.040049512 container died 06c6779bf496fb9817da4603acc49618e44356474cb6717383b2efc2a747a15f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76662f9f-597a-4762-b229-2301d6ea7b01, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127) Feb 20 05:04:46 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-06c6779bf496fb9817da4603acc49618e44356474cb6717383b2efc2a747a15f-userdata-shm.mount: Deactivated successfully. Feb 20 05:04:46 localhost systemd[1]: var-lib-containers-storage-overlay-14abd8e73fecb0f1afba817753badd584adaba9aff3002d10e748ff34845c2b7-merged.mount: Deactivated successfully. Feb 20 05:04:46 localhost podman[328146]: 2026-02-20 10:04:46.882426233 +0000 UTC m=+0.085173248 container remove 06c6779bf496fb9817da4603acc49618e44356474cb6717383b2efc2a747a15f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76662f9f-597a-4762-b229-2301d6ea7b01, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 05:04:46 localhost systemd[1]: libpod-conmon-06c6779bf496fb9817da4603acc49618e44356474cb6717383b2efc2a747a15f.scope: Deactivated successfully. 
Feb 20 05:04:46 localhost neutron_dhcp_agent[263741]: 2026-02-20 10:04:46.921 263745 INFO neutron.agent.dhcp.agent [None req-b99f6c6c-5d3f-420b-8657-8d9e4ada39af - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 05:04:46 localhost neutron_dhcp_agent[263741]: 2026-02-20 10:04:46.936 263745 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Feb 20 05:04:47 localhost systemd[1]: run-netns-qdhcp\x2d76662f9f\x2d597a\x2d4762\x2db229\x2d2301d6ea7b01.mount: Deactivated successfully. Feb 20 05:04:47 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v682: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 2.0 MiB/s wr, 36 op/s Feb 20 05:04:48 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:04:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c. Feb 20 05:04:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217. Feb 20 05:04:49 localhost systemd[1]: tmp-crun.r8bGr9.mount: Deactivated successfully. 
Feb 20 05:04:49 localhost podman[328173]: 2026-02-20 10:04:49.451296089 +0000 UTC m=+0.089501844 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, 
org.label-schema.license=GPLv2) Feb 20 05:04:49 localhost podman[328173]: 2026-02-20 10:04:49.46183441 +0000 UTC m=+0.100040115 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4) Feb 20 05:04:49 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully. Feb 20 05:04:49 localhost podman[328172]: 2026-02-20 10:04:49.539817244 +0000 UTC m=+0.180672710 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2026-02-05T04:57:10Z, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.7, name=ubi9/ubi-minimal, container_name=openstack_network_exporter, distribution-scope=public, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., architecture=x86_64, config_id=openstack_network_exporter, managed_by=edpm_ansible, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.buildah.version=1.33.7, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, release=1770267347, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 05:04:49 localhost podman[328172]: 2026-02-20 10:04:49.551085045 +0000 UTC m=+0.191940501 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, distribution-scope=public, architecture=x86_64, config_id=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-05T04:57:10Z, name=ubi9/ubi-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z, version=9.7, container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Feb 20 05:04:49 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. Feb 20 05:04:49 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v683: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 25 KiB/s rd, 821 KiB/s wr, 33 op/s Feb 20 05:04:50 localhost nova_compute[280804]: 2026-02-20 10:04:50.942 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263 Feb 20 05:04:51 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v684: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Feb 20 05:04:53 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:04:53 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0) Feb 20 05:04:53 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 05:04:53 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, 
key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0) Feb 20 05:04:53 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0) Feb 20 05:04:53 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 05:04:53 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0) Feb 20 05:04:53 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0) Feb 20 05:04:53 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 05:04:53 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 05:04:53 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 05:04:53 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 05:04:53 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 05:04:53 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 05:04:53 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0) Feb 20 05:04:53 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 05:04:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle 
connections.. Feb 20 05:04:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', )] Feb 20 05:04:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Feb 20 05:04:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 05:04:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', ), ('cephfs', )] Feb 20 05:04:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Feb 20 05:04:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Feb 20 05:04:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 05:04:53 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 05:04:53 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v685: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Feb 20 05:04:54 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 05:04:54 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 05:04:54 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 05:04:54 localhost ceph-mon[292786]: log_channel(cluster) log [DBG] : mgrmap e56: np0005625202.arwxwo(active, since 16m), standbys: np0005625203.lonygy, np0005625204.exgrzx Feb 20 05:04:54 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Feb 20 05:04:54 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch Feb 20 05:04:54 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 
handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Feb 20 05:04:54 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 05:04:54 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Feb 20 05:04:54 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 05:04:54 localhost ceph-mgr[286565]: [progress INFO root] update: starting ev 2c18964f-5673-4d6a-88c8-0f11854d77cb (Updating node-proxy deployment (+3 -> 3)) Feb 20 05:04:54 localhost ceph-mgr[286565]: [progress INFO root] complete: finished ev 2c18964f-5673-4d6a-88c8-0f11854d77cb (Updating node-proxy deployment (+3 -> 3)) Feb 20 05:04:54 localhost ceph-mgr[286565]: [progress INFO root] Completed event 2c18964f-5673-4d6a-88c8-0f11854d77cb (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Feb 20 05:04:54 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Feb 20 05:04:54 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Feb 20 05:04:55 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Feb 20 05:04:55 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 05:04:55 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v686: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 
B/s wr, 0 op/s Feb 20 05:04:55 localhost nova_compute[280804]: 2026-02-20 10:04:55.944 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248 Feb 20 05:04:55 localhost nova_compute[280804]: 2026-02-20 10:04:55.946 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248 Feb 20 05:04:55 localhost nova_compute[280804]: 2026-02-20 10:04:55.946 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117 Feb 20 05:04:55 localhost nova_compute[280804]: 2026-02-20 10:04:55.946 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519 Feb 20 05:04:55 localhost nova_compute[280804]: 2026-02-20 10:04:55.952 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263 Feb 20 05:04:55 localhost nova_compute[280804]: 2026-02-20 10:04:55.952 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519 Feb 20 05:04:57 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v687: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s Feb 20 05:04:58 localhost openstack_network_exporter[243776]: ERROR 10:04:58 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 20 05:04:58 localhost openstack_network_exporter[243776]: Feb 20 05:04:58 localhost openstack_network_exporter[243776]: ERROR 10:04:58 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 20 05:04:58 localhost openstack_network_exporter[243776]: Feb 20 05:04:58 
localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:04:58 localhost ceph-mon[292786]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #73. Immutable memtables: 0. Feb 20 05:04:58 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:04:58.309213) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Feb 20 05:04:58 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 73 Feb 20 05:04:58 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581898309255, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 946, "num_deletes": 251, "total_data_size": 1028097, "memory_usage": 1046504, "flush_reason": "Manual Compaction"} Feb 20 05:04:58 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #74: started Feb 20 05:04:58 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581898317445, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 74, "file_size": 1012543, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39621, "largest_seqno": 40566, "table_properties": {"data_size": 1008141, "index_size": 2065, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9542, "raw_average_key_size": 18, "raw_value_size": 999022, "raw_average_value_size": 1982, "num_data_blocks": 86, "num_entries": 504, "num_filter_entries": 504, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, 
"fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771581838, "oldest_key_time": 1771581838, "file_creation_time": 1771581898, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a5b0c71e-1a28-4ac7-8b68-08edb74002f2", "db_session_id": "54EDA52XUT1SDV7DF7Y7", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}} Feb 20 05:04:58 localhost ceph-mon[292786]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 8281 microseconds, and 3820 cpu microseconds. Feb 20 05:04:58 localhost ceph-mon[292786]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Feb 20 05:04:58 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:04:58.317493) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #74: 1012543 bytes OK Feb 20 05:04:58 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:04:58.317517) [db/memtable_list.cc:519] [default] Level-0 commit table #74 started Feb 20 05:04:58 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:04:58.319580) [db/memtable_list.cc:722] [default] Level-0 commit table #74: memtable #1 done Feb 20 05:04:58 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:04:58.319600) EVENT_LOG_v1 {"time_micros": 1771581898319594, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Feb 20 05:04:58 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:04:58.319621) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Feb 20 05:04:58 localhost ceph-mon[292786]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 1023540, prev total WAL file size 1023540, number of live WAL files 2. Feb 20 05:04:58 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000070.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 20 05:04:58 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:04:58.320356) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760031353238' seq:72057594037927935, type:22 .. 
'6B760031373739' seq:0, type:0; will stop at (end) Feb 20 05:04:58 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00 Feb 20 05:04:58 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [74(988KB)], [72(19MB)] Feb 20 05:04:58 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581898320437, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [74], "files_L6": [72], "score": -1, "input_data_size": 21455298, "oldest_snapshot_seqno": -1} Feb 20 05:04:58 localhost ceph-mon[292786]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #75: 14328 keys, 20410216 bytes, temperature: kUnknown Feb 20 05:04:58 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581898413461, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 75, "file_size": 20410216, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 20325903, "index_size": 47412, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35845, "raw_key_size": 384678, "raw_average_key_size": 26, "raw_value_size": 20080060, "raw_average_value_size": 1401, "num_data_blocks": 1768, "num_entries": 14328, "num_filter_entries": 14328, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580572, "oldest_key_time": 0, "file_creation_time": 1771581898, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a5b0c71e-1a28-4ac7-8b68-08edb74002f2", "db_session_id": "54EDA52XUT1SDV7DF7Y7", "orig_file_number": 75, "seqno_to_time_mapping": "N/A"}} Feb 20 05:04:58 localhost ceph-mon[292786]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Feb 20 05:04:58 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:04:58.413754) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 20410216 bytes Feb 20 05:04:58 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:04:58.415860) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 230.5 rd, 219.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 19.5 +0.0 blob) out(19.5 +0.0 blob), read-write-amplify(41.3) write-amplify(20.2) OK, records in: 14859, records dropped: 531 output_compression: NoCompression Feb 20 05:04:58 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:04:58.415895) EVENT_LOG_v1 {"time_micros": 1771581898415880, "job": 44, "event": "compaction_finished", "compaction_time_micros": 93083, "compaction_time_cpu_micros": 56254, "output_level": 6, "num_output_files": 1, "total_output_size": 20410216, "num_input_records": 14859, "num_output_records": 14328, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Feb 20 05:04:58 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file 
/var/lib/ceph/mon/ceph-np0005625202/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 20 05:04:58 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581898416288, "job": 44, "event": "table_file_deletion", "file_number": 74} Feb 20 05:04:58 localhost ceph-mon[292786]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625202/store.db/000072.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Feb 20 05:04:58 localhost ceph-mon[292786]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581898420458, "job": 44, "event": "table_file_deletion", "file_number": 72} Feb 20 05:04:58 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:04:58.320245) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 05:04:58 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:04:58.420501) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 05:04:58 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:04:58.420508) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 05:04:58 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:04:58.420513) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 05:04:58 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:04:58.420517) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 05:04:58 localhost ceph-mon[292786]: rocksdb: (Original Log Time 2026/02/20-10:04:58.420521) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Feb 20 05:04:58 localhost ceph-mgr[286565]: [progress INFO root] Writing back 50 completed events Feb 20 05:04:58 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command 
mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Feb 20 05:04:59 localhost ceph-mon[292786]: log_channel(audit) log [INF] : from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 05:04:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. Feb 20 05:04:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1. Feb 20 05:04:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. Feb 20 05:04:59 localhost systemd[1]: tmp-crun.ymq1Ph.mount: Deactivated successfully. Feb 20 05:04:59 localhost podman[328354]: 2026-02-20 10:04:59.466212444 +0000 UTC m=+0.104850154 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127) Feb 20 05:04:59 localhost ceph-mon[292786]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' Feb 20 05:04:59 localhost podman[328356]: 2026-02-20 10:04:59.547019904 +0000 UTC m=+0.178998566 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible) Feb 20 05:04:59 localhost podman[328354]: 2026-02-20 10:04:59.573919142 +0000 UTC m=+0.212556892 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Feb 20 05:04:59 localhost 
podman[328356]: 2026-02-20 10:04:59.580130478 +0000 UTC m=+0.212109180 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Feb 20 05:04:59 localhost systemd[1]: 
eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. Feb 20 05:04:59 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. Feb 20 05:04:59 localhost podman[328355]: 2026-02-20 10:04:59.631488002 +0000 UTC m=+0.266070053 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 20 05:04:59 localhost podman[328355]: 2026-02-20 10:04:59.715719273 +0000 UTC m=+0.350301374 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 
'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter) Feb 20 05:04:59 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully. Feb 20 05:04:59 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v688: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s Feb 20 05:05:00 localhost nova_compute[280804]: 2026-02-20 10:05:00.953 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 20 05:05:00 localhost nova_compute[280804]: 2026-02-20 10:05:00.956 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 20 05:05:01 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v689: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s Feb 20 05:05:03 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:05:03 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v690: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s Feb 20 05:05:04 localhost sshd[328423]: main: sshd: ssh-rsa algorithm is disabled Feb 20 05:05:05 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v691: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s Feb 20 
05:05:05 localhost ovn_metadata_agent[161761]: 2026-02-20 10:05:05.929 161766 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 05:05:05 localhost ovn_metadata_agent[161761]: 2026-02-20 10:05:05.929 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 05:05:05 localhost ovn_metadata_agent[161761]: 2026-02-20 10:05:05.930 161766 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 05:05:05 localhost nova_compute[280804]: 2026-02-20 10:05:05.955 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4995-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 20 05:05:05 localhost nova_compute[280804]: 2026-02-20 10:05:05.960 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:05:06 localhost nova_compute[280804]: 2026-02-20 10:05:06.511 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 05:05:06 localhost nova_compute[280804]: 2026-02-20 10:05:06.511 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache 
/usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Feb 20 05:05:06 localhost nova_compute[280804]: 2026-02-20 10:05:06.512 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Feb 20 05:05:06 localhost nova_compute[280804]: 2026-02-20 10:05:06.529 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Feb 20 05:05:06 localhost nova_compute[280804]: 2026-02-20 10:05:06.529 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 05:05:07 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v692: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 0 B/s wr, 0 op/s Feb 20 05:05:08 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:05:08 localhost nova_compute[280804]: 2026-02-20 10:05:08.525 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 05:05:09 localhost nova_compute[280804]: 2026-02-20 10:05:09.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 05:05:09 localhost nova_compute[280804]: 2026-02-20 10:05:09.511 280808 DEBUG nova.compute.manager [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Feb 20 05:05:09 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v693: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 0 B/s wr, 0 op/s Feb 20 05:05:10 localhost sshd[328425]: main: sshd: ssh-rsa algorithm is disabled Feb 20 05:05:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e. Feb 20 05:05:10 localhost podman[328427]: 2026-02-20 10:05:10.460803181 +0000 UTC m=+0.099288835 container health_status 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 
'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Feb 20 05:05:10 localhost podman[328427]: 2026-02-20 10:05:10.471759174 +0000 UTC m=+0.110244768 container exec_died 894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Feb 20 05:05:10 localhost systemd[1]: 
894004a1ff28965dbaa2f8a3165feb9a2c07c63d1af3f65a9793cb65a41c257e.service: Deactivated successfully. Feb 20 05:05:10 localhost nova_compute[280804]: 2026-02-20 10:05:10.510 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 05:05:10 localhost nova_compute[280804]: 2026-02-20 10:05:10.511 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 05:05:10 localhost sshd[328450]: main: sshd: ssh-rsa algorithm is disabled Feb 20 05:05:10 localhost nova_compute[280804]: 2026-02-20 10:05:10.868 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 05:05:10 localhost nova_compute[280804]: 2026-02-20 10:05:10.869 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 05:05:10 localhost nova_compute[280804]: 2026-02-20 10:05:10.869 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 05:05:10 localhost 
nova_compute[280804]: 2026-02-20 10:05:10.869 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Auditing locally available compute resources for np0005625202.localdomain (node: np0005625202.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Feb 20 05:05:10 localhost nova_compute[280804]: 2026-02-20 10:05:10.870 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 05:05:10 localhost nova_compute[280804]: 2026-02-20 10:05:10.961 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 20 05:05:10 localhost nova_compute[280804]: 2026-02-20 10:05:10.964 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 20 05:05:10 localhost nova_compute[280804]: 2026-02-20 10:05:10.964 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Feb 20 05:05:10 localhost nova_compute[280804]: 2026-02-20 10:05:10.965 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 20 05:05:11 localhost nova_compute[280804]: 2026-02-20 10:05:11.005 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:05:11 localhost nova_compute[280804]: 2026-02-20 10:05:11.007 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition 
/usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 20 05:05:11 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 20 05:05:11 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/226919907' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 20 05:05:11 localhost nova_compute[280804]: 2026-02-20 10:05:11.277 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 05:05:11 localhost nova_compute[280804]: 2026-02-20 10:05:11.461 280808 WARNING nova.virt.libvirt.driver [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Feb 20 05:05:11 localhost nova_compute[280804]: 2026-02-20 10:05:11.462 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Hypervisor/Node resource view: name=np0005625202.localdomain free_ram=11340MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Feb 20 05:05:11 localhost nova_compute[280804]: 2026-02-20 10:05:11.462 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Feb 20 05:05:11 localhost nova_compute[280804]: 2026-02-20 10:05:11.462 280808 DEBUG oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" acquired by 
"nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Feb 20 05:05:11 localhost nova_compute[280804]: 2026-02-20 10:05:11.538 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Feb 20 05:05:11 localhost nova_compute[280804]: 2026-02-20 10:05:11.538 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Final resource view: name=np0005625202.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Feb 20 05:05:11 localhost nova_compute[280804]: 2026-02-20 10:05:11.562 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Feb 20 05:05:11 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v694: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 0 B/s wr, 0 op/s Feb 20 05:05:12 localhost ceph-mon[292786]: mon.np0005625202@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Feb 20 05:05:12 localhost ceph-mon[292786]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/1551072278' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Feb 20 05:05:12 localhost nova_compute[280804]: 2026-02-20 10:05:12.044 280808 DEBUG oslo_concurrency.processutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Feb 20 05:05:12 localhost nova_compute[280804]: 2026-02-20 10:05:12.050 280808 DEBUG nova.compute.provider_tree [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Feb 20 05:05:12 localhost nova_compute[280804]: 2026-02-20 10:05:12.075 280808 DEBUG nova.scheduler.client.report [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Inventory has not changed for provider 5c793a27-e3fc-4c12-b35a-04ad5cf1d0c6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Feb 20 05:05:12 localhost nova_compute[280804]: 2026-02-20 10:05:12.077 280808 DEBUG nova.compute.resource_tracker [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Compute_service record updated for np0005625202.localdomain:np0005625202.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Feb 20 05:05:12 localhost nova_compute[280804]: 2026-02-20 10:05:12.078 280808 DEBUG 
oslo_concurrency.lockutils [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Feb 20 05:05:12 localhost sshd[328496]: main: sshd: ssh-rsa algorithm is disabled Feb 20 05:05:13 localhost systemd-logind[760]: New session 76 of user zuul. Feb 20 05:05:13 localhost systemd[1]: Started Session 76 of User zuul. Feb 20 05:05:13 localhost nova_compute[280804]: 2026-02-20 10:05:13.078 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 05:05:13 localhost nova_compute[280804]: 2026-02-20 10:05:13.079 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 05:05:13 localhost nova_compute[280804]: 2026-02-20 10:05:13.079 280808 DEBUG oslo_service.periodic_task [None req-9abf9ab4-fefa-4fba-926c-07088b2403a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Feb 20 05:05:13 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:05:13 localhost python3[328518]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager unregister#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-9e25-3a25-00000000000c-1-overcloudnovacompute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True 
strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 20 05:05:13 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v695: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Feb 20 05:05:15 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v696: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Feb 20 05:05:16 localhost nova_compute[280804]: 2026-02-20 10:05:16.008 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 20 05:05:16 localhost nova_compute[280804]: 2026-02-20 10:05:16.010 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 20 05:05:16 localhost nova_compute[280804]: 2026-02-20 10:05:16.010 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Feb 20 05:05:16 localhost nova_compute[280804]: 2026-02-20 10:05:16.010 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 20 05:05:16 localhost nova_compute[280804]: 2026-02-20 10:05:16.047 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:05:16 localhost nova_compute[280804]: 2026-02-20 10:05:16.048 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 20 05:05:16 localhost podman[241347]: time="2026-02-20T10:05:16Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Feb 20 05:05:16 localhost podman[241347]: @ - - [20/Feb/2026:10:05:16 +0000] "GET 
/v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157716 "" "Go-http-client/1.1" Feb 20 05:05:16 localhost podman[241347]: @ - - [20/Feb/2026:10:05:16 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18830 "" "Go-http-client/1.1" Feb 20 05:05:17 localhost ovn_controller[155916]: 2026-02-20T10:05:17Z|00275|memory_trim|INFO|Detected inactivity (last active 30013 ms ago): trimming memory Feb 20 05:05:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:05:17.269 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:05:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:05:17.270 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:05:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:05:17.270 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:05:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:05:17.270 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:05:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:05:17.271 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:05:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:05:17.271 12 DEBUG ceilometer.polling.manager [-] Skip pollster 
network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:05:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:05:17.271 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:05:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:05:17.271 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:05:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:05:17.271 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:05:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:05:17.271 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:05:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:05:17.272 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:05:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:05:17.272 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:05:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:05:17.272 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle 
poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:05:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:05:17.272 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:05:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:05:17.272 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:05:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:05:17.272 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:05:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:05:17.272 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:05:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:05:17.273 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:05:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:05:17.273 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:05:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:05:17.273 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:05:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:05:17.273 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:05:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:05:17.273 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:05:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:05:17.273 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:05:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:05:17.273 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:05:17 localhost ceilometer_agent_compute[236653]: 2026-02-20 10:05:17.274 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Feb 20 05:05:17 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v697: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Feb 20 05:05:18 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:05:19 localhost systemd[1]: session-76.scope: Deactivated successfully. Feb 20 05:05:19 localhost systemd-logind[760]: Session 76 logged out. Waiting for processes to exit. 
Feb 20 05:05:19 localhost systemd-logind[760]: Removed session 76. Feb 20 05:05:19 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v698: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Feb 20 05:05:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c. Feb 20 05:05:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217. Feb 20 05:05:20 localhost podman[328522]: 2026-02-20 10:05:20.455919774 +0000 UTC m=+0.092760441 container health_status 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, version=9.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, architecture=x86_64, name=ubi9/ubi-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, build-date=2026-02-05T04:57:10Z, maintainer=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, distribution-scope=public) Feb 20 05:05:20 localhost podman[328522]: 2026-02-20 10:05:20.470175324 +0000 UTC m=+0.107015941 container exec_died 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, 
url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, managed_by=edpm_ansible, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., release=1770267347, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, version=9.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z, vcs-type=git, org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, architecture=x86_64) Feb 20 05:05:20 localhost systemd[1]: 0adceac7bb4268a6c82ffc1ff899a2f4e917d1efd479507e59d1c31652785c0c.service: Deactivated successfully. Feb 20 05:05:20 localhost podman[328523]: 2026-02-20 10:05:20.559307577 +0000 UTC m=+0.192596128 container health_status bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': 
['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260127, tcib_managed=true) Feb 20 05:05:20 localhost podman[328523]: 2026-02-20 10:05:20.5747374 +0000 UTC m=+0.208025921 container exec_died bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-11330f00ff48c9a32f1ab028023e2e756ccb1d137500a3aa488bad55743d3009'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Feb 20 05:05:20 localhost systemd[1]: bdd1b9119f41442d29a373caa50cf99ff47a0d92310d9b6a8cf93998e1df0217.service: Deactivated successfully. Feb 20 05:05:21 localhost nova_compute[280804]: 2026-02-20 10:05:21.048 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:05:21 localhost nova_compute[280804]: 2026-02-20 10:05:21.050 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:05:21 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v699: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Feb 20 05:05:23 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:05:23 localhost ceph-mgr[286565]: [balancer INFO root] Optimize plan auto_2026-02-20_10:05:23 Feb 20 05:05:23 localhost ceph-mgr[286565]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Feb 20 05:05:23 localhost ceph-mgr[286565]: [balancer INFO root] do_upmap Feb 20 05:05:23 localhost ceph-mgr[286565]: [balancer 
INFO root] pools ['backups', 'volumes', 'vms', '.mgr', 'manila_data', 'manila_metadata', 'images'] Feb 20 05:05:23 localhost ceph-mgr[286565]: [balancer INFO root] prepared 0/10 changes Feb 20 05:05:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 05:05:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 05:05:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 05:05:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 05:05:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] scanning for idle connections.. Feb 20 05:05:23 localhost ceph-mgr[286565]: [volumes INFO mgr_util] cleaning up connections: [] Feb 20 05:05:23 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v700: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Feb 20 05:05:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] _maybe_adjust Feb 20 05:05:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 05:05:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Feb 20 05:05:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 05:05:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003325274375348967 of space, bias 1.0, pg target 0.6650548750697934 quantized to 32 (current 32) Feb 20 05:05:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 05:05:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32) Feb 20 05:05:23 localhost 
ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 05:05:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Feb 20 05:05:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 05:05:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Feb 20 05:05:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 05:05:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Feb 20 05:05:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Feb 20 05:05:23 localhost ceph-mgr[286565]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0020193742148241207 of space, bias 4.0, pg target 1.607421875 quantized to 16 (current 16) Feb 20 05:05:23 localhost ceph-mgr[286565]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Feb 20 05:05:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 20 05:05:23 localhost ceph-mgr[286565]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Feb 20 05:05:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: vms, start_after= Feb 20 05:05:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 20 05:05:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: images, start_after= Feb 20 05:05:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: volumes, start_after= Feb 20 05:05:23 localhost 
ceph-mgr[286565]: [rbd_support INFO root] load_schedules: images, start_after= Feb 20 05:05:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 20 05:05:23 localhost ceph-mgr[286565]: [rbd_support INFO root] load_schedules: backups, start_after= Feb 20 05:05:25 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v701: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Feb 20 05:05:26 localhost nova_compute[280804]: 2026-02-20 10:05:26.051 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:05:26 localhost nova_compute[280804]: 2026-02-20 10:05:26.053 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:05:27 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v702: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Feb 20 05:05:28 localhost openstack_network_exporter[243776]: ERROR 10:05:28 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Feb 20 05:05:28 localhost openstack_network_exporter[243776]: Feb 20 05:05:28 localhost openstack_network_exporter[243776]: ERROR 10:05:28 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Feb 20 05:05:28 localhost openstack_network_exporter[243776]: Feb 20 05:05:28 localhost ceph-mon[292786]: mon.np0005625202@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Feb 20 05:05:29 localhost ceph-mgr[286565]: log_channel(cluster) log [DBG] : pgmap v703: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Feb 20 05:05:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383. 
Feb 20 05:05:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1. Feb 20 05:05:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef. Feb 20 05:05:30 localhost podman[328560]: 2026-02-20 10:05:30.453408271 +0000 UTC m=+0.083855632 container health_status 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Feb 20 05:05:30 localhost podman[328560]: 2026-02-20 10:05:30.462028362 +0000 UTC m=+0.092475723 container exec_died 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 
'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Feb 20 05:05:30 localhost systemd[1]: 90d57ea488be90ac7ce79e9193d56809896672738502f1e11016a823e9ca15c1.service: Deactivated successfully. Feb 20 05:05:30 localhost podman[328559]: 2026-02-20 10:05:30.514907775 +0000 UTC m=+0.146797724 container health_status 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, 
managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Feb 20 05:05:30 localhost podman[328561]: 2026-02-20 10:05:30.570498731 +0000 UTC m=+0.192599359 container health_status eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127) Feb 20 05:05:30 localhost podman[328559]: 2026-02-20 10:05:30.584055674 +0000 UTC m=+0.215945683 container exec_died 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Feb 20 05:05:30 localhost systemd[1]: 76ac14a10a43fbb3b627e0aeb4331e8b815ccec00048002adae9d92e8a118383.service: Deactivated successfully. 
Feb 20 05:05:30 localhost podman[328561]: 2026-02-20 10:05:30.60788098 +0000 UTC m=+0.229981588 container exec_died eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3) Feb 20 05:05:30 localhost systemd[1]: 
eb1dc9e24f30d10b89d2e9e29737e7b29c50923976fc85950324739328588fef.service: Deactivated successfully. Feb 20 05:05:31 localhost nova_compute[280804]: 2026-02-20 10:05:31.055 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 20 05:05:31 localhost nova_compute[280804]: 2026-02-20 10:05:31.056 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Feb 20 05:05:31 localhost nova_compute[280804]: 2026-02-20 10:05:31.057 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Feb 20 05:05:31 localhost nova_compute[280804]: 2026-02-20 10:05:31.057 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 20 05:05:31 localhost nova_compute[280804]: 2026-02-20 10:05:31.084 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Feb 20 05:05:31 localhost nova_compute[280804]: 2026-02-20 10:05:31.085 280808 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Feb 20 05:05:31 localhost sshd[328622]: main: sshd: ssh-rsa algorithm is disabled Feb 20 05:05:31 localhost systemd-logind[760]: New session 77 of user zuul. Feb 20 05:05:31 localhost systemd[1]: Started Session 77 of User zuul.